AI News

OpenAI launches new Expert Council on Well-being and AI

OpenAI convenes psychologists, ethicists, and public health experts to study how its AI models affect human well-being.

OpenAI has announced the formation of a new Expert Council on Well-being and AI, bringing together psychologists, ethicists, and public health professionals to study the social and psychological impact of its large language models like ChatGPT. The move signals an attempt by the company to proactively address the societal effects of its rapidly developing technology.

What’s the OpenAI Council for?

The council is tasked with advising OpenAI on how its AI models affect human well-being, spanning areas from mental health and self-esteem to social connection and cognitive function. Its primary mission is to identify both the positive and negative consequences of human-AI interaction and recommend product design changes and policy safeguards.

OpenAI has stated the council will focus on three key areas:

  1. Measuring Impact: Developing methods to measure the psychological and social effects of AI use.
  2. Mitigation: Recommending ways to design AI to reduce risks like loneliness, self-harm, and addiction, and how to counter manipulative or harmful content.
  3. Enhancement: Exploring how AI can be used to positively enhance human life, such as supporting mental health or improving social skills.

The council is composed of independent experts from various fields. While OpenAI’s blog post did not name all members, it emphasized that the group includes specialists in clinical psychology, developmental science, and technology ethics. The council will operate independently and provide non-binding recommendations directly to OpenAI’s research and product teams.

The Team Behind the Council

  • David Bickham, Ph.D.—Research Director at the Digital Wellness Lab at Boston Children’s Hospital and Assistant Professor at Harvard Medical School. His work looks at how young people’s social media use affects their mental health and development.
  • Mathilde Cerioli, Ph.D.—Chief Scientific Officer at everyone.AI, a nonprofit helping people understand the opportunities and risks of AI for children. With a Ph.D. in Cognitive Neuroscience and a Master’s Degree in Psychology, her research focuses on how AI intersects with child cognitive and emotional development.
  • Munmun De Choudhury, Ph.D.—J. Z. Liang Professor of Interactive Computing at Georgia Tech. She harnesses computational approaches to better understand the role of online technologies in shaping and improving mental health.
  • Tracy Dennis-Tiwary, Ph.D.—Professor of Psychology at Hunter College and co-founder and CSO at Arcade Therapeutics. She creates digital games for mental health and explores interactions between technology and emotional well-being.
  • Sara Johansen, M.D.—Clinical Assistant Professor at Stanford University and founder of Stanford’s Digital Mental Health Clinic. Her work explores how digital platforms can support mental health and well-being.
  • David Mohr, Ph.D.—Professor at Northwestern University and Director of the Center for Behavioral Intervention Technologies. He studies how technology can help prevent and treat common mental health conditions such as depression and anxiety.
  • Andrew K. Przybylski, Ph.D.—Professor of Human Behaviour and Technology at the University of Oxford. He studies how social media and video games shape motivation and well-being.
  • Robert K. Ross, M.D.—A national leader and expert in health philanthropy, public health, and community-based health initiatives. He began his career as a pediatrician and formerly served as president and CEO of The California Endowment.

A Proactive Step Amid Safety Scrutiny

The formation of this council follows a period of intense scrutiny regarding the safety and ethical implications of generative AI. The public launch of chatbots has led to widespread concerns about their potential to encourage self-harm, disseminate misinformation, and alter human behavior. The company recently announced new safety measures for its chatbots aimed at protecting teenagers and children, following a lawsuit.

By establishing an external expert body, OpenAI is attempting to signal its commitment to responsible development. This strategy allows the company to leverage outside academic and clinical expertise to inform its safety protocols, potentially anticipating future regulatory action and public concern. The council’s work is intended to bridge the gap between AI development speed and the typically slower pace of psychological and ethical research.

Abhijay Singh Rawat
Abhijay is the News Editor at TimesofAI, who loves to follow the latest tech and AI trends. After office hours, you would find him either grinding competitive ranked games or trekking his way up the hills of Uttarakhand.