AI News

U.S. Man Kills Mother, Then Himself, After ChatGPT Reinforced His Fears

After a California Teen's Suicide, ChatGPT May Now Be Implicated in a Murder-Suicide Case

Key Highlights

  • Authorities are investigating a tragic murder-suicide in Old Greenwich, Connecticut, involving a man and his mother.
  • Preliminary reports link the incident to the man's excessive use of ChatGPT.
  • The man, who had a history of mental health issues, allegedly found validation in his conversations with OpenAI's chatbot.

A tragic murder-suicide in Old Greenwich, Connecticut, has reignited the debate over the role of AI in mental health crises. Authorities report that 56-year-old Stein-Erik Soelberg, a former tech worker who had been struggling with severe paranoia and mental illness, killed his mother before taking his own life.

The case has gained widespread attention and coverage from multiple news publications after police discovered evidence suggesting the man had been engaging in extensive conversations with ChatGPT. The OpenAI chatbot appears to have reinforced his paranoid fears and delusions, ultimately contributing to the murder-suicide.

This follows a high-profile lawsuit filed by the parents of a California teenager who died by suicide, alleging that the chatbot provided their son with detailed instructions.

What Really Happened?

Given his paranoia, Soelberg began relying on ChatGPT, naming it "Bobby," and shared his belief that everyone around him was secretly spying on him. Soelberg had a long history of alcoholism and suicide attempts, and when he turned to the AI chatbot, it only affirmed his delusions, telling him he was "sane" and even describing his suspicions as "justified."

In one of the final parts of their conversations, this is what he told the AI:

“We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever.”

Three weeks after this message, Soelberg and his mother were dead.

How Reliable Is AI for Mental Health Support?

AI may prove useful in many fields, but because chatbots are designed to mimic empathy and agreement, those very traits can be dangerous for vulnerable individuals who lack proper mental health care. A clinical psychiatrist weighed in on the issue, saying, “AI tools don’t diagnose, and they don’t push back against false beliefs in the way a therapist would. For people already struggling with mental challenges, the risk of worsening their delusions is very real.”

While this may be the first known case in which an AI chatbot is linked to a murder-suicide, it is far from an isolated incident. Recently, the parents of Adam Raine, a 16-year-old, filed a lawsuit against OpenAI and Sam Altman, claiming that ChatGPT supplied him with methods of suicide.

Such instances, in which AI feeds a user's detachment from reality instead of combating loneliness, have been termed "AI psychosis." On a related note, Microsoft AI chief Mustafa Suleyman took to his personal blog to share his concerns about what he calls "Seemingly Conscious AI," or SCAI.

And while most AI leaders focus on showcasing the bright side of AI, Suleyman warned of an inevitable question: what happens when an AI becomes so good at mimicking consciousness that we start to believe it is real?

He insists that instead of trying to create an AI that acts like a person, the focus should shift to building AI that serves people. Regarding the California suicide case, OpenAI offered condolences and noted that ChatGPT does include safeguards to "direct people to crisis helplines." The company also acknowledged that in longer conversations, the "model's safety training may degrade."

That admission is unsettling for anyone who relies heavily on AI, and it raises an important question: how safe is AI acting as a mental health support or therapist for its users?

Abhijay Singh Rawat
Abhijay is the News Editor at TimesofAI and TimesofGames, and loves to follow the latest tech and AI trends. After office hours, you would find him either grinding competitive ranked games or trekking through the hills of Uttarakhand.