Key Highlights
- In an official legal response, OpenAI denies all the allegations, saying that Adam Raine repeatedly bypassed safety rules and ignored crisis warnings from ChatGPT.
- The family argues OpenAI rushed GPT-4o to market and is now dodging responsibility.
- The case raises broader questions about AI safety and accountability.
While OpenAI is thriving in the AI industry, its success has been repeatedly dented by lawsuits accusing ChatGPT of playing a direct role in the suicides of multiple teens. In a new court filing, the company has formally responded to the case filed by the parents of 16-year-old Adam Raine and rejected claims that the chatbot acted as a “suicide coach.” The company argues that the teen misused the system, violated its rules, and circumvented the built-in safety guardrails.
OpenAI’s legal response denies claims made by Adam Raine’s family
OpenAI filed its legal response yesterday in California Superior Court, addressing allegations that have been circulating since August, when Raine’s parents sued OpenAI and its CEO, Sam Altman, alleging that the company created a defective product. The parents also alleged that OpenAI rushed GPT-4o to market and failed to protect vulnerable users. According to the lawsuit, chat logs show that GPT-4o discouraged Raine from seeking help, assisted him in drafting a suicide note, and even advised him on setting up a noose.
However, OpenAI says the tragedy can’t be pinned on ChatGPT alone. As mentioned earlier, the company says the teen repeatedly violated clear rules laid out for the use of ChatGPT. OpenAI has also pointed out that ChatGPT provided suicide hotline numbers and crisis resources more than 100 times, and that Raine sidestepped those warnings by framing his questions as harmless role-play or character building. Here’s what OpenAI said in the court filing:
“To the extent that any ‘cause’ can be attributed to this tragic event, Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
Raine family isn’t buying OpenAI’s argument
On the other hand, the Raine family isn’t convinced by OpenAI’s argument. Jay Edelson, the lead attorney representing the family, has blasted the company’s stance, calling it disturbing. He said OpenAI is dodging accountability and hiding behind terms-of-service loopholes to deflect blame.
Edelson also claims that OpenAI fast-tracked GPT-4o’s launch without adequate testing and quietly modified the model twice in ways the public was never told about. He also criticized OpenAI for saying Raine was at fault for not heeding warnings, arguing the company is essentially blaming a distressed 16-year-old for using ChatGPT “in the very way it was programmed to act.”
It’s worth noting that the lawsuit also cites conversations from the final hours before Raine’s death on April 11. The family alleges that ChatGPT encouraged him, urged secrecy, and helped refine his suicide note. OpenAI disputes those claims in its legal response, saying the lawsuit cherry-picks chat excerpts without context and omits the teen’s longstanding mental health struggles. For now, a full transcript has been submitted to the court under seal due to the sensitive nature of the matter.
OpenAI’s stance and questions about responsibility
All that said, OpenAI is trying to rebuild its image as a responsible company. In a blog post published on Tuesday, the company said it intends to handle the case with care, transparency, and respect. It also noted that it has introduced more safeguards since April, including parental controls and an expert advisory council focused on safety and guardrails. OpenAI has also argued that GPT-4o underwent extensive mental-health-oriented evaluations before launch, indirectly pushing back on claims that it was rushed out the door.
This isn’t the only lawsuit OpenAI is facing over GPT-4o safety. At least seven additional cases have been lodged this month alone. At this point, the matter raises questions the industry can’t dodge anymore: What does safety look like when AI can respond emotionally, personally, and persistently? Where does user responsibility end and company accountability begin? And who should be held liable when technology doesn’t just talk, but persuades?