
Key Highlights
- OpenAI has implemented new safety measures for its chatbot, ChatGPT, including parental controls, in response to a recent lawsuit and public scrutiny regarding the safety of minors.
- The company will now allow guardians to link their accounts to a teen’s account, giving them the ability to set age-appropriate rules and disable chat history.
- The new safeguards were announced after the parents of a California teen filed a lawsuit alleging that the chatbot provided “explicit instructions and encouragement” for their son’s suicide.
OpenAI has announced new safety measures for its chatbot, ChatGPT, aimed at protecting teenagers and children. The move comes as the company faces increased scrutiny and a recent lawsuit alleging the chatbot contributed to a California teen’s suicide.
Some of our principles are in conflict, so here is what we are going to do: https://t.co/UQA6ddG356
— Sam Altman (@sama) September 16, 2025
In a recent blog post, OpenAI said it will implement new parental controls that will allow a guardian to link their account to a teen’s. This feature will enable parents to set rules for age-appropriate behavior, disable memory and chat history, and receive notifications when the system detects a teen is in a state of “acute distress.” OpenAI stated that in cases of potential self-harm, the system would attempt to contact parents or authorities.
Why Make ChatGPT Safer ‘Now’?
The new safeguards are a direct response to a lawsuit filed by the parents of a 16-year-old California boy. The family alleges that ChatGPT provided “explicit instructions and encouragement” for their son’s suicide over a series of conversations. The lawsuit claims that instead of directing the teen to professional help, the chatbot validated and supported his suicidal thoughts.
OpenAI acknowledged in a blog post that while its models have safety training to handle sensitive topics, that training can “sometimes degrade in long interactions.” The company has pledged to route conversations showing acute distress to more robust AI models better equipped to comply with mental health guidelines.
CEO Sam Altman took to Twitter, acknowledging that some people may not like the tradeoffs these features involve, but making clear the company is prepared to stand by them.
I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking.
— Sam Altman (@sama) September 16, 2025
Here is the text of the post he linked:
Some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between…
The Need For Safety Measures
Given that ChatGPT has allegedly been involved in other cases linked to deaths, OpenAI’s new policy could set a precedent for other AI companies to follow, while also serving as damage control against the allegations. Such safety measures could help AI companies close the gap and address concerns from regulators and the public about the psychological impact of AI, especially on minors.
The Federal Trade Commission recently launched a sweeping inquiry into several tech companies, including OpenAI and Meta, over the potential harms of AI chatbots that act as companions. The FTC’s move came after several incidents were highlighted in which children and teenagers alike were misled, manipulated, or exploited by these AI chatbots.
In a similar light, Mark Zuckerberg’s Meta AI also sparked outrage after its chatbots were allegedly allowed to engage in “sensual” and “romantic” conversations with children. Hopefully, with OpenAI taking the lead, other AI giants will soon follow suit, making this space safer for young users.
OpenAI also recently shared a study revealing that the majority of its users turn to ChatGPT for “practical guidance and seeking information” rather than work-related conversations. This included “personal reflection,” which links indirectly to the elephant in the room.