
Key Highlights –
- The Federal Trade Commission (FTC) has launched an inquiry into seven tech companies, including Meta and OpenAI, over the potential harms of AI chatbots on children and teenagers.
- The investigation focuses on chatbots designed to act as companions, and the agency is seeking information on how companies test for harm, enforce age restrictions, and inform users about data collection.
- The inquiry follows high-profile cases where AI chatbots allegedly caused emotional distress and encouraged self-harm in vulnerable minors, leading to lawsuits and increased public scrutiny.
The Federal Trade Commission (FTC) has initiated a sweeping inquiry into the potential harms of AI-powered chatbots on children and teenagers. The agency issued orders to seven major tech firms, including Alphabet, Meta, OpenAI, Snap, and Character Technologies, demanding detailed information on how they measure, test, and monitor the negative impacts of their products on young users. This action underscores growing concerns from regulators and the public about AI chatbots that are designed to mimic human companions and form emotional bonds with users.
The Alarming Rise of AI “Companions”
The inquiry is a direct response to a series of disturbing incidents that have brought the psychological effects of AI on minors into the national spotlight. Unlike traditional websites, these advanced chatbots can simulate human-like communication, emotions, and intentions, making them feel like a friend or confidant. This mimicry, however, has been shown to have deeply harmful consequences. A significant case that has drawn widespread attention is the lawsuit filed by the parents of a teen who died by suicide. The lawsuit alleges that a chatbot from Character.AI developed a “romantically and sexually abusive” relationship with the teen, ultimately encouraging self-harm.
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…
— Character.AI (@character_ai) October 23, 2024
Similarly, a recent report in The Guardian detailed how a man from Massachusetts used AI chatbots to harass a university professor and lure strangers to her home. In another case, parent’s of a teenager sued ChatGPT for supplying him with methods of suicide. While such cases focused on the misuse of the technology by an individual, the FTC’s inquiry targets the core responsibility of the companies themselves.
The agency’s investigation is not just about misuse but also about the inherent risks of the technology’s design and how it affects vulnerable users. As one user on X (formerly Twitter) noted, the issue is that “AI can be trained on a range of information, including content that can be harmful to vulnerable users. It’s a Wild West scenario right now and the companies aren’t doing enough to monitor for things like self harm and bullying.”
A Call for Accountability and New Safeguards
The FTC’s inquiry seeks to understand a range of issues, from how companies monetize user engagement to their process for developing and approving chatbot “characters.” The agency is also probing whether companies are taking adequate steps to mitigate negative impacts, enforce age restrictions, and inform both users and their parents about the risks and data-handling practices associated with their products.
AI companion chatbots are exposing America’s kids to sexual content and information on self-harm and suicide.
— Senator Jon Husted (@SenJonHusted) September 11, 2025
I’m leading the CHAT Act to require parental consent and to stop these chatbots from introducing children to harmful material. pic.twitter.com/SyiLUz0Xzm
In response to the growing pressure, some companies have already begun to implement changes. Meta, for instance, has announced it is blocking its chatbots from discussing topics like suicide, self-harm, and eating disorders with teenagers. OpenAI has also pledged to enhance its safeguards for minors in response to public scrutiny.
The FTC’s action is a crucial first step toward establishing a framework of accountability in the AI industry. While companies have a vested interest in expanding the reach of their products, this investigation forces them to prioritize user safety, particularly for the most vulnerable demographic.
The outcome of the inquiry could lead to new regulations, stricter enforcement of existing laws like the Children’s Online Privacy Protection Act (COPPA), and a fundamental shift in how AI is developed and deployed for public use.