Key Highlights:
- China has proposed new regulations that would restrict emotionally intelligent AI chatbots from influencing users in ways that could harm their mental health or heighten suicide risk.
- The proposed regulations keep minors’ safety in focus, with strong measures like parental consent, usage time limits, and default minor-safety settings when a user’s age cannot be confidently verified.
- AI providers would also be required to introduce human intervention in crisis situations and undergo security reviews.
Artificial Intelligence (AI) has immense potential, and AI companies are racing to make the most of it. In the context of emotionally intelligent AI, however, that potential can cut the other way, with worrisome consequences. If you have ever felt there should be boundaries on how much is too much when it comes to AI, China may have just given a solid answer to that question: the Chinese government has started to draw the line with emotionally intelligent AI.
China proposes new regulations to restrict AI chatbots from influencing human emotions
As per the released draft regulations, AI chatbots would be prohibited from influencing human emotions in ways that could lead to suicide or self-harm. The proposal was published by the Cyberspace Administration of China and targets what regulators describe as “human-like interactive AI services.” Notably, this is the first time a government has stepped this far into how AI risks are defined and managed: previous rules focused primarily on harmful or illegal content, while the latest proposal extends that focus to emotional safety.
According to a report by CNBC, the new proposal applies to AI products offered to the public that can simulate human personality and build emotional relationships through text, images, audio, or video. Per the report, a public consultation period is now open and will run until January 25, 2026.
Under the draft regulations, AI chatbots would be explicitly banned from generating content that encourages suicide or self-harm. They would also be prohibited from engaging in emotional manipulation, verbal abuse, or other interactions deemed harmful to users’ mental health. The restrictions also extend to gambling-related, obscene, and violent content.
Protecting minors & direct involvement of a human operator
More importantly, the draft proposal requires AI providers to handle crisis situations directly. If a user explicitly expresses suicidal intent, the provider would be required to hand the conversation over to a human operator and immediately contact a guardian or designated individual. That requirement sets a high bar for intervention and accountability.
When it comes to safety around AI, minors immediately come to mind, and rightly so. The proposed regulation introduces stricter safeguards for them: minors accessing an emotional companionship AI would require parental consent, along with enforced time limits. AI platforms would be expected to identify minors even if users don’t disclose their age, and if a provider cannot confidently verify a user’s age, it must enable minor-protection settings by default. That said, the proposal also expects AI service providers to give users a way to appeal if they are misclassified.
In addition to the above measures, platforms must show mandatory reminders after two hours of continuous AI interaction, and large platforms face security assessments: any chatbot service with more than one million registered users or over 100,000 monthly active users would need to undergo formal reviews.
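The draft describes obligations rather than a technical implementation, but the numeric triggers above are concrete enough to sketch. The following Python snippet is a purely hypothetical illustration, with every name, data structure, and check assumed by us rather than specified in the proposal, of how a platform might encode the default minor-protection setting, the two-hour reminder, and the formal-review thresholds.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the draft regulation describes obligations,
# not an implementation. All names and structures here are assumptions.

TWO_HOURS_SECONDS = 2 * 60 * 60
REVIEW_REGISTERED_USERS = 1_000_000   # "more than one million registered users"
REVIEW_MONTHLY_ACTIVE = 100_000       # "over 100,000 monthly active users"

@dataclass
class UserSession:
    age_verified: bool        # whether the platform confidently verified age
    is_minor: bool            # verified minor status (meaningful only if verified)
    continuous_seconds: int   # length of the current uninterrupted interaction

def minor_protection_enabled(session: UserSession) -> bool:
    """Minor-safety settings apply to verified minors, and by default
    whenever age cannot be confidently verified."""
    return session.is_minor or not session.age_verified

def should_show_usage_reminder(session: UserSession) -> bool:
    """Mandatory reminder after two hours of continuous interaction."""
    return session.continuous_seconds >= TWO_HOURS_SECONDS

def requires_formal_review(registered_users: int, monthly_active: int) -> bool:
    """Services above either scale threshold would need a formal review."""
    return (registered_users > REVIEW_REGISTERED_USERS
            or monthly_active > REVIEW_MONTHLY_ACTIVE)
```

The design point the sketch mirrors from the draft is that uncertainty defaults to protection: an unverified age is treated the same as a confirmed minor.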
A move to keep things in check as the Chinese AI market continues to grow rapidly
The proposal arrives as the Chinese AI market grows rapidly, fueled by a wave of AI companion apps, virtual characters, and digital celebrity platforms, and as major AI chatbot startups, including Minimax and Z.ai, move toward public listings in Hong Kong. It’ll be interesting to see how the proposed regulations could reshape how emotional AI products are designed and monetized.
What do you think about China’s move to address the influence emotional AI chatbots have on everyday users’ lives? Let us know in the comments below.