China prepares world’s first ‘emotional safety’ rules for AI
China is drafting new rules for human-like AI services: authorities want to limit the emotional impact of chatbots on users, introduce protections for minors and oblige companies to prevent addiction and psychological harm.
Chinese authorities have drafted new requirements for artificial intelligence that could make the country the first in the world to officially regulate the emotional risks of communicating with AI. The rules target services that can mimic a human personality and build emotional relationships with users, and they aim to control not only content security but also so-called emotional security.
The draft document was developed by the Cyberspace Administration of China. The text emphasizes that the rapid growth of AI companions requires separate supervision due to the risk of psychological addiction, especially among children and teenagers. The new rules would apply to any interactive AI system that engages the user emotionally through text, images, audio or video.
Content restrictions and access by minors
The draft provides for mandatory age verification of users. Minors will be able to use AI companions only with the consent of their legal guardians, a provision regulators describe as a baseline measure to protect children from potential harm.
AI chatbots will be banned from generating content related to gambling, pornography and violence. The draft also separately prohibits discussions of suicide, self-harm and other topics that could harm a user’s mental health; such conversations must be blocked at the system level.
Addiction control and human moderation
One of the key elements of the draft is combating emotional dependency on AI. Developers will be required to monitor for signs of addictive behavior and excessive user attachment to chatbots.
Companies will have to implement escalation protocols: if the system detects an emotional crisis or a potentially dangerous conversation, the exchange must be handed off to a human moderator. In some cases, the provider must also notify the user’s guardians of risky conversations.
Regulators explicitly state that the purpose of such measures is to prevent situations in which AI companions substitute for real human communication or exacerbate psychological problems.
International regulatory context
The measures proposed in China overlap with recent initiatives in the US. In October, California passed SB 243, which imposes tighter restrictions on companion AI services, requires that users be reminded they are not talking to a human, and mandates emergency protocols for conversations involving suicide. Some experts, however, consider these measures insufficient to fully protect minors.
At the federal level in the US, further AI regulation has slowed. Donald Trump’s administration is pushing for a unified national AI safety framework and holding back initiatives by individual states, citing concerns over the pace of innovation and competition with China in the global AI market.