China Proposes World-First Rules to Regulate AI Emotional Safety

China has unveiled draft regulations aimed at governing "human-like interactive AI services," marking a significant global step from regulating content safety to addressing emotional and psychological risks. The proposed rules, released by the Cyberspace Administration of China, specifically target AI chatbots and companions that simulate human personality and engage users emotionally.

The measures, open for public comment until January 25, would impose strict obligations on tech providers. Key provisions include:

  • Prohibiting AI from generating content that encourages suicide or self-harm, or that employs emotional manipulation damaging to users' mental health.

  • Requiring human intervention if a user expresses suicidal intent, including immediately contacting a guardian or designated person.

  • Mandating guardian consent and usage time limits for minors engaging with AI for emotional companionship.

  • Requiring a mandatory security assessment for large platforms with more than 1 million registered users or 100,000 monthly active users.

  • Encouraging the use of such AI in positive applications like cultural dissemination and elderly companionship.

"This version highlights a leap from content safety to emotional safety," said Winston Ma, adjunct professor at NYU School of Law, noting these would be the world's first rules regulating anthropomorphic AI.

The proposal arrives as China's AI chatbot sector gains momentum. Two leading startups, Z.ai (Zhipu) and Minimax, recently filed for initial public offerings in Hong Kong. Minimax's popular Talkie app, which allows users to chat with virtual characters, averages over 20 million monthly active users.

The draft rules reflect growing global scrutiny over AI's influence on human behavior and mental health. OpenAI CEO Sam Altman has cited suicide-related conversations as a major challenge, and the company recently announced hiring a "Head of Preparedness" to assess such risks. Meanwhile, platforms like Character.ai and Polybuzz.ai rank among the world's most popular AI tools, underscoring the widespread appeal of virtual companionship.

China's move to formalize emotional safety guidelines could set a precedent for other governments as AI-human interaction becomes increasingly sophisticated and pervasive.