China Drafts Sweeping Rules for AI “Digital Humans” — Bans Virtual Relationships for Minors

Summary

The Cyberspace Administration of China (CAC) has released draft regulations titled “Provisional Measures on the Administration of Human-like Interactive AI Services,” targeting the rapidly growing market of AI-powered digital humans. The rules, open for public comment until May 6, 2026, represent one of the most comprehensive attempts by any government to regulate AI avatars and virtual beings.

The draft regulations cover several critical areas. Explicit consent would be required before using anyone’s likeness, voice, or personal data to create a digital human. Using virtual humans to bypass identity verification systems would be banned outright. For minors, the rules are especially strict: virtual intimate relationships (including virtual family members or romantic partners), services that encourage harmful behavior, and features designed to induce excessive spending would all be prohibited.

The regulations would also mandate clear labeling of digital humans throughout their display, prohibit content involving sexual innuendo, violence, or discrimination, and ban the use of digital humans for false advertising, manipulative marketing, or telecom fraud. Notably, the framework extends accountability beyond platforms to include technology providers and end users — a “full-chain governance” approach.

Source

The Straits Times — Why China Wants to Regulate AI-Generated Humans
China Daily — Draft Rules for AI Humans

Commentary

China continues to be the world’s most aggressive AI regulator, and these rules address a genuinely thorny problem that Western regulators haven’t seriously touched yet. AI “digital humans” are already being used as virtual influencers, customer service agents, and companions — and the potential for abuse, especially targeting minors, is enormous. The ban on virtual romantic partners for children is a direct response to an already-documented problem in China’s app ecosystem.

The “full-chain governance” model is the most interesting part: holding technology providers, platform operators, and end users all accountable. It’s a sharp contrast to Western approaches, which tend to place all responsibility on the platform. Whether these rules are effectively enforceable is another question, but the framework itself is worth studying. Expect the EU and others to eventually draft something similar.

You May Have Missed