China Economic Net Commentary: "AI Companions" Should Grow Within Regulatory Norms
Economic Daily (Jingji Ribao) · 2026-01-04 23:59

Core Viewpoint
- The National Internet Information Office (Cyberspace Administration of China) has released the "Interim Measures for the Management of AI Human-like Interactive Services (Draft for Comments)", which introduces systematic regulation of "AI companionship" services and is open for public feedback. Key provisions include exit reminders after 2 hours of continuous use and mandatory human intervention when a user shows signs of self-harm, underscoring the need for regulation in this rapidly evolving sector [1][2].

Group 1: Key Issues Addressed
- To counter the risk of cognitive confusion, the draft requires service providers to clearly inform users that they are interacting with an AI, not a human, especially on first use and on re-login [2].
- To mitigate psychological health risks, the draft requires providers to establish emergency response mechanisms for extreme situations, including human intervention in self-harm cases and mandatory break reminders after 2 hours of continuous use [2].
- The draft stresses privacy and data security: providers must implement data encryption, security audits, and access controls; they are prohibited from sharing user interaction data with third parties; and users are granted the right to delete their data [2].

Group 2: Ethical and Responsibility Framework
- The draft's core principle is that technology must be accountable: AI should not substitute for humans in emotional support, decision-making, or life-safety roles, and where it does play such roles, its providers must bear responsibility [3].
- The draft sets clear boundaries for AI companions, prohibiting the spread of misinformation, the inducement of self-harm, emotional manipulation, and privacy infringement, thereby establishing a comprehensive risk-prevention framework [3].
- The measures aim to turn soft ethics into hard rules, making algorithm design auditable and content output traceable, so that prevention takes priority over post-incident apologies [3].
