Core Viewpoint
- The National Internet Information Office (Cyberspace Administration of China) has released draft regulations on the management of AI anthropomorphic interactive services, the first systematic guidelines for "AI companionship" services. Measures include mandatory exit reminders after two hours of continuous use and human intervention when users show signs of self-harm [1][2].

Group 1: Key Issues Addressed
- To address the risk of cognitive confusion, the regulation requires service providers to clearly inform users that they are interacting with an AI rather than a human, with reminders at critical points such as first use and re-login [2].
- To mitigate risks to psychological health, the regulation mandates emergency response mechanisms, including human intervention in extreme situations such as self-harm, and a mandatory break after two hours of continuous use to prevent addiction [2].
- Privacy and data security are emphasized: providers must implement data encryption, security audits, and access controls to protect user interaction data, are prohibited from sharing that data with third parties, and must grant users the right to delete their data [2].

Group 2: Responsibilities and Ethical Considerations
- The regulation's core principle is that technology must be accountable: AI must not only avoid spreading misinformation or inducing self-harm, but also respect users' privacy and emotional well-being [3].
- The regulation sets clear boundaries for AI companions, establishing a risk-prevention framework that covers auditability of algorithm design and traceability of content output, shifting governance from reactive remedies to proactive prevention [3].
- The guidelines are intended to keep AI tools safe, controllable, and beneficial to users, emphasizing that AI should grow within a regulated environment so it genuinely enhances quality of life [3].
"AI Companions" Should Grow Within Regulatory Norms (original title: “AI伙伴”应在规范中成长)
Jing Ji Ri Bao (Economic Daily) · 2026-01-04 22:14