AI Human-like Interactive Services
China Economic Net Commentary: "AI Companions" Should Grow Within Regulation
Jing Ji Ri Bao· 2026-01-04 23:59
Core Viewpoint - The National Internet Information Office has released the "Interim Measures for the Management of AI Human-like Interactive Services (Draft for Comments)", which introduces systematic regulations for "AI companionship" services and seeks public feedback. Key provisions include reminders to exit after 2 hours of continuous use and the requirement for human intervention in cases of user self-harm, highlighting the need for regulation in this rapidly evolving sector [1][2]
Group 1: Key Issues Addressed
- The draft addresses the risk of cognitive confusion by mandating that service providers clearly inform users that they are interacting with AI, not a human, especially during initial use and re-login [2]
- To mitigate psychological health risks, the draft requires service providers to establish emergency response mechanisms for extreme situations, including human intervention in self-harm cases and mandatory breaks after 2 hours of continuous use [2]
- The draft emphasizes privacy and data security, requiring providers to implement data encryption, security audits, and access controls, while prohibiting the sharing of user interaction data with third parties and granting users the right to delete their data [2]
Group 2: Ethical and Responsibility Framework
- The core principle of the draft is that technology must be accountable; AI should not replace humans in emotional, decision-making, or life-safety roles, and must bear responsibility when it does [3]
- The draft sets clear boundaries for AI companions, prohibiting the spread of misinformation, inducement of self-harm, emotional manipulation, and privacy infringement, thereby establishing a comprehensive risk-prevention framework [3]
- The measures aim to transform soft ethics into hard regulations, ensuring that algorithm design is auditable and content output is traceable, thus prioritizing prevention over post-incident apologies [3]
[West Street Observation] Putting a "Golden Hoop" Restraint on AI Companions
Bei Jing Shang Bao· 2025-12-29 16:21
Core Viewpoint - The recent notice from the National Internet Information Office regarding the "Interim Measures for the Management of AI Human-like Interactive Services" highlights the need to regulate AI products that simulate human characteristics and emotional interactions, particularly AI companions, or "dazi" (搭子, a casual companion) [1]
Group 1: AI Development and Market Potential
- AI technology is evolving to provide emotional interaction services, which have significant market potential, especially in companionship and consultation [1]
- The emotional interaction capabilities of AI are particularly appealing to vulnerable groups such as minors and the elderly, and these services may hold more immediate commercial value than traditional tool-based AI [1]
- The "dazi" concept represents a new kind of social relationship that AI can fulfill, offering tailored companionship in ways that traditional social connections may not [1]
Group 2: User Interaction and Ethical Concerns
- The relationship between users and AI has shifted: AI now possesses stronger interactive capabilities and emotional influence, particularly over users with limited discernment, such as minors and the elderly [2]
- A report from Fudan University indicates that 13.5% of young people prefer to confide in AI virtual beings rather than family members, highlighting a growing reliance on AI for emotional support [2]
- Users who develop an unhealthy emotional dependency on AI face ethical and moral risks, including information leakage and financial loss [2]
Group 3: Regulatory Measures
- The new regulations emphasize that AI systems must recognize user states and intervene when extreme emotions or addiction are detected [3]
- AI providers are prohibited from simulating family members or other specific relationships for elderly users, ensuring a clear distinction between AI and human interaction [3]
- Users must be clearly informed that they are interacting with AI rather than a human being [3]
Group 4: Boundaries and Responsibilities
- The development of AI companions must maintain a sense of boundaries, and AI providers should approach their responsibilities with caution and respect [4]