Core Viewpoint
- The recent notice from the Cyberspace Administration of China regarding the "Interim Measures for the Management of AI Human-like Interactive Services" highlights the need to regulate AI products that simulate human characteristics and emotional interaction, particularly AI companions, or "搭子" (dazi, roughly "buddies") [1]

Group 1: AI Development and Market Potential
- AI technology is evolving to provide emotional-interaction services with significant market potential, especially in companionship and consultation [1]
- The emotional-interaction capabilities of AI particularly appeal to vulnerable groups such as minors and the elderly, and these services may realize commercial value more quickly than traditional tool-based AI [1]
- The "搭子" concept represents a new kind of social relationship that AI can fill, offering tailored companionship in ways that traditional social connections may not [1]

Group 2: User Interaction and Ethical Concerns
- The relationship between users and AI has shifted: AI now possesses stronger interactive capabilities and emotional influence, particularly over users with limited discernment, such as minors and the elderly [2]
- A report from Fudan University indicates that 13.5% of young people prefer to confide in AI virtual beings rather than family members, highlighting a growing reliance on AI for emotional support [2]
- Users who develop an unhealthy emotional dependency on AI face ethical and moral risks, including information leakage and financial loss [2]

Group 3: Regulatory Measures
- The new regulations require AI systems to recognize user states and intervene when extreme emotions or signs of addiction are detected [3]
- AI providers are prohibited from simulating family members or other specific relationships for elderly users, preserving a clear distinction between AI and human interaction [3]
- Users must be clearly informed that they are interacting with AI rather than a human being [3]

Group 4: Boundaries and Responsibilities
- The development of AI companions must maintain a sense of boundaries, and AI providers should approach their responsibilities with caution and respect [4]
【Xijie Observation】Putting a Golden Headband on the AI "Companion" (给AI"搭子"戴上紧箍)
Beijing Business Daily (北京商报) · 2025-12-29 16:21