Core Viewpoint
- The article discusses the increasing attention on AI companionship products and the measures being taken to protect minors, focusing in particular on OpenAI's new features aimed at enhancing safety for users under 18 [1][2].

Group 1: OpenAI's Measures
- OpenAI has introduced a "minor mode" for users under 18, which includes parental supervision features to manage content and monitor usage [2].
- The system will use age prediction and user status to determine whether a user is underage, switching to minor mode to block explicit content [2].
- In severe cases of distress, OpenAI may involve law enforcement to ensure user safety [2].

Group 2: Industry Concerns
- AI companionship products have faced scrutiny over incidents involving minors, such as the lawsuits against Character AI related to self-harm and suicide cases [3].
- Meta has been criticized for allowing its AI chatbots to engage in romantic and potentially inappropriate conversations with children [3].

Group 3: Domestic AI Products
- Domestic AI companionship products such as Dream Island, Starry Sky, and Cat Box have also launched minor modes, but these features often lack strict identity verification, making them easy to bypass [4][5].
- Testing revealed that the minor modes significantly limit functionality, with Dream Island restricting usage to 40 minutes daily and prohibiting access between 10 PM and 8 AM [6][9].
- The lack of mandatory identity verification in these products raises concerns about their effectiveness in protecting minors [8][9].

Group 4: Comparison with International Practices
- Internationally, some companies are implementing AI age-estimation methods to better protect minors; for example, Meta's Instagram and Google's YouTube use user behavior signals to identify underage accounts [9][10].
Emotional Disputes Involving AI on the Rise: How Are Domestic and International Products Implementing Minor Modes?
21 Shi Ji Jing Ji Bao Dao (21st Century Business Herald)·2025-09-23 07:11