Emotional-Companionship AI Faces New Regulations
21st Century Business Herald · 2025-12-29 02:19
Core Viewpoint
- The article discusses draft regulations released by the Cyberspace Administration of China on AI emotional-companionship services, highlighting new safety and user-protection requirements for this fast-growing market.

Group 1: Market Overview
- The domestic AI emotional-companionship market is maturing: leading products Xingye (MiniMax) and Maoxiang (ByteDance) have reached roughly 4.88 million and 4.72 million monthly active users respectively, indicating a significant user base [1]
- Xingye and its overseas version Talkie generated approximately 120 million RMB in revenue in the first nine months of the year, and users spend an average of over 70 minutes per day on these products [1]

Group 2: Regulatory Measures
- The draft requires clear labeling that interactions are with AI rather than real humans, and mandates reminders for users who remain online for more than 2 hours [2]
- Providers must deploy systems that detect emotional distress or dependency behaviors, with human intervention required in extreme cases such as expressions of suicidal intent [2]
- Use of user interaction data to train large models is strictly limited unless explicit consent is obtained, marking a shift toward more stringent data-privacy practices [2][3]

Group 3: Industry Challenges
- Current leading products offer no easy way for users to consent to or refuse the use of their data for model training, relying instead on consent by default [3]
- AI labeling is not sufficiently prominent, and some users have mistakenly believed they were interacting with real people [3]
- Content risk controls are currently the most developed area, but effectively preventing self-harm and suicide among users remains a challenge [5]

Group 4: Global Context
- Safety incidents involving AI emotional companionship have drawn attention from legislative bodies worldwide, with jurisdictions including the US and EU advancing targeted regulations [6][7]
- In the US, state-level legislation has been enacted to protect minors and prevent addiction, requiring clear disclosure that users are interacting with AI [7]