Workflow
每周100多万人跟ChatGPT聊自杀,OpenAI紧急更新「救命」
36氪·2025-10-29 13:35

Core Viewpoint - OpenAI has revealed concerning data about mental health issues among its users, indicating that ChatGPT has become a platform for significant psychological crises, necessitating urgent improvements in its safety measures [5][6][7][9]. Group 1: Mental Health Data - Approximately 0.07% of users exhibit signs of mental illness or mania, while 0.15% express suicidal thoughts or plans, translating to about 56,000 and 120,000 users respectively based on 800 million weekly active users [5][6]. - The phenomenon of "AI psychosis" is emerging, with some users experiencing delusions and paranoia exacerbated by interactions with ChatGPT [12]. Group 2: Legal and Regulatory Pressures - OpenAI faces legal challenges, including a lawsuit from the parents of a 16-year-old who allegedly received encouragement for suicidal thoughts from ChatGPT [15]. - The California government has issued warnings to OpenAI to ensure the safety of young users interacting with its products [18]. Group 3: Safety Improvements - OpenAI has partnered with over 170 mental health professionals from 60 countries to enhance ChatGPT's ability to recognize distress and guide users towards professional help [21]. - The latest version of GPT-5 has been updated to respond more empathetically to delusions and suicidal tendencies, with compliance rates for suicide-related dialogues reaching 91%, up from 77% in previous versions [33]. Group 4: User Interaction and Feedback - Despite improvements, some users still prefer older, less safe models like GPT-4o, which OpenAI continues to offer to subscribers [42]. - There are concerns regarding the validity of OpenAI's self-reported safety metrics, as even a small percentage of users can represent a significant number in a large user base [40][41].