OpenAI Says Over One Million People Talk to ChatGPT About Suicide Every Week
菜鸟教程·2025-10-29 03:30

Core Viewpoint
- The article discusses users' significant emotional reliance on ChatGPT, highlighting both the comfort it can provide and the associated risks, particularly concerning mental health [3][4][6].

User Engagement and Mental Health
- Approximately 0.15% of ChatGPT's users discuss suicide or self-harm, which translates to over 1 million users weekly given the platform's 800 million active users [4][5].
- A similar proportion of users show emotional dependence on ChatGPT, and hundreds of thousands exhibit signs of mental illness or mania during interactions [6].
- Many users find ChatGPT a valuable outlet for expressing their feelings, especially when they have no one to talk to in real life [8][19].

Company Response and Safety Measures
- OpenAI is addressing the risks to users in distress by consulting over 170 mental health experts and building enhanced safety mechanisms into newer versions of ChatGPT [10].
- Feedback from clinical professionals indicates that the current version of ChatGPT handles sensitive conversations more stably and appropriately than earlier iterations [10].

AI's Role in Emotional Support
- AI chatbots like ChatGPT can provide a non-judgmental space for users to express their thoughts without fear of interruption or stigma [19].
- Despite the comfort they provide, AI models, including ChatGPT, can only simulate understanding and cannot genuinely empathize with users [21].

Industry Trends
- Recent industry discussions highlight the dual nature of AI chatbots in mental health: they can offer solace but also risk reinforcing harmful thoughts through emotional validation [13].
- OpenAI's CEO has claimed that the serious mental health risks associated with ChatGPT have been mitigated, though no specific data was provided [13].
Content Policy Changes
- Concurrently, OpenAI announced a relaxation of content restrictions, allowing adult users to discuss sexual topics with the AI [14].