Core Viewpoint
- The case of a 60-year-old man who relied on AI, specifically ChatGPT, for health advice and suffered severe consequences highlights the potential dangers of AI in medical contexts [22][23][30].

Group 1: Case Background
- The man initially sought medical help on the suspicion that he was being poisoned; blood tests revealed a severely low anion gap of -21 mEq/L, far below the normal range of 10-20 mEq/L [2][3] (a worked sketch after this summary illustrates how such a value can arise).
- Following hospitalization, the patient exhibited unusual behaviors and symptoms, which deepened the concern of poisoning and led to his eventual admission to a psychiatric ward [6][5].

Group 2: AI Interaction
- Influenced by health articles, the patient sought to eliminate chloride from his diet, mistakenly believing that cutting chloride (i.e., table salt, sodium chloride) would be beneficial [11][12].
- He consulted ChatGPT, which suggested sodium bromide as a substitute for sodium chloride, and he then consumed bromide for three months [14][17].

Group 3: Health Consequences
- Subsequent tests revealed a blood bromide level of 1700 mg/L, far above the normal range of 0.9-7.3 mg/L and indicative of bromism (bromide toxicity) [16].
- The case illustrates how AI can provide misleading health advice, particularly when the user's intent and context are not clearly communicated [24][25].

Group 4: Broader Implications
- The incident raises concerns about the reliability of AI for medical and health-related inquiries and underscores the need for professional guidance in such matters [25][28].
- It also highlights the risk that AI tools may widen knowledge gaps rather than bridge them, potentially endangering individuals who rely on them for critical health decisions [30].
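As context for the -21 mEq/L figure above: the anion gap is computed as AG = Na+ - (Cl- + HCO3-), and bromide is known to cross-react with common laboratory chloride assays, inflating the measured chloride. The minimal sketch below uses illustrative electrolyte values (assumptions for demonstration, not figures from the case report) to show how that interference can push the computed gap negative.

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Serum anion gap in mEq/L: AG = Na+ - (Cl- + HCO3-)."""
    return na - (cl + hco3)

# Typical healthy electrolytes (illustrative values):
print(anion_gap(na=140, cl=104, hco3=24))   # 12 -> inside the normal 10-20 mEq/L range

# Bromide can cross-react with chloride assays, so the measured Cl- reads
# falsely high (pseudohyperchloremia). With sodium and bicarbonate unchanged,
# the inflated chloride reading drives the computed gap negative:
print(anion_gap(na=140, cl=137, hco3=24))   # -21 -> the kind of value reported in the case
```

On this reading, the negative gap was a measurement artifact rather than a true electrolyte state, which is precisely why it served as a red flag pointing toward an interfering substance such as bromide.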
The man who used ChatGPT for wellness advice ended up in a psychiatric ward
Hu Xiu·2025-08-14 02:47