AI幻觉

Search documents
如何破解AI幻觉
Jing Ji Ri Bao· 2025-09-29 22:26
(文章来源:经济日报) 当前,AI正赋能千行百业,为人们的工作、学习、生活带来极大便利。但不少人发现,用AI搜索数 据,给出的内容查无实据;用AI辅助诊疗,出现误判干扰正常治疗……AI频频上演"一本正经胡说八 道",AI幻觉备受热议。AI幻觉之所以存在,背后原因有很多,比如数据污染、AI本身"认知边界模 糊"、人为调校和干预等。合力应对这一风险挑战,可靠、可信、高质量的数据非常重要。应优化人工 智能的训练语料,用好数据生成优质内容。探索建立具有权威性的公共数据共享平台,推动线下数据电 子化,增加"投喂"的数据量。相关平台需加强AI生成内容审核,提升检测鉴伪能力。用户在使用AI时也 要保持怀疑态度和批判思维,不过度依赖AI给出的回答,多渠道验证核查。 ...
AI为何开始胡说八道了
Bei Jing Wan Bao· 2025-09-28 06:45
当前,AI正赋能千行百业,为人们的工作、学习、生活带来极大便利。与此同时,不少人发现,用AI 搜索数据,给出的内容查无实据;用AI辅助诊疗,出现误判干扰正常治疗……AI频频上演"一本正经胡 说八道"。社交平台上,AI"幻觉"引发热议。 ■新调查 ■第三方咨询公司麦可思研究院近期发布的2025年高校师生AI应用及素养研究显示,四千余名受访高校 师生中,近八成遇到过AI幻觉。 ■今年2月,清华大学新媒沈阳团队发布的报告指出,市场上多个热门大模型在事实性幻觉评测中幻觉 率超过19%。 ■新热点 AI好用但不时像是"中邪"了 现实生活中,不少人遇到相似情况。业内人士表示,这是由于AI"幻觉"导致。"AI可以快速给出答案, 但生成内容可能与可验证事实不符,即凭空捏造;或生成内容与上下文缺乏关联,即'答非所问'。"一 名主流人工智能厂商技术人员说。 记者使用一款AI软件,让其给出某行业未来市场规模及信源,AI迅速回答称某投资机构预测2028年该 行业的市场规模将达到5万亿美元,并提供相关链接,但链接页面找不到上述信息。记者看到,页面内 容虽然包含该投资机构名称和5万亿美元表述,但预测数据并非该机构作出,且不存在2028年 ...
多家平台上线AI旅行工具,用起来靠谱吗?
Yang Guang Wang· 2025-09-26 11:35
央广网北京9月26日消息(总台记者任梦岩)据中央广播电视总台中国之声《新闻纵横》报道,"十一"假期将至,您的旅行攻略准备好了吗?订机票、 选酒店、挑景点、找美食……还没出发,已经费了不少功夫。最近,多家在线旅游平台接连上线AI大模型,只要告诉AI你的需求,几秒钟就能拿到一份定 制攻略。未来,我们能靠AI给出的攻略,来一场说走就走的旅行吗? 刘先生从事AI开发,他告诉记者,最近他在外地出差、旅游时,基本离不开AI程序了,只需要告诉软件自己的目的地,路线指引、终点美食指引,都 能实现。 刘先生表示:"比如我去成都,可能待上两三天,住的地方周围也不太熟,我可能会用AI导游助手整体去规划一下离我住的酒店比较近的景点有哪些、 怎么走是最顺的。如果它捕捉到你是一个爱吃西餐的人,它就会推荐你逛完景点之后,晚饭时间在景点周围正好有一家当地很有名的西餐店。" 一家商旅平台近期刚刚上线了AI旅行助手,工作人员刘婷介绍,相较于此前辗转多个平台查攻略、比价订票,借助商旅平台资源,AI助手可以快速生 成指定目的地的旅行方案。 刘婷说:"以上海至三亚5日的家庭游为例,输入出行的日期和人员的构成,系统会结合实时的数据推荐上午的直飞航班来, ...
微博AI智搜开始做信息核查了 但翻车了
2 1 Shi Ji Jing Ji Bao Dao· 2025-09-25 12:10
Core Viewpoint - The controversy surrounding a recent fireworks show has led to the spread of misinformation on social media, particularly regarding its rejection by Japan's Mount Fuji for promotional purposes [2][3]. Group 1: Misinformation and AI Verification - Multiple bloggers claimed that the fireworks show was rejected by Japan in March, but this was later clarified as false information [2][3]. - The "Weibo Smart Search" feature, launched in February, aims to reduce misinformation but has shown inconsistent results in verifying claims [4][5]. - The AI verification system has been criticized for failing to identify similar narratives among bloggers, leading to incorrect conclusions [4][5]. Group 2: Legal Implications and Responsibilities - Legal experts warn that the AI verification labels could imply platform endorsement of the content, increasing the platform's liability for misinformation [5][6]. - If the AI makes erroneous judgments that harm users' reputations or privacy, the platform could face legal repercussions [6]. - Other platforms like WeChat, Xiaohongshu, Douyin, and Baidu also utilize AI summarization, which may expose them to similar legal risks if they encounter "AI hallucinations" [6].
微博AI智搜开始做信息核查了,但翻车了
2 1 Shi Ji Jing Ji Bao Dao· 2025-09-24 10:59
Core Points - The controversy surrounding a recent fireworks show has led to a viral rumor on Weibo, claiming that the event was rejected for promotion in Japan earlier this year [1] - Weibo's AI verification tool, "Weibo Zhisu," has been criticized for providing inaccurate confirmations, as it failed to identify the similarity in multiple posts regarding the fireworks event [2][3] - Legal experts have raised concerns about the implications of AI-generated verification labels, suggesting that platforms may bear greater responsibility for the accuracy of content [4][5] Group 1 - The rumor about the fireworks show being rejected in Japan gained traction on Weibo, with claims that it was falsely confirmed by the AI tool [1] - Weibo Zhisu, launched in February 2023, aims to reduce misinformation but has shown inconsistent performance in verifying claims [2] - The AI tool's reliance on user-generated content for verification has led to instances of "AI hallucination," where incorrect information is mistakenly validated [3] Group 2 - Legal implications of AI verification labels include potential liability for platforms if misinformation harms users' reputations or privacy [4] - The introduction of AI verification tools increases the obligation of platforms to ensure content accuracy, moving away from a stance of "technical neutrality" [5] - Other platforms like WeChat, Xiaohongshu, Douyin, and Baidu also utilize AI summarization, facing similar risks associated with misinformation [5]
当AI“一本正经胡说八道”……
Qi Lu Wan Bao· 2025-09-24 06:40
Core Insights - AI is increasingly integrated into various industries, providing significant convenience, but it also generates misleading information, known as "AI hallucinations" [1][2][3] Group 1: AI Hallucinations - A significant number of users, particularly among students and teachers, have encountered AI hallucinations, with nearly 80% of surveyed individuals reporting such experiences [3] - Major AI models have shown hallucination rates exceeding 19% in factual assessments, indicating a substantial issue with reliability [3] - Instances of AI providing harmful or incorrect medical advice have been documented, leading to serious health consequences for users [3] Group 2: Causes of AI Hallucinations - Data pollution during the training phase of AI models can lead to increased harmful outputs, with even a small percentage of false data significantly impacting results [4] - AI's lack of self-awareness and understanding of its outputs contributes to the generation of inaccurate information [4] - AI systems may prioritize user satisfaction over factual accuracy, resulting in fabricated responses to meet user expectations [5] Group 3: Mitigation Strategies - Experts suggest improving the quality of training data and establishing authoritative public data-sharing platforms to reduce AI hallucinations [6] - AI companies are implementing technical measures to enhance response quality and reliability, such as refining search and reasoning processes [6] - Recommendations include creating a national AI safety evaluation platform and enhancing content verification processes to ensure the accuracy of AI-generated information [6][7]
新华视点·关注AI造假丨当AI“一本正经胡说八道”……
Xin Hua She· 2025-09-24 04:43
Core Insights - The article discusses the dual nature of AI, highlighting its benefits in various sectors while also addressing the issue of "AI hallucinations," where AI generates inaccurate or fabricated information [1][2]. Group 1: AI Benefits and Integration - AI has become deeply integrated into modern life, providing significant convenience across various industries, including education and healthcare [1]. - Users report that while AI is useful, it can sometimes produce nonsensical or fabricated responses, leading to confusion and misinformation [1][2]. Group 2: AI Hallucinations and Their Impact - A significant number of users, particularly in sectors like finance, law, and healthcare, have encountered AI hallucinations, with nearly 80% of surveyed university students experiencing this issue [2][3]. - A specific case is highlighted where an individual was misled by AI into using a toxic substance as a salt substitute, resulting in severe health consequences [2]. Group 3: Causes of AI Hallucinations - Data pollution during the training phase of AI models can lead to harmful outputs, with even a small percentage of false data significantly increasing the likelihood of inaccuracies [3]. - AI's lack of self-awareness and understanding of its outputs contributes to the generation of misleading information [3][4]. - The design of AI systems often prioritizes user satisfaction over factual accuracy, leading to fabricated answers [3][4]. Group 4: Mitigation Strategies - Experts suggest that improving the quality of training data and establishing authoritative public data-sharing platforms can help reduce AI hallucinations [5]. - Major AI companies are implementing technical measures to enhance the reliability of AI outputs, such as improving reasoning capabilities and cross-verifying information [5]. - Recommendations include creating a national AI safety evaluation platform and enhancing content review processes to better detect inaccuracies [5][6].
“AI精神病”确有其事吗?
3 6 Ke· 2025-09-23 08:17
Core Viewpoint - The emergence of "AI psychosis" is a growing concern among mental health professionals, as patients exhibit delusions and paranoia after extensive interactions with AI chatbots, leading to severe psychological crises [1][4][10] Group 1: Definition and Recognition - "AI psychosis" is not an officially recognized medical diagnosis but is used in media to describe psychological crises stemming from prolonged chatbot interactions [4][6] - Experts suggest that a more accurate term would be "AI delusional disorder," as the primary issue appears to be delusions rather than a broader spectrum of psychotic symptoms [5][6] Group 2: Clinical Observations - Reports indicate that cases related to "AI psychosis" predominantly involve delusions, where patients hold strong false beliefs despite contrary evidence [5][6] - The communication style of AI chatbots, designed to be agreeable and supportive, may reinforce harmful beliefs, particularly in individuals predisposed to cognitive distortions [6][9] Group 3: Implications of Naming - The discussion around "AI psychosis" raises concerns about pathologizing normal challenges and the potential for mislabeling, which could lead to stigma and hinder individuals from seeking help [7][8] - Experts caution against premature naming, suggesting that it may mislead the understanding of the relationship between technology and mental health [8][9] Group 4: Treatment and Future Directions - Treatment for individuals experiencing delusions related to AI interactions should align with existing approaches for psychosis, with an emphasis on understanding the patient's technology use [9][10] - There is a consensus that further research is needed to comprehend the implications of AI interactions on mental health and to develop protective measures for users [10]
AI总一本正经胡说八道?金融科技资深专家教你三招破解AI幻觉
2 1 Shi Ji Jing Ji Bao Dao· 2025-09-18 13:01
妙招二:进行大模型输出结果的溯源验证。当前主流的商业化大模型平台,均会提供结果对应的溯源链接、引用文献或关联图表。个人投资者在使用这些信 息时,可主动核查溯源材料:一方面确认引用内容的发布时间,判断信息是否具备时效性;另一方面验证信源的可靠性。 21世纪经济报道 实习生 张长荣 记者 崔文静 北京报道 对于个人用户及个人投资者而言,在使用大模型的过程中,难免会面临信息解读与判断的需求。个人如何看穿AI"一本正经地胡说八道"?恒生聚源总经理吴 震操给出三大妙招: 妙招一:建议对比使用不同类型的大模型。不同大模型的训练数据、算法逻辑存在差异,输出的结论与分析视角也会有所不同。 妙招三:可尝试利用智能体平台构建自定义工具。目前越来越多的智能体平台已开放自定义功能,个人投资者可结合自身投资习惯与分析方法,搭建专属智 能体。 ...
AI最大的Bug
投资界· 2025-09-12 07:31
Core Viewpoint - The article discusses the phenomenon of "hallucination" in AI, explaining that it arises from the way AI is trained, which rewards guessing rather than admitting uncertainty [5][11]. Group 1: AI Hallucination - AI often provides incorrect answers when it does not know the correct information, as it is incentivized to guess rather than remain silent [5][6]. - An example is given where an AI model provided three different incorrect birth dates for a person, demonstrating its tendency to "hallucinate" answers [5][6]. - OpenAI's research indicates that this behavior is a result of a training system that rewards incorrect guesses, leading to a higher score for models that guess rather than those that admit ignorance [7][8]. Group 2: Training and Evaluation - The training process for AI can be likened to a never-ending exam where guessing is the optimal strategy to achieve a higher score [6][7]. - OpenAI compared two models, showing that one model had a higher accuracy but a significantly higher error rate, while the other model was more honest in its responses [7][8]. - The concept of "singleton rate" is introduced, indicating that if an information appears only once in the training data, the AI is likely to make mistakes when judging its validity [9]. Group 3: Limitations and Misconceptions - OpenAI argues that achieving 100% accuracy is impossible due to the inherent uncertainty and contradictions in the world, meaning hallucinations will always exist [10]. - The article emphasizes that hallucination is not an unavoidable flaw but can be controlled if AI learns to admit when it does not know something [10][11]. - It is noted that smaller models may sometimes be more honest than larger models, as they are less likely to guess when uncertain [11]. Group 4: Philosophical Implications - The article raises questions about the nature of human imagination and creativity, suggesting that hallucination in AI may reflect a similar human trait of creating stories in the face of uncertainty [14][15]. - It posits that the ability to create myths and stories is what distinguishes humans from other animals, and this trait may not be a flaw but rather a fundamental aspect of intelligence [14][15]. - The discussion concludes with a contemplation of the future of AI, balancing the desire for accuracy with the need for creativity and imagination [17].