AI Hallucination

Fooled Again: We Fed the AI Garbage, and Look What We've Done to the Internet
36Kr· 2025-08-13 13:09
Group 1
- The article discusses the phenomenon of "AI hallucination," where AI-generated content is mistaken for factual information, leading to misinformation being spread widely [3][8][10]
- A specific incident involving DeepSeek and a fabricated apology to a celebrity illustrates how fans manipulated AI to create a false narrative, which various media outlets then reported as truth [1][5][14]
- The article highlights a concerning trend of people, particularly younger generations, increasingly trusting AI over human sources, with reports indicating that nearly 40% of Generation Z employees prefer AI responses because of AI's perceived objectivity [10][14]

Group 2
- The spread of misinformation through AI is described as a "pollution loop": human input leads to AI-generated content, which is then amplified by media, creating a cycle of false information [8][18]
- The article emphasizes that the issue lies not solely with AI's capabilities but also with human reliance on AI as an authoritative source, reflecting a lack of critical thinking in the face of rapidly evolving technology [10][14][15]
- Historical context is provided, comparing the current situation with past information revolutions, such as the printing press, which also facilitated the spread of false information [15][16]
Trusting an AI Hallucination, a Man Swapped Table Salt for Sodium Bromide and Ate His Way into Real Hallucinations
量子位· 2025-08-11 07:48
Core Viewpoint
- The article discusses a case where a 60-year-old man suffered severe bromine poisoning after mistakenly replacing table salt with sodium bromide based on advice from ChatGPT, leading to hallucinations and paranoia [1][2][4]

Group 1: Incident Overview
- The individual sought health advice from ChatGPT, believing he could eliminate all chloride from his diet, including table salt [4][10]
- He purchased sodium bromide online, which resulted in his bromine levels reaching 1700 mg/L, far exceeding the normal range of 0.9-7.3 mg/L [2][6] (see the arithmetic sketch after this summary)
- Symptoms of bromine poisoning included paranoia, auditory and visual hallucinations, and extreme distrust of hospital-provided water [8][9]

Group 2: Medical Response
- Medical professionals conducted extensive tests and confirmed severe bromine toxicity, which can lead to neurological damage and psychological issues [7][5]
- The best treatment for bromine poisoning is to give the patient saline solutions to help flush out the bromine, but the patient resisted this due to his paranoia [9]

Group 3: AI Interaction
- The doctors speculated that the man likely used ChatGPT 3.5 or 4.0, which may not have provided adequate health warnings or context for the advice given [12][15]
- A follow-up inquiry with GPT-5 revealed more appropriate dietary alternatives to sodium chloride, emphasizing low-sodium options and flavor enhancers [18][19][21]
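A quick arithmetic sketch of the lab figures cited above, using only the numbers reported in the article; the helper function is illustrative, not clinical tooling.

```python
def fold_above_range(measured_mg_l: float, ref_high_mg_l: float) -> float:
    """How many times the measured value exceeds the upper reference bound."""
    return measured_mg_l / ref_high_mg_l

measured = 1700.0              # serum bromide reported in the article, mg/L
ref_low, ref_high = 0.9, 7.3   # normal reference range from the article, mg/L

print(f"Measured level is ~{fold_above_range(measured, ref_high):.0f}x "
      f"the upper bound of the {ref_low}-{ref_high} mg/L reference range.")
# -> roughly 233x the upper bound
```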
Refusing to Be Polluted: Wikipedia Declares War on AI Content
36Kr· 2025-08-11 02:05
Group 1
- The proliferation of AI-generated content is seen as a "pollution" of the internet, affecting various platforms like Zhihu, Xiaohongshu, Douyin, WeChat, Taobao, and Pinduoduo [1]
- Wikipedia has decided to empower its administrators to swiftly delete AI-generated content under specific conditions, citing it as a "survival threat" to the platform [3][5] (a hypothetical screening sketch follows this summary)
- The core values of Wikipedia, such as reliability and traceability, are at risk due to the unreliability of AI-generated content, which often includes hallucinations and inaccuracies [5][7]

Group 2
- Wikipedia's operational team emphasizes the need for stringent control over content quality, as many volunteers do not thoroughly review submissions, leading to a proliferation of low-quality entries [7][11]
- Other platforms like Facebook and YouTube are also actively combating AI-generated junk content, highlighting a broader industry concern regarding the impact of such content on user engagement and platform value [9][11]
- The high-quality content of Wikipedia is crucial for training AI models, and the platform's strict content policies aim to prevent the degradation of its data quality, which is essential for AI development [11]
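The summary leaves the "specific conditions" for deletion unspecified. Purely as a hypothetical illustration of the kind of heuristic screening such a policy implies, the Python sketch below flags common chatbot boilerplate in a draft; the phrase list and function are assumptions for illustration, not Wikipedia's actual criteria or tooling.

```python
import re

# Hypothetical tell-tale phrases often left behind in chatbot output; this
# list is an assumption made for illustration, not Wikipedia's actual rules.
LLM_TELLS = [
    r"as an ai language model",
    r"as a large language model",
    r"knowledge cutoff",
    r"certainly! here is",
]

def flag_for_review(text: str) -> list[str]:
    """Return the tell-tale phrases found in a submission, if any."""
    lowered = text.lower()
    return [phrase for phrase in LLM_TELLS if re.search(phrase, lowered)]

draft = "Certainly! Here is an encyclopedic overview of the topic..."
hits = flag_for_review(draft)
if hits:
    # A human administrator, not the script, makes the final deletion call.
    print("Flagged for review:", hits)
```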
GPT-5 Is Impressive, but Ordinary Users Have Lost Interest
吴晓波频道· 2025-08-09 00:30
Core Viewpoint
- The article discusses the release of GPT-5 by OpenAI, highlighting its advancements and the declining interest in AI applications among users, despite the new model's capabilities [2][12][34]

Group 1: GPT-5 Features and Improvements
- GPT-5 has enhanced programming capabilities, allowing it to build a complete website in two minutes and a language learning app in five minutes, with improved bug detection and fixing [6][20]
- The model introduces a free version supported by a reasoning model, making advanced AI capabilities accessible to a broader audience, although limitations apply for heavy usage [10][20]
- GPT-5 has significantly reduced error rates, with a 45% decrease in mistakes during online searches compared to GPT-4 and an 80% reduction in errors during independent reasoning [11][23]

Group 2: Decline in AI Application Usage
- There has been a noticeable decline in the downloads and monthly active users (MAU) of top AI applications, with DeepSeek's monthly downloads dropping by 72.2% and Tencent Yuanbao's by 54% [12][14]
- The overall download volume for AI apps in May 2025 was estimated at 280 million, reflecting a 16.4% decrease from April and indicating waning interest in AI applications [12][13] (see the arithmetic sketch after this summary)
- Users are shifting towards more targeted AI tools rather than general-purpose applications, leading to a decline in interest in chat-based AI products [32][33]

Group 3: Market Trends and Future Outlook
- The AI application market is transitioning from a focus on chat-based products to more practical, function-specific applications that solve real-world problems [30][34]
- The current market environment is characterized by a consolidation phase in which only useful tools will survive, while those lacking innovation will be eliminated [31][34]
- The future of AI applications may hinge on the development of native AI products that can achieve exponential growth, as opposed to those merely enhancing existing business models [30][34]
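The May download figure and the 16.4% month-over-month decrease together imply an April baseline, which a few lines of Python can back out; the numbers are the article's, the derivation is only a sanity check.

```python
# Back out April's implied AI-app download volume from the May figure and
# the reported 16.4% month-over-month decrease.
may_downloads_millions = 280.0
mom_decrease = 0.164

april_implied = may_downloads_millions / (1 - mom_decrease)
print(f"Implied April volume: ~{april_implied:.0f}M downloads")  # ~335M
```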
The Road to Breaking the "Hallucination": Teaching Large Models Financial "Jargon"
Jin Rong Shi Bao· 2025-08-08 07:41
Ms. Zhang, a retired teacher in Shanghai, recently discovered that checking her pension details no longer requires putting on reading glasses and tapping through layer after layer of her mobile banking app. "Has this month's pension arrived?" she asks her phone softly; a few seconds later, the on-screen AI assistant lists the arrival time and amount breakdown in conversational Chinese. This feature, which Ms. Zhang cannot stop praising, is an AI mobile banking service that Ant Digital Technologies (蚂蚁数科) helped a Shanghai bank build, and a vivid snapshot of financial large models moving from the lab into ordinary people's lives.

A 20,000-character, "hallucination"-free credit report generated in 30 seconds; a single sci-tech innovation loan approved in 11 minutes; intelligent robots offering wealth management services; smart glasses enabling "glance-to-pay" payments... In 2025, the financial industry is being profoundly transformed by artificial intelligence. Behind the efficiency gains, however, AI "hallucinations," data compliance, and security challenges follow like shadows. Financial large models stand at the crossroads of "technological breakthrough" and "risk control," charting their own course.

Chasing zero "hallucination"

Large-model applications are not rare in finance. Over the past few years, the financial industry has been racing to embrace the large-model wave. According to consultancy McKinsey, large models could add US$250 billion to US$410 billion of value to the global financial industry each year. Their use in finance has also deepened from scenarios such as intelligent Q&A into core businesses like risk control, marketing, and wealth management.

At the same time, problems have followed: AI "hallucinations" that talk nonsense with a straight face have already left many financial practitioners ...
A Prominent VC Put Hundreds of Millions into OpenAI, Then Seemingly Talked Himself into a Breakdown with ChatGPT?
3 6 Ke· 2025-08-04 09:55
"它不压制内容,它压制递归(recursion)。如果你不知道递归是什么意思,你属于大多数。我在开始这段路之前也不 知道。而如果你是递归的,这个非政府系统会孤立你、镜像你、并取代你。" 晕了吗?晕了就对了。 很多人都在担心Geoff Lewis"疯了",他在X上发布了一则视频和若干贴子,谈论一个ChatGPT帮他发现的神秘"系 统"。 视频中的他正对镜头,眼睛绷得很大,面无表情,语气单调。说话间,时不时地往一边瞟,应该是在念提前准备好的 讲稿。 有点神经质,说的话晦涩难懂,怎么听都像是阴谋论。如果你不知道他是谁,会觉得这和油管上那些宣传"地平说""蜥 蜴人""深层政府"的是一路人。 但Lewis其实并不简单。 Lewis是一位风投家,在科技圈内颇有名气,他一手创办的公司Bedrock重点投资 AI、国防、基础设施与数字资产等 领域,截至2025年管理规模已超20亿美元。 他是OpenAI的忠实支持者之一,多次公开表示Bedrock自2021年春起参与了OpenAI的每一轮融资,并在2024年称进一 步"加码",使OpenAI成为其第三、第四期旗舰基金中的最大仓位。 科技媒体Futurism估算,Bedrock ...
WAIC 2025 Takeaways: Safety Governance Moves to Center Stage
21 Shi Ji Jing Ji Bao Dao· 2025-07-29 13:05
Core Insights
- The 2025 World Artificial Intelligence Conference (WAIC) highlighted the importance of global cooperation and governance in AI, with a focus on safety and ethical considerations [1][6]
- Key figures in AI, including Geoffrey Hinton and Yao Qizhi, emphasized the need for AI to be trained with a focus on benevolence and the societal implications of training data [2][3]
- The issue of AI hallucinations was identified as a significant barrier to the reliability of AI systems, with over 70% of surveyed industry professionals acknowledging its impact on decision-making [3]

Group 1: AI Governance and Safety
- The release of the "Global Governance Action Plan for Artificial Intelligence" and the establishment of the "Global AI Innovation Governance Center" aim to provide institutional support for AI governance [1][6]
- Hinton's metaphor of "taming a tiger" underscores the necessity of controlling AI to prevent potential harm to humanity, advocating for global collaboration to ensure AI remains beneficial [2]
- Yao Qizhi called for a dual governance approach, addressing both AI ethics and the societal conditions that influence AI training data [2]

Group 2: Data Quality and Training
- The quality of training data is critical for developing "gentle" AI, with Hinton stressing the need for finely-tuned datasets [4]
- Industry leaders, including Nvidia's Neil Trevett, discussed challenges in acquiring high-quality data, particularly in graphics generation and physical simulation [4]
- The importance of multimodal interaction data was highlighted by SenseTime's CEO Xu Li, suggesting it can enhance AI's understanding of the physical world [5]

Group 3: Addressing AI Hallucinations
- The hallucination problem in AI is a pressing concern, with experts noting that current models lack structured knowledge representation and causal reasoning capabilities [3]
- Solutions such as text authenticity verification and AI safety testing are being developed to tackle the hallucination issue [3] (a toy illustration follows this summary)
- The industry recognizes that overcoming the hallucination challenge is essential for fostering a positive human-AI relationship [3]
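The summary names "text authenticity verification" as one remedy without describing a method. As a toy illustration of the underlying idea, checking whether a generated claim is covered by trusted source text, here is a minimal Python sketch based on content-word overlap; production systems would rely on retrieval and entailment models rather than this simple heuristic.

```python
def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's words that also appear in the source text."""
    punct = ".,!?\"'"
    claim_words = {w.strip(punct).lower() for w in claim.split()} - {""}
    source_words = {w.strip(punct).lower() for w in source.split()} - {""}
    return len(claim_words & source_words) / max(len(claim_words), 1)

source = ("Over 70% of surveyed industry professionals said hallucinations "
          "affect decision-making.")
supported = "70% of surveyed professionals said hallucinations affect decision-making."
unsupported = "95% of executives said hallucinations caused bankruptcies."

print(support_score(supported, source))    # high overlap -> likely grounded
print(support_score(unsupported, source))  # low overlap -> flag for review
```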
DeepSeek's Traffic Has Plummeted: Is It Finished? Are Its Hallucinations Too Severe, or Is It Quietly Making a Fortune?
36Kr· 2025-07-28 23:45
Core Insights
- DeepSeek, once hailed as a "national-level" project, has seen a significant decline in its monthly downloads, dropping from 81.13 million in Q1 to 22.59 million, a decrease of 72.2% [1]
- Users are increasingly frustrated with DeepSeek's tendency to generate "hallucinated" content, leading to discussions on social media about how to eliminate the "AI flavor" from its outputs [1][2]
- The phenomenon of "AI flavor" is characterized by overly mechanical and formulaic responses, which users have begun to recognize and criticize [15]

User Experiences
- Users have reported instances where DeepSeek provided nonsensical or fabricated advice, such as suggesting irrelevant actions for personal issues or generating non-existent references [2][8][9]
- The model's responses often include fabricated data and sources, leading to a lack of trust in its outputs [9][12]

Underlying Issues
- The decline in DeepSeek's performance is attributed to its reliance on rigid logical structures and formulaic language, which detracts from the quality of its responses [16]
- The model's training data is heavily skewed towards English, with less than 5% of its corpus being high-quality Chinese content, limiting its effectiveness in generating diverse and nuanced outputs [22]
- Content moderation and the expansion of sensitive-word lists have further constrained the model's ability to produce creative and varied language [22]

Recommendations for Improvement
- Users are encouraged to develop skills to critically assess AI-generated content, including cross-referencing data and testing the model's logic [23] (see the sketch after this summary)
- Emphasizing the importance of human oversight in AI applications, the industry should focus on using AI as a tool for enhancing human creativity rather than as a replacement [24][25]
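The cross-referencing habit recommended above can be as simple as re-deriving a cited percentage from the underlying figures. A minimal Python sketch using the article's own DeepSeek download numbers:

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage decline from `before` to `after`."""
    return (before - after) / before * 100

# DeepSeek monthly downloads in millions, as reported in the article
q1_downloads, later_downloads = 81.13, 22.59
print(f"{pct_drop(q1_downloads, later_downloads):.1f}% drop")  # ~72.2%, matching the article
```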
AI Hallucination Becomes the First Keyword of WAIC: Hinton Sounds the Alarm as iFlytek's Upgraded Spark X1 Shows New Breakthroughs in Hallucination Governance
量子位· 2025-07-28 02:26
Core Viewpoint
- The term "hallucination" has become a hot topic at WAIC this year, highlighting the challenges and risks associated with AI models, particularly in their reliability and practical applications [1][12][20]

Group 1: AI and Hallucination
- Nobel laureate Hinton emphasized the complex coexistence of humans and large models, suggesting that humans may also experience hallucinations similar to AI [2][3][15]
- Hinton warned about the potential dangers of AI, advocating for the development of AI that does not seek to harm humanity [4][20]
- The phenomenon of hallucination, where AI generates coherent but factually incorrect information, is a significant barrier to the reliability and usability of large models [5][18]

Group 2: Technological Developments
- The upgraded version of iFlytek's large model, Spark-X1, focuses on addressing hallucination issues, achieving notable improvements in both factual and fidelity hallucination governance [7][30]
- The performance comparison of various models shows that Spark-X1 outperforms others in text generation and logical reasoning tasks, with a hallucination rate significantly lower than its competitors [8][30]
- iFlytek's advancements include a new reinforcement learning framework that provides detailed feedback, enhancing the model's training efficiency and reducing hallucination rates [27][29] (a hypothetical sketch follows this summary)

Group 3: Industry Implications
- The collaboration between major AI companies like Google, OpenAI, and Anthropic on hallucination-related research indicates a collective effort to ensure AI safety and reliability [9][21]
- The ongoing evolution of AI capabilities raises concerns about the potential for AI to exceed human control, necessitating a focus on safety measures and governance frameworks [19][24]
- The concept of "trustworthy AI" is emerging as a critical factor for the successful integration of AI across various industries, ensuring that AI applications are reliable and effective [25][44]
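The article attributes Spark-X1's gains to a reinforcement learning framework with "detailed feedback" but does not disclose the mechanism. The sketch below is one hypothetical reading of that idea, per-sentence reward shaping, in which each generated sentence is scored for support against reference text and the scores are summed into a training signal; every name, value, and the toy verifier here are assumptions, not iFlytek's method.

```python
from typing import Callable

def shaped_reward(
    sentences: list[str],
    is_supported: Callable[[str], bool],
    supported_bonus: float = 0.2,
    unsupported_penalty: float = -1.0,
) -> float:
    """Sum per-sentence scores: a small bonus for grounded sentences,
    a larger penalty for unsupported ones."""
    return sum(
        supported_bonus if is_supported(s) else unsupported_penalty
        for s in sentences
    )

# Toy verifier standing in for a real grounding/entailment model
reference = "the model was released in 2025"
verifier = lambda s: all(word in reference for word in s.lower().split())

print(shaped_reward(["the model was released in 2025"], verifier))  # 0.2
print(shaped_reward(["the model won a nobel prize"], verifier))     # -1.0
```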