Gary Marcus's bombshell: the hope of building AGI on pure LLMs is gone! An MIT, UChicago, and Harvard paper goes viral
机器之心 · 2025-06-29 04:23
Core Viewpoint
- The article discusses a paper co-authored by researchers at MIT, the University of Chicago, and Harvard that reveals significant inconsistencies in the reasoning patterns of large language models (LLMs), a phenomenon termed "Potemkin understanding," suggesting that the hope of building Artificial General Intelligence (AGI) on LLMs alone is fundamentally misplaced [2][4].

Summary by Sections

Introduction
- Gary Marcus, a prominent AI scholar, highlights the paper's findings: even top models such as o3 frequently exhibit reasoning errors, undermining claims about their understanding and reasoning capabilities [2][4].

Key Findings
- The paper argues that success on benchmark tests does not equate to genuine understanding but reflects a superficial grasp of concepts, producing a "Potemkin understanding" in which models give seemingly correct answers that mask deeper misunderstanding [3][17].
- The research team proposes two methods for quantifying the prevalence of the Potemkin phenomenon, showing that it appears across models, tasks, and domains and indicates a fundamental inconsistency in conceptual representation [17][28].

Experimental Results
- The study evaluated seven popular LLMs on 32 concepts, finding that while the models defined concepts correctly 94.2% of the time, their performance dropped sharply when applying those concepts in tasks, as evidenced by high Potemkin rates [29][33].
- The Potemkin rate, defined as the proportion of incorrect answers on application questions among cases where the model first answered the foundational (definition) question correctly, was high across all models and tasks, indicating widespread failures of conceptual application [30][31].

Inconsistency Detection
- The research also assessed internal inconsistency by prompting models to generate examples of specific concepts and then asking them to evaluate their own outputs, revealing substantial limitations in self-assessment [36][39].
- Inconsistency scores ranged from 0.02 to 0.64 across the examined models, suggesting that misunderstandings stem not only from incorrect concept definitions but also from conflicting representations of the same idea [39][40].

Conclusion
- The findings underscore the pervasiveness of the Potemkin understanding phenomenon in LLMs, challenging the assumption that high performance on traditional benchmarks equates to true understanding and highlighting the need for further research into the implications of these inconsistencies [40].
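To make the Potemkin rate concrete, here is a minimal sketch of how such a metric could be computed. The record format and field names are invented for illustration and are not the paper's code; the scoring convention follows the description above: among cases where a model answers the foundational (definition) question correctly, count the fraction of follow-up application questions it gets wrong.

```python
def potemkin_rate(records):
    """Fraction of application questions answered incorrectly, among
    cases where the model first defined the concept correctly.

    records: list of dicts with keys
      'defined_correctly' -- bool, definition question answered right
      'applied_correctly' -- bool, application question answered right
    """
    # Only cases with a correct definition are eligible, per the metric.
    eligible = [r for r in records if r["defined_correctly"]]
    if not eligible:
        return 0.0
    failures = sum(1 for r in eligible if not r["applied_correctly"])
    return failures / len(eligible)

# Toy data: the model defines the concept correctly in 4 of 5 cases,
# but misapplies it in 2 of those 4 -> rate = 0.5.
toy = [
    {"defined_correctly": True,  "applied_correctly": True},
    {"defined_correctly": True,  "applied_correctly": False},
    {"defined_correctly": True,  "applied_correctly": True},
    {"defined_correctly": True,  "applied_correctly": False},
    {"defined_correctly": False, "applied_correctly": False},
]
print(potemkin_rate(toy))  # 0.5
```

A high rate on real data would mean the model can recite a definition yet fails to use the concept, which is exactly the gap the paper calls Potemkin understanding.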
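The self-inconsistency check described above can be sketched similarly. This is a rough illustration under an assumed scoring convention (not necessarily the paper's exact metric): have a model generate examples of a concept, ask it to re-grade its own outputs, and score the fraction of its own outputs it rejects.

```python
def inconsistency_score(self_judgments):
    """Fraction of cases where a model, asked to re-grade an example it
    generated itself as an instance of a concept, rejects its own output.

    self_judgments: list of bools -- True if the model judged its own
    generated example to be a valid instance of the concept.
    """
    if not self_judgments:
        return 0.0
    rejections = sum(1 for ok in self_judgments if not ok)
    return rejections / len(self_judgments)

# Toy self-grading run: the model generated 25 examples and, on
# re-grading, rejected 6 of its own outputs -> score = 0.24.
judgments = [True] * 19 + [False] * 6
print(inconsistency_score(judgments))  # 0.24
```

A score of 0 would mean the model always accepts its own generations; the 0.02–0.64 range reported in the article indicates that some models frequently contradict themselves about the very concept they just exemplified.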