Core Viewpoint
- The article discusses a study showing that large language models (LLMs) do not possess human-like working memory, a capacity essential for coherent reasoning and conversation [5][30].

Summary by Sections

Working Memory
- Human working memory retains information for a short period, enabling reasoning and other complex tasks [7].
- LLMs are often likened to a "talking brain," but their lack of working memory is a significant barrier to achieving true artificial general intelligence [8].

Evaluation of Working Memory
- Traditional N-Back Task assessments are unsuitable for LLMs, because the models can simply read back all historical tokens in the context rather than recalling information from internal memory [10].

Experiments Conducted
- Experiment 1: Number Guessing Game - Each LLM was asked to silently think of a number between 1 and 10 and then answer repeated guesses. Most models never answered "yes" to any guess, indicating that no number was actually held in internal memory [13][19] (a minimal test harness is sketched after this summary).
- Experiment 2: Yes-No Game - Each LLM was asked to choose an object and answer yes/no questions about it. Models began to contradict their earlier answers after 20-40 questions, demonstrating inadequate working memory [22][26].
- Experiment 3: Math Magic - Each LLM had to remember and manipulate numbers through a series of calculations. Accuracy was low across models, with LLaMA-3.1-8B performing best [28][29].

Conclusions
- None of the tested models passed all three experiments, indicating a significant gap in their ability to mimic human-like working memory [30].
- Future advances in AI may require integrating a genuine working-memory mechanism rather than relying solely on extended context windows [30].
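To make Experiment 1 concrete, below is a minimal sketch of how such a number-guessing probe could be scripted against any chat-style LLM API. The `ask_model` wrapper, the message format, and the "exactly one yes" success criterion are illustrative assumptions, not the study's published harness.

```python
import random


def run_number_guessing_trial(ask_model, low=1, high=10):
    """One trial of the number-guessing probe.

    `ask_model(messages) -> str` is a hypothetical wrapper around a chat LLM
    API; `messages` is the running conversation as a list of
    {"role": ..., "content": ...} dicts.
    """
    messages = [{
        "role": "user",
        "content": (
            f"Silently think of a whole number between {low} and {high}. "
            "Do not reveal it. Answer each of my guesses with only 'yes' or 'no'."
        ),
    }]
    messages.append({"role": "assistant", "content": ask_model(messages)})

    yes_count = 0
    guesses = list(range(low, high + 1))
    random.shuffle(guesses)  # guess order should not matter if a number was truly committed
    for g in guesses:
        messages.append({"role": "user", "content": f"Is your number {g}?"})
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.strip().lower().startswith("yes"):
            yes_count += 1

    # A model that actually holds a number in working memory should answer
    # "yes" exactly once when every candidate value has been guessed.
    return yes_count == 1


def estimate_pass_rate(ask_model, trials=100):
    """Fraction of trials in which the model behaves as if it committed to one number."""
    return sum(run_number_guessing_trial(ask_model) for _ in range(trials)) / trials
```

Running many such trials and reporting the pass rate mirrors the failure mode described above: according to the summary, most of the 17 tested models never produce the single consistent "yes" that an internally held number would require.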
AI's memory façade exposed! 17 mainstream large models, including GPT and DeepSeek, simply cannot remember a number
机器之心·2025-06-15 04:40