DeepSeek releases new paper with Liang Wenfeng among the signed authors
新华网财经 · 2026-01-13 03:52
On the evening of January 12, DeepSeek released a new paper, "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models". The paper is joint work between Peking University and DeepSeek, with Liang Wenfeng listed among the co-authors. It proposes conditional memory: by introducing a scalable lookup memory structure, the model achieves significantly better results on knowledge recall, reasoning, code, and math tasks at equal parameter count and equal compute. DeepSeek has also open-sourced the accompanying memory module, Engram. Source: 财联社 (Cailian Press)
Just released: Liang Wenfeng-signed open-source "memory" module brings DeepSeek V4 into sharper focus
机器之心 · 2026-01-13 00:12
Core Insights
- DeepSeek has introduced a new research paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," written in collaboration with Peking University, focusing on enhancing large language models (LLMs) through a novel approach to memory and computation [1][2].

Group 1: Research Background and Problem Statement
- Current large language models rely primarily on Mixture of Experts (MoE) for sparsity, known as "conditional computation," but lack an inherent knowledge-retrieval mechanism, forcing them to simulate retrieval behavior inefficiently [2][8].
- DeepSeek proposes "conditional memory" as a complement to MoE and introduces a new module, Engram, to address this limitation [3][8].

Group 2: Engram Module and Its Implementation
- The Engram module has been released on GitHub, enabling community engagement and further development [4].
- Engram modernizes classic n-gram embeddings to achieve knowledge retrieval in O(1) time, improving the efficiency of memory access [8][10] (a simplified sketch of this lookup follows this summary).
- The module separates static knowledge storage from the dynamic computation path, extending the overall Transformer architecture [12][14].

Group 3: Performance and Efficiency
- DeepSeek has scaled Engram to 27 billion parameters, showing significant gains over a pure-MoE baseline under equal parameter and FLOPs budgets [10][37].
- Engram delivers notable improvements on knowledge-retrieval tasks, such as +3.4 on MMLU and +4.0 on CMMLU, as well as stronger general reasoning [10][37].
- The architecture allows efficient memory access without additional performance overhead and supports prefetching from host memory at runtime [11][18] (see the prefetch sketch below).

Group 4: Sparsity Distribution and Optimal Allocation
- DeepSeek formalized a U-shaped expansion rule characterizing the optimal trade-off between neural computation (MoE) and static memory (Engram) [9][22].
- Allocating roughly 20%-25% of the sparse parameter budget to Engram yields the best performance, confirming the structural complementarity of the two modules [27][29].

Group 5: Experimental Results
- Four models were trained under identical conditions: Dense-4B, MoE-27B, Engram-27B, and Engram-40B [34][35].
- The sparse architectures consistently outperformed the dense model across benchmarks, with Engram-27B achieving significant improvements over MoE-27B on multiple tasks [37].
- Engram-40B further reduced pre-training loss and improved performance on most benchmarks, indicating that memory capacity has not yet saturated [38].

Group 6: Long-Context Training
- Engram's architecture shows structural advantages on long-context tasks, with significant gains in global context retention [40][41].
- Controlled experiments show Engram outperforming MoE on complex retrieval tasks, pointing to an inherent architectural advantage [45].
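The Group 2 bullets describe an O(1) lookup that turns hashed n-grams of the token stream into retrievable memory vectors, kept separate from the MoE computation path. The sketch below is only a rough PyTorch illustration of that general idea under assumed details; the class name NGramLookupMemory, the polynomial hash, the table size, and the sigmoid gate are hypothetical stand-ins, not DeepSeek's released Engram code (which lives in the open-sourced repository).

```python
# Illustrative sketch only: a hashed n-gram lookup memory fused into a
# transformer hidden state. All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class NGramLookupMemory(nn.Module):
    """O(1) retrieval: hash the last n token ids into a bucket of a fixed
    embedding table, then gate the retrieved vector into the hidden state."""

    def __init__(self, d_model: int, table_size: int = 1 << 20, ngram: int = 3):
        super().__init__()
        self.ngram = ngram
        self.table_size = table_size
        self.table = nn.Embedding(table_size, d_model)   # static "memory" rows
        self.gate = nn.Linear(2 * d_model, d_model)      # learned fusion gate

    def bucket_ids(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq). Build a rolling n-gram hash per position
        # with a simple polynomial hash; each index depends only on the last
        # n token ids, so it is known from the token stream alone.
        b, t = token_ids.shape
        h = torch.zeros(b, t, dtype=torch.long, device=token_ids.device)
        for k in range(self.ngram):
            shifted = torch.roll(token_ids, shifts=k, dims=1)
            shifted[:, :k] = 0  # zero out the wrapped-around prefix
            h = (h * 1000003 + shifted) % self.table_size
        return h

    def forward(self, token_ids: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) activations from a transformer block.
        mem = self.table(self.bucket_ids(token_ids))           # O(1) per token
        g = torch.sigmoid(self.gate(torch.cat([hidden, mem], dim=-1)))
        return hidden + g * mem                                # gated injection


if __name__ == "__main__":
    layer = NGramLookupMemory(d_model=64, table_size=4096, ngram=3)
    toks = torch.randint(0, 32000, (2, 16))
    h = torch.randn(2, 16, 64)
    print(layer(toks, h).shape)  # torch.Size([2, 16, 64])
```

The relevant property of this shape of design is that the lookup index is a pure function of the last few token ids, never of activations, which is what makes deterministic O(1) retrieval, and the prefetching mentioned in Group 3, possible.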
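Group 3 also notes prefetching from host memory at runtime. Because indices like the bucket ids above are computable from token ids alone, the needed memory rows can be gathered on the CPU and copied to the GPU ahead of the forward pass. The snippet below sketches that pattern with plain PyTorch pinned memory and a side CUDA stream; it is an assumption-laden illustration, not Engram's actual prefetch API.

```python
# Hypothetical sketch: keep the large lookup table in host memory and copy
# only the rows needed by upcoming tokens to the GPU asynchronously.
import torch

d_model, table_size = 64, 1 << 16
pin = torch.cuda.is_available()
cpu_table = torch.randn(table_size, d_model)        # host-resident memory rows
if pin:
    cpu_table = cpu_table.pin_memory()

def prefetch_rows(bucket_ids: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Bucket ids are a pure function of the token ids, so this gather can be
    # issued one step ahead of the model's forward pass.
    rows = cpu_table[bucket_ids.cpu()]              # gather in host memory
    if pin:
        rows = rows.pin_memory()                    # enable async H2D copy
    return rows.to(device, non_blocking=True)

device = torch.device("cuda" if pin else "cpu")
ids = torch.randint(0, table_size, (2, 16))
if pin:
    side = torch.cuda.Stream()
    with torch.cuda.stream(side):                   # overlap copy with compute
        mem = prefetch_rows(ids, device)
    torch.cuda.current_stream().wait_stream(side)
else:
    mem = prefetch_rows(ids, device)
print(mem.shape)                                    # torch.Size([2, 16, 64])
```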