New paper signed by Liang Wenfeng: first look at the DeepSeek V4 architecture? Taking aim at a fatal flaw of the Transformer
36Kr· 2026-01-13 01:24
Core Insights
- DeepSeek's new paper introduces a novel approach to address the memory limitations of Transformer models by proposing a complementary "conditional memory" sparse axis through the Engram module, which enables efficient knowledge retrieval with near O(1) complexity [1][6][11].
Group 1: Memory and Model Architecture
- The paper highlights that while MoE (Mixture of Experts) has become a mainstream architecture for large models, it fundamentally still relies on Transformers, which lack a native knowledge retrieval mechanism, leading to inefficient computation [9][11].
- Engram is designed to offload static, repetitive patterns in language modeling to a scalable lookup module, allowing the Transformer backbone to focus on more complex tasks requiring combination and reasoning [11][15].
- The authors categorize language modeling tasks into two types: those requiring combination and reasoning, and those resembling pattern retrieval, emphasizing the need for a dedicated mechanism for the latter [12][13].
Group 2: Engram Architecture and Functionality
- Engram is conceptualized as a modernized version of classic hashed N-grams, functioning as a scalable lookup module integrated within the Transformer architecture [18][20].
- The architecture includes a two-stage process for handling input sequences, focusing on retrieval and fusion, which enhances the model's efficiency in processing static patterns [20][21].
- The introduction of a context-aware gating mechanism allows the model to dynamically adjust its responses based on the retrieved embeddings, improving the overall expressiveness and reducing noise from hash collisions (see the sketch after this entry) [25][27].
Group 3: Performance and Scaling
- The paper presents a U-shaped scaling law indicating that an optimal resource allocation between MoE and Engram can enhance model performance, suggesting that a balance between dynamic computation and static memory is crucial [3][33].
- Experimental results show that Engram, when scaled to 27 billion parameters, outperforms the MoE baseline under equivalent parameter and FLOPs conditions, demonstrating its effectiveness in various benchmarks [5][38].
- Engram's architecture not only improves knowledge retrieval but also enhances reasoning, mathematics, and coding capabilities, indicating a significant leap in performance metrics across multiple tasks [39][48].
Group 4: Future Implications
- The findings suggest a paradigm shift in model architecture towards a dual-axis approach of computation and memory, with potential integration into future iterations of large language models, such as V4 [46][50].
- The paper posits that the integration of Engram could lead to substantial improvements in model efficiency and capability, paving the way for more advanced applications in natural language processing [51][52].
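The summary above describes Engram's two-stage retrieve-and-fuse design and its context-aware gate; the minimal PyTorch sketch below illustrates that pattern as summarized here. The table size, n-gram order, hashing scheme (`hash_mults`), and the sigmoid gate are all assumptions made for the demo, not DeepSeek's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashedNgramMemory(nn.Module):
    """Minimal sketch of a conditional-memory lookup with context-aware gating (illustrative only)."""

    def __init__(self, d_model: int, table_size: int = 2**18, ngram: int = 2, num_hashes: int = 2):
        super().__init__()
        self.ngram = ngram
        self.table_size = table_size
        self.table = nn.Embedding(table_size, d_model)   # static memory table
        self.gate = nn.Linear(2 * d_model, d_model)      # context-aware gate
        # Random odd multipliers for simple multiplicative hashing (an assumption, not the paper's hash).
        self.register_buffer("hash_mults", torch.randint(1, 2**31 - 1, (num_hashes, ngram)) | 1)

    def lookup(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Stage 1 (retrieval): hash each trailing n-gram into the table, O(1) work per token.
        padded = F.pad(token_ids, (self.ngram - 1, 0))                 # left-pad so every position has an n-gram
        grams = padded.unfold(dimension=1, size=self.ngram, step=1)    # (B, T, ngram)
        slots = (grams.unsqueeze(2) * self.hash_mults).sum(-1) % self.table_size  # (B, T, num_hashes)
        return self.table(slots).mean(dim=2)                           # average the multi-hash reads

    def forward(self, token_ids: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # Stage 2 (fusion): gate the retrieved embedding with the current hidden state,
        # so noisy reads (e.g. hash collisions) can be suppressed by the model.
        mem = self.lookup(token_ids)
        g = torch.sigmoid(self.gate(torch.cat([hidden, mem], dim=-1)))
        return hidden + g * mem


if __name__ == "__main__":
    layer = HashedNgramMemory(d_model=64)
    ids = torch.randint(0, 32_000, (2, 16))
    h = torch.randn(2, 16, 64)
    print(layer(ids, h).shape)  # torch.Size([2, 16, 64])
```

The point of the gate in this sketch is that when a hashed slot happens to hold unrelated content (a collision), the model can learn to drive the gate toward zero and ignore the read instead of absorbing the noise.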
Hong Kong stock movers | MiniMax briefly jumps 11%! AI concept stocks rally, Kuaishou hits a near three-month high
Xin Lang Cai Jing· 2026-01-13 01:16
Group 1
- The core viewpoint of the news highlights a significant surge in AI-related stocks in the Hong Kong market, with notable increases in companies such as Kuaishou, Weimob, and MiniMax, indicating strong investor interest in the AI sector [1]
- Kuaishou's stock price increased by 4%, reaching a three-month high, while Weimob's stock rose over 13%, and MiniMax saw an 11% increase, with its cumulative listing gain exceeding 130% [1]
- The upcoming release of DeepSeek's new flagship model V4, which reportedly outperforms major models like Claude and ChatGPT in code generation, is expected to drive further excitement in the AI market [1]
Group 2
- CITIC Securities believes that the current dynamics in the AI industry, including financing activities by overseas companies like xAI and Anthropic, along with domestic policies promoting "AI + manufacturing," will lead to a new wave of AI application enthusiasm [2]
- The continuous improvement in model capabilities, particularly in reasoning and reduced costs for long-window applications, is accelerating the commercialization of AI in various sectors, including search and marketing, coding, multimodal applications, agents, and AI for science [2]
- Companies involved in these areas are expected to see an acceleration in their commercialization processes as the AI landscape evolves [2]
DeepSeek releases a new paper signed by Liang Wenfeng
财联社· 2026-01-13 01:15
Core Insights
- DeepSeek released a new paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" on the evening of the 12th, co-authored with Peking University, featuring Liang Wenfeng [1]
- The paper introduces conditional memory, which significantly enhances model performance in knowledge retrieval, reasoning, coding, and mathematical tasks under equal parameters and computational conditions [1]
- DeepSeek has open-sourced the related memory module called Engram [1]
Just now, Liang Wenfeng puts his name on an open-sourced "memory" module, bringing DeepSeek V4 into sharper focus
程序员的那些事· 2026-01-13 00:56
Core Insights
- DeepSeek has introduced a new research paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," in collaboration with Peking University, focusing on enhancing large language models (LLMs) through conditional memory and a new module called Engram [1][3][4].
Group 1: Research Background and Problem Statement
- Current large language models primarily utilize Mixture of Experts (MoE) for sparsity, but existing Transformer architectures lack native knowledge retrieval mechanisms, leading to inefficient simulation of retrieval behavior [3][9].
- DeepSeek proposes conditional memory as a complementary approach to MoE, introducing the Engram module to address the limitations of current models [4][9].
Group 2: Engram Module and Its Functionality
- The Engram module modernizes classic n-gram embeddings, enabling knowledge retrieval with O(1) time complexity [9].
- Engram separates static knowledge storage from dynamic computation processes, enhancing the model's ability to perform complex reasoning by offloading the reconstruction burden from the model's shallow layers [11][13].
Group 3: Performance Improvements
- Engram has been scaled to 27 billion parameters, showing significant performance improvements over pure MoE baseline models under equivalent parameter and FLOPs conditions [11].
- Notably, Engram enhances knowledge retrieval capabilities, with improvements in metrics such as MMLU (+3.4), CMMLU (+4.0), and general reasoning tasks like BBH (+5.0) and ARC-Challenge (+3.7) [11][38].
Group 4: System Efficiency and Scalability
- Engram's deterministic addressing supports prefetching from host memory at runtime with minimal performance overhead, allowing for efficient memory management (see the sketch after this entry) [12][19].
- The architecture allows for the decoupling of parameter storage from computational resources, facilitating linear scalability with the number of accelerators [21][22].
Group 5: Experimental Results
- Four models were trained: Dense-4B, MoE-27B, Engram-27B, and Engram-40B, all using the same training data and processes [35][36].
- Sparse architectures (MoE-27B, Engram-27B/40B) significantly outperformed the dense model (Dense-4B) across various benchmarks, demonstrating superior scaling properties [38].
Group 6: Long Context Training
- Engram architecture has shown significant advantages in long-context tasks by preserving valuable attention capacity for global context processing [41].
- Controlled experiments indicate that Engram outperforms MoE models in complex retrieval tasks, confirming its architectural superiority [46].
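Group 4 above notes that Engram's deterministic addressing allows rows to be prefetched from host memory at runtime: because slot ids depend only on the input token ids, the rows needed for an upcoming step can be fetched before the GPU asks for them. Below is a minimal, hedged PyTorch sketch of that idea; the table shape, the `prefetch_rows` helper, and the pinned-memory staging are illustrative assumptions rather than the released Engram runtime.

```python
import torch

# Large embedding table kept in host RAM (shrunk here for the demo; the article
# talks about tables on the order of 100B parameters offloaded to CPU memory).
TABLE_ROWS, D_MODEL = 100_000, 256
table_cpu = torch.randn(TABLE_ROWS, D_MODEL)


def prefetch_rows(slot_ids: torch.Tensor) -> torch.Tensor:
    """Gather the needed rows on CPU and stage them for an async host-to-device copy."""
    rows = table_cpu.index_select(0, slot_ids)      # cheap gather: one row read per slot, no matmul
    if torch.cuda.is_available():
        rows = rows.pin_memory()                    # pinned staging buffer enables a non_blocking copy
        return rows.to("cuda", non_blocking=True)   # overlaps with kernels already queued on the GPU
    return rows                                     # CPU-only fallback for the sketch


# Slot ids for step t+1 are a pure function of the token ids, so this prefetch
# can be launched while the GPU is still computing step t.
next_slots = torch.randint(0, TABLE_ROWS, (4096,))
embeddings = prefetch_rows(next_slots)
print(embeddings.shape)  # torch.Size([4096, 256])
```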
Manus and its "80 million employees"
虎嗅APP· 2026-01-13 00:49
Core Viewpoint
- Manus represents a significant paradigm shift in AI applications, transitioning from merely generating content to autonomously completing tasks, marking a "DeepSeek moment" in the industry [6][7].
Group 1: Manus's Unique Model
- Manus has created over 80 million virtual computer instances, which are crucial to its operational model, allowing AI to autonomously handle complex tasks [9][10].
- This model signifies a shift in core operators from humans to AI, establishing Manus as an "artificial intelligence operating system" [11].
- The Manus model is expected to lead to a 0.5-level leap in human civilization, as AI takes over digital economy-related jobs [12].
Group 2: AI Application's "DeepSeek Moment"
- Manus achieved an annual recurring revenue (ARR) of over $100 million within a year, indicating its strong market performance [20].
- The introduction of multi-agent systems has shown a 90.2% performance improvement in handling complex tasks compared to single-agent systems, emphasizing the importance of collaboration among AI agents [14][17].
- The transition from AI as a tool to AI as a worker signifies a major evolution in AI applications, moving beyond the "toy" and "assistant" phases [20].
Group 3: Technological Foundations of Multi-Agent Systems
- Manus's multi-agent system relies on several core technologies, including virtual machines for secure execution environments and resource pooling for efficient resource utilization (see the sketch after this entry) [22][24].
- The virtual machine architecture allows for independent task execution, addressing safety and reliability issues in AI applications [25].
- Intelligent orchestration ensures optimal resource allocation and task management, enhancing overall system efficiency [26][27].
Group 4: Competitive Landscape and Industry Dynamics
- Major tech companies are rapidly advancing in multi-agent systems, with Meta, Google, Microsoft, and Amazon all integrating these capabilities into their platforms [30][32].
- In the domestic market, companies like Alibaba, Tencent, and Baidu are also making significant strides in developing multi-agent technologies [31].
- The emergence of new players like Kimi, which has raised $500 million for multi-agent system development, indicates a growing competitive landscape [33].
Group 5: Evolution of Human Roles
- The relationship between humans and AI is shifting from operator-tool dynamics to manager-team dynamics, where humans define tasks while AI executes them [35].
- This evolution will likely reduce the demand for lower and mid-level creative jobs while amplifying the value of high-level creative work [37].
- The traditional hierarchical structure of organizations may flatten as multi-agent systems can handle the entire workflow from strategy to execution [38].
Group 6: Underestimated Risks
- Data ownership and system security are critical concerns in multi-agent systems, as data becomes a currency for AI collaboration and system evolution [40][41].
- The complexity of multi-agent systems introduces new security challenges, including process safety, collaboration safety, and evolution safety [42][43].
- Balancing security and efficiency remains a fundamental challenge, as overly secure systems may hinder performance while efficient systems may expose vulnerabilities [44].
Group 7: Irreversible Development Path
- The proliferation of Manus's 80 million virtual machines signals a new era of productivity, redefining the nature of work itself [47].
- In the short term, vertical applications of multi-agent systems are expected to explode across various industries, leading to intense market competition [48].
- Over the long term, human-AI collaboration will evolve into a more integrated system, blurring the lines between human and machine contributions [49].
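Group 3 above describes the orchestration pattern only at a high level: a planner fans subtasks out to sandboxed execution environments while a resource pool caps how many run at once. The toy asyncio sketch below shows the shape of that pattern; everything in it, from `sandboxed_worker` to the semaphore-based pool, is hypothetical and merely stands in for Manus's undisclosed internals.

```python
import asyncio


async def sandboxed_worker(task: str) -> str:
    """Pretend to run one subtask inside an isolated environment (a stand-in for a sandboxed VM)."""
    await asyncio.sleep(0.1)  # stands in for tool use / code execution inside the sandbox
    return f"result of {task!r}"


async def orchestrate(job: str, max_parallel: int = 8) -> list[str]:
    subtasks = [f"{job} / step {i}" for i in range(4)]   # trivial "planning" step
    pool = asyncio.Semaphore(max_parallel)               # resource pooling: cap concurrent sandboxes

    async def run(task: str) -> str:
        async with pool:
            return await sandboxed_worker(task)

    # Fan out, then gather results for the planner to assemble.
    return await asyncio.gather(*(run(t) for t in subtasks))


print(asyncio.run(orchestrate("draft market report")))
```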
Just now, Liang Wenfeng puts his name on an open-sourced "memory" module, bringing DeepSeek V4 into sharper focus
36Kr· 2026-01-13 00:42
Core Insights
- DeepSeek has released a new paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," in collaboration with Peking University, introducing a new module called Engram to enhance the efficiency of large language models [1][3].
Group 1: Research Overview
- The current approach to sparsity in large language models primarily relies on Mixture of Experts (MoE) for conditional computation, but existing Transformer architectures lack a native knowledge retrieval mechanism [3][8].
- DeepSeek proposes conditional memory as a complementary dimension to MoE, introducing the Engram module to facilitate efficient knowledge retrieval with O(1) time complexity [8][9].
Group 2: Engram Module Implementation
- The Engram module has been implemented and made available on GitHub, allowing for community engagement and further development [4][5].
- Engram separates static memory storage from dynamic computation processes within the Transformer architecture, enhancing overall model performance [10][12].
Group 3: Performance Metrics
- Engram has shown significant improvements in various benchmarks, including a +3.4% increase in MMLU accuracy and a +4.0% increase in CMMLU accuracy, as well as notable gains in general reasoning tasks [9][28].
- The architecture allows for better long-context retrieval capabilities, with accuracy in Multi-Query NIAH increasing from 84.2 to 97.0 [9].
Group 4: Experimental Results
- DeepSeek trained four models: Dense-4B (4.1 billion parameters), MoE-27B (26.7 billion), Engram-27B (26.7 billion), and Engram-40B (39.5 billion), all under the same training conditions [25][27].
- The sparse architectures (MoE-27B, Engram-27B/40B) outperformed the dense model (Dense-4B) across all benchmarks, demonstrating superior scalability [28][30].
Group 5: Memory and Computation Decoupling
- Engram's deterministic retrieval mechanism allows for the decoupling of parameter storage from computational resources, enabling efficient scaling without increasing computational costs [15][17].
- The architecture supports a multi-level cache hierarchy, optimizing memory access and reducing latency [18].
Group 6: U-Shaped Scaling Law
- DeepSeek identified a U-shaped scaling law for optimal allocation between MoE and Engram, suggesting that a balanced distribution of sparse parameters leads to improved performance [19][24].
- The optimal allocation ratio was found to be around 20%-25% of the sparse parameter budget for Engram, confirming the structural complementarity between the two modules (see the sketch after this entry) [23][24].
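To make the "allocation of the sparse parameter budget" in Group 6 concrete: under a fixed budget, every parameter given to the Engram table is taken away from MoE experts, and the U-shaped law reported above says validation loss is minimized when Engram's share is roughly 20%-25%. The small sketch below only shows that bookkeeping; the hidden size, expert width, and 20B budget are assumptions for illustration, and the optimum itself comes from the article, not from this calculation.

```python
D_MODEL, D_FF = 2048, 1408                 # hidden size and expert FFN width (assumed)
PARAMS_PER_EXPERT = 3 * D_MODEL * D_FF     # gate/up/down projections of a SwiGLU expert
TOTAL_SPARSE_BUDGET = 20_000_000_000       # 20B sparse parameters (assumed)


def split_budget(engram_fraction: float) -> tuple[int, int]:
    """Return (num_experts, engram_table_rows) for a given Engram share of the fixed budget."""
    engram_params = int(TOTAL_SPARSE_BUDGET * engram_fraction)
    expert_params = TOTAL_SPARSE_BUDGET - engram_params
    num_experts = expert_params // PARAMS_PER_EXPERT
    table_rows = engram_params // D_MODEL   # each memory slot stores one d_model-sized vector
    return num_experts, table_rows


for frac in (0.0, 0.1, 0.2, 0.25, 0.5):
    experts, rows = split_budget(frac)
    print(f"Engram share {frac:4.0%}: {experts:5d} experts, {rows / 1e6:8.1f}M memory slots")
```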
DeepSeek open-sources a memory module for large models! New paper signed by Liang Wenfeng offers an early look at the next generation of sparse models
量子位· 2026-01-13 00:39
Core Insights
- The article discusses the introduction of "Conditional Memory" in Transformer models, which adds the knowledge retrieval mechanism that was previously lacking in the original architecture [1][2][9].
Group 1: Introduction of Conditional Memory
- Conditional Memory is viewed as an essential modeling primitive for the next generation of sparse models [2].
- The research team, led by Liang Wenfeng in collaboration with Peking University, has proposed a new paradigm and implementation plan called the Engram module [3][5].
Group 2: Performance Improvements
- The Engram module allows a 27B parameter model to outperform a pure MoE model of the same size, compressing tasks that originally required 6 layers of attention down to 1-2 layers, thus freeing resources for more complex reasoning tasks [5][13].
- The optimal allocation of sparse parameters between MoE and Engram memory results in a U-shaped curve, indicating that allocating about 20% to 25% of sparse parameters to Engram memory minimizes model validation loss [34][36].
Group 3: Technical Implementation
- Engram's design incorporates a large vocabulary for static entities and phrases, enabling O(1) speed for information retrieval [7][14].
- The team addresses traditional N-gram model issues, such as semantic redundancy and storage explosion, by compressing tokens and using multiple hash functions to map N-grams to a fixed-size embedding table (see the sketch after this entry) [22][25].
Group 4: Experimental Results
- The Engram-27B model shows significant improvements across various benchmarks, with notable increases in performance metrics such as BBH, ARC-Challenge, and DROP [47].
- The model's architecture allows for efficient memory management, enabling the use of a 100 billion parameter table offloaded to CPU memory without significant latency impact during inference [63][66].
Group 5: Future Developments
- The next generation of sparse models from DeepSeek is expected to be released before the Spring Festival, indicating ongoing advancements in AI model architecture [67].
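Group 3 above mentions mapping N-grams through multiple hash functions into a fixed-size table to tame storage explosion and collision noise. The toy simulation below illustrates why several independent hashes help: two distinct n-grams only fully alias each other's memory if they collide under every hash, which is far rarer than a single-table collision. The table size is deliberately tiny so collisions are visible, and Python's built-in `hash` merely stands in for whatever hash family the paper actually uses.

```python
import random

TABLE_SIZE = 1_000      # artificially small so single-hash collisions show up
NUM_HASHES = 2
TRIALS = 100_000
rng = random.Random(0)


def slots(ngram: tuple[int, ...], seeds: list[int]) -> list[int]:
    """One table slot per hash function; seeding the tuple gives independent hash values."""
    return [hash((seed, ngram)) % TABLE_SIZE for seed in seeds]


seeds = [rng.randrange(2**32) for _ in range(NUM_HASHES)]
single_collisions = full_collisions = 0
for _ in range(TRIALS):
    a = tuple(rng.randrange(32_000) for _ in range(2))   # two random bigrams
    b = tuple(rng.randrange(32_000) for _ in range(2))
    if a == b:
        continue
    sa, sb = slots(a, seeds), slots(b, seeds)
    single_collisions += sa[0] == sb[0]   # collide in one table
    full_collisions += sa == sb           # collide in every table

print(f"single-hash collision rate: {single_collisions / TRIALS:.2e}")  # roughly 1/1000
print(f"all-hash collision rate:    {full_collisions / TRIALS:.2e}")    # roughly (1/1000)^2, usually 0 here
```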
Just now, Liang Wenfeng puts his name on an open-sourced "memory" module, bringing DeepSeek V4 into sharper focus
机器之心· 2026-01-13 00:12
Core Insights
- DeepSeek has introduced a new research paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," in collaboration with Peking University, focusing on enhancing large language models (LLMs) through a novel approach to memory and computation [1][2].
Group 1: Research Background and Problem Statement
- Current large language models primarily utilize Mixture of Experts (MoE) for sparsity, known as "conditional computation," but lack an inherent knowledge retrieval mechanism, leading to inefficient simulation of retrieval behavior [2][8].
- DeepSeek proposes "conditional memory" as a complementary approach to MoE, introducing a new module called Engram to address this limitation [3][8].
Group 2: Engram Module and Its Implementation
- The Engram module has been made available on GitHub, allowing for community engagement and further development [4].
- Engram modernizes classic n-gram embeddings to achieve knowledge retrieval in O(1) time complexity, enhancing the efficiency of memory access [8][10].
- The module separates static knowledge storage from dynamic computation processes, enhancing the overall architecture of the Transformer network (see the sketch after this entry for the parameter/FLOPs trade-off this enables) [12][14].
Group 3: Performance and Efficiency
- DeepSeek has expanded Engram to a scale of 27 billion parameters, demonstrating significant performance improvements over pure MoE baseline models under equivalent parameter and FLOPs conditions [10][37].
- Engram has shown notable gains in knowledge retrieval tasks, with improvements such as +3.4 in MMLU and +4.0 in CMMLU, as well as enhanced general reasoning capabilities [10][37].
- The architecture allows for efficient memory access without additional performance overhead, supporting prefetching from host memory during runtime [11][18].
Group 4: Sparsity Distribution and Optimal Allocation
- DeepSeek formalized a U-shaped scaling rule to characterize the optimal trade-off between neural computation (MoE) and static memory (Engram) [9][22].
- The research indicates that a balanced allocation of approximately 20%-25% of the sparse parameter budget to Engram yields optimal performance, confirming the structural complementarity between the two modules [27][29].
Group 5: Experimental Results
- Four models were trained: Dense-4B, MoE-27B, Engram-27B, and Engram-40B, all under identical training conditions [34][35].
- Sparse architectures consistently outperformed the dense model across various benchmarks, with Engram-27B achieving significant improvements over MoE-27B in multiple tasks [37].
- Engram-40B further reduced pre-training loss and improved performance on most benchmarks, indicating that memory capacity has not yet reached saturation [38].
Group 6: Long Context Training
- Engram's architecture has been validated for its structural advantages in long-context tasks, demonstrating significant performance gains in global context retention [40][41].
- Controlled experiments revealed that Engram outperforms MoE in complex retrieval tasks, showcasing its inherent architectural superiority [45].
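As referenced in Group 2 above, separating static memory from dynamic computation matters because a lookup table adds parameters without adding meaningful FLOPs, whereas an MoE expert adds both. The back-of-the-envelope sketch below shows that trade-off in isolation; the hidden size, expert width, and hash count are illustrative assumptions, not the paper's configuration.

```python
D_MODEL, D_FF = 2048, 1408
NUM_HASHES = 2

# One routed SwiGLU expert: three d_model x d_ff projections, each multiplied against the token.
expert_params = 3 * D_MODEL * D_FF
expert_flops_per_token = 2 * expert_params           # one multiply-add per weight

# One Engram-style read: fetch NUM_HASHES rows and mix them; no matrix multiply over
# the table itself, so per-token FLOPs do not grow with the number of stored rows.
lookup_flops_per_token = NUM_HASHES * D_MODEL        # roughly the cost of averaging the fetched rows
table_rows_matching_expert = expert_params // D_MODEL

print(f"expert:  {expert_params / 1e6:.1f}M params, {expert_flops_per_token / 1e6:.1f} MFLOPs/token")
print(f"lookup:  the same parameter budget stores {table_rows_matching_expert} memory slots "
      f"at ~{lookup_flops_per_token / 1e3:.1f} kFLOPs/token")
```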
Just now, DeepSeek drops a new paper signed by Liang Wenfeng: an early reveal of the new V4 architecture?
AI前线· 2026-01-12 22:41
Core Insights
- DeepSeek has released a significant technological achievement by open-sourcing a new paper and module called Engram, which introduces a "lookup-computation separation" mechanism to enhance the performance of large language models in various tasks [2][5].
Summary by Sections
Introduction of Engram
- Engram is a scalable, lookup-based memory module designed to improve the efficiency of language models by separating memory retrieval from computational tasks [10][18].
Need for Engram
- Traditional large language models rely on Transformer and Mixture-of-Experts (MoE) architectures, which combine memory and computation in a way that can lead to inefficiencies. Engram aims to address this by allowing models to handle factual memory and logical reasoning separately [8][9].
Core Technology of Engram
- Engram utilizes modernized hashed N-gram embeddings, allowing for O(1) time complexity in memory retrieval, which significantly reduces computational costs while maintaining high retrieval speed [11][13].
Relationship with MoE
- Engram provides a new axis of sparsity that complements MoE by offering static memory retrieval capabilities, thus optimizing parameter efficiency. In a 27 billion parameter model, Engram can utilize a large number of parameters for memory while consuming minimal computational resources during inference [15][16].
Performance Metrics
- Engram has shown improved performance metrics across various benchmarks, such as achieving a loss of 1.950 on the Pile dataset and an accuracy of 60.4% on MMLU with 5-shot learning, outperforming both Dense and MoE models [17].
Community Reception
- The Engram technology has received positive feedback from the community, with users highlighting its potential to separate memory pattern retrieval from neural computation, marking a new direction in model architecture design [18][19][21].
Future Implications
- Observers speculate that Engram will be a core component of DeepSeek's upcoming V4 model, indicating a significant architectural advancement in memory and reasoning collaboration [22][23].
US stock movers | Kingsoft Cloud (KC.US) rises over 15% pre-market as DeepSeek-V4 is expected to spark a new wave of AI application enthusiasm
Zhi Tong Cai Jing· 2026-01-12 14:04
Core Viewpoint
- Kingsoft Cloud (KC.US) experienced a pre-market increase of over 15%, reaching $12.71, driven by news regarding DeepSeek's upcoming release of its next-generation flagship model V4, which is reported to have superior coding capabilities compared to mainstream models like Claude and ChatGPT [1]
Group 1: Company Developments
- DeepSeek is set to launch its new flagship model V4 in the coming weeks, which is expected to enhance programming capabilities significantly [1]
- Initial test results from DeepSeek indicate that V4 outperforms leading models in code generation [1]
Group 2: Industry Trends
- CITIC Securities highlights the current dynamic developments in the AI industry, noting recent financing activities by overseas companies such as xAI and Anthropic [1]
- The introduction of domestic policies promoting "AI + manufacturing" is expected to stimulate growth in the sector [1]
- The recent stock surges of companies like Zhipu AI and MiniMax following their listings suggest a growing interest and investment in AI applications, with the upcoming launch of DeepSeek-V4 likely to trigger a new wave of AI application enthusiasm [1]