Sparse Large Models
Tencent Research Institute AI Express 20260114
Tencent Research Institute · 2026-01-13 16:29
Group 1
- Anthropic has launched an AI office tool called Cowork, designed to automate daily tasks such as document creation, planning, data analysis, and file organization [1]
- Cowork features proactive and autonomous capabilities, allowing it to create plans and sync progress in real time, and integrates with external information sources and Chrome [1]
- The development of Cowork took only a week and a half, with 100% of the code written by Claude Code, while preserving user control and the ability to halt operations at any time [1]

Group 2
- Apple has announced a partnership with Google to develop the next generation of its foundational model based on Gemini, which will also overhaul Siri [2]
- The Apple AI team has suffered significant talent loss, with dozens of core members leaving, making collaboration with Google a necessary choice given Gemini's 1.2 trillion parameters versus Apple's 150 billion [2]
- Google processes 13 trillion tokens monthly and Gemini has captured over 20% of the global market share, while Elon Musk criticized the concentration of power in this partnership [2]

Group 3
- DeepSeek has introduced a new paper proposing a conditional memory module called Engram, which complements MoE conditional computation and addresses the lack of native knowledge retrieval in Transformers [3]
- Engram significantly outperforms pure MoE baselines, improving MMLU by 3.4, BBH by 5.0, and HumanEval by 3.0, while raising long-context retrieval accuracy from 84.2% to 97.0% [3]
- The shape of the upcoming DeepSeek V4 is becoming clearer, with conditional memory expected to be a core modeling primitive for the next generation of sparse large models [3]

Group 4
- OpenAI has acquired AI healthcare startup Torch for approximately $100 million, with $60 million paid upfront and the remainder reserved for employee retention incentives [4]
- Torch integrates with healthcare systems such as Kaiser Permanente and Apple Health, allowing unified access to lab results, prescriptions, and medical records, while using AI for classification and health insights [4]
- The founding team of Torch has joined OpenAI to develop the ChatGPT Health module, building on their previous experience running an online clinic platform [4]

Group 5
- Anthropic has launched HIPAA-compliant AI services for healthcare, enabling institutions and individuals to process protected health data while referencing authoritative databases [6]
- Claude can export personal health data from applications like Apple Health for aggregation and understanding, with a commitment not to use any medical user data for model training [6]
- Over 22,000 clinical service providers from Banner Health are using Claude, with 85% reporting increased work efficiency, and collaborations with major healthcare institutions are underway [6]

Group 6
- Baichuan has released the open-source medical model M3, achieving a top score of 65.1 on HealthBench and winning the Hard category with a score of 44.4, surpassing GPT-5.2 [7]
- M3 introduces native end-to-end serious medical-inquiry capabilities following the SCAN principles, and demonstrates inquiry abilities superior to those of average human doctors [7]
- M3 employs a dynamic Verifier System and a new SPAR algorithm to address long-dialogue training issues, with applications already deployed for doctors and patients [7]

Group 7
- OpenAI is set to produce a special audio product called "Sweetpea," designed to replace AirPods, with mass production planned at Foxconn by Q4 2028 [8]
- The device, designed by Jony Ive's team, features a metallic, pebble-like design with two capsule-shaped units worn behind the ear, and emphasizes local AI processing [8]
- The product is expected to launch in September 2026, with an estimated first-year shipment of 40-50 million units, allowing users to control functions via voice commands instead of an iPhone [8]

Group 8
- Meituan has introduced a new sparse attention mechanism called LoZA, replacing 50% of low-performance MLA modules with a streaming sparse attention structure [9]
- The new mechanism improves decoding speed for 128K context by 10 times and prefill speed for 256K context by 50%, while reducing computational complexity to linear O(L·S) [9]
- LoZA can be adopted without retraining from scratch, featuring a design that balances local detail and global logic within sparse windows [9]

Group 9
- MIT Technology Review has released its list of the top ten breakthrough technologies for 2026, including large-scale AI data centers, sodium-ion batteries, base editing, and advanced nuclear reactors [10][11]
- The report highlights the significant energy consumption of large-scale data centers and the successful application of sodium-ion batteries in specific vehicle models [11]
- It emphasizes the shift in AI development focus from "what can be done" to "what should be done," with ethical considerations becoming a central theme in the life sciences [11]

Group 10
- The CEO of the Fal platform revealed that generating a 5-second, 24-fps video consumes 12,000 times the compute of generating 200 tokens of text, with 4K resolution requiring ten times more [12]
- The platform supports over 600 generative media models, with top clients using an average of 14 different models simultaneously, indicating a trend toward scaled AI-generated content [12]
- The discussion suggests that as content generation becomes limitless, finite intellectual property will gain more value, with education and personalized advertising identified as promising application areas [12]
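LoZA's internals are not public, but the complexity claim above can be illustrated with a toy sketch: if each query position attends only to a causal window of the last S keys, the total number of score computations is O(L·S) rather than O(L²). The function below is our own minimal illustration (names and dimensions are invented, not Meituan's code).

```python
import math

def streaming_sparse_attention(q, k, v, window):
    """Toy single-head attention where each query attends only to the last
    `window` key positions, so total work is O(L * S) instead of O(L^2).
    Vectors are plain Python lists; dims are tiny for clarity."""
    L, d = len(q), len(q[0])
    out, ops = [], 0
    for i in range(L):
        lo = max(0, i - window + 1)          # causal sliding window
        scores = []
        for j in range(lo, i + 1):
            s = sum(qi * kj for qi, kj in zip(q[i], k[j])) / math.sqrt(d)
            scores.append(s)
            ops += 1                          # count score computations
        m = max(scores)                       # stable softmax
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([
            sum(w[t] * v[lo + t][c] for t in range(len(w))) / z
            for c in range(d)
        ])
    return out, ops
```

For L = 8 and window = 3, this performs 21 score computations instead of the 64 a full causal-attention pass of the same toy kind would need; the gap widens linearly as L grows while the window stays fixed.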
Liang Wenfeng Co-Authors New DeepSeek Paper
Di Yi Cai Jing Zi Xun · 2026-01-13 03:41
Core Insights
- DeepSeek has released a new paper focusing on the conditional memory module of large models, suggesting it will be a core modeling primitive in the next generation of sparse large models [2][5][7]

Group 1: Research and Development
- The new paper, co-authored with Peking University, is titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" [5]
- The research identifies two distinct tasks within large models: deep dynamic computation for combinatorial reasoning and static knowledge retrieval, highlighting inefficiencies in the current Transformer architecture [5][6]
- DeepSeek introduces conditional memory as a supplementary sparse dimension to optimize the balance between neural computation (MoE) and static memory (Engram) [6][7]

Group 2: Performance and Implications
- The team discovered a U-shaped scaling law indicating that a mixed sparse-capacity allocation between MoE experts and Engram memory significantly outperforms pure MoE baseline models [6]
- The memory module not only aids knowledge retrieval but also yields significant improvements in general reasoning, coding, and mathematical tasks [6][7]
- The paper essentially proposes a "division of labor" optimization for large models, allowing specialized modules to handle specific tasks more efficiently [6][7]

Group 3: Future Developments
- Industry speculation suggests that the proposed conditional memory may be part of the technical architecture of DeepSeek's upcoming flagship model, DeepSeek V4, expected to be released around February [7]
- Initial tests indicate that V4 may surpass other leading models in programming capabilities, with the previous V3 model having already outperformed OpenAI's GPT-5 and Google's Gemini 3.0 Pro in various benchmarks [7]
New DeepSeek Paper: Next-Generation Large Models Achieve "Memory Separation," Is V4 Near?
Di Yi Cai Jing Zi Xun · 2026-01-13 03:32
Core Insights
- DeepSeek has released a new paper focusing on the conditional memory module of large models, suggesting it will be a core modeling primitive in the next generation of sparse large models [1][4]

Group 1: Research Findings
- The new paper, co-authored with Peking University, is titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" and highlights the need for a native knowledge-retrieval mechanism in existing Transformer architectures [4]
- The research identifies two distinct tasks in large models: deep dynamic computation for combinatorial reasoning and static knowledge retrieval, indicating that current models simulate retrieval inefficiently [4][5]
- DeepSeek introduces conditional memory as a supplementary dimension of sparsity, optimizing the trade-off between mixture of experts (MoE) and static memory (Engram) [4][6]

Group 2: Performance Improvements
- The team discovered a U-shaped scaling law showing that a mixed sparse-capacity allocation between MoE experts and Engram memory significantly outperforms pure MoE baseline models [5]
- The memory module not only aids knowledge retrieval but also yields notable improvements in general reasoning, coding, and mathematical tasks [5][6]
- The paper essentially proposes a "division of labor" optimization for large models, allowing specialized modules to handle specific tasks and thereby improving efficiency and resource allocation [6]

Group 3: Future Developments
- Industry speculation suggests that the proposed conditional memory may be integral to the architecture of DeepSeek's upcoming flagship model, DeepSeek V4, expected to be released around February [6]
- Initial tests indicate that V4 may surpass other leading models in programming capabilities, with the previous model, V3, having already outperformed OpenAI's GPT-5 and Google's Gemini 3.0 Pro in various benchmarks [6]
New Paper Signed by Liang Wenfeng: First Glimpse of the DeepSeek V4 Architecture? Taking Aim at a Fatal Flaw of the Transformer
36Kr · 2026-01-13 01:24
Core Insights
- DeepSeek's new paper introduces a novel approach to the memory limitations of Transformer models, proposing a complementary "conditional memory" sparse axis through the Engram module, which enables efficient knowledge retrieval with near-O(1) complexity [1][6][11]

Group 1: Memory and Model Architecture
- The paper notes that while MoE (Mixture of Experts) has become a mainstream architecture for large models, it fundamentally still relies on Transformers, which lack a native knowledge-retrieval mechanism, leading to inefficient computation [9][11]
- Engram is designed to offload static, repetitive patterns in language modeling to a scalable lookup module, allowing the Transformer backbone to focus on more complex tasks requiring composition and reasoning [11][15]
- The authors categorize language-modeling tasks into two types, those requiring composition and reasoning and those resembling pattern retrieval, and emphasize the need for a dedicated mechanism for the latter [12][13]

Group 2: Engram Architecture and Functionality
- Engram is conceptualized as a modernized version of the classic hashed N-gram, functioning as a scalable lookup module integrated within the Transformer architecture [18][20]
- The architecture uses a two-stage process, retrieval followed by fusion, for handling input sequences, improving the model's efficiency in processing static patterns [20][21]
- A context-aware gating mechanism allows the model to dynamically adjust its responses based on the retrieved embeddings, improving expressiveness and reducing noise from hash collisions [25][27]

Group 3: Performance and Scaling
- The paper presents a U-shaped scaling law indicating that an optimal resource allocation between MoE and Engram enhances model performance, suggesting that a balance between dynamic computation and static memory is crucial [3][33]
- Experimental results show that Engram, when scaled to 27 billion parameters, outperforms the MoE baseline under equivalent parameter and FLOPs conditions, demonstrating its effectiveness across various benchmarks [5][38]
- Engram's architecture not only improves knowledge retrieval but also enhances reasoning, mathematics, and coding capabilities, indicating a significant leap in performance metrics across multiple tasks [39][48]

Group 4: Future Implications
- The findings suggest a paradigm shift in model architecture toward a dual-axis approach of computation and memory, with potential integration into future iterations of large language models such as V4 [46][50]
- The paper posits that integrating Engram could substantially improve model efficiency and capability, paving the way for more advanced applications in natural language processing [51][52]
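The "modernized hashed N-gram" idea described above can be sketched in a few lines: hash each N-gram of token ids into a fixed-size embedding table (constant-time lookup regardless of table size), then scale the retrieved vector by a gate before fusing it into the backbone. The class below is our own toy illustration under those assumptions; the names, sizes, and the fixed per-slot gate (standing in for the paper's context-aware gating) are invented, not DeepSeek's implementation.

```python
import hashlib

class HashedNGramMemory:
    """Toy sketch of an Engram-style conditional-memory lookup.

    Each N-gram of token ids is hashed into a fixed embedding table in O(1).
    A scalar gate per slot (a placeholder for the paper's context-aware
    gating) decides how much of the retrieved vector to mix in, which also
    lets the model suppress noise from hash collisions."""

    def __init__(self, n=2, table_size=1024, dim=4):
        self.n, self.dim, self.table_size = n, dim, table_size
        # Deterministic pseudo-random embedding table and gates.
        self.table = [[((i * 31 + c * 7) % 97) / 97.0 for c in range(dim)]
                      for i in range(table_size)]
        self.gate = [((i * 13) % 10) / 10.0 for i in range(table_size)]

    def _slot(self, ngram):
        # Constant-time hashed addressing into the table.
        h = hashlib.blake2b(",".join(map(str, ngram)).encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.table_size

    def lookup(self, tokens):
        """Return one gated memory vector per position (zeros until a full
        N-gram of left context is available)."""
        out = []
        for i in range(len(tokens)):
            if i + 1 < self.n:
                out.append([0.0] * self.dim)
                continue
            s = self._slot(tuple(tokens[i + 1 - self.n: i + 1]))
            g = self.gate[s]
            out.append([g * x for x in self.table[s]])
        return out
```

Because the lookup is deterministic, identical N-grams always retrieve the same vector, and the table can be scaled up (more storage) without adding any per-token compute, which is the decoupling of memory capacity from FLOPs that the paper emphasizes.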
Just In: Liang Wenfeng Co-Authors Open-Source "Memory" Module, Bringing DeepSeek V4 into Sharper Focus
36Kr · 2026-01-13 00:42
Core Insights
- DeepSeek has released a new paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," in collaboration with Peking University, introducing a new module called Engram to improve the efficiency of large language models [1][3]

Group 1: Research Overview
- The current approach to sparsity in large language models relies primarily on Mixture of Experts (MoE) for conditional computation, but existing Transformer architectures lack a native knowledge-retrieval mechanism [3][8]
- DeepSeek proposes conditional memory as a complementary dimension to MoE, introducing the Engram module to enable efficient knowledge retrieval with O(1) time complexity [8][9]

Group 2: Engram Module Implementation
- The Engram module has been implemented and released on GitHub, allowing community engagement and further development [4][5]
- Engram separates static memory storage from dynamic computation within the Transformer architecture, improving overall model performance [10][12]

Group 3: Performance Metrics
- Engram shows significant improvements across benchmarks, including a +3.4% increase in MMLU accuracy and a +4.0% increase in CMMLU accuracy, along with notable gains on general reasoning tasks [9][28]
- The architecture improves long-context retrieval, with Multi-Query NIAH accuracy increasing from 84.2 to 97.0 [9]

Group 4: Experimental Results
- DeepSeek trained four models: Dense-4B (4.1 billion parameters), MoE-27B (26.7 billion), Engram-27B (26.7 billion), and Engram-40B (39.5 billion), all under the same training conditions [25][27]
- The sparse architectures (MoE-27B, Engram-27B/40B) outperformed the dense model (Dense-4B) across all benchmarks, demonstrating superior scalability [28][30]

Group 5: Memory and Computation Decoupling
- Engram's deterministic retrieval mechanism decouples parameter storage from computational resources, enabling efficient scaling without increasing computational cost [15][17]
- The architecture supports a multi-level cache hierarchy, optimizing memory access and reducing latency [18]

Group 6: U-Shaped Scaling Law
- DeepSeek identified a U-shaped scaling law for the optimal allocation between MoE and Engram, suggesting that a balanced distribution of sparse parameters leads to improved performance [19][24]
- The optimal allocation was found to be around 20%-25% of the sparse parameter budget for Engram, confirming the structural complementarity of the two modules [23][24]
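The U-shaped trade-off described above can be pictured with a toy objective: giving the memory module too small a share of the sparse budget wastes retrieval capacity, while too large a share starves the MoE experts, so total loss is high at both extremes and lowest in between. The constants below are entirely invented to place the minimum near the 20%-25% range the paper reports; this is an illustration of the shape of the law, not the paper's actual loss model.

```python
def toy_mixed_sparsity_loss(r):
    """Illustrative (made-up) loss as a function of the fraction r of the
    sparse parameter budget allocated to the memory module. Both terms are
    fabricated convex stand-ins chosen so the sum is U-shaped in r."""
    moe_term = 1.0 / (1.05 - r)       # expert capacity shrinks as r grows
    mem_term = 0.105 / (r + 0.05)     # retrieval benefit saturates
    return moe_term + mem_term

def best_allocation(step=0.01):
    """Grid-search the allocation fraction that minimizes the toy loss."""
    rs = [i * step for i in range(1, 100)]
    return min(rs, key=toy_mixed_sparsity_loss)
```

Sweeping r over (0, 1) and taking the minimum lands near r ≈ 0.22 with these constants, mimicking the reported optimum; the real curve in the paper is measured empirically, not derived from a closed form like this.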
Huawei Releases "Genius Youth" AI Challenge Topics, Pooling Global Talent to Explore the Technology Frontier
Sou Hu Cai Jing · 2025-06-17 19:01
Core Insights
- Huawei has launched the "Genius Challenge" to attract global talent in five key areas: intelligent connectivity & computing, fundamental research and innovation, intelligent terminals, cloud computing, and intelligent vehicles [3][4][5][6]

Group 1: Intelligent Connectivity & Computing
- The challenge includes research on autonomous intelligent wireless communication architecture and key technologies to meet future communication demands [3]
- It also covers key technologies of the Ascend reinforcement learning system to improve performance [3]
- Research on AI-cluster all-optical switching networks aims to improve data transmission speed and efficiency for large-scale AI computing [3]

Group 2: Fundamental Research & Innovation
- Key technologies for large-model security are being explored to address safety risks in current applications [4]
- Research on intelligent imaging/editing technology aims at breakthroughs for better user visual experiences [4]
- The design and optimization of training-cluster architecture will improve the efficiency and quality of model training [4]

Group 3: Intelligent Terminals
- The challenge includes research on world models to help intelligent terminals better understand and simulate physical laws [5]
- It aims to improve the personalization and memory capabilities of intelligent terminals [5]
- Research on multimedia algorithms based on computer vision and multimodal understanding is also included [5]

Group 4: Cloud Computing
- Research on generalizable embodied-intelligence operation technology seeks to enable cloud AI to control physical devices [6]
- The challenge includes exploring core technologies for the digital-native era [6]
- Research on AI-based next-generation cloud network infrastructure aims to build advanced cloud network systems [6]

Group 5: Intelligent Vehicles
- The challenge focuses on training and optimizing large models for intelligent vehicles [6]
- Research on advanced autonomous driving models is part of the initiative [6]
- The development of collaborative control technologies for vehicle chassis aims to improve safety and comfort [6]

Group 6: R&D Investment and Talent Development
- Huawei's R&D expenditure for 2024 is projected to reach 179.7 billion yuan, approximately 20.8% of total revenue [7]
- Over the past decade, Huawei has invested more than 1.249 trillion yuan in R&D [7]
- The "Genius Challenge" reflects Huawei's commitment to fundamental research and innovation, emphasizing the importance of active participation in basic research [7]