Sparse Models
DeepSeek open-sources a memory module for large models! New paper signed by Liang Wenfeng gives an early preview of the next generation of sparse models
量子位· 2026-01-13 00:39
Core Insights
- The article discusses the introduction of "Conditional Memory" in Transformer models, which supplies a knowledge-retrieval mechanism that the original architecture lacked [1][2][9].
Group 1: Introduction of Conditional Memory
- Conditional Memory is viewed as an essential modeling primitive for the next generation of sparse models [2].
- The research team, led by Liang Wenfeng in collaboration with Peking University, has proposed a new paradigm and implementation called the Engram module [3][5].
Group 2: Performance Improvements
- The Engram module allows a 27B-parameter model to outperform a pure MoE model of the same size, compressing tasks that originally required 6 layers of attention down to 1-2 layers and thus freeing resources for more complex reasoning [5][13].
- The optimal split of sparse parameters between MoE and Engram memory follows a U-shaped curve: allocating about 20% to 25% of sparse parameters to Engram memory minimizes model validation loss [34][36].
Group 3: Technical Implementation
- Engram's design incorporates a large vocabulary of static entities and phrases, enabling O(1) information retrieval [7][14].
- The team addresses traditional N-gram model issues, such as semantic redundancy and storage explosion, by compressing tokens and using multiple hash functions to map N-grams onto a fixed-size embedding table [22][25].
Group 4: Experimental Results
- The Engram-27B model shows significant improvements across benchmarks, with notable gains on BBH, ARC-Challenge, and DROP [47].
- The architecture allows efficient memory management, enabling a 100-billion-parameter table to be offloaded to CPU memory without significant latency impact during inference [63][66].
Group 5: Future Developments
- The next generation of sparse models from DeepSeek is expected to be released before the Spring Festival, indicating ongoing advances in AI model architecture [67].
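The hashed N-gram lookup described in Group 3 can be illustrated in a few lines of Python. The snippet below is a minimal sketch of the general technique (several independent hash functions mapping an N-gram into a fixed-size embedding table, so retrieval stays O(1) regardless of how many distinct N-grams the corpus contains); it is not DeepSeek's actual Engram implementation, and the table size, embedding width, hashing scheme, and function names are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sizes -- not taken from the paper.
TABLE_SIZE = 1_000_000    # fixed number of embedding slots, independent of corpus size
EMBED_DIM = 64
NUM_HASHES = 3            # several independent hashes dilute collisions

rng = np.random.default_rng(0)
memory_table = rng.normal(scale=0.02, size=(TABLE_SIZE, EMBED_DIM))

def ngram_slot(token_ids, seed):
    """Map an N-gram of token ids to a table slot with one rolling hash per seed."""
    h = seed
    for t in token_ids:
        h = (h * 1_000_003 + t) & 0xFFFFFFFF
    return h % TABLE_SIZE

def memory_lookup(token_ids):
    """O(1) retrieval: average the rows selected by each hash function."""
    slots = [ngram_slot(token_ids, seed) for seed in range(NUM_HASHES)]
    return memory_table[slots].mean(axis=0)

# Example: fetch the memory vector for the 2-gram (token 17, token 421).
print(memory_lookup((17, 421)).shape)   # (64,)
```

Because every N-gram, however rare, lands in a bounded table, storage stays fixed; averaging over several hash functions softens collisions instead of storing each N-gram explicitly, which is how the storage-explosion problem of classic N-gram tables is avoided.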
Tencent Research Institute AI Express 20251216
腾讯研究院· 2025-12-15 16:22
Generative AI
I. Late-night bombshell! Manus 1.6 suddenly released, with hands-on tests of its epic evolution
1. Manus 1.6 Max is released, marking a qualitative leap from "assistive tool" to "independent contractor"; user satisfaction is up 19.2%, and its parallel sub-agent architecture can independently complete complex Excel financial modeling and data analysis;
2. New mobile development features support an end-to-end app development workflow: users only need to describe their requirements to generate runnable iOS and Android apps;
3. A new Design View is introduced, enabling local retouching, precise text rendering, and multi-layer compositing, addressing the pain point of uncontrollable AI image generation.
https://mp.weixin.qq.com/s/8gsfjMHOiadZMrRUUo4ZRw
II. OpenAI open-sources the Circuit-Sparsity model: 0.4B parameters, 99.9% of weights are zero
4. OpenAI's open-source Circuit-Sparsity model has only 0.4B parameters; it forces 99.9% of the weights to zero, keeping just 0.1% non-zero, to tackle model interpretability;
1. Thinking Machines, founded by former OpenAI CTO Mira Murati, has dropped its waitlist and fully opened up Tinker, an API that helps developers fine-tune language models;
2. Added support for Kimi K2 ...
OpenAI open-sources again: only 0.4B parameters, putting the model on a major diet
36Ke· 2025-12-15 08:14
Some netizens overseas claim this technique spells the end of today's MoE (Mixture of Experts) models, saying, "We have been isolating weights into 'experts' all along as a crude approximation of sparsity, merely to satisfy the requirements of dense matrix kernels."
Zhidongxi reported on December 15 that OpenAI had open-sourced a new model, Circuit-Sparsity, the day before; the model has only 0.4B parameters, and 99.9% of its weights are zero.
Even as AI races ahead, large language models (LLMs) display astonishing capabilities, yet their inner workings remain a mysterious "black box".
We do not know why a model produces a particular answer, nor how it extracts knowledge from massive amounts of data. This lack of interpretability has become a major obstacle to deploying AI in high-risk domains such as healthcare, finance, and law.
In response, the OpenAI research team trained a weight-sparse Transformer, forcing 99.9% of the entries in the model's weight matrices to zero and retaining only 0.1% non-zero weights.
In this research, compact and readable "circuits" formed inside the model; each circuit retains only the key nodes needed to preserve model performance, and neuron activations take on clear semantics.
Circuit-Sparsity open-sourced (Source: Hugging Face)
This technique attempts to solve the model interpretability problem; simply put, it answers "why did the model make this decision?" and "how did it arrive at this ...
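For intuition about what a 99.9%-zero weight matrix looks like, the sketch below applies a magnitude mask that keeps only the largest 0.1% of entries. This is a generic illustration under assumed shapes and density, not OpenAI's training procedure; in the actual research the sparsity constraint is enforced during training rather than by pruning an already-trained dense matrix.

```python
import numpy as np

def sparsify(weight, density=0.001):
    """Zero everything except the largest-magnitude `density` fraction of weights."""
    k = max(1, int(weight.size * density))
    threshold = np.partition(np.abs(weight).ravel(), -k)[-k]   # k-th largest magnitude
    mask = np.abs(weight) >= threshold
    return weight * mask, mask

rng = np.random.default_rng(0)
dense = rng.normal(size=(4096, 4096)).astype(np.float32)   # assumed layer shape

sparse, mask = sparsify(dense)
print(f"non-zero fraction: {mask.mean():.4%}")   # ~0.1%
```

With only about 0.1% of connections surviving, each neuron reads from and writes to a handful of others, which is what makes the resulting "circuits" small enough to inspect by hand.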
OpenAI gets "open" again for a moment: new interpretability research released, with authors from Ilya's Superalignment team
量子位· 2025-11-15 02:08
Core Insights
- OpenAI has introduced a new method for training smaller models that enhances interpretability, making the internal mechanisms of models easier for humans to understand [5][6][7]
- The research focuses on creating sparse models with many neurons but fewer connections, simplifying neural networks for better comprehension [7][11]
Summary by Sections
Model Interpretability
- OpenAI's language models have complex structures that are not fully understood, and the new method aims to bridge this gap [6]
- The core idea is to train sparse models that maintain a high number of neurons while limiting their connections, making them simpler and more interpretable [7][11]
Research Methodology
- The researchers designed a series of simple algorithmic tasks to evaluate the model's interpretability, identifying the "circuit" for each task [13][18]
- A "circuit" is defined as the smallest computational unit that allows the model to perform a specific task, represented as a graph of nodes and edges [15][16]
Example of Circuit
- An example task involves predicting the correct closing quote for a string in Python, demonstrating how the model remembers the type of opening quote in order to complete the string [19][22]
Findings and Implications
- The research indicates that larger, sparser models can produce increasingly powerful functions while maintaining simpler circuits [26]
- This suggests potential for extending the method to understand more complex behaviors in models [27]
Current Limitations
- The study acknowledges that sparse models are significantly smaller than state-of-the-art models and still contain many "black box" elements [30]
- Training efficiency for sparse models is currently low, with two proposed solutions: extracting sparse circuits from existing dense models or developing more efficient training techniques [31][32]
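The quote-closing task in the "Example of Circuit" section is easy to reproduce as a toy dataset. The sketch below only generates examples of the behavior the circuit is said to implement (remember whether the string opened with ' or " and emit the matching closing quote); it is not code from the paper, and all names are made up for illustration.

```python
import random

def make_quote_example(rng):
    """Build a toy prompt whose correct next token is the matching closing quote."""
    quote = rng.choice(["'", '"'])
    body = "".join(rng.choice("abcdefgh ") for _ in range(rng.randint(3, 8)))
    prompt = f"x = {quote}{body}"
    return prompt, quote          # target: the model must recall the opening quote

rng = random.Random(0)
for _ in range(3):
    prompt, target = make_quote_example(rng)
    print(repr(prompt), "->", repr(target))
```

On this task, the circuit described in the research amounts to detecting the opening quote type, carrying it forward, and promoting the matching closing token at the end of the string.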
Counterintuitive: MoE mixture-of-experts models have little to do with scenarios
理想TOP2· 2025-08-28 16:01
Core Viewpoint
- The MoE (Mixture of Experts) model is fundamentally a sparse-activation mechanism aimed at improving computational efficiency, rather than a model in which each expert corresponds to a specific scenario [1][2].
Group 1: Scene Limitations
- Having multiple MoE sub-models does not mean each can only handle a specific scene; it is impractical to train separate models for each scenario under the one-model paradigm [1].
- If models were divided by scene, the result would not be a true MoE structure [1].
Group 2: Uniform Distribution
- If only one type of scenario is run, a significant portion of the model's parameters may remain unused, leading to inefficiency [2].
- It is more effective to distribute tasks evenly among experts rather than assigning specific experts to specific tasks, as low-usage experts may not justify their inclusion [2].
Group 3: Multiple Experts Activation
- The MoE model can activate multiple experts simultaneously, allowing a more even distribution of computational resources and effectively addressing more complex problems [2].
- The essence of the MoE model lies in the fact that only a small number of parameters significantly influence the output, making it a sparse model that enhances computational efficiency [2].
Group 4: Understanding the Model
- Describing different experts as being suited for specific scenarios is a simplification that aids understanding, but it does not reflect the actual design intent of the model [3].
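The argument that experts are selected by a learned gate rather than by "scene" is easiest to see in a router sketch. Below is a generic top-k softmax router, a minimal sketch rather than any specific production MoE; the shapes, k=2, and the tanh toy experts are assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_forward(tokens, router_w, expert_ws, k=2):
    """Route each token to its top-k experts and mix their outputs by gate weight."""
    gates = softmax(tokens @ router_w)             # (batch, num_experts)
    topk = np.argsort(-gates, axis=-1)[:, :k]      # per-token indices of the k highest gates
    out = np.zeros_like(tokens)
    for i, token in enumerate(tokens):
        for e in topk[i]:
            out[i] += gates[i, e] * np.tanh(token @ expert_ws[e])   # toy expert: one dense layer
    return out

rng = np.random.default_rng(0)
d, num_experts = 16, 8
tokens = rng.normal(size=(4, d))
router_w = rng.normal(size=(d, num_experts))
expert_ws = rng.normal(size=(num_experts, d, d))

print(moe_forward(tokens, router_w, expert_ws).shape)   # (4, 16)
```

Because the gate scores, not the input's "scenario", decide which experts fire, MoE training typically adds an auxiliary load-balancing loss so traffic spreads roughly evenly across experts, which is the uniform-distribution point made in Group 2.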
Jeff Dean: AI will replace junior engineers within a year; netizens: "Altman just paints pies in the sky, it's Jeff's words that are truly lethal"
AI前线· 2025-05-28 05:17
Core Insights
- Jeff Dean, a prominent figure in AI, predicts that within a year, AI systems capable of functioning like junior engineers will be available [1][15][16]
- The conversation highlights the transformative potential of AI in software development and the broader implications for the job market [4][10]
Group 1: AI Development and Trends
- AI has been evolving for over a decade, with significant advancements in neural networks and machine learning since 2012 [5][6]
- The mantra "larger models, more data, better results" has held true over the past 12 to 15 years, indicating a trend towards increasingly capable AI systems [6][8]
- The emergence of multi-modal AI, capable of processing various input formats, is seen as a crucial trend in the industry [6][8]
Group 2: AI Capabilities and Applications
- AI agents are expected to perform tasks traditionally requiring human intervention, with a clear path for enhancing their capabilities through reinforcement learning [7][8]
- The development of large models necessitates significant investment, leading to a market where only a few advanced models will survive [9][10]
- The potential for AI to revolutionize education and other fields is highlighted, with examples of AI generating educational content from video inputs [11][12]
Group 3: Hardware and Infrastructure
- Specialized hardware for machine learning is critical, with Google's TPU project being a significant development in this area [17][20]
- The future of computing infrastructure is expected to adapt to the demands of running large-scale neural networks efficiently [22][23]
- The distinction between training and inference workloads is emphasized, suggesting that different solutions may be required for each [23][24]
Group 4: Future of AI Models
- Sparse models, which utilize different parts of the model for specialized tasks, are viewed as a promising direction for future AI development [26][27]
- The concept of dynamic scaling in models, allowing for the addition of new parameters and efficient resource allocation, is proposed as a more organic approach to AI learning [27][28]
Jeff Dean: AI will replace junior engineers within a year; netizens: "Altman just paints pies in the sky, it's Jeff's words that are truly lethal"
Xin Lang Cai Jing· 2025-05-18 22:46
Group 1
- Jeff Dean predicts that within a year, AI systems capable of operating 24/7 with "junior engineer" abilities will be available [1][14][15]
- Dean emphasizes the significant advancements in AI, particularly in neural networks and their applications across various tasks since 2012 [4][6][7]
- The evolution of AI is marked by improvements in algorithms and hardware, leading to larger models and enhanced capabilities [6][22]
Group 2
- The industry is witnessing a potential transformation in the software development job market due to the rise of AI engineers who can outperform human engineers in certain tasks [4][8]
- Dean discusses the importance of specialized hardware for machine learning, highlighting Google's TPU project and the need for efficient computation [16][19]
- The future of AI models may involve sparse models that utilize different parts of the model for specialized tasks, enhancing efficiency significantly [24][25]