Diffusion Language Models
"Taming" masked diffusion language models with more consistent trajectories and fewer decoding steps: large gains in reasoning performance and efficiency for diffusion language models
机器之心· 2025-11-05 04:15
Diffusion large language models have advanced rapidly. As early as February 2025, Inception Labs released Mercury, the first commercial-grade diffusion large language model; around the same time, Renmin University of China released LLaDA, the first open-source 8B diffusion large language model, and Gemini Diffusion followed in May. These developments suggest that diffusion large language models are strong contenders to become the foundational paradigm of the next generation of large language models. However, decoding strategies and reinforcement learning algorithms for diffusion large language models remain under-explored. Recently, a joint research team from Fudan University, Shanghai AI Laboratory, and Shanghai Jiao Tong University published the paper "Taming Masked Diffusion Language Models via Consistency Trajectory Reinforcement Learning with Fewer Decoding Step". They propose a combination of an efficient decoding strategy and reinforcement-learning training for masked diffusion large language models (Masked Diffusion Large Language Model, MDLM), significantly improving their reasoning performance and efficiency and opening a new path for the development of diffusion large language models. Code repository: https://github.com/ ...
From masked generation to "remasking" training: RemeDi teaches diffusion language models to self-correct and reflect
机器之心· 2025-10-16 02:20
Diffusion language models have recently attracted wide attention, offering a text-generation approach that differs from autoregressive models. To let a model continuously revise and refine intermediate results during generation, Professor Guojun Qi's team at the MAPLE Lab of Westlake University trained a diffusion language model with remasking capability (Remasking-enabled Diffusion Language Model, RemeDi 9B). Across the multi-step diffusion denoising process, through remasking SFT and RL training, the model outputs an unmasking confidence for each token; RemeDi can identify uncertain positions among the content already generated in the sequence and remask them, correcting erroneous content and improving text quality, surpassing existing diffusion language models across the board. The model also supports variable-length generation, breaking the fixed-length limitation of existing medium- and large-scale diffusion language models and making the model's capabilities more flexible.

Background: Diffusion language models have become a strong alternative to autoregressive language models. This family of methods first defines a forward process that gradually corrupts text into noise, then trains the model to learn the reverse process of recovering clean text from the noise. Within this family, masked diffusion language models are currently the mainstream approach. This scheme requires the model to learn, during training, to recover the masked ...
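To make the remasking idea concrete, here is a minimal sketch of one denoise-then-remask step driven by per-token unmasking confidence. It assumes a hypothetical `model` that returns token logits together with a per-position confidence score, and an illustrative `MASK_ID`; it is a sketch of the general mechanism, not RemeDi's actual implementation or API.

```python
import torch

MASK_ID = 0  # hypothetical id of the [MASK] token

def remask_step(model, tokens, remask_fraction=0.1):
    """One denoise-then-remask iteration over a batch of token sequences."""
    logits, confidence = model(tokens)            # confidence: (batch, seq_len)
    filled = logits.argmax(dim=-1)                # greedy fill for every position
    tokens = torch.where(tokens == MASK_ID, filled, tokens)  # commit masked slots

    # Re-mask the least confident positions so later steps can revise them.
    k = max(1, int(remask_fraction * tokens.size(1)))
    low_conf = confidence.topk(k, dim=-1, largest=False).indices
    return tokens.scatter(1, low_conf, MASK_ID)
```

Repeating such steps is what allows low-confidence tokens to be revisited in later denoising iterations, which is the self-correction behavior described above.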
10x faster inference: Ant Group open-sources dInfer, the industry's first high-performance inference framework for diffusion language models
机器之心· 2025-10-13 09:24
Core Insights
- Ant Group has launched dInfer, the industry's first high-performance inference framework for diffusion large language models (dLLM), achieving over 10 times the inference speed compared to Fast-dLLM [2][29]
- dInfer has set a new milestone in performance, reaching a throughput of 1011 tokens per second in single-batch inference scenarios, surpassing highly optimized autoregressive (AR) models [29]

Group 1: dInfer Framework
- dInfer is designed to support various dLLM architectures, including LLaDA, LLaDA-MoE, and LLaDA-MoE-TD, emphasizing modularity and scalability [9][20]
- The framework integrates four core modules: Model, KV Cache Manager, Iteration Manager, and Decoder, allowing developers to customize and optimize strategies [11][13]
- dInfer addresses three core challenges in dLLM inference: high computational costs, KV cache invalidation, and the complexities of parallel decoding [12][19]

Group 2: Performance Enhancements
- dInfer employs a "Vicinity KV-Cache Refresh" strategy to reduce computational costs while maintaining generation quality by selectively recalculating KV caches [15][17]
- The framework optimizes the forward computation speed of dLLM to match that of AR models through various system enhancements [18]
- It introduces hierarchical and credit decoding algorithms to maximize the number of tokens decoded in parallel without additional training (a generic confidence-threshold decoding sketch follows this summary) [19][20]

Group 3: Performance Metrics
- In tests with 8 NVIDIA H800 GPUs, dInfer achieved an average inference speed of 681 tokens per second, which is 10.7 times faster than Fast-dLLM [29]
- When combined with trajectory distillation technology, dInfer's average inference speed rose to 847 tokens per second, exceeding the performance of AR models by over 3 times [24][29]
- dInfer's performance in code generation tasks has set a record, demonstrating significant speed advantages in latency-sensitive scenarios [29]

Group 4: Open Source and Community Engagement
- The release of dInfer marks a significant step in the practical efficiency of diffusion language models, inviting global developers and researchers to collaborate in building a more efficient and open AI ecosystem [28][25]
- The complete code, technical reports, and experimental configurations for dInfer v0.1 have been made open source [27][28]
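As a rough illustration of confidence-threshold parallel decoding, the kind of job dInfer's decoder module performs, the sketch below commits every masked position whose top-1 probability clears a threshold in each iteration. The `model` callable, `MASK_ID`, and the threshold value are assumptions for illustration only; dInfer's actual hierarchical and credit decoding algorithms, and its vicinity KV-cache refresh, are more elaborate.

```python
import torch

MASK_ID = 0  # hypothetical [MASK] token id

def parallel_decode(model, tokens, threshold=0.9, max_iters=32):
    """Unmask every position whose top-1 probability clears the threshold."""
    for _ in range(max_iters):
        masked = tokens == MASK_ID
        if not masked.any():
            break
        probs = model(tokens).softmax(dim=-1)     # logits (batch, seq, vocab) -> probs
        conf, pred = probs.max(dim=-1)            # per-position confidence and choice
        accept = masked & (conf >= threshold)
        if not accept.any():                      # commit at least one token per step
            best = torch.where(masked, conf, torch.full_like(conf, -1.0)).argmax(dim=-1)
            accept = torch.zeros_like(masked)
            accept[torch.arange(tokens.size(0)), best] = True
        tokens = torch.where(accept, pred, tokens)
    return tokens
```

A cache-aware variant would additionally recompute KV entries only in the vicinity of newly committed tokens, which is the intuition behind the vicinity refresh strategy described above.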
Inference performance up 10x: Ant Group open-sources dInfer, a high-performance inference framework for diffusion language models
Huan Qiu Wang· 2025-10-13 09:03
Core Insights
- Ant Group has officially announced the open-source release of dInfer, the industry's first high-performance inference framework for diffusion language models [1][5]
- dInfer demonstrates a significant improvement in inference speed, achieving a 10.7 times increase compared to NVIDIA's Fast-dLLM framework, and reaching a speed of 1011 tokens per second in the HumanEval code generation task [1][4]
- The framework addresses key challenges in diffusion language model inference, including high computational costs, KV cache failures, and parallel decoding [1][2]

Summary by Sections
- **Performance Metrics**
  - dInfer achieves an average inference speed of 681 tokens per second, compared to 63.6 tokens per second for Fast-dLLM, marking a 10.7 times improvement [4]
  - When compared to the AR model Qwen2.5-3B, dInfer's average inference speed is 2.5 times faster, at 681 tokens per second versus 277 tokens per second [5]
- **Technical Architecture**
  - dInfer is designed with a modular architecture that includes four core components: Model, KV-Cache Manager, Iteration Manager, and Decoder, allowing developers to customize and optimize their configurations (a toy configuration sketch follows this summary) [2]
  - Each module integrates targeted solutions to overcome the three main challenges faced by diffusion language models [2]
- **Industry Impact**
  - The launch of dInfer signifies a critical step in transitioning diffusion language models from theoretical feasibility to practical efficiency, connecting cutting-edge research with industrial applications [5]
  - Ant Group invites global developers and researchers to explore the potential of diffusion language models, aiming to build a more efficient and open AI ecosystem [5]
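One way to picture the modular design is a single configuration object whose fields map onto the four components. The sketch below is a toy illustration with entirely hypothetical class and field names, not dInfer's real configuration interface.

```python
from dataclasses import dataclass

@dataclass
class DLLMInferenceConfig:
    model_name: str = "LLaDA-MoE"     # Model: which dLLM backbone to serve
    kv_refresh_window: int = 16       # KV-Cache Manager: vicinity refresh width
    max_denoise_steps: int = 64       # Iteration Manager: cap on diffusion steps
    parallel_threshold: float = 0.9   # Decoder: confidence bar for parallel commits

cfg = DLLMInferenceConfig()
print(cfg)  # DLLMInferenceConfig(model_name='LLaDA-MoE', kv_refresh_window=16, ...)
```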
Surpassing autoregressive models for the first time! Ant Group open-sources dInfer, the industry's first high-performance inference framework for diffusion language models
Xin Lang Ke Ji· 2025-10-13 09:00
Core Insights
- Ant Group has officially open-sourced the industry's first high-performance diffusion language model inference framework, dInfer, which significantly enhances the efficiency of diffusion language models [1][2]

Performance Metrics
- dInfer achieves a 10.7 times improvement in inference speed compared to NVIDIA's Fast-dLLM framework, with average tokens per second (TPS) increasing from 63.6 to 681 [1]
- In the HumanEval code generation task, dInfer reaches a speed of 1011 tokens per second in single-batch inference, surpassing autoregressive models for the first time in the open-source community [1]
- When compared to the vLLM framework running the Qwen2.5-3B model, dInfer's average inference speed is 2.5 times faster, with 681 TPS versus 277 TPS [1]

Industry Impact
- The launch of dInfer marks a critical step in transitioning diffusion language models from theoretical feasibility to practical efficiency, connecting cutting-edge research with industrial application [2]
- Ant Group invites global developers and researchers to explore the vast potential of diffusion language models, aiming to build a more efficient and open AI ecosystem [2]
Diffusion language models now have an MoE version! Ant Group and Renmin University train LLaDA-MoE from scratch, full open-source release coming soon
机器之心· 2025-09-12 11:31
Core Viewpoint
- The article discusses the development of the LLaDA-MoE model, the first native MoE architecture diffusion language model trained from scratch, which demonstrates significant performance and efficiency advantages over traditional autoregressive models [2][15][18].

Group 1: Model Development and Performance
- The LLaDA-MoE model was trained on 20 terabytes of data and features 1.4 billion active parameters, achieving performance comparable to denser autoregressive models like Qwen2.5-3B while maintaining faster inference speeds (a minimal sparse-routing sketch follows this summary) [15][17][29].
- The LLaDA series has rapidly evolved, with LLaDA-MoE being a notable milestone, surpassing previous models like LLaDA1.0/1.5 and Dream-7B in various benchmark tests [13][18][29].
- The model's architecture allows for significant scaling potential, with plans to explore higher sparsity ratios and larger MoE diffusion language models [29][40].

Group 2: Technical Innovations and Advantages
- The diffusion model approach allows for parallel decoding, bidirectional modeling, and iterative correction, addressing limitations of autoregressive models such as serial bottlenecks and lack of error correction capabilities [38][40].
- Evidence suggests that diffusion language models can achieve better learning outcomes than autoregressive models, particularly in scenarios with limited data, demonstrating a data utilization efficiency that can exceed three times that of autoregressive models [40][41].
- The training framework and infrastructure developed by Ant Group, including the ATorch framework, supports the efficient training of large-scale MoE models [25][26].

Group 3: Strategic Vision and Future Directions
- The development of LLaDA-MoE reflects a strategic choice to explore high-potential areas in AI, moving beyond established paths to enhance the limits of intelligence [44][47].
- Ant Group's commitment to innovation is evident in its previous projects and ongoing research in areas like dynamic MoE architectures and hybrid linear architectures, all aimed at achieving general artificial intelligence (AGI) [45][46][47].
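The "1.4 billion active parameters" figure comes from sparse expert routing: each token is dispatched to only a few experts out of many, so only a fraction of the total parameters run per token. The sketch below shows generic top-k routing with toy sizes; the dimensions, expert count, and k are illustrative assumptions, not LLaDA-MoE's actual configuration.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """A toy mixture-of-experts layer with top-k sparse routing."""
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])

    def forward(self, x):                        # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)   # routing probabilities per token
        weight, idx = gates.topk(self.k, dim=-1) # keep only the k best experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # only k experts run per token
            for e, expert in enumerate(self.experts):
                sel = idx[:, slot] == e
                if sel.any():
                    out[sel] += weight[sel, slot].unsqueeze(-1) * expert(x[sel])
        return out

print(TopKMoE()(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```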
Ant Group and Renmin University jointly release an MoE diffusion model
Hua Er Jie Jian Wen· 2025-09-12 06:02
Core Insights
- Ant Group and Renmin University of China jointly released the industry's first native MoE architecture diffusion language model "LLaDA-MoE" at the 2025 Bund Conference, marking a significant advancement towards AGI [1][2]
- The LLaDA-MoE model was trained on approximately 20 terabytes of data, demonstrating scalability and stability in industrial-grade large-scale training, outperforming previous models like LLaDA1.0/1.5 and Dream-7B, while maintaining several times the inference speed advantage [1][2]
- The model achieved language intelligence comparable to Qwen2.5, challenging the prevailing notion that language models must be autoregressive, and required activating only 1.4 billion parameters to match the performance of a 3B dense model [1][2]

Model Performance and Features
- LLaDA-MoE demonstrated an average performance improvement of 8.4% across 17 benchmarks, surpassing LLaDA-1.5 by 13.2% and equaling Qwen2.5-3B-Instruct [3]
- The model's development involved a three-month effort to rewrite training code based on LLaDA-1.0, utilizing Ant Group's self-developed distributed framework ATorch for parallel acceleration [2][3]
- The model's architecture, based on a 7B-A1B MoE structure, successfully addressed core challenges such as load balancing and noise sampling drift during training (a generic load-balancing loss sketch follows this summary) [2]

Future Developments
- Ant Group plans to open-source the model weights and a self-developed inference engine optimized for dLLM parallel characteristics, which has shown significant acceleration compared to NVIDIA's official fast-dLLM [3]
- The company aims to continue investing in the AGI field based on dLLM, collaborating with academia and the global AI community to drive new breakthroughs [3]
- The statement emphasizes that autoregressive models are not the endpoint, and diffusion models can also serve as a main pathway towards AGI [3]
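Load balancing, one of the training challenges mentioned above, is commonly handled in MoE training with an auxiliary loss that pushes the router to spread tokens evenly over experts. The sketch below shows the standard Switch-Transformer-style formulation as a generic illustration; it is not necessarily the exact loss used for LLaDA-MoE.

```python
import torch

def load_balance_loss(router_probs, expert_idx, n_experts):
    """Encourage tokens to spread evenly over experts.

    router_probs: (tokens, n_experts) softmax routing probabilities
    expert_idx:   (tokens,) index of the expert each token was dispatched to
    """
    # Fraction of tokens dispatched to each expert.
    dispatch = torch.bincount(expert_idx, minlength=n_experts).float()
    dispatch = dispatch / expert_idx.numel()
    # Mean routing probability assigned to each expert.
    importance = router_probs.mean(dim=0)
    # Minimized when both distributions are uniform over experts.
    return n_experts * (dispatch * importance).sum()

probs = torch.rand(16, 4).softmax(dim=-1)
print(load_balance_loss(probs, probs.argmax(dim=-1), n_experts=4))
```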
Ant Group and Renmin University of China release the industry's first native MoE diffusion language model
第一财经· 2025-09-12 03:08
At the Bund Conference, Ant Group and Renmin University of China unveiled LLaDA-MoE, a jointly developed diffusion language model (dLLM) with a native MoE architecture. The MoE-architecture diffusion language model was trained from scratch on roughly 20T of data, validating the scalability and stability of industrial-grade large-scale training. The model will be fully open-sourced in the near future. (Yicai reporter Chen Yangyuan) ...
Alibaba releases the strongest language model challenger: can diffusion models disrupt ChatGPT?
Sou Hu Cai Jing· 2025-08-20 02:41
Core Insights
- The research on diffusion language models represents a potential paradigm shift in AI dialogue systems, moving away from traditional autoregressive methods to a more parallel and efficient approach [2][8].
- Diffusion language models can generate text in a manner akin to an artist painting, allowing for simultaneous processing of multiple words, which significantly enhances speed and contextual understanding [3][4].

Development and Mechanism
- The evolution of diffusion language models began with the D3PM model in 2021, transitioning from continuous to discrete spaces, ultimately leading to models like DiffusionBERT and the LLaDA series that operate directly in the text space [3][4].
- The training strategy for diffusion models resembles a fill-in-the-blank game, enhancing the model's ability to understand bidirectional relationships between words (a toy training-step sketch follows this summary) [5].

Performance and Comparison
- Recent findings indicate that diffusion language models, such as LLaDA-8B, can perform comparably to or even exceed traditional autoregressive models like LLaMA3-8B in various benchmarks, suggesting no compromise between speed and quality [4][5].
- The unique inference optimization of diffusion models allows for iterative adjustments during text generation, improving overall output quality [5][6].

Applications and Challenges
- Diffusion language models have shown promising results in applications like code generation, mathematical reasoning, and document summarization, particularly in tasks requiring global planning [6][7].
- Challenges include the "curse of parallel generation," where dependencies between generated words may not be adequately considered, and the need for infrastructure support tailored to diffusion models [6][7].

Future Directions
- Future development of diffusion language models will focus on improving training efficiency, enhancing long-text generation capabilities, and refining inference algorithms to close the gap with traditional models [7].
- Companies are beginning to commercialize diffusion language models, with models like Mercury claiming to generate thousands of words per second, indicating significant potential for real-time applications [7][8].
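The fill-in-the-blank training described above can be summarized in a few lines: sample a mask ratio, corrupt the sequence, and train the model to recover only the masked positions. The toy denoiser, vocabulary size, and clamped mask ratio below are assumptions for illustration, not any specific model's recipe.

```python
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ = 100, 0, 16
model = nn.Sequential(nn.Embedding(VOCAB, 32), nn.Linear(32, VOCAB))  # toy denoiser

def diffusion_lm_step(tokens):
    # Sample a mask ratio; the floor keeps this toy example from drawing an empty mask.
    ratio = torch.rand(()).clamp(min=0.2)
    mask = torch.rand(tokens.shape) < ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)     # forward (noising) process
    logits = model(corrupted)                         # predict the clean tokens
    # Loss only on masked positions: learn the reverse (denoising) process.
    return nn.functional.cross_entropy(logits[mask], tokens[mask])

tokens = torch.randint(1, VOCAB, (4, SEQ))
print(diffusion_lm_step(tokens))
```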
What Meta didn't do, NVIDIA did: a brand-new architecture with 6x throughput, trained on 20 trillion tokens
36Ke· 2025-08-19 02:33
Core Insights
- NVIDIA has launched a new 9B model, the NVIDIA Nemotron Nano 2, utilizing a Mamba-Transformer hybrid architecture that achieves up to 6 times higher inference throughput compared to the industry benchmark Qwen3-8B, while maintaining or exceeding performance in complex reasoning tasks [1][23].

Group 1: Model Architecture and Performance
- The Nemotron Nano 2 model is based on the Mamba-2 architecture, which replaces most self-attention layers in traditional Transformer architectures, resulting in significant speed improvements during complex reasoning tasks (a toy hybrid-stack sketch follows this summary) [10][15].
- The model demonstrates competitive accuracy in various benchmarks, including mathematics, code generation, and general reasoning, performing on par with or better than similar open-source models like Qwen3-8B and Gemma3-12B [23][24].
- In specific benchmarks, the model achieved notable scores, such as 97.8% on MATH500 and 72.1% on AIME25, showcasing its capabilities in mathematical reasoning and general knowledge [24].

Group 2: Training and Data Utilization
- The training process for the Nemotron Nano 2 involved a massive dataset of 20 trillion tokens, utilizing FP8 training techniques to create a foundational model with 12 billion parameters, which was later distilled to 9 billion parameters [17][22].
- The model's training included high-quality data from various sources, focusing on mathematics, code, and multilingual question answering, ensuring a robust pre-training dataset [18][25].
- NVIDIA has also released a comprehensive pre-training dataset, Nemotron-Pre-Training-Dataset-v1, which includes 6.6 trillion tokens from diverse domains, further strengthening the model's training foundation [25][27].

Group 3: Open Source Commitment
- NVIDIA has committed to open-sourcing the Nemotron models on the HuggingFace platform, providing access to the 9B model, its base version, and the larger 12B model, along with the associated datasets [25][30].
- This move reflects NVIDIA's ongoing efforts to contribute to the open-source community, contrasting with other companies that are shifting towards more closed-source strategies [27].
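To visualize the hybrid layout, the sketch below interleaves a cheap linear-time sequence mixer with occasional self-attention blocks. A GRU stands in for the Mamba-2 layers purely for illustration, and the layer counts, sizes, and attention ratio are assumptions, not Nemotron Nano 2's actual architecture.

```python
import torch
import torch.nn as nn

class HybridStack(nn.Module):
    """Mostly recurrent/state-space-style mixing, with sparse attention layers."""
    def __init__(self, dim=64, n_layers=12, attn_every=6):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            if (i + 1) % attn_every == 0:    # keep self-attention only occasionally
                self.layers.append(nn.MultiheadAttention(dim, 4, batch_first=True))
            else:                            # linear-time mixer everywhere else
                self.layers.append(nn.GRU(dim, dim, batch_first=True))

    def forward(self, x):                    # x: (batch, seq, dim)
        for layer in self.layers:
            if isinstance(layer, nn.MultiheadAttention):
                y, _ = layer(x, x, x)
            else:
                y, _ = layer(x)
            x = x + y                        # residual connection around each mixer
        return x

print(HybridStack()(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```

Because most layers avoid the quadratic cost of attention, throughput scales much better with sequence length, which is the motivation the article attributes to the hybrid design.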