Autoregressive Models

Chinese Team Ends the Token Crisis: Diffusion Models Show Over Three Times the Data Potential of Autoregressive Models
量子位· 2025-08-13 09:13
Core Viewpoint
- The article discusses the potential of diffusion language models (DLMs) for learning from data, highlighting their ability to outperform autoregressive models in data utilization and learning efficiency [1][4].

Group 1: Diffusion Language Models
- When the token budget is limited, diffusion language models can extract over three times the data potential of autoregressive models [1].
- A 1-billion-parameter diffusion model trained for 480 epochs on 1 billion tokens reached 56% accuracy on HellaSwag and 33% on MMLU, without any data filtering or training tricks [5].
- Performance showed no sign of saturation even under this extreme repetition, indicating that the model can keep extracting useful information from the same data [4].

Group 2: Learning Mechanisms
- The strong data-learning capability of diffusion language models is attributed to two factors: the diffusion objective and bidirectional attention, which let the model exploit the data beyond left-to-right causal relationships [8][9] (a minimal sketch contrasting the two objectives follows this summary).
- Diffusion models invest more compute (FLOPs) in both training and inference, improving performance through iterative refinement [11].
- Whereas autoregressive models prioritize computational efficiency, diffusion models trade compute for data potential, which leads to better learning outcomes [14].

Group 3: Overfitting and Data Utilization
- The research team observed that the number of training epochs before overfitting sets in is positively correlated with the amount of unique data and negatively correlated with model size [18].
- Even after overfitting begins, performance on downstream tasks may continue to improve, suggesting that a rising absolute loss does not necessarily imply worse relative performance [19][21] (a toy numeric example appears after the sketch below).
- Overconfidence on certain text segments after repeated exposure to a limited training set may explain this trend [26][27].

Group 4: Future Research Directions
- The team plans to use larger models and more unique data in future studies to further validate these findings and hypotheses [28].
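The contrast in Group 2 between the diffusion objective with bidirectional attention and the next-token objective with causal attention can be made concrete in a few lines of PyTorch. The sketch below is illustrative only: the toy dimensions, the reserved `MASK_ID`, the 50% corruption ratio, and the single shared encoder layer are all assumptions, not the architecture or training recipe reported in the article.

```python
# Minimal sketch contrasting the autoregressive and masked-diffusion
# training objectives. Toy sizes and MASK_ID are illustrative assumptions.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 16
embed = nn.Embedding(vocab_size, d_model)              # shared toy backbone
encoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, vocab_size)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (2, seq_len))    # toy batch

# Autoregressive objective: causal mask, predict the next token from the left.
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
h = encoder(embed(tokens), src_mask=causal_mask)
ar_loss = loss_fn(head(h[:, :-1]).flatten(0, 1), tokens[:, 1:].flatten())

# Masked-diffusion objective: randomly corrupt positions, attend
# bidirectionally (no mask), and reconstruct only the corrupted tokens.
MASK_ID = 0                                            # assumed reserved token
mask = torch.rand(tokens.shape) < 0.5                  # assumed corruption ratio
corrupted = tokens.masked_fill(mask, MASK_ID)
h = encoder(embed(corrupted))                          # full bidirectional attention
dlm_loss = loss_fn(head(h)[mask], tokens[mask])
```

The structural difference is the attention mask: the autoregressive loss conditions each prediction only on earlier tokens, while the masked-diffusion loss lets every position attend to the full, partially corrupted sequence, which is the "beyond causal relationships" point made in Group 2.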
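Group 3's distinction between absolute loss and relative performance can also be illustrated with toy numbers: multiple-choice benchmarks such as HellaSwag and MMLU score the model by whether the correct option receives the lowest loss among the candidates, so validation loss can rise while the ranking, and hence accuracy, improves. The values below are invented for illustration and are not from the paper.

```python
# Toy per-option negative log-likelihoods at two checkpoints (lower = preferred).
# Invented numbers, purely illustrative.
early = {"correct": 2.2, "distractor": 2.1}  # low absolute loss, wrong ranking
late  = {"correct": 3.0, "distractor": 3.8}  # higher absolute loss, right ranking

for name, nll in (("early", early), ("late", late)):
    picked = min(nll, key=nll.get)           # the option the model prefers
    margin = nll["distractor"] - nll["correct"]
    print(f"{name}: loss(correct)={nll['correct']:.1f}, "
          f"margin={margin:+.1f}, picks {picked}")
# "early" picks the distractor despite lower absolute loss; "late" picks the
# correct option even though every loss went up: overfitting in loss,
# improvement in ranking.
```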
Are Diffusion Language Models Really Better Than Autoregressive Ones? Theoretical Analysis May Suggest the Opposite
机器之心· 2025-06-10 08:41
This work comes from Di He's group at the School of Intelligence Science and Technology, Peking University, together with Wei Wu's team at Ant Group. Di He has received multiple honors in machine learning, including the ICLR 2023 Outstanding Paper Award and a nomination for the ICLR 2024 Outstanding Paper Award.

In recent years, diffusion models have achieved remarkable success in image generation, producing images of striking quality and diversity. This naturally raises a question: can this powerful generative paradigm be carried over to text, challenging or even replacing today's dominant autoregressive language models? With their potential to generate multiple tokens in parallel, diffusion language models (Diffusion Language Models) seem to herald an efficiency revolution in text generation (a decoding sketch follows the author table below). But is the outlook really that rosy? New research from Peking University and Ant Group shows that the answer is far from a simple "yes" or "no"; in certain key scenarios, the conclusion may be exactly the opposite.

| Guhao Feng* | Yihan Geng* | Jian Guan | Wei Wu | Liwei Wang |
| --- | --- | --- | --- | --- |
| Peking University | Peking University | Ant Group | Ant Group | Peking University |

Paper title ...
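As a rough illustration of the parallel-generation potential mentioned above, a masked-diffusion decoder can start from a fully masked sequence and commit several tokens per denoising step, rather than one token per forward pass as in autoregressive decoding. The model interface, the confidence-based unmasking rule, and the step schedule below are assumptions made for the sketch, not the algorithm analyzed in the paper.

```python
# Sketch of parallel iterative decoding for a masked-diffusion LM.
# `model` is any callable returning per-position logits; illustrative only.
import torch

def diffusion_decode(model, seq_len, mask_id, num_steps=4):
    """Iteratively unmask a sequence, committing the most confident
    predictions in parallel at each denoising step."""
    seq = torch.full((1, seq_len), mask_id)          # fully masked start
    for step in range(num_steps):
        logits = model(seq)                          # (1, seq_len, vocab)
        conf, preds = logits.softmax(-1).max(-1)     # per-position confidence
        still_masked = seq == mask_id
        conf = conf.masked_fill(~still_masked, -1.0) # rank only masked slots
        # Unmask an equal share of the remaining positions at each step.
        k = max(1, int(still_masked.sum().item() // (num_steps - step)))
        top = conf.topk(k, dim=-1).indices[0]
        seq[0, top] = preds[0, top]
        if not (seq == mask_id).any():               # everything committed
            break
    return seq
```

Autoregressive decoding of the same sequence would take `seq_len` forward passes, while this loop takes at most `num_steps`; that efficiency argument is precisely the prospect the paper's theoretical analysis then calls into question.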