Attention Mechanism
The Paper Overshadowed by the Transformer: A Meta Scientist Looks Back at an Innovative Work from Ten Years Ago
机器之心· 2025-05-01 02:11
Core Viewpoint
- The article discusses the significance of the "End-To-End Memory Networks" paper, highlighting its foundational contributions to the development of large language models (LLMs) and how it was overshadowed by the more popular "Attention Is All You Need" paper [3][8][25].

Group 1: Historical Context and Contributions
- The "End-To-End Memory Networks" paper, published in 2015, introduced key concepts that are now integral to LLMs, such as multi-layer soft attention and position embeddings [8][22].
- The paper refined the earlier "Memory Networks" paper from 2014, which used a hard attention mechanism [9][16].
- Despite its innovations, "End-To-End Memory Networks" received far less attention, with just over 3,000 citations compared with the roughly 170,000 citations of "Attention Is All You Need" [3][9].

Group 2: Technical Innovations
- The model proposed in "End-To-End Memory Networks" was the first to completely replace recurrent neural networks (RNNs) with attention mechanisms, enabling multi-hop reasoning over stored memories [8][13].
- By replacing hard attention with soft attention trained end-to-end via backpropagation, the network learned to focus on relevant information without predefined supervision labels, a novel approach at the time [18][22].
- The introduction of position embeddings addressed the order invariance of attention mechanisms, a critical advancement for LLMs [22][25].

Group 3: Current Relevance and Future Directions
- The article emphasizes that even after ten years there is still significant work to be done on LLM architectures, as evidenced by the recent release of the "Multi-Token Attention" paper, which enhances attention mechanisms for better handling of long contexts [26][27].
- Ongoing research aims to address memory scaling, a challenge already identified as a future direction in the original "Memory Networks" paper [26][27].
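The soft attention described above is a differentiable weighted read over memory slots: match scores between a query and each memory are pushed through a softmax, and the result is a weighted average that can be trained by backpropagation. The following is a minimal NumPy sketch of that idea (the function name and the toy memory are illustrative, not from either paper):

```python
import numpy as np

def soft_attention(query, memory):
    """Differentiable soft-attention read over memory slots.

    query:  shape (d,)   -- the current question/state vector
    memory: shape (n, d) -- n stored memory vectors
    Returns the weighted memory read and the attention weights.
    """
    scores = memory @ query                  # (n,) dot-product match scores
    exp = np.exp(scores - scores.max())      # stabilized softmax
    w = exp / exp.sum()                      # attention distribution over slots
    return w @ memory, w                     # soft (weighted-average) read

# Toy example: three orthogonal memory slots, query closest to slot 1.
mem = np.eye(3)
q = np.array([0.1, 2.0, 0.1])
read, w = soft_attention(q, mem)
```

Because every step is a smooth function, gradients flow from the read vector back into the scores, which is what lets the model learn where to attend without labels marking the relevant memory. Note that the weighted average is permutation-invariant in the slots, which is exactly the order-invariance problem that position embeddings were introduced to fix.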
A Brief History of Attention in Large Models: Discussing the Latest Improvements from DeepSeek and Kimi with Two AI Researchers
晚点LatePost· 2025-03-02 06:10
Guests: Xiao Chaojun, Fu Tianyu | Edited by Cheng Manqi

Last week, DeepSeek and Kimi each released new results on large-model architecture improvement and optimization: NSA and MoBA, respectively. Both focus on improving the "attention mechanism" in large models. The emergence of reasoning models such as o1 and R1 has also posed new long-context challenges.

The attention mechanism is the core mechanism of today's large language models (LLMs). The June 2017 paper by the eight Transformer authors that launched the LLM revolution was titled simply: "Attention Is All You Need".

Optimizing the computational efficiency and effectiveness of attention in turn helps address a problem that both AI academia and industry care deeply about: long context.

Whether we want to feed a model an entire book at once so it can summarize and understand it, generate the long chains of thought that models like o1 and R1 require, or give models ever-longer "memory" in the future, all of this depends on long-context capability.

For this episode we invited two AI researchers who have worked on improving attention mechanisms.

One is Xiao Chaojun, a PhD student in the Natural Language Processing lab of Tsinghua University's Department of Computer Science and first author of the InfLLM attention-mechanism improvement; his advisor is an associate professor in the department ...