Which Attention is All You Need?
机器之心·2025-11-09 01:30

Core Insights

- The article discusses ongoing innovation and open challenges in the Attention mechanism for AI and robotics, highlighting the need for breakthroughs in algorithm design to tame computational complexity and improve performance [5][7].

Group 1: Attention Mechanism Innovations

- The industry is focused on optimizing the Attention mechanism because the O(N^2) computational complexity of standard self-attention is a fundamental obstacle to efficient long-sequence modeling (sketched below) [9].
- Two main paths for improving Attention have emerged: Linear Attention, which aims to reduce complexity to O(N), and Sparse Attention, which restricts computation to a subset of important tokens [10][13].
- Kimi Linear, a recent development, shows significant improvements over traditional full attention, cutting KV-cache requirements by up to 75% and processing contexts of up to 1 million tokens as much as six times faster than full attention [11][12].

Group 2: Linear Attention Approaches

- Linear Attention can be grouped into three main lines of work: kernelized methods, forgetting (gating) mechanisms, and in-context learning, each aiming to streamline the attention computation while preserving performance (the kernelized idea is sketched below) [10][11].
- The Kimi Linear architecture incorporates a channel-wise gating mechanism that manages the RNN's fixed-size memory more efficiently and delivers strong performance across a range of scenarios (see the gating sketch below) [12].
- Kimi Linear's design also uses a hierarchical hybrid architecture that interleaves linear-attention and full-attention layers, improving both efficiency and effectiveness [12].

Group 3: Sparse Attention Strategies

- Sparse Attention pre-selects a subset of important tokens for the attention computation, using methods such as fixed patterns, block-sparse layouts, and clustering [13][14].
- DeepSeek's NSA and DSA represent significant advances in Sparse Attention; DSA adopts a token-wise sparse strategy that sharply reduces attention cost while maintaining performance (a top-k selection sketch appears below) [16][17].
- In tests, DSA reduces attention complexity from O(L^2) to O(Lk), cutting costs by 60%-70% in both the prefill and decoding phases [17].
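For reference, a minimal NumPy sketch of standard scaled dot-product self-attention. The (N, N) score matrix it materializes is the source of the O(N^2) cost discussed above; the shapes and names are illustrative only and are not taken from any of the systems mentioned.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    """Standard self-attention: materializes an (N, N) score matrix,
    hence O(N^2) time and memory in sequence length N."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (N, N) -- the quadratic term
    return softmax(scores, axis=-1) @ V    # (N, d)

N, d = 1024, 64
Q = np.random.randn(N, d); K = np.random.randn(N, d); V = np.random.randn(N, d)
out = full_attention(Q, K, V)              # out.shape == (1024, 64)
```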
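The kernelized branch of Linear Attention replaces softmax with a feature map φ so the matrix products can be reassociated and the (N, N) matrix never forms. The sketch below uses elu(x)+1 as φ, a common choice in the literature but an assumption here, not the specific map used by any system named above.

```python
import numpy as np

def phi(x):
    """Positive feature map; elu(x) + 1 is one common choice in
    kernelized linear attention papers (an assumption here)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Non-causal kernelized attention: reassociating to
    phi(Q) @ (phi(K).T @ V) costs O(N * d^2) instead of O(N^2 * d)."""
    Qf, Kf = phi(Q), phi(K)                      # (N, d)
    kv = Kf.T @ V                                # (d, d) summary of all keys/values
    z = Kf.sum(axis=0)                           # (d,) normalizer
    return (Qf @ kv) / (Qf @ z + eps)[:, None]   # (N, d)
```

Because the (d, d) summary `kv` is fixed-size, cost grows linearly in N rather than quadratically, which is the core appeal of this family.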
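The article only names Kimi Linear's channel-wise gating at a high level; the sketch below is a generic per-channel-gated recurrent memory update in the same spirit (the gate parameter Wg and the sigmoid form are hypothetical, not the published formulation), showing how a per-dimension forget gate decides what the fixed-size RNN state keeps between steps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_linear_attention(Q, K, V, Wg):
    """Causal linear attention with a per-channel (channel-wise) forget gate.
    The recurrent state S is (d, d); gate g_t scales each channel of S,
    deciding per dimension how much past memory to keep at step t."""
    N, d = Q.shape
    S = np.zeros((d, d))
    out = np.zeros_like(V)
    for t in range(N):
        g = sigmoid(K[t] @ Wg)                        # (d,) channel-wise forget gate (hypothetical Wg)
        S = g[:, None] * S + np.outer(K[t], V[t])     # decay old memory per channel, write the new pair
        out[t] = Q[t] @ S                             # read: (d,) @ (d, d) -> (d,)
    return out

N, d = 16, 8
Q = np.random.randn(N, d); K = np.random.randn(N, d); V = np.random.randn(N, d)
Wg = 0.1 * np.random.randn(d, d)                      # hypothetical gate projection
out = gated_linear_attention(Q, K, V, Wg)             # out.shape == (16, 8)
```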
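For the token-wise sparse strategy attributed to DSA, a hedged sketch: a cheap scorer ranks past tokens, each query keeps only its top-k, and exact attention runs over L·k pairs instead of all L^2. The plain dot-product scorer and the parameter k below are assumptions for illustration, not DSA's actual indexer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=64):
    """Per query, attend only to the k highest-scoring keys.
    Scoring uses a cheap proxy (a plain dot product here, an assumption);
    the expensive exact attention then covers only L*k pairs, not L*L."""
    L, d = Q.shape
    out = np.zeros_like(V)
    proxy = Q @ K.T                                   # cheap indexer scores over all pairs
    for i in range(L):
        idx = np.argpartition(proxy[i], -k)[-k:]      # indices of the k most relevant tokens
        w = softmax(Q[i] @ K[idx].T / np.sqrt(d))     # exact attention over the selected subset
        out[i] = w @ V[idx]
    return out

L, d = 512, 64
Q = np.random.randn(L, d); K = np.random.randn(L, d); V = np.random.randn(L, d)
out = topk_sparse_attention(Q, K, V, k=64)            # out.shape == (512, 64)
```

Note that the selector itself still touches every pair, but it is far cheaper per pair than full attention, so the dominant cost drops to O(Lk), in line with the complexity reduction described above.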