New work from a core Mamba author: a replacement for the attention mechanism DeepSeek uses, built specifically for inference
量子位· 2025-06-01 03:40
Core Insights
- The article discusses a new research paper by Tri Dao and his team at Princeton University, introducing two attention mechanisms designed specifically for inference that significantly improve decoding speed and throughput while maintaining model performance [1][2][5].

Summary by Sections

Introduction of New Attention Mechanisms
- The research presents two novel attention mechanisms: Grouped-Tied Attention (GTA) and Grouped Latent Attention (GLA), which optimize memory usage and computational logic during model inference [2][8].
- GTA reduces KV cache usage by approximately 50% compared to the existing GQA mechanism, while GLA decodes faster than the MLA mechanism, at times up to 2 times faster than FlashMLA [2][11][36].

Mechanism Details
- GTA ties and reuses the key and value states across grouped query heads, reducing memory transfer frequency and improving efficiency [15][16]; a simplified sketch follows this summary.
- GLA employs a dual-layer structure to enhance hardware efficiency and maintain parallel scalability, optimizing decoding speed without sacrificing model performance [17][18]; see the second sketch below.

Experimental Results
- Experiments were conducted on models of various sizes (small, medium, large, and XL) trained on the FineWeb-Edu-100B dataset, showing that GTA outperforms GQA on larger models while GLA matches MLA performance [21][22].
- The results indicate that both GTA and GLA maintain or improve performance as model size increases, supporting them as effective alternatives to GQA and MLA [24][36].

Performance Metrics
- The study evaluated perplexity and downstream task accuracy across several benchmarks, showing that GTA and GLA remain competitive while reducing KV cache requirements [26][27].
- GLA demonstrated superior throughput in real-time server performance tests, especially under concurrent request scenarios, indicating its efficiency in handling long contexts [30][33].
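To make the GTA description above concrete, here is a minimal PyTorch sketch of one decoding step under the grouped-tied idea: query heads share KV heads as in GQA, and a single tied state per KV head is cached and reused as both key and value, which is where the roughly 50% cache reduction cited in the article comes from. RoPE and the paper's exact projection layout are omitted, and all names (gta_decode_step, W_kv, and so on) are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def gta_decode_step(x, W_q, W_kv, W_o, kv_cache, n_q_heads, n_kv_heads, d_head):
    """
    One decoding step of a simplified Grouped-Tied Attention (GTA) layer.
    As in GQA, each group of query heads shares one KV head; unlike GQA,
    a single tied state per KV head is cached and reused as both key and
    value, so the cache stores one tensor instead of two.
    (Illustrative sketch; RoPE and other details are omitted.)
    """
    B = x.shape[0]                                   # x: (B, d_model)
    q = (x @ W_q).view(B, n_q_heads, d_head)         # per-head queries
    kv = (x @ W_kv).view(B, n_kv_heads, d_head)      # tied key/value state

    # Append the new tied state: cache shape (B, n_kv_heads, T, d_head)
    kv_cache = torch.cat([kv_cache, kv.unsqueeze(2)], dim=2)

    group = n_q_heads // n_kv_heads
    q = q.view(B, n_kv_heads, group, d_head)

    # Attention scores against the tied cache used as keys ...
    scores = torch.einsum("bhgd,bhtd->bhgt", q, kv_cache) / d_head ** 0.5
    probs = F.softmax(scores, dim=-1)
    # ... and the same tied cache reused as values.
    out = torch.einsum("bhgt,bhtd->bhgd", probs, kv_cache)

    out = out.reshape(B, n_q_heads * d_head) @ W_o
    return out, kv_cache
```

A caller would allocate an empty cache, e.g. torch.empty(B, n_kv_heads, 0, d_head), and thread the returned cache through successive decode steps; the single tied tensor per KV head is the reason memory traffic drops relative to a GQA layer with separate K and V caches.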
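The GLA idea can likewise be illustrated with a simplified sketch: as in MLA, only a small latent vector per token is cached, and GLA splits that latent across groups so each group of query heads reads its own shard, which is what keeps decoding hardware-efficient and parallelizable. The decoupled positional branch and the weight-absorption tricks used in practice are left out; names such as gla_decode_step, W_c, and W_uk are assumptions for illustration, not the paper's API.

```python
import torch
import torch.nn.functional as F

def gla_decode_step(x, W_q, W_c, W_uk, W_uv, W_o, c_cache,
                    n_groups, heads_per_group, d_head, d_latent):
    """
    One decoding step of a simplified Grouped Latent Attention (GLA) layer.
    Only a compact per-group latent vector is cached; keys and values are
    re-expanded from the cached latents at decode time.
    (Illustrative sketch; positional encoding and weight absorption omitted.)
    """
    B = x.shape[0]
    n_heads = n_groups * heads_per_group

    q = (x @ W_q).view(B, n_groups, heads_per_group, d_head)
    c = (x @ W_c).view(B, n_groups, d_latent)       # the only state that gets cached

    # Append the new latent: cache shape (B, n_groups, T, d_latent)
    c_cache = torch.cat([c_cache, c.unsqueeze(2)], dim=2)

    # Up-project the cached latents into keys and values on the fly.
    # W_uk, W_uv: (n_groups, d_latent, d_head)
    k = torch.einsum("bgtl,gld->bgtd", c_cache, W_uk)
    v = torch.einsum("bgtl,gld->bgtd", c_cache, W_uv)

    scores = torch.einsum("bghd,bgtd->bght", q, k) / d_head ** 0.5
    probs = F.softmax(scores, dim=-1)
    out = torch.einsum("bght,bgtd->bghd", probs, v)

    out = out.reshape(B, n_heads * d_head) @ W_o
    return out, c_cache
```

Because the cache holds d_latent values per group per token instead of full keys and values for every head, memory traffic per decoded token is small, and the per-group split lets each device or tensor-parallel rank serve its own latent shard, consistent with the parallel-scalability claim in the summary above.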