DeepSeek Sparse Attention (DSA)
A big move ahead of National Day: DeepSeek-V3.2-Exp released and open-sourced, with API costs to drop by more than 50%
华尔街见闻 · 2025-09-29 11:12
Core Insights
- DeepSeek has launched the DeepSeek-V3.2-Exp model on Hugging Face, introducing the DeepSeek Sparse Attention (DSA) mechanism to improve training and inference efficiency on long texts [1][3]
- Huawei Cloud has adapted the DeepSeek-V3.2-Exp model, supporting a maximum context length of 160K [2]
- DSA significantly improves training and inference efficiency in long-text scenarios with minimal impact on model output quality; a hedged sketch of the underlying idea follows this list [3]
- The training settings of DeepSeek-V3.2-Exp were strictly aligned with those of the previous version, V3.1-Terminus, and the two show comparable performance across benchmarks [5]
- The new model is accompanied by an API price cut of more than 50%, effective immediately [8]
- DeepSeek has fully open-sourced DeepSeek-V3.2-Exp on Hugging Face and ModelScope and published the accompanying research paper [9]
- API access to V3.1-Terminus is retained for comparison purposes until October 15, 2025 [9]
- DeepSeek has also open-sourced GPU operators designed for the new model, recommending the TileLang version for research experiments [10]
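As a rough illustration of the sparse-attention idea, the sketch below implements plain per-query top-k key selection in PyTorch. It is a minimal toy under stated assumptions: the function name `topk_sparse_attention`, the `top_k` value, and the single-head shapes are illustrative, and the released DSA kernels and selection logic are not reproduced here.

```python
# Minimal sketch of fine-grained sparse attention via per-query top-k key selection.
# Illustrative only: names, shapes, and top_k are placeholders, not DSA's actual design.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=64):
    """q: (T, d) queries; k, v: (S, d) keys/values; each query attends to top_k keys.

    Note: this toy still scores every (query, key) pair in order to pick the top_k,
    so it saves nothing by itself; an efficient kernel would select keys cheaply and
    only then run attention over the selected O(T * top_k) pairs.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-1, -2) / d ** 0.5                        # (T, S) similarity scores
    top_scores, top_idx = scores.topk(min(top_k, k.size(0)), dim=-1)   # (T, top_k) selected keys
    probs = F.softmax(top_scores, dim=-1)                              # softmax over selected keys only
    selected_v = v[top_idx]                                            # (T, top_k, d) gathered values
    return (probs.unsqueeze(-1) * selected_v).sum(dim=-2)              # (T, d) attention output

# Toy usage: 4,096-token context, 64-dim head, 64 attended keys per query.
q, k, v = (torch.randn(4096, 64) for _ in range(3))
out = topk_sparse_attention(q, k, v, top_k=64)
print(out.shape)  # torch.Size([4096, 64])
```

The point of the sketch is only the shape of the computation: attention weights and value aggregation are restricted to a small set of selected keys per query, which is what makes long-context attention cheaper in principle.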
Just now: DeepSeek open-sources V3.2-Exp and unveils its new sparse attention mechanism, DSA
机器之心 · 2025-09-29 10:29
Core Viewpoint
- DeepSeek has released the experimental version DeepSeek-V3.2-Exp, which introduces a new sparse attention mechanism aimed at optimizing training and inference efficiency in long-context scenarios [3][5][10].

Summary by Sections

Model Release
- DeepSeek-V3.2-Exp has been open-sourced with a parameter count of 685 billion [3].
- The release includes a paper detailing the new sparse attention mechanism [5].

Sparse Attention Mechanism
- DeepSeek Sparse Attention (DSA) is the only architectural change in version 3.2, targeting computational efficiency when processing long text sequences [5][6][10].
- DSA achieves fine-grained sparse attention while keeping output quality nearly identical to that of its predecessor, DeepSeek-V3.1-Terminus [9]; a rough cost estimate follows this summary.

Performance Comparison
- Benchmark results show that DeepSeek-V3.2-Exp performs comparably to DeepSeek-V3.1-Terminus across a range of tasks [11]:
  - MMLU-Pro: 85.0 (V3.1) vs. 85.0 (V3.2)
  - AIME 2025: 88.4 (V3.1) vs. 89.3 (V3.2)
  - Codeforces: 2046 (V3.1) vs. 2121 (V3.2)

Future Developments
- The upcoming release of Z.ai's GLM-4.6 model is noted, with GLM-4.5 being the previous flagship [12].
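For intuition on where the long-context efficiency claim comes from, here is a back-of-the-envelope cost comparison. The selected-key budget k = 2,048 is an assumed placeholder for illustration, not a figure taken from the release; only the 160K context length comes from the coverage above.

```latex
% Illustrative scaling only; k = 2\,048 is a placeholder budget, not DSA's published setting.
C_{\text{dense}} \approx L^{2} d, \qquad
C_{\text{sparse}} \approx L\,k\,d, \qquad
\frac{C_{\text{sparse}}}{C_{\text{dense}}} = \frac{k}{L}
\approx \frac{2\,048}{163\,840} = 1.25\%
\quad (L = 160\text{K tokens}).
```

In practice the realized speedup also depends on selection overhead and kernel efficiency, which is presumably why dedicated GPU operators (including a TileLang version) are shipped alongside the model.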