DeepSeek's Minor Update Thrashes OpenAI and Catches Up With Gemini
36Ke · 2025-12-03 00:58
Core Insights
- DeepSeek has launched two new models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, designed to compete with leading models such as GPT-5 and Gemini [1][5][20].

Model Performance
- DeepSeek-V3.2 has delivered competitive results across a range of benchmarks, matching or surpassing GPT-5 and Gemini in several tests [6][20].
- Scores on specific benchmarks include:
  - AIME 2025: DeepSeek-V3.2 scored 93.1, while DeepSeek-V3.2-Speciale scored 96.0 [6].
  - HMMT Feb 2025: DeepSeek-V3.2 scored 92.5, and DeepSeek-V3.2-Speciale scored 99.2 [6].
- Overall, DeepSeek-V3.2-Speciale is noted for competing effectively with Gemini 3 [20][27].

Technological Innovations
- DeepSeek has introduced DeepSeek Sparse Attention (DSA) in these models, which reduces the computational complexity of attention and allows longer texts to be processed more efficiently [9][13]; a minimal sketch of the idea follows at the end of this note.
- The company has focused on strengthening post-training for its open-source models, investing over 10% of total training compute to improve performance on challenging tasks [17][21].
- DeepSeek-V3.2-Speciale encourages longer reasoning by not penalizing the model for extended chains of thought, improving its ability to tackle complex problems [18][20]; see the reward-shaping sketch below.

Cost Efficiency
- Despite consuming more tokens than its competitors, DeepSeek offers a markedly cheaper solution, with a significant price advantage over models such as Gemini [32][33].
- For example, a response using 8,077 tokens on DeepSeek costs roughly $0.0032, while a response using 4,972 tokens on Gemini costs around $0.06, highlighting a roughly 20-fold price difference [33]; the arithmetic is worked out below.

Industry Context
- The gap between open-source and closed-source models is reportedly widening, but DeepSeek is actively working to close it through algorithmic innovation and cost-saving measures [35][36].
- The company's strategy emphasizes algorithmic improvements over merely increasing computational power, in line with industry views on the importance of efficient model training [38][39].
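Illustrative Sketches
To make the sparse-attention point concrete, here is a minimal top-k sparse attention sketch in Python/NumPy. This is not DeepSeek's actual DSA mechanism; it only illustrates the general idea of letting each query attend to a small selected subset of keys rather than the full context. For clarity the sketch still computes the full score matrix, so it shows the selection logic rather than the compute savings a real sparse kernel would achieve; the `top_k` value and random inputs are placeholders.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k=64):
    """Illustrative top-k sparse attention: each query mixes only its
    top_k highest-scoring keys instead of all m keys.
    Shapes: q (n, d), k (m, d), v (m, d) -> output (n, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])                 # (n, m) dot-product scores
    # Threshold = the top_k-th largest score per row; mask everything below it.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving keys only (exp(-inf) = 0).
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 8 queries over a 1024-token context, each mixing only 64 keys.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, 32)) for n in (8, 1024, 1024))
out = topk_sparse_attention(q, k, v, top_k=64)
print(out.shape)  # (8, 32)
```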
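The article also states that DeepSeek-V3.2-Speciale's post-training does not penalize long chains of thought. The exact reward design is not described, so the function below is purely hypothetical: it simply contrasts an outcome-only reward with one that subtracts a per-token length penalty, to show what "not penalizing extended reasoning" means in reward-shaping terms.

```python
def shaped_reward(is_correct: bool, num_reasoning_tokens: int,
                  length_penalty_per_token: float = 0.0) -> float:
    """Hypothetical outcome reward for RL post-training.
    With length_penalty_per_token = 0 (the no-penalty setting described in
    the article), long chains of thought are not punished; a positive value
    would push the model toward shorter reasoning."""
    base = 1.0 if is_correct else 0.0
    return base - length_penalty_per_token * num_reasoning_tokens

# A correct answer reached via 20,000 reasoning tokens:
print(shaped_reward(True, 20_000))                                   # 1.0 (no penalty)
print(shaped_reward(True, 20_000, length_penalty_per_token=1e-5))    # 0.8
```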
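Finally, the pricing claim can be checked with back-of-the-envelope arithmetic using only the figures quoted in the Cost Efficiency section; the per-response ratio comes out near the roughly 20-fold gap the article cites.

```python
# Figures quoted in the article for one example query (USD).
deepseek_tokens, deepseek_cost = 8077, 0.0032
gemini_tokens, gemini_cost = 4972, 0.06

# Per-response price gap (what the ~20x claim refers to).
print(f"per-response ratio: {gemini_cost / deepseek_cost:.1f}x")  # ~18.8x

# Per-token gap is larger still, since DeepSeek spends more tokens per answer.
per_token_ratio = (gemini_cost / gemini_tokens) / (deepseek_cost / deepseek_tokens)
print(f"per-token ratio: {per_token_ratio:.1f}x")  # ~30.5x
```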