DeepSeek Ships Again: The First Model at Math Olympiad Gold-Medal Level

Core Insights

- DeepSeek has released a new model, DeepSeek-Math-V2, the first open-source model to achieve International Mathematical Olympiad (IMO) gold-medal-level performance [1]
- The model outperforms Google's Gemini DeepThink on certain benchmarks, showcasing its mathematical reasoning capabilities [1][5]

Performance Metrics

- DeepSeek-Math-V2 achieved 83.3% on IMO 2025 problems and 73.8% on CMO 2024 problems [4]
- On the Putnam 2024 competition, it scored 98.3%, an exceptional result [4]
- On the Basic benchmark, Math-V2 scored nearly 99%, versus 89% for Gemini DeepThink [5]
- On the Advanced subset, Math-V2 scored 61.9%, slightly below Gemini DeepThink's 65.7% [5]

Research and Development Focus

- The model emphasizes self-verification in mathematical reasoning, moving from a result-oriented approach to a process-oriented one [8]
- DeepSeek aims to improve the rigor and completeness of mathematical proofs, which is crucial for tackling open problems [8]
- The research indicates that self-verifying mathematical reasoning is a viable direction for building more powerful AI systems [8]

Industry Reaction

- The release has generated significant interest, with commentary highlighting DeepSeek's competitive edge over Google's model [9]
- The industry is keenly awaiting further developments from DeepSeek, especially updates to its flagship model [10]
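The "self-verification" idea above can be pictured as a generate-verify-refine loop: a generator proposes a proof, a verifier critiques the reasoning process (not just the final answer), and the critique is fed back into the next attempt. The sketch below is purely illustrative; the function names, scoring scheme, and loop structure are assumptions for exposition and do not reflect DeepSeek's actual implementation.

```python
# Hypothetical sketch of a process-oriented self-verification loop.
# In practice, generate_proof and verify_proof would each be LLM calls;
# here they are stubs so the control flow is runnable on its own.

def generate_proof(problem: str, feedback: str = "") -> str:
    # Stand-in for the proof generator; incorporates verifier feedback if any.
    suffix = f" (revised per: {feedback})" if feedback else ""
    return f"proof of '{problem}'{suffix}"

def verify_proof(proof: str) -> tuple[float, str]:
    # Stand-in for the verifier: returns a rigor score in [0, 1] plus a
    # critique of the weakest step. Revised proofs score higher here only
    # so that the demo loop terminates.
    score = 0.9 if "revised" in proof else 0.4
    critique = "" if score >= 0.8 else "step 3 lacks justification"
    return score, critique

def solve_with_self_verification(problem: str,
                                 threshold: float = 0.8,
                                 max_rounds: int = 4) -> str:
    """Iterate generation and verification until the verifier accepts."""
    feedback = ""
    proof = ""
    for _ in range(max_rounds):
        proof = generate_proof(problem, feedback)
        score, critique = verify_proof(proof)
        if score >= threshold:
            return proof  # the reasoning process passed review, not just the result
        feedback = critique  # route the critique into the next attempt
    return proof  # best effort after max_rounds
```

The key design point is that the verifier scores the proof's steps rather than checking a final numeric answer, which is what distinguishes a process-oriented approach from a result-oriented one.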