Self-Verifiable Mathematical Reasoning
DeepSeek Ships a New Model: The First Open-Source Model at Math-Olympiad Gold Level
Di Yi Cai Jing · 2025-11-28 00:22
"The whale" is back. On the evening of November 27, DeepSeek quietly open-sourced a new model on Hugging Face: DeepSeek-Math-V2. It is a mathematics model, and the first open-source model in the industry to reach IMO (International Mathematical Olympiad) gold-medal level. In the accompanying technical paper, DeepSeek states that Math-V2 outperforms Google's Gemini DeepThink on some metrics, and reports the model's results on the IMO-ProofBench benchmark as well as recent math competitions.

| Contest Problems | Score |
| --- | --- |
| IMO 2025 P1, P2, P3, P4, P5 | 83.3% |
| CMO 2024 P1, P2, P4, P5, P6 | 73.8% |
| Putnam 2024 A1–B4, B5, B6 | 98.3% |

Specifically, on the Basic subset of IMO-ProofBench, DeepSeek-Math-V2 far outperforms the other models, scoring nearly 99%, while the runner-up, Google's Gemini Deep Think (IMO Gold), scores 89%. On the harder Advanced subset, Math-V2's score ...
DeepSeek Makes a Strong Return, Open-Sources an IMO Gold-Medal-Level Math Model
36Kr · 2025-11-27 23:34
A breakthrough reasoning model has arrived: DeepSeek has opened up the direction of self-verifying mathematical reasoning. The whale is back! Just now, DeepSeek quietly uploaded a new model to Hugging Face: DeepSeek-Math-V2. A year and a half on, what does DeepSeek-Math-V2, built on DeepSeek-V3.2-Exp-Base, bring? DeepSeek says its performance surpasses Gemini DeepThink, reaching IMO gold-medal level. As the name suggests, this is a mathematics model. Its previous version, DeepSeek-Math-7b, was released more than a year ago; with only 7B parameters, it matched the performance of GPT-4 and Gemini-Ultra, and its paper was the first to introduce GRPO, significantly improving mathematical reasoning ability. At the start of the new paper, DeepSeek points out a limitation of current AI research on mathematical reasoning: using the correct final answer as the reward over-optimizes final-answer accuracy. While this approach lets reasoning models reach, and even saturate, benchmarks such as AIME and HMMT, DeepSeek argues it does not solve the core problem: a correct answer does not guarantee a correct reasoning process ...
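The limitation described above can be made concrete with a toy sketch (illustrative only, not DeepSeek's code): a reward function that checks only the final answer gives full credit to a solution whose intermediate reasoning is wrong.

```python
# Toy illustration of a final-answer-only reward: it cannot distinguish
# a sound derivation from a flawed one that happens to land on the
# reference answer.

def final_answer_reward(solution_lines, reference_answer):
    """Return 1.0 iff the solution's last line matches the reference answer."""
    return 1.0 if solution_lines[-1] == reference_answer else 0.0

flawed_solution = [
    "Assume x = 2, so x^2 = 5",        # incorrect intermediate step
    "Therefore the answer is 4",        # correct final answer anyway
]

# The flawed proof still earns the maximum reward.
print(final_answer_reward(flawed_solution, "Therefore the answer is 4"))  # 1.0
```

This is precisely why the paper argues for rewarding the proof process itself rather than only the endpoint.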
DeepSeek Makes a Strong Return, Open-Sources an IMO Gold-Medal-Level Math Model
机器之心 · 2025-11-27 12:13
Core Insights
- DeepSeek has released a new mathematical reasoning model, DeepSeek-Math-V2, which surpasses its predecessor, DeepSeek-Math-7b, in performance, achieving gold-medal levels in mathematical competitions [5][21].
- The model addresses limitations in current AI mathematical reasoning by focusing on self-verification and rigorous proof processes rather than merely achieving correct final answers [7][25].

Model Development
- DeepSeek-Math-V2 is based on the DeepSeek-V3.2-Exp-Base architecture and has shown improved performance compared to Gemini DeepThink [5].
- The previous version, DeepSeek-Math-7b, utilized 7 billion parameters and achieved performance comparable to GPT-4 and Gemini-Ultra [3].

Research Limitations
- Current AI models often prioritize the accuracy of final answers, which does not ensure the correctness of the reasoning process [7].
- Many mathematical tasks require detailed step-by-step deductions, making a focus on final answers alone inadequate [7].

Self-Verification Mechanism
- DeepSeek emphasizes the need for comprehensive and rigorous verification of mathematical reasoning [8].
- The model introduces a proof verification system that allows it to self-check and acknowledge its mistakes, enhancing its reliability [11][17].

System Design
- The system consists of three roles: a proof verifier (teacher), a meta-verifier (supervisor), and a proof generator (student) [12][14][17].
- The proof verifier evaluates the reasoning process, while the meta-verifier checks the validity of the verifier's feedback, improving overall assessment accuracy [14].

Innovative Training Approach
- The proof generator is trained to self-evaluate its solutions, promoting deeper reflection and correction of errors before finalizing answers [18].
- An honest-reward mechanism encourages the model to admit mistakes, fostering self-improvement [18][23].
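The three-role design and the honest-reward idea can be sketched as a minimal toy loop. All function names and the scoring logic below are hypothetical stand-ins for illustration, not DeepSeek's implementation; real systems would sample each role from a trained model.

```python
# Toy sketch of the generator / verifier / meta-verifier loop:
# the generator self-scores its proof, the verifier grades it with a
# critique, the meta-verifier vets the critique, and an "honesty reward"
# pays the generator for self-assessments that agree with the verifier.

from dataclasses import dataclass


@dataclass
class Proof:
    text: str
    self_score: float  # generator's own quality estimate in [0, 1]


def generate_proof(problem: str) -> Proof:
    # Stub generator ("student"): a real system samples from a model.
    return Proof(text=f"Proof sketch for: {problem}", self_score=0.6)


def verify_proof(proof: Proof):
    # Stub verifier ("teacher"): returns a quality score and a critique.
    return 0.5, "Step 3 lacks justification."


def meta_verify(critique: str) -> bool:
    # Stub meta-verifier ("supervisor"): accept only non-empty critiques.
    return len(critique) > 0


def honesty_reward(self_score: float, verifier_score: float) -> float:
    # Reward admitting weaknesses: the closer the generator's self-score
    # is to the verifier's score, the higher the reward.
    return 1.0 - abs(self_score - verifier_score)


proof = generate_proof("IMO-style inequality")
score, critique = verify_proof(proof)
if meta_verify(critique):  # only trust critiques the supervisor accepts
    reward = honesty_reward(proof.self_score, score)
    print(f"verifier score={score}, honesty reward={reward:.1f}")
```

The key design choice mirrored here is that the generator is not rewarded for claiming its proof is perfect, but for assessing it as accurately as the (meta-verified) verifier does.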
Automation and Evolution
- DeepSeek has developed an automated process that allows the system to evolve independently, enhancing both the proof generator and verifier over time [20].
- The model's approach shifts from a results-oriented to a process-oriented methodology, focusing on rigorous proof examination [20].

Performance Metrics
- DeepSeek-Math-V2 achieved impressive results in competitions, scoring 83.3% in IMO 2025 and 98.3% in Putnam 2024 [21][22].
- The model demonstrated near-perfect performance on the Basic subset of IMO-ProofBench, achieving close to 99% accuracy [22].

Future Directions
- DeepSeek acknowledges that while significant progress has been made, further work is needed to strengthen the self-verification framework for mathematical reasoning [25].