GPT-5 in Trouble: DeepSeek Open-Sources the World's First Math-Olympiad Gold-Medal AI, Taking On Google Head-On
36Kr · 2025-11-28 01:55
Core Insights
- DeepSeek has launched its new model, DeepSeekMath-V2, which won an IMO 2025 gold medal, showcasing capabilities that rival or even surpass Google's IMO gold-medal model [1][3][22]
- This is the first open-source IMO gold-medal model, marking a significant advancement in AI [1][24]

Model Performance
- DeepSeekMath-V2 demonstrated strong theorem-proving ability, solving 5 of 6 problems at IMO 2025 for a gold-medal result [3][4]
- It also reached gold-medal level at CMO 2024, and at Putnam 2024 it scored 118 of 120, surpassing the highest human score of 90 [3][4]

Comparison with Competitors
- DeepSeekMath-V2 outperformed Google's Gemini Deep Think on the ProofBench-Basic tests and closely trailed it on the ProofBench-Advanced tests [5][22]
- This performance marks a significant leap in capability over existing models such as OpenAI's GPT-5 and Gemini 2.5-Pro [26][28]

Self-Verification Mechanism
- A key breakthrough of DeepSeekMath-V2 is its self-verification capability, allowing it to assess and improve its own proofs [12][36]
- The model employs a "three-in-one" system consisting of a Generator, a Verifier, and a Meta-Verifier to raise proof quality [15][16]

Training Methodology
- Training used a high-compute search strategy, generating numerous candidate proofs and validating them rigorously [32][35]
- The model's ability to self-correct and refine its proofs over multiple iterations significantly improved its performance [38]

Implications for AI Development
- The success of DeepSeekMath-V2 suggests a shift in AI from merely mimicking human responses to emulating human thought processes, underscoring the importance of self-reflection in achieving advanced AI [36][37]
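The high-compute search strategy described above, generating many candidate proofs and keeping the one the verifier rates best, can be sketched as a best-of-n loop. This is a minimal illustration, not DeepSeek's implementation: `generate` and `verify` are toy stand-ins for the generator and verifier LLMs, and the seeding scheme is purely for reproducibility of the sketch.

```python
import random

def generate(problem, seed):
    """Toy stand-in for the proof generator: emits one candidate proof.
    (The real generator is an LLM; these fields are illustrative.)"""
    random.seed(seed)
    return {"text": f"candidate proof of {problem} (variant {seed})",
            "quality": random.random()}

def verify(proof):
    """Toy stand-in for the proof verifier: scores a candidate in [0, 1]."""
    return proof["quality"]

def best_of_n(problem, n=16, threshold=0.9, rounds=3):
    """Generate n candidates per round, keep the best-verified one,
    and stop early once the verifier score clears the threshold."""
    best, best_score = None, -1.0
    for r in range(rounds):
        for i in range(n):
            cand = generate(problem, seed=r * n + i)
            score = verify(cand)
            if score > best_score:
                best, best_score = cand, score
        if best_score >= threshold:
            break  # spend extra rounds of compute only on unsolved problems
    return best, best_score
```

The key design point the articles emphasize is the stopping rule: compute scales with problem difficulty, since easy problems exit after one round while hard ones trigger further rounds of candidate generation.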
DeepSeek Breaks the Google-OpenAI Monopoly Again: Open-Sourcing an IMO Gold-Medal Math Model
量子位· 2025-11-28 01:53
Core Insights
- DeepSeek has released a new mathematical model, DeepSeekMath-V2, focused on self-verifiable mathematical reasoning [1][7]
- The model achieved gold-medal-level scores at IMO 2025 and CMO 2024, and scored 118/120 at Putnam 2024, surpassing the highest human score of 90 [2][43]
- DeepSeekMath-V2 is the first open-source IMO gold-medal model, raising competitive pressure on companies like Google and OpenAI [4][5]

Model Performance
- DeepSeekMath-V2 outperforms GPT-5-Thinking-High and Gemini 2.5-Pro across all CNML problem categories, including algebra, geometry, number theory, combinatorics, and inequalities [2][34]
- The model's architecture has 685 billion parameters and emphasizes strong proof-verification capability [7]

Training Methodology
- Training involves an iterative reinforcement-learning loop that alternates between optimizing the proof verifier and the proof generator [9]
- A large dataset of 17,500 proof-required math problems was collected from AoPS competitions to train the proof verifier [12]
- The verifier is trained to identify issues in proofs and to assign scores on three levels of correctness [10]

Meta-Verification Mechanism
- A meta-verification mechanism was introduced to improve the verifier's accuracy by assessing the validity of the issues it identifies [14]
- The meta-verifier is trained on a dataset built from expert evaluations of the verifier's output [15]

Proof Generation
- The trained verifier serves as the reward model for the proof generator, which learns to self-review and correct its outputs [23]
- The reward structure encourages accurate self-assessment and correction of errors in generated proofs [27]

Automation and Efficiency
- Collaboration between the verifier and generator yields a fully automated data-labeling process, replacing time-consuming manual annotation [29][35]
- The automated process maintains high consistency with expert evaluations, significantly improving efficiency [35]

Experimental Results
- The model's average quality score for proof analysis improved from 0.85 to 0.96, demonstrating the effectiveness of the meta-verification mechanism [21]
- The model's ability to generate correct proofs was validated through rigorous testing, showing superior performance across mathematical problem categories [34][39]
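The three-level scoring and meta-verification steps summarized above can be sketched as follows. This is an illustrative toy, assuming keyword heuristics in place of the verifier and meta-verifier LLMs; the `Issue` type, `find_issues`, and the specific trigger phrases are invented for the sketch and are not part of DeepSeek's pipeline.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    valid: bool  # ground-truth flag consumed by the toy meta-verifier

def find_issues(proof_text):
    """Toy verifier pass: flag suspicious steps in a proof.
    The real verifier is an LLM; these keyword checks are illustrative."""
    issues = []
    if "hence trivially" in proof_text:
        issues.append(Issue("unjustified leap ('hence trivially')", True))
    if "left to the reader" in proof_text:
        issues.append(Issue("essential step omitted", True))
    if "clearly" in proof_text:
        issues.append(Issue("stylistic nitpick about 'clearly'", False))
    return issues

def meta_verify(issues):
    """Toy meta-verifier: keep only issues judged valid, suppressing
    hallucinated criticisms before they can influence the score."""
    return [i for i in issues if i.valid]

def score_proof(proof_text):
    """Map surviving issues to the three correctness levels the article
    describes: 1.0 = sound, 0.5 = minor gaps, 0.0 = fundamentally flawed."""
    real = meta_verify(find_issues(proof_text))
    if not real:
        return 1.0
    if len(real) == 1:
        return 0.5
    return 0.0
```

The point of the second layer is visible in the sketch: without `meta_verify`, the spurious "clearly" nitpick would drag a sound proof down to 0.5, which is exactly the kind of verifier noise the meta-verification mechanism exists to filter out.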
DeepSeek's Model Has Taught AI to Reflect for the First Time
数字生命卡兹克· 2025-11-28 01:21
Core Insights
- DeepSeek has launched a new model, DeepSeekMath-V2, which emphasizes self-verifiable mathematical reasoning, addressing the limitation of previous AI models that focused solely on final answers [1][8][30]

Group 1: Model Capabilities
- DeepSeekMath-V2 not only produces answers but also checks its own problem-solving steps, allowing it to identify and correct its own mistakes [3][49]
- The model performs at a level comparable to Olympiad gold medalists, excelling in competitions such as IMO 2025 and Putnam 2024 [5][6][50]

Group 2: Philosophical Context
- The model's development responds to concerns raised by AI experts about the gap between AI performance on assessments and real-world problem-solving capability [12][26]
- DeepSeekMath-V2's approach reflects a shift from external validation to internal self-assessment, promoting a deeper understanding of mathematical reasoning [50]

Group 3: Methodology
- DeepSeekMath-V2 employs a dual-component system: a Generator that creates solutions and a Verifier that critically evaluates them for logical consistency and accuracy [46][49]
- The introduction of a Meta-Verifier ensures that the evaluation process itself is fair and accurate, improving the model's overall reliability [49]

Group 4: Performance Metrics
- At IMO 2025, DeepSeekMath-V2 solved 5 of 6 problems, demonstrating its high-level capability [50]
- At Putnam 2024, it scored 118 of 120, showcasing its ability to tackle extremely challenging mathematical problems [50]
A New Breakthrough! DeepSeek Releases a New Model
新华网财经· 2025-11-28 01:15
Core Insights
- DeepSeek launched a new mathematical-reasoning model, DeepSeekMath-V2, on HuggingFace; it uses a self-verifying training framework [2]
- The model is built on DeepSeek-V3.2-Exp-Base and employs an LLM verifier that automatically reviews generated mathematical proofs, continuously optimizing performance with high-difficulty samples [3]
- DeepSeekMath-V2 reached gold-medal level at both the 2025 International Mathematical Olympiad (IMO) and the 2024 Chinese Mathematical Olympiad (CMO), and scored 118/120 at the 2024 Putnam Mathematical Competition [3][4]

Performance Metrics
- IMO 2025: 83.3% across problems P1 to P5 [4]
- CMO 2024: 73.8% on problems P1, P2, P4, P5, and P6 [4]
- Putnam 2024: 98.3% on problems A1 to B6 [4]

Model Architecture
- The core architecture establishes a self-driven verification-generation loop: one LLM acts as a "reviewer" performing proof verification while another acts as a "creator" performing proof generation, with reinforcement learning coordinating the two [5]
- A "meta-verification" layer is introduced to effectively suppress model hallucinations [5]

Competitive Edge
- On a self-constructed test of 91 CNML-level problems, DeepSeekMath-V2 demonstrated superior mathematical reasoning, outperforming GPT-5-Thinking-High and Gemini 2.5-Pro across all categories, including algebra, geometry, number theory, combinatorics, and inequalities [7]
- The model also excelled on the IMO-ProofBench benchmark, surpassing DeepMind's Deep Think at the IMO gold-medal level on the basic set and remaining strongly competitive on the more challenging advanced set [8]

Future Directions
- The DeepSeek team notes that while significant work remains, these results suggest self-verifying mathematical reasoning is a viable research direction that could help build more powerful mathematical AI systems [10]
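The self-driven verification-generation loop described above alternates two updates: proofs the current verifier finds hard become training data for the verifier, and the improved verifier then serves as a sharper reward signal for the generator. The sketch below captures only that alternation; the scalar "skill" values and linear update rules are invented for illustration and bear no relation to the actual reinforcement-learning objectives.

```python
def co_train(iterations=5):
    """Toy sketch of the alternating verifier/generator loop. Each
    iteration: (1) the generator surfaces proofs that are hard for the
    current verifier, (2) those proofs train the verifier, (3) the
    stronger verifier rewards the generator. All numbers are illustrative."""
    gen_skill, ver_skill = 0.2, 0.2
    history = []
    for _ in range(iterations):
        # 1) fraction of generated proofs the current verifier finds hard
        hard_fraction = max(0.0, gen_skill - ver_skill + 0.5)
        # 2) hard proofs are auto-labeled and used to train the verifier
        ver_skill += 0.1 * hard_fraction
        # 3) the improved verifier acts as reward model for the generator
        gen_skill += 0.1 * ver_skill
        history.append((round(gen_skill, 3), round(ver_skill, 3)))
    return history
```

Even in this caricature the intended dynamic shows up: neither side plateaus, because each update raises the difficulty of the data the other side trains against, which is the "evolving in sync" property the articles attribute to the framework.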
The First Open-Source Model to Win a Math-Olympiad Gold Medal! DeepSeek's New Model Draws Praise from Netizens: Publishing the Technical Report Is Remarkable!
Hua Er Jie Jian Wen· 2025-11-28 00:46
Core Insights
- DeepSeek has launched its latest open-source mathematical-reasoning model, DeepSeekMath-V2, which achieved gold-medal status at the highly competitive International Mathematical Olympiad (IMO) 2025, marking a significant breakthrough for open-source AI in complex reasoning [1][3]

Group 1: Model Performance
- DeepSeekMath-V2 solved 5 of 6 problems in the simulated IMO 2025, becoming the first open-source model to reach gold-medal status in such a prestigious competition [1]
- The model also delivered top-tier performance in other demanding competitions, reaching gold-medal status at the Chinese Mathematical Olympiad (CMO) and scoring 118 of 120 at the Putnam Mathematics Competition 2024, above the highest human score of 90 [3]

Group 2: Innovation in Training Framework
- The model employs an innovative self-verification training framework, including a dedicated verifier that assesses the quality of the proof process rather than just the correctness of the final answer [2][11]
- To prevent overfitting, DeepSeek uses a dynamic evolution strategy that scales up compute and automatically labels difficult proofs, keeping the verifier and generator evolving in sync [12]

Group 3: Open Source and Community Impact
- DeepSeekMath-V2's weights are publicly available under the Apache 2.0 license, allowing researchers and developers to download and use the model freely, a significant step toward the democratization of AI [2][4]
- The release has sparked discussion about the potential impact of open-source models on the commercial viability of closed-source products, particularly for major players like NVIDIA [2]
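One way to automatically label difficult proofs, as the training framework above requires, is to spend extra compute per proof and accept a label only when repeated verifier verdicts agree. The consensus rule below is a plausible sketch of that idea, not DeepSeek's documented procedure; `verify` stands in for a stochastic verifier call, and the sample count and agreement threshold are assumed values.

```python
def auto_label(proof, verify, n_samples=8, agreement=0.75):
    """Toy auto-labeling by verifier consensus: run the (stochastic)
    verifier several times and commit to a label only when the verdicts
    agree strongly enough; otherwise leave the proof unlabeled.

    verify(proof) -> bool, True meaning 'proof judged correct'."""
    verdicts = [verify(proof) for _ in range(n_samples)]
    yes = sum(verdicts)  # bools sum as 0/1
    if yes >= agreement * n_samples:
        return "correct"
    if yes <= (1 - agreement) * n_samples:
        return "flawed"
    return None  # ambiguous: escalate to more samples or discard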
DeepSeek Ships a New Model at "Math-Olympiad Gold-Medal Level"
Di Yi Cai Jing· 2025-11-28 00:40
Core Insights
- DeepSeek has released a new model, DeepSeek-Math-V2, the first open-source model to achieve International Mathematical Olympiad (IMO) gold-medal-level performance [3][5]
- The model outperforms Google's Gemini Deep Think in certain benchmarks, showcasing its capability in mathematical reasoning [5][9]

Performance Metrics
- DeepSeek-Math-V2 achieved 83.3% at IMO 2025 and 73.8% at CMO 2024, and scored 98.3% at the Putnam 2024 competition [4]
- On the Basic benchmark, Math-V2 scored nearly 99%, significantly higher than Gemini Deep Think's 89%; on the Advanced subset, Math-V2 scored 61.9%, slightly below Gemini's 65.7% [5]

Research Implications
- The accompanying paper, "DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning," emphasizes rigorous proof processes rather than just correct answers [8]
- DeepSeek advocates self-verification in mathematical reasoning as a path toward more powerful AI systems [8]

Industry Reactions
- The release of Math-V2 has generated excitement in the industry, with comments highlighting its unexpected win over Google's model [9]
- The competitive landscape is evolving, with other major players like OpenAI and Google releasing new models, raising anticipation for DeepSeek's next moves [10]
DeepSeek Ships a New Model at "Math-Olympiad Gold-Medal Level"
第一财经· 2025-11-28 00:35
Core Viewpoint
- DeepSeek has released an open-source model, DeepSeek-Math-V2, the first model to achieve IMO gold-medal level in mathematics; it outperforms Google's Gemini Deep Think in certain benchmarks [3][5]

Group 1: Model Performance
- DeepSeek-Math-V2 scored nearly 99% on the Basic benchmark, significantly outperforming Gemini Deep Think's 89% [5]
- On the more challenging Advanced subset, Math-V2 scored 61.9%, slightly below Gemini Deep Think's 65.7% [5]
- The model demonstrated gold-medal-level performance at IMO 2025 and CMO 2024, and a near-perfect score (118/120) on the Putnam 2024 exam [8]

Group 2: Research and Development Insights
- DeepSeek emphasizes verifying mathematical reasoning comprehensively and rigorously, moving from a result-oriented approach to a process-oriented one [8]
- The model is designed to teach AI to review proof processes like a mathematician, enhancing its ability to solve complex mathematical proofs without human intervention [8]

Group 3: Industry Reactions and Expectations
- The release of Math-V2 has generated excitement in the industry, with commenters noting that DeepSeek surpassed expectations by beating Google's IMO gold model by a roughly 10-point margin on the basic benchmark [9]
- Anticipation is building around DeepSeek's next moves, especially updates to its flagship models [9]
DeepSeek Ships a New Model! The First Math-Olympiad-Gold-Level Model Is Here
Di Yi Cai Jing· 2025-11-28 00:22
Core Insights
- DeepSeek has released a new model, DeepSeek-Math-V2, the first open-source model to achieve International Mathematical Olympiad (IMO) gold-medal-level performance [1]
- The model outperforms Google's Gemini Deep Think in certain benchmarks, showcasing its capability in mathematical reasoning [1][5]

Performance Metrics
- DeepSeek-Math-V2 achieved 83.3% on IMO 2025 problems and 73.8% on CMO 2024 problems [4]
- At the Putnam 2024 competition it scored 98.3%, an exceptional result [4]
- On the Basic benchmark, Math-V2 scored nearly 99%, versus Gemini Deep Think's 89% [5]
- On the Advanced subset, Math-V2 scored 61.9%, slightly below Gemini Deep Think's 65.7% [5]

Research and Development Focus
- The model emphasizes self-verification in mathematical reasoning, moving from a result-oriented to a process-oriented approach [8]
- DeepSeek aims to improve the rigor and completeness of mathematical proofs, which is crucial for tackling open problems [8]
- The research indicates that self-verifying mathematical reasoning is a viable direction for developing more powerful AI systems [8]

Industry Reaction
- The release has generated significant interest, with commenters highlighting DeepSeek's competitive edge over Google's model [9]
- The industry is keenly awaiting further developments from DeepSeek, especially its flagship-model updates [10]
DeepSeek Returns in Force, Open-Sourcing an IMO-Gold-Level Math Model
36Kr · 2025-11-27 23:34
Core Insights
- DeepSeek has introduced a new model, DeepSeek-Math-V2, which aims to strengthen self-verifiable mathematical reasoning in AI [1][2]
- The model reportedly outperforms Gemini Deep Think, achieving gold-medal-level performance in mathematical competitions [3]

Model Development
- DeepSeek-Math-V2 succeeds the earlier DeepSeekMath-7B, a 7-billion-parameter model that approached the performance of GPT-4 and Gemini Ultra on math benchmarks [4]
- The new model addresses limitations in current AI mathematical reasoning by focusing on the rigor of the reasoning process rather than just the accuracy of final answers [5][6]

Self-Verification Mechanism
- The model incorporates a self-verification system comprising a proof-verification component, a meta-verification layer, and a self-evaluating generator [7][11]
- The verification system assesses the reasoning process in detail, providing feedback comparable to that of human experts [8][10]

Training and Evaluation
- Training uses a distinctive honest-reward mechanism: the model is incentivized to assess its own performance and identify its own errors [11][15]
- The model has posted impressive results across mathematical competitions, with high scores at IMO 2025, CMO 2024, and Putnam 2024 [16][17]

Performance Metrics
- On the IMO-ProofBench benchmark, DeepSeek-Math-V2 achieved nearly 99% accuracy on basic problems and performed competitively on advanced problems [18]
- The dual improvement cycle between verifier and generator significantly reduces the occurrence of hallucinations in large models [20]

Future Implications
- DeepSeek emphasizes that self-verifiable mathematical reasoning represents a promising research direction that could lead to more powerful mathematical AI systems [20]
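The honest-reward mechanism mentioned above rewards the generator not only for proof quality but also for assessing that quality accurately. One simple way to express such an incentive, offered here as an assumed illustration rather than the paper's actual reward function, is to subtract a penalty proportional to the gap between the model's self-assessment and the verifier's score.

```python
def honest_reward(verifier_score, self_assessment, honesty_weight=0.5):
    """Toy form of an honesty-shaped reward: pay for verified proof
    quality, minus a penalty for misjudging one's own work. Both scores
    are in [0, 1]; the linear form and weight are illustrative assumptions."""
    return verifier_score - honesty_weight * abs(verifier_score - self_assessment)
```

Under this shaping, an overconfident generator that claims a flawed proof is perfect earns strictly less than one that flags its own mistake, so the highest-reward strategy is to find and report errors before the verifier does.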
AI Employees Respond Within Minutes; Ten 10,000-Mu-Scale Industrial Parks Built Across Towns and Streets
Nan Fang Du Shi Bao· 2025-11-27 23:11
Core Viewpoint
- The modernization of urban governance in Zhongshan is being driven by technology empowerment, institutional innovation, and ecological prioritization, aiming to create a livable, resilient, and smart city that raises operational efficiency and lowers management costs [2]

Group 1: Technology Empowerment
- Zhongshan has deployed an "AI employee" in its government services, achieving an average response time of 0.8 seconds and an accuracy rate above 80% for inquiries related to public housing [3]
- The AI service integrates with 12 departmental business systems and maintains over 800 high-frequency service items, shifting government services from fragmented responses to systematic optimization [3][4]
- A domestically built government big-data center has been established, aggregating over 60 billion data entries from more than 300 government systems; it supports AI model training and enables intelligent approvals that cut processing times to under 2 minutes for 14 high-frequency business items [4]

Group 2: Institutional Reform
- Zhongshan is actively pursuing integrated reforms across industrial, water, land, and urban sectors to break down barriers and improve governance efficiency [5]
- The city has delegated 107 permissions to town streets and is piloting 43 major and 77 minor permissions in specific districts, addressing the challenges faced by local governance [5]

Group 3: Land and Urban Development
- Comprehensive land-reform initiatives have recovered 23,600 acres of farmland and promote a new land-utilization model emphasizing ecological preservation and efficient agricultural practices [6]
- Zhongshan is developing ten large-scale modern industrial parks across town streets to foster regional collaboration and economic integration, enhancing the overall development landscape [6]

Group 4: Environmental Governance
- The city is committed to rigorous water-pollution control, having laid 6,638 kilometers of pipelines and built 17 wastewater-treatment plants, effectively eliminating black and odorous water bodies in urban areas [8]
- Zhongshan is also enhancing its urban landscape through beautification projects, including highway-aesthetic improvements and a comprehensive park system that keeps green space accessible to residents [9]