AI Industry Themes: New Year Positioning Outlook for Zhipu & MiniMax
2026-02-25 04:12
This conference is a closed-door meeting hosted by CICC; only invited guests may attend. The information and opinions presented are for reference only and do not constitute an offer or solicitation to buy or sell any securities or other financial instruments, nor any investment advice or service. The views expressed by guests represent only their own judgment. CICC makes no confirmation or guarantee of any kind as to the completeness or accuracy of the information and opinions, and accepts no responsibility for any consequences arising from reliance on or use of the content of this conference.
From the strongest open-source model to challenging the world's best: DeepSeek's new models offer an answer
Guan Cha Zhe Wang · 2025-12-02 11:38
Core Insights
- DeepSeek has officially released two models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale: the former balances reasoning ability and output length for everyday use, while the latter strengthens long-form reasoning and mathematical proof [1][2][4]
- The open-source large-model ecosystem has grown significantly, and DeepSeek's advances pose a challenge to closed-source models, particularly after the recent release of Google Gemini 3.0 raised the competitive bar [2][15]
- Through innovative architecture and training strategies, DeepSeek's models aim to close the gap between open-source and closed-source models, despite far fewer computational resources than the industry giants [8][15][16]

Model Performance
- DeepSeek-V3.2 performs on par with GPT-5 and slightly below Google's Gemini 3 Pro, demonstrating its effectiveness on reasoning tasks [6][7]
- The Speciale version outperforms Gemini 3 Pro on several reasoning benchmarks, including the American Invitational Mathematics Examination (AIME) and the Harvard-MIT Mathematics Tournament (HMMT) [7][8]
- Speciale is designed for rigorous mathematical proof and logical verification, making it a specialized tool for complex reasoning tasks [6][8]

Technological Innovations
- DeepSeek employs a novel DSA (DeepSeek Sparse Attention) mechanism to improve computational efficiency, enabling effective long-context processing without sacrificing performance [8][12]
- "Interleaved Thinking" has been integrated into DeepSeek's models, tightening the loop between reasoning and tool use, which is crucial for AI agents [9][12]
- The focus on agent capabilities marks a strategic shift toward actionable AI, moving beyond chat-based interaction to complex task execution [13][14]

Industry Context
- The competitive landscape is shifting, and DeepSeek acknowledges a widening gap between open-source and closed-source models on complex tasks [15][16]
- DeepSeek aims to address its limitations by increasing pre-training compute and optimizing model efficiency, laying out a clear path for future improvement [16][19]
- The release of DeepSeek-V3.2 is seen as a major achievement for the open-source community, suggesting that the gap with leading closed-source models is narrowing [16][19]
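The core idea behind sparse attention mechanisms like DSA is that each query attends to only a small subset of keys rather than the full sequence, cutting the cost of long-context processing. The sketch below is a minimal, illustrative top-k sparse attention in NumPy; the function name, the top-k selection rule, and all shapes are assumptions for demonstration, not DeepSeek's published DSA implementation (which selects tokens with a learned indexer and operates inside a full transformer stack):

```python
import numpy as np

def sparse_attention(q, k, v, top_k):
    """Toy top-k sparse attention: each query attends only to its
    top_k highest-scoring keys. Illustrative sketch only -- not
    DeepSeek's actual DSA, which uses a learned token indexer."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                # (n_q, n_k) scaled dot products
    # Keep only the top_k scores per query; mask the rest to -inf
    # so they receive ~zero softmax weight.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx,
                      np.take_along_axis(scores, idx, axis=-1), axis=-1)
    # Numerically stable softmax over the surviving scores.
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v                                 # (n_q, d) attended values

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 queries, head dim 8
k = rng.normal(size=(16, 8))   # 16 keys
v = rng.normal(size=(16, 8))   # 16 values
out = sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (4, 8)
```

Because each query row mixes only `top_k` value vectors instead of all 16, the attention cost scales with the selected subset rather than the full context length, which is the efficiency property the summary attributes to DSA.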