Core Insights

- DeepSeek has officially released two models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, following the experimental version launched two months ago [1]
- DeepSeek-V3.2 aims to balance reasoning capability and output length, making it suitable for everyday use such as Q&A scenarios and general agent tasks [1]
- In benchmark tests, DeepSeek-V3.2 performs comparably to GPT-5 and slightly below Gemini-3.0-Pro, with significantly shorter outputs than Kimi-K2-Thinking, leading to lower computational costs and reduced user wait times [1]

Model Specifications

- DeepSeek-V3.2-Speciale is designed to push the reasoning capabilities of open-source models to the limit, serving as an enhanced version of DeepSeek-V3.2 that incorporates the theorem-proving abilities of DeepSeek-Math-V2 [2]
- The model excels at instruction following, rigorous mathematical proofs, and logical validation, achieving performance on par with Gemini-3.0-Pro on mainstream reasoning benchmarks [2]
- DeepSeek-V3.2-Speciale has won gold medals in prestigious competitions including IMO 2025, CMO 2025, ICPC World Finals 2025, and IOI 2025, with its ICPC and IOI scores ranking second and tenth, respectively, among human competitors [2]
- While the Speciale model significantly outperforms the standard version on complex tasks, it consumes more tokens and incurs higher costs [2]
- Currently, DeepSeek-V3.2-Speciale is available only for research purposes; it does not support tool invocation and has not been optimized for everyday conversation and writing tasks [2]
DeepSeek V3.2 Official Release: Performance Comparable to GPT-5, Slightly Below Gemini-3.0-Pro