DeepSeek V3.2 Official Release: Reasoning on Par with GPT-5

Core Insights

- DeepSeek has officially released its next-generation open-source large model, DeepSeek-V3.2, along with an enhanced version, DeepSeek-V3.2-Speciale [1]
- The new model's reasoning capability is reported to reach the level of GPT-5 and closely approach Gemini-3.0-Pro, while producing significantly shorter outputs than Kimi-K2-Thinking, which lowers computational cost [1]
- The V3.2-Speciale version integrates theorem-proving capabilities from DeepSeek-Math-V2 and achieved gold-medal results in several international competitions, with its ICPC performance equivalent to the second-place human team [1]
- The new version uniquely combines its thinking mode with tool invocation, allowing external tools to be called during the reasoning process; a usage sketch appears after this section [1]
- The model underwent reinforcement learning training across more than 1,800 environments and over 85,000 complex instructions, enhancing its generalization ability [1]
- DeepSeek claims the model has reached the highest level among current open-source models on agent evaluations, further narrowing the gap with closed-source models [1]

Additional Information

- The experimental version, DeepSeek-V3.2-Exp, was released two months earlier, and user feedback indicated that its DSA sparse attention mechanism showed no significant performance decline across various scenarios; a simplified sketch of sparse attention also appears below [2]
- The Speciale version is currently available through a temporary API for community research and evaluation [2]
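For readers who want to try the tool-invocation feature, DeepSeek's API follows the OpenAI-compatible chat-completions format, so a call would look roughly like the sketch below. This is a minimal sketch, not official sample code: the model id `deepseek-reasoner` and the `get_weather` tool are illustrative assumptions, and the actual V3.2 identifiers should be taken from DeepSeek's API documentation.

```python
# Minimal sketch of tool calling against DeepSeek's OpenAI-compatible API.
# The model id and tool schema below are illustrative assumptions, not
# confirmed V3.2 identifiers; check the official API docs before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # assumption: standard API-key auth
    base_url="https://api.deepseek.com",  # DeepSeek's documented base URL
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumption: reasoning-mode model id
    messages=[{"role": "user", "content": "Do I need an umbrella in Hangzhou today?"}],
    tools=tools,
)

# If the model decides a tool is needed during its reasoning pass,
# the request arrives as a structured tool call rather than plain text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```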
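The article does not describe how DSA decides which tokens to attend to, but sparse attention in general means each query keeps only a small subset of keys instead of the full sequence. The sketch below implements one common variant, top-k selection over attention scores; it is a simplified illustration under that assumption, not DeepSeek's actual DSA implementation.

```python
# Simplified top-k sparse attention (illustration only; the real DSA
# selection rule is not described in the article).
import numpy as np

def topk_sparse_attention(q, k, v, top_k=4):
    """Each query attends only to its top_k highest-scoring keys."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (n_q, n_k) dense scores
    # Threshold at each row's k-th largest score; everything below is masked.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving keys only (masked entries contribute 0).
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (n_q, d_v)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))   # 8 queries
k = rng.normal(size=(32, 16))  # 32 keys
v = rng.normal(size=(32, 16))
print(topk_sparse_attention(q, k, v).shape)  # (8, 16)
```

Restricting each query to a handful of keys is what lets a sparse mechanism cut attention compute, which is consistent with the user feedback above that quality did not noticeably drop.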