Hands-on with DeepSeek's quietly launched new model: it beats Claude 4 at coding, but writing... let's skip that

Core Insights
- DeepSeek has officially launched and open-sourced its new model, DeepSeek-V3.1-Base, following the release of GPT-5 and despite not yet having released R2 [1]
- The new model has 685 billion parameters, supports multiple tensor types, brings significant inference-efficiency optimizations, and expands the context window to 128K (a hedged loading sketch follows this digest) [1]

Model Performance
- In initial tests, DeepSeek V3.1 scored 71.6% on the Aider Polyglot programming benchmark, outperforming other open-source models and even the proprietary Claude 4 Opus [5]
- The model successfully processed a long text and returned relevant literary recommendations, demonstrating its ability to handle complex queries [4]
- On programming tasks, DeepSeek V3.1 generated code that handled collision detection and modeled realistic physical properties (see the physics sketch at the end of this digest), showcasing its programming strength [8]

Community and Market Response
- Hugging Face CEO Clément Delangue noted that DeepSeek V3.1 quickly climbed to fourth place on the platform's trending chart, later reaching second, indicating strong market interest [79]
- The update removed the "R1" label from the deep-thinking mode and introduced native "search token" support, enhancing search functionality [79][80]

Future Developments
- The company plans to discontinue the mixed thinking mode in favor of training separate Instruct and Thinking models to ensure higher-quality outputs [80]
- As of the latest update, the model card for DeepSeek-V3.1-Base has not yet been published, but further technical details are anticipated [81]
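To make the "multiple tensor types" point concrete, below is a minimal sketch of how such a checkpoint is typically loaded from Hugging Face with the transformers library at a chosen precision. The repository id and the BF16 dtype choice are assumptions based on the release as described above, not details confirmed in this digest, and a 685B-parameter model would in practice require substantial multi-GPU hardware.

```python
# Minimal sketch: loading a Hugging Face checkpoint at a chosen precision.
# Assumptions: the repo id "deepseek-ai/DeepSeek-V3.1-Base" and BF16 weights;
# sharding across available GPUs is delegated to device_map="auto",
# which requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-V3.1-Base"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # pick one of the published tensor types
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,
)

prompt = "Write a function that detects collisions between two circles."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```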
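The collision-detection result cited in [8] refers to code the model generated for a physics-style demo. As a purely hypothetical illustration of what "collision detection with realistic physical properties" entails, here is a minimal 2D sketch with gravity and a restitution coefficient; none of this code comes from DeepSeek's actual output.

```python
# Hypothetical minimal sketch of the kind of physics task described in [8]:
# a ball bouncing inside a box with gravity and energy loss on impact.
# This is NOT DeepSeek's generated code, only an illustration of the task.
from dataclasses import dataclass

GRAVITY = 9.81       # m/s^2, downward acceleration
RESTITUTION = 0.8    # fraction of speed kept after a bounce (the "realism" knob)
DT = 1 / 60          # simulation timestep in seconds

@dataclass
class Ball:
    x: float         # position (m)
    y: float
    vx: float        # velocity (m/s)
    vy: float
    r: float         # radius (m)

def step(ball: Ball, width: float, height: float) -> None:
    """Advance one timestep: integrate motion, then resolve wall collisions."""
    ball.vy -= GRAVITY * DT            # gravity accelerates the ball downward
    ball.x += ball.vx * DT
    ball.y += ball.vy * DT
    # Collision detection: the ball overlaps a wall when its edge crosses it.
    if ball.x - ball.r < 0 or ball.x + ball.r > width:
        ball.x = max(ball.r, min(width - ball.r, ball.x))  # push back inside
        ball.vx = -ball.vx * RESTITUTION                   # reflect and damp
    if ball.y - ball.r < 0 or ball.y + ball.r > height:
        ball.y = max(ball.r, min(height - ball.r, ball.y))
        ball.vy = -ball.vy * RESTITUTION

ball = Ball(x=1.0, y=2.0, vx=1.5, vy=0.0, r=0.1)
for _ in range(600):                   # simulate ten seconds at 60 steps/s
    step(ball, width=4.0, height=3.0)
print(f"final position: ({ball.x:.2f}, {ball.y:.2f})")
```

The restitution coefficient is what makes bounces look physically plausible rather than perfectly elastic, which is the kind of detail the benchmark reporting in [8] appears to be crediting.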