Core Viewpoint
- The recent upgrade of DeepSeek to version 3.1 brings significant improvements in context length and user interaction, and merges capabilities from its previous models to reduce deployment costs [1][11][12].

Group 1: Model Improvements
- DeepSeek V3.1 now supports a 128K context length, enhancing its ability to handle longer texts [4].
- The parameter count grew only slightly, from 671 billion to 685 billion, yet the user experience has improved noticeably [5].
- The model's programming capability stands out, scoring 71.6% on a multi-language programming benchmark and outperforming Claude 4 Opus [7].

Group 2: Economic Efficiency
- Merging the V3 and R1 models lowers deployment costs: a single hybrid model replaces two separate deployments, so serving requires only 60 GPUs instead of the previous 120 [12].
- Developers noted that the enlarged cache could improve performance by a factor of 3 to 4 [12].
- The open-source release of DeepSeek V3.1-Base on Huggingface (see the loading sketch below) signals a move toward greater accessibility and collaboration in the AI community [13].

Group 3: Market Context
- The AI industry is watching DeepSeek's moves closely, especially given that the anticipated R2 model has yet to appear [19].
- Competitors such as OpenAI, Google, and Alibaba have released new models, using R1 as the benchmark for their advances [1][15].
- The market is eager for DeepSeek's next step, particularly a potential multimodal model following the V3.1 update [23].
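As a minimal sketch of what the open-source release makes possible, the snippet below loads the model's configuration and tokenizer from Hugging Face with the `transformers` library. The repo id `deepseek-ai/DeepSeek-V3.1-Base` and the `max_position_embeddings` config field are assumptions modeled on DeepSeek's earlier releases, not details confirmed by the article.

```python
# Minimal sketch: inspecting the open-sourced DeepSeek V3.1 base weights
# on Hugging Face. Assumptions (not from the article): the repo id and
# the config field name used below.
from transformers import AutoConfig, AutoTokenizer

REPO = "deepseek-ai/DeepSeek-V3.1-Base"  # assumed Hugging Face repo id

# Fetching the config alone avoids downloading the ~685B-parameter
# weights while still letting us check the advertised 128K context window.
config = AutoConfig.from_pretrained(REPO, trust_remote_code=True)
print("max context (tokens):", config.max_position_embeddings)

# Tokenize a short prompt to confirm the tokenizer loads end to end.
tokenizer = AutoTokenizer.from_pretrained(REPO, trust_remote_code=True)
ids = tokenizer("DeepSeek V3.1 reportedly supports a 128K context.")["input_ids"]
print("token count:", len(ids))
```

Running this downloads only a few small JSON files, so it is a cheap way to sanity-check the release before committing to a full multi-GPU deployment.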
DeepSeek Updates Again, and Everyone Is Waiting for Liang Wenfeng's Showstopper