Core Insights
- DeepSeek has released an updated version of its model, V3.1, which shows significant improvements in context length and user interaction, although it is not the highly anticipated R2 model [2][4][14]
- The model now supports a context length of 128K tokens, enhancing its ability to handle longer texts and improving its programming capabilities [5][10]
- The update merges the functionalities of V3 and R1, reducing deployment costs and improving efficiency [13][25]

Group 1: Model Improvements
- The new V3.1 model has 685 billion parameters, only a slight increase over the previous version, V3, which had 671 billion [7]
- User experience has been improved with more natural language responses and the use of tables to present information [8][10]
- V3.1's programming capabilities have been validated in benchmark tests, scoring 71.6% on multi-language programming and outperforming Claude 4 Opus [10]

Group 2: Market Context
- The release of V3.1 comes seven months after the launch of R1, during which other major companies have released new models that use R1 as a benchmark [3][16]
- Despite the improvements in V3.1, the industry is still awaiting the R2 model, which has not been announced [4][20]
- The competitive landscape includes companies such as Alibaba and ByteDance, which have launched models they claim surpass DeepSeek R1 on various metrics [17][19]

Group 3: Future Outlook
- There are indications that the merging of V3 and R1 may be a preparatory step toward a multimodal model [25]
- Industry insiders suggest that future models will focus on innovations in economic viability and usability [24]
- The absence of R2 in the current update has heightened expectations for its eventual release, with speculation that it may still be some time away [21][22]
DeepSeek updates again, as everyone awaits a showstopper from Liang Wenfeng