DeepSeek-V3.1 Officially Released, Challenging OpenAI and Adapted for Next-Generation Domestic Chips

Core Insights
- The release of DeepSeek V3.1 is positioned as a significant step toward the "Agent Era," built on a hybrid reasoning architecture that lets the model switch between fast responses and longer reasoning processes [1]
- The new model reduces token generation by 20% to 50% compared to its predecessor, improving response speed and lowering usage costs [1]
- V3.1 improves throughput efficiency and energy performance, laying the groundwork for large-scale applications [1]
- The model shows stronger programming capabilities, with improved execution and stability in real-world environments [1]
- In complex search tasks, V3.1 demonstrates advanced retrieval and integration abilities, outperforming previous models in multi-disciplinary challenges [1]

Business and Ecosystem Strategy
- DeepSeek adopts a "dual-track" strategy: it continues to offer API services while adjusting pricing and eliminating off-peak (night) discounts starting September 6 [2]
- The base model and post-trained versions of V3.1 have been open-sourced on Hugging Face and ModelScope (MoDa) [2]

Technical Specifications
- V3.1 uses UE8M0 FP8 scale parameter precision, aligning with the upcoming generation of domestic chips, which may require specific software adaptations for optimal performance [4]
- V3.1 is positioned as a direct competitor to GPT-5: both models support long contexts and complex task handling while offering flexible base-model calls and cost structures [4]
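From a developer's perspective, the hybrid architecture means one set of weights exposed through two inference modes. A minimal sketch of how a caller might select between the fast and the longer-reasoning mode via an OpenAI-compatible chat-completions payload; the model names `deepseek-chat` and `deepseek-reasoner` follow DeepSeek's public API convention, but treat the exact identifiers as assumptions here:

```python
def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload, selecting the reasoning mode.

    Assumption: the non-thinking mode is served as "deepseek-chat" and
    the longer-reasoning mode as "deepseek-reasoner"; both map to the
    same hybrid V3.1 weights on the server side.
    """
    return {
        "model": "deepseek-reasoner" if thinking else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }

# Fast response for a simple task, longer reasoning for a hard one.
fast = build_request("Summarize this diff.", thinking=False)
slow = build_request("Prove that this loop terminates.", thinking=True)
print(fast["model"], slow["model"])
```

The payload dict can be sent to any OpenAI-compatible endpoint; only the `model` field changes between the two modes.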
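The UE8M0 scale format mentioned under Technical Specifications restricts each block's shared scale factor to a power of two (8 unsigned exponent bits, zero mantissa bits), which simplifies the multiplier hardware on the target chips. A minimal Python sketch of block quantization under that constraint, assuming FP8 E4M3 as the element format; the helper names and block layout are illustrative, not DeepSeek's implementation:

```python
import math

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3


def ue8m0_scale(block_absmax: float) -> float:
    """Pick a power-of-two scale (UE8M0: exponent only, no mantissa)
    so that block_absmax / scale fits in the FP8 E4M3 range."""
    if block_absmax == 0.0:
        return 1.0
    # Round the exponent up so no value in the block overflows.
    exp = math.ceil(math.log2(block_absmax / FP8_E4M3_MAX))
    return 2.0 ** exp


def quantize_block(values):
    """Scale a block of floats by one shared UE8M0 factor and clamp
    the results to the FP8 E4M3 range."""
    amax = max(abs(v) for v in values)
    scale = ue8m0_scale(amax)
    scaled = [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / scale))
              for v in values]
    return scaled, scale


vals, s = quantize_block([1000.0, -250.0, 3.5])
print(s)  # → 4.0 (smallest power of two that brings 1000 under 448)
```

Because the scale is always a power of two, rescaling on dedicated hardware reduces to an exponent add rather than a full multiply, which is the property that makes the format attractive for the next-generation domestic accelerators the article alludes to.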