DeepSeek-V3.1 Version Update: Dual Modes Now Open to Try

Core Insights
- The new version, DeepSeek-V3.1-Terminus, has been launched, featuring both a "Thinking Mode" and a "Non-Thinking Mode", with support for 128K-token long context [1]

Group 1: Model Upgrades
- The deepseek-chat and deepseek-reasoner models have been unified and upgraded to DeepSeek-V3.1-Terminus, with deepseek-chat corresponding to Non-Thinking Mode and deepseek-reasoner to Thinking Mode [1]
- Key optimizations include improved language consistency, significantly alleviating mixed Chinese-English output and abnormal characters, resulting in more standardized outputs [1]
- Agent capabilities have been further enhanced, particularly the execution performance of the Code Agent and Search Agent [1]

Group 2: Output Length and Pricing
- For output length, Non-Thinking Mode supports a default of 4K tokens with a maximum of 8K, while Thinking Mode defaults to 32K and can be expanded up to 64K, catering to different generation-length requirements [1]
- Pricing for the new model is 0.5 yuan per million input tokens on cache hits and 4 yuan on cache misses, with output priced at 12 yuan per million tokens, offering developers a cost-effective large-model service [1]
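The mode and pricing figures above can be collected into a small cost-estimation sketch. The model names deepseek-chat and deepseek-reasoner are as reported; the Mode dataclass, the estimate_cost helper, and the reading of "4K" as 4,096 tokens are illustrative assumptions, not part of any official DeepSeek SDK.

```python
# Sketch of the V3.1-Terminus mode figures and per-million-token pricing
# reported above. Names and structure here are hypothetical helpers, not
# an official API; token caps interpret "4K" etc. as powers of two.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mode:
    model: str         # model name passed to the API
    default_out: int   # default output-token limit
    max_out: int       # maximum output-token limit

MODES = {
    "non-thinking": Mode("deepseek-chat", 4_096, 8_192),
    "thinking": Mode("deepseek-reasoner", 32_768, 65_536),
}

# Pricing in yuan (CNY) per million tokens, as listed in the announcement.
INPUT_CACHE_HIT = 0.5
INPUT_CACHE_MISS = 4.0
OUTPUT = 12.0

def estimate_cost(input_tokens: int, output_tokens: int,
                  cache_hit_ratio: float = 0.0) -> float:
    """Estimated request cost in yuan for the given token counts."""
    hit = input_tokens * cache_hit_ratio
    miss = input_tokens - hit
    return (hit * INPUT_CACHE_HIT
            + miss * INPUT_CACHE_MISS
            + output_tokens * OUTPUT) / 1_000_000
```

For example, under these figures a request with one million fresh (cache-miss) input tokens and one million output tokens would cost 4 + 12 = 16 yuan, dropping toward 12.5 yuan as the cache hit ratio approaches 1.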