DeepSeek-V3.2-Exp Arrives, API Prices Slashed Again
Feng Huang Wang· 2025-09-29 14:03
Core Insights
- The new pricing policy will reduce the cost for developers using the DeepSeek API by over 50% [2][3]
- The release of the DeepSeek-V3.2-Exp model on September 29, 2025, introduces the DeepSeek Sparse Attention mechanism, enhancing training and inference efficiency for long texts [2]
- The V3.2-Exp model maintains performance levels comparable to the previous V3.1-Terminus model across various benchmarks [2][3]

Performance Comparison
- In the MMLU-Pro benchmark, DeepSeek-V3.1-Terminus scored 85.0, and V3.2-Exp maintained the same score [3]
- In the BrowseComp search benchmark, V3.2-Exp improved to 40.1 from 38.5 for V3.1-Terminus [3]
- The Codeforces-Div1 benchmark rating rose from 2046 for V3.1-Terminus to 2121 for V3.2-Exp [3]

Accessibility and Development
- The V3.2-Exp model has been open-sourced on the Huggingface and Modao platforms, allowing users to access and build on it [5]
- The updated version is available on the official app, web version, and mini-programs [2][3]
DeepSeek Releases New Model V3.2-Exp and Cuts Prices Again
Xin Jing Bao· 2025-09-29 13:28
Core Insights
- DeepSeek has released an experimental version of its model, DeepSeek-V3.2-Exp, which introduces Sparse Attention for improved training and inference efficiency on long texts [1]

Group 1: Model Development
- The new version, V3.2-Exp, is a step towards a next-generation architecture, building on the previous V3.1-Terminus [1]
- The Sparse Attention mechanism is aimed at optimizing the model's performance for long text processing [1]

Group 2: Pricing and Accessibility
- API pricing has been significantly reduced, with costs now at 0.2 yuan per million tokens for cache hits, 2 yuan for cache misses, and 3 yuan for output [1]
- This pricing represents a reduction of over 50% compared to previous costs for developers using the DeepSeek API [1]
DeepSeek-V3.2-Exp Released: More Efficient Training and Inference, API Costs Down Over 50%
Sou Hu Cai Jing· 2025-09-29 13:18
Core Insights
- DeepSeek has released the DeepSeek-V3.2-Exp model, an experimental version aimed at transitioning to a new generation architecture [1]
- The new model introduces DeepSeek Sparse Attention, focusing on optimizing training and inference efficiency for long texts [1]
- The official app, web version, and mini-program have all been updated to DeepSeek-V3.2-Exp, and API costs for developers have been reduced by over 50% [1]
- The performance of DeepSeek-V3.2-Exp on various public evaluation sets is comparable to that of V3.1-Terminus [1]
DeepSeek Officially Releases the DeepSeek-V3.2-Exp Model
Bei Jing Shang Bao· 2025-09-29 12:58
Beijing Business Today (reporter Wei Wei): On September 29, DeepSeek officially released the DeepSeek-V3.2-Exp model, which builds on V3.1-Terminus by introducing DeepSeek Sparse Attention (a sparse attention mechanism) for exploratory optimization and validation of training and inference efficiency on long texts. The official app, web version, and mini-program have all been updated to DeepSeek-V3.2-Exp, and API (application programming interface) prices have been cut substantially. Under the new pricing policy, developers' cost of calling the DeepSeek API will drop by more than 50%. ...
DeepSeek: New Version
Zhong Guo Zheng Quan Bao· 2025-09-29 12:39
On September 29, DeepSeek released the DeepSeek-V3.2-Exp model. According to the company, this is an experimental (Experimental) version that builds on the previous V3.1-Terminus by introducing DeepSeek Sparse Attention (a sparse attention mechanism) for exploratory optimization and validation of training and inference efficiency on long texts. DeepSeek's app, web version, and mini-program have all been updated to DeepSeek-V3.2-Exp. At the same time, thanks to the sharp drop in the new model's serving cost, API prices have been lowered accordingly; under the new pricing policy, developers' cost of calling the DeepSeek API will fall by more than 50%.

Cambricon stated that it has long placed great importance on building a software ecosystem for large models and supports all mainstream open-source large models, with DeepSeek a leading example. Drawing on its long-running ecosystem work and accumulated technology, Cambricon was able to quickly achieve day-0 adaptation and optimization for the brand-new experimental DeepSeek-V3.2-Exp architecture. The company had previously carried out deep software-hardware co-optimization for the DeepSeek model series, reaching industry-leading compute utilization. For the new DeepSeek-V3.2-Exp architecture, Cambricon achieved rapid adaptation through Triton operator development and maximum performance through BangC fused-operator development ...
DeepSeek-V3.2-Exp Model Released and Open-Sourced, API Prices Cut Sharply
36Kr· 2025-09-29 12:12
Core Insights
- The DeepSeek-V3.2-Exp model has been officially released and open-sourced, featuring significant updates in architecture and efficiency [1][4]
- The introduction of DeepSeek Sparse Attention (DSA) aims to enhance training and inference efficiency for long texts without compromising output quality [1][5]
- API costs for developers have been reduced by over 50% due to the new model's lower serving cost [4]

Group 1: Model Features
- DeepSeek-V3.2-Exp is an experimental version that builds on V3.1-Terminus, incorporating a sparse attention mechanism [1]
- The model achieves fine-grained sparse attention, significantly improving long-text training and inference efficiency [1]
- The new model's performance is comparable to V3.1-Terminus across various public evaluation datasets [5]

Group 2: Development and Implementation
- The development of the new model required the design and implementation of numerous new GPU operators, utilizing TileLang for rapid prototyping [2]
- The open-sourced operators include both TileLang and CUDA versions, with a recommendation for the community to use the TileLang version for easier debugging [2]

Group 3: Previous Versions and Improvements
- DeepSeek-V3.1 was released on August 21, featuring a mixed inference architecture and improved efficiency compared to DeepSeek-R1-0528 [4]
- The subsequent update to DeepSeek-V3.1-Terminus on September 22 addressed user feedback, enhancing language consistency and agent capabilities [4]
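The articles describe DSA only as "fine-grained sparse attention" and do not publish its algorithm; as a rough intuition for how sparse attention cuts long-text cost, here is a generic toy sketch in which each query attends only to its top-k highest-scoring keys (this is an illustrative pattern, not DeepSeek's actual DSA, and `topk_sparse_attention` is a hypothetical name):

```python
import numpy as np

def topk_sparse_attention(q, k, v, topk=4):
    """Toy single-head attention: each query keeps only its top-k
    highest-scoring keys; all other scores are masked out before softmax.
    q, k, v have shape (seq_len, d). Illustrative only."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (seq, seq) dense scores
    kth = np.sort(scores, axis=-1)[:, -topk][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)  # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over survivors
    return weights @ v

rng = np.random.default_rng(0)
seq, d = 16, 8
q, k, v = (rng.standard_normal((seq, d)) for _ in range(3))
out = topk_sparse_attention(q, k, v, topk=4)
print(out.shape)  # (16, 8)
```

The efficiency argument is that a production kernel would compute only the selected query-key pairs rather than the full dense score matrix, so per-query cost scales with k instead of sequence length; this sketch still forms the dense matrix and only illustrates the masking.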
Price Cut! Big News from DeepSeek!
Zheng Quan Shi Bao Wang· 2025-09-29 12:07
Core Insights
- DeepSeek has officially released the DeepSeek-V3.2-Exp model, which introduces the DeepSeek Sparse Attention mechanism to enhance training and inference efficiency for long texts [1][3]
- The performance of DeepSeek-V3.2-Exp is comparable to its predecessor, DeepSeek-V3.1-Terminus, across various benchmark datasets [3][4]
- The official app, web version, and mini-program have been updated to DeepSeek-V3.2-Exp, with a significant reduction in API costs of over 50% for developers [4]

Model Performance
- DeepSeek-V3.2-Exp maintains performance levels similar to DeepSeek-V3.1-Terminus in several benchmarks, such as MMLU-Pro (85.0), GPQA-Diamond (79.9), and SimpleQA (97.1) [4]
- Notable improvements were observed in the BrowseComp and Codeforces-Div1 benchmarks, with scores of 40.1 and 2121 respectively for V3.2-Exp [4]

Recent Developments
- DeepSeek has been active recently, with the release of DeepSeek-V3.1 on August 21, which marked a step towards the "Agent era" with enhanced reasoning capabilities and efficiency [8]
- A research paper on the DeepSeek-R1 reasoning model was featured on the cover of the journal Nature, highlighting significant advancements in AI technology from China [8][9]
- Nature's editorial praised DeepSeek for closing the gap in independent peer review for mainstream large models, marking a milestone for Chinese AI research [9]
"Price Butcher" DeepSeek Goes Live: New Model Costs Down Over 50%
Di Yi Cai Jing· 2025-09-29 11:50
Core Insights
- DeepSeek, known as the "price butcher," has significantly reduced its pricing for the newly released DeepSeek-V3.2-Exp model, with output prices dropping by 75% and overall API costs for developers decreasing by over 50% [1][3]

Pricing Changes
- Input pricing for DeepSeek-V3.2-Exp has been adjusted:
  - Cache hit price decreased from 0.5 yuan per million tokens to 0.2 yuan per million tokens
  - Cache miss price reduced from 4 yuan per million tokens to 2 yuan per million tokens
- Output pricing has been slashed from 12 yuan per million tokens to 3 yuan per million tokens [3]

Model Performance and Features
- The V3.2-Exp model is an experimental version that introduces DeepSeek Sparse Attention, enhancing training and inference efficiency for long texts without compromising output quality [3][6]
- Performance evaluations show that DeepSeek-V3.2-Exp maintains results comparable to the previous V3.1-Terminus model across various public benchmark datasets [3][4][5]

Community Support and Open Source
- DeepSeek has open-sourced GPU operators designed for the new model, including TileLang and CUDA versions, encouraging community research and experimentation [6]
- The model is now available on platforms like Huggingface and has been rolled out across official applications and APIs [5][6]

Industry Context
- Following the recent release of DeepSeek-V3.1-Terminus, there is speculation about the future V4 and R2 versions, with industry voices expressing anticipation for major updates [6]
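The per-million-token prices quoted above (cache hit 0.5 to 0.2 yuan, cache miss 4 to 2 yuan, output 12 to 3 yuan) each fall by at least 50%, so any traffic mix saves at least half. A quick sketch of the arithmetic, using a hypothetical workload for illustration:

```python
# Per-million-token prices (yuan) for the DeepSeek API, as quoted above.
OLD = {"cache_hit": 0.5, "cache_miss": 4.0, "output": 12.0}
NEW = {"cache_hit": 0.2, "cache_miss": 2.0, "output": 3.0}

def cost(prices, hit_m, miss_m, out_m):
    """Total cost in yuan for token volumes given in millions of tokens."""
    return (prices["cache_hit"] * hit_m
            + prices["cache_miss"] * miss_m
            + prices["output"] * out_m)

# Hypothetical workload: 30M cached-input, 10M uncached-input, 5M output tokens.
old_cost = cost(OLD, 30, 10, 5)  # 0.5*30 + 4*10 + 12*5 = 115.0
new_cost = cost(NEW, 30, 10, 5)  # 0.2*30 + 2*10 + 3*5  = 41.0
print(f"{old_cost} -> {new_cost} yuan, saving {1 - new_cost / old_cost:.0%}")
# 115.0 -> 41.0 yuan, saving 64%
```

Since the cheapest individual cut is 50% (cache miss) and the deepest is 75% (output), the blended saving for any workload lands between those bounds, consistent with the "over 50%" figure in the articles.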
DeepSeek V3.2 and Zhipu's GLM-4.6 Coming Soon
Zheng Quan Ri Bao Wang· 2025-09-29 11:46
Group 1
- DeepSeek has launched the DeepSeek-V3.2-base model on Huggingface as of September 29 [1]
- Zhipu's next-generation flagship model GLM-4.6 is set to be released soon, with the current flagship model GLM-4.5 available on Z.ai's official website [1]
DeepSeek-V3.2-Exp Model Officially Released and Open-Sourced; API Prices Cut Sharply
Zhi Tong Cai Jing Wang· 2025-09-29 10:53
Core Insights
- DeepSeek officially released the experimental version DeepSeek-V3.2-Exp on September 29, which introduces a sparse attention architecture aimed at optimizing training and inference efficiency for long texts [1][2]
- The new model has been integrated into various platforms including the official app, web version, and mini-programs, with a significant reduction in API costs for developers [1]

Group 1
- The DeepSeek-V3.2-Exp model builds on the V3.1-Terminus version and incorporates a fine-grained sparse attention mechanism called DeepSeek Sparse Attention (DSA), which enhances long-text training and inference efficiency without compromising output quality [1]
- The model is now available on Huawei Cloud's Model as a Service (MaaS) platform, utilizing a large EP parallel deployment scheme to optimize context parallel strategies while maintaining latency and throughput performance [1]

Group 2
- The DeepSeek team conducted a rigorous evaluation of the impact of the sparse attention mechanism, aligning the training settings of DeepSeek-V3.2-Exp with V3.1-Terminus and obtaining comparable performance across various public evaluation datasets [2]
- The introduction of the new model has led to a significant reduction in API service costs, with developer costs for accessing the DeepSeek API decreasing by over 50% under the new pricing policy [2]