
Core Insights
- DeepSeek has released an experimental version of its model, DeepSeek-V3.2-Exp, which introduces Sparse Attention to improve training and inference efficiency on long texts [1]

Group 1: Model Development
- The new version, V3.2-Exp, is a step toward a next-generation architecture, building on the previous V3.1-Terminus [1]
- The Sparse Attention mechanism aims to optimize the model's performance for long-text processing [1]

Group 2: Pricing and Accessibility
- API pricing has been significantly reduced: 0.2 yuan per million tokens for cache hits, 2 yuan per million tokens for cache misses, and 3 yuan per million output tokens [1]
- This represents a reduction of over 50% compared to previous costs for developers using the DeepSeek API [1]
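The per-million-token rates above can be turned into a quick cost estimate. The sketch below is illustrative only (the function name and structure are assumptions, not part of any official DeepSeek SDK); it simply applies the published rates of 0.2 yuan for cache-hit input, 2 yuan for cache-miss input, and 3 yuan for output, per million tokens.

```python
# Illustrative cost estimator based on the rates reported in the article.
# Not an official DeepSeek API or SDK; names here are hypothetical.

RATES_YUAN_PER_MILLION = {
    "cache_hit": 0.2,   # input tokens served from cache
    "cache_miss": 2.0,  # input tokens not in cache
    "output": 3.0,      # generated output tokens
}

def estimate_cost_yuan(cache_hit_tokens: int,
                       cache_miss_tokens: int,
                       output_tokens: int) -> float:
    """Return the estimated cost in yuan for one token mix."""
    per_m = 1_000_000
    return (cache_hit_tokens / per_m * RATES_YUAN_PER_MILLION["cache_hit"]
            + cache_miss_tokens / per_m * RATES_YUAN_PER_MILLION["cache_miss"]
            + output_tokens / per_m * RATES_YUAN_PER_MILLION["output"])

# Example: 2M cached input tokens, 0.5M uncached, 1M output tokens
print(round(estimate_cost_yuan(2_000_000, 500_000, 1_000_000), 2))  # → 4.4
```

At these rates, output tokens dominate the bill unless most input is served from cache, which is consistent with the article's framing of the cut as a better-than-50% saving for heavy API users.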