
Core Viewpoint
- The release of the DeepSeek-V3.2-Exp model by DeepSeek marks a significant advancement for the domestic AI chip ecosystem: it introduces a sparse attention mechanism that reduces computational resource consumption and improves inference efficiency [1][7].

Group 1: Model Release and Features
- DeepSeek-V3.2-Exp incorporates DeepSeek Sparse Attention, and API prices have been cut by 50% to 75% across its official app, web version, and mini-programs [1]. (An illustrative sketch of the sparse-attention idea follows at the end of this summary.)
- The new model received immediate recognition and adaptation from several domestic chip manufacturers, including Cambricon, Huawei, and Haiguang, pointing to a collaborative ecosystem [2][6].

Group 2: Industry Impact and Ecosystem Development
- The rapid adaptation of DeepSeek-V3.2-Exp by multiple companies suggests a growing consensus within the domestic AI industry on the model's significance, positioning DeepSeek as a benchmark for domestic open-source models [2][5].
- The domestic chip industry, which operates primarily under a "Fabless" model, is expected to advance quickly as it aligns with standards defined by DeepSeek, seen as a key player in shaping the industry's future [4][5].

Group 3: Comparison with Global Standards
- DeepSeek's swift establishment of an ecosystem contrasts with NVIDIA's two-decade build-out of its CUDA platform, highlighting the rapid evolution of the domestic AI landscape [3][8].
- The move by major internet companies such as Tencent and Alibaba to adapt to domestic chips further underscores the expanding synergy between the AI hardware and software ecosystems [8].
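
For readers unfamiliar with the concept, the minimal sketch below illustrates the general idea behind sparse attention: each query token attends only to a small subset of keys rather than all of them, which is where the compute savings come from. This is a generic top-k selection toy, not DeepSeek's actual DSA implementation; the function name `topk_sparse_attention`, the `top_k` parameter, and the NumPy-based approach are all illustrative assumptions, and this toy version still computes the full score matrix for clarity, whereas a real implementation avoids that work.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k=4):
    """Toy sparse attention: each query keeps only its top_k highest-scoring
    keys and ignores the rest. Illustrative only, not DeepSeek's DSA."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (n_q, n_k); computed densely here for clarity
    # Indices of the top_k scores per query row.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    # Mask out all non-selected keys with -inf so they get zero weight.
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    # Softmax over the surviving scores only.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (n_q, d_v)

# Toy usage: 8 query tokens, 32 key/value tokens, 16-dim heads.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(8, 16)), rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (8, 16)
```

In this sketch each output token is a weighted sum over only 4 of the 32 keys; scaling that selectivity to long contexts is what lets a sparse-attention model spend far less compute per token than dense attention.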