After the H20's "verbal green light", NVIDIA needs to reacquaint itself with the Chinese market
Jing Ji Guan Cha Wang · 2025-07-15 07:09
Core Insights
- NVIDIA's CEO Jensen Huang visited China for the third time in 2025, just as NVIDIA's market capitalization surpassed $4 trillion [2]
- NVIDIA is seeking to resume sales of the H20 GPU, previously restricted by U.S. export controls, with assurances from the U.S. government that licenses will be granted [2][8]
- The H20 chip was developed in response to U.S. export regulations and designed to comply with the specific performance metrics set by the U.S. Department of Commerce [4][5]

Product Development and Compliance
- The H20 chip was created after the U.S. export-control update of October 2023, which imposed strict performance thresholds on chips sold to China [4]
- The H20's specifications show a significant reduction in compute relative to the restricted H100: 148 TFLOPS of FP16/BF16 performance versus 1979 TFLOPS for the H100 (see the back-of-the-envelope sketch at the end of this summary) [5]
- The H20's memory capacity increased from the H100's 80GB to 96GB, and its memory bandwidth improved from 3.35 TB/s to 4.0 TB/s, while interconnect performance was maintained [6]

Market Dynamics and Competition
- The introduction of the H20 and the new NVIDIA RTX PRO GPU aims to tap the industrial-AI applications market, aligning with China's manufacturing-upgrade policies [8]
- The Chinese AI chip market is evolving rapidly, with local suppliers expected to capture 40% of the AI server market by 2025, reducing reliance on foreign chips [10]
- Local AI chip manufacturers are gaining share, with significant contracts awarded to companies in the Huawei Ascend ecosystem, indicating a shift toward domestic solutions [11][12]

Challenges Ahead
- Despite the potential re-entry of the H20, NVIDIA faces intense competition from local players that have established a strong foothold in the AI infrastructure market [9][12]
- Chinese customers have already invested heavily in local AI infrastructure, so adapting to NVIDIA's compliance-driven performance reductions will be a further challenge [12]
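To make the compliance gap concrete, the sketch below compares the spec figures quoted above. The FP16/BF16, memory, and bandwidth numbers come from this summary; the TPP (total processing performance) formula and the 4800 control threshold are assumptions based on common readings of the October 2023 U.S. export-control rule, not figures from the article, so treat them as illustrative only.

```python
# Back-of-the-envelope comparison of the H20 and H100 figures quoted above.
# Spec numbers are taken from the article; the TPP formula and the 4800
# threshold are assumptions about the October 2023 export-control rule.

H100 = {"fp16_tflops": 1979, "memory_gb": 80, "bandwidth_tb_s": 3.35}
H20 = {"fp16_tflops": 148, "memory_gb": 96, "bandwidth_tb_s": 4.0}

TPP_CONTROL_THRESHOLD = 4800  # assumed control threshold, not from the article


def tpp(tflops: float, bit_length: int = 16) -> float:
    """Approximate TPP as throughput (TFLOPS) weighted by operand bit length."""
    return tflops * bit_length


print(f"Compute ratio (H20/H100):   {H20['fp16_tflops'] / H100['fp16_tflops']:.1%}")
print(f"Bandwidth ratio (H20/H100): {H20['bandwidth_tb_s'] / H100['bandwidth_tb_s']:.2f}x")
print(f"H100 FP16 TPP ~ {tpp(H100['fp16_tflops']):.0f} (above {TPP_CONTROL_THRESHOLD})")
print(f"H20  FP16 TPP ~ {tpp(H20['fp16_tflops']):.0f} (below {TPP_CONTROL_THRESHOLD})")
```

Run as written, this prints a compute ratio of roughly 7.5% and a bandwidth ratio of about 1.19x, which is consistent with the article's framing: the H20 trades away compute to stay under the assumed threshold while keeping memory capacity and bandwidth competitive for inference-style workloads.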