The HBM Battle: China Accelerates Breaking Through the Wall, Nvidia Moves Into Base Die Design
Hu Xiu · 2025-08-18 01:43
Group 1
- The core technology of HBM (High Bandwidth Memory) is becoming increasingly strategic for high-end AI chips and the computing-power supply chain [1][2]
- Competition in AI computing power is intensifying, and the next generation of HBM technology is crucial for breakthroughs in large AI models and their low-cost, large-scale deployment [2][6]
- China's domestic substitution of HBM is accelerating, with the technology gap to the leading vendors shrinking from 8 years to 4 years [3][14]

Group 2
- China has begun mass production of HBM2 ahead of schedule; HBM3 samples were delivered to customers in June, with mass-production verification expected by the end of the year [4][12]
- Major players are moving toward HBM4, which will bring a technological leap; Nvidia has started designing HBM base dies, with mass production expected to begin in 2027 [5][24]
- HBM capacity and bandwidth have increased significantly, with capacity growing 2.4x and bandwidth 2.6x from H100 to GB200, but model parameters and context length are growing faster, increasing memory pressure (a rough calculation follows after this list) [6][20]

Group 3
- The lack of domestic HBM is a significant challenge for China's AI competitiveness, especially under US restrictions on access to advanced HBM technology [7][19]
- Mainstream domestic AI chips still primarily use HBM2E from the three major memory suppliers, while global competitors have moved to HBM3E [8][9]
- Domestic HBM substitution is proceeding faster than previously expected, with companies such as Changxin Storage and Wuhan Xinxin catching up rapidly [10][13]

Group 4
- Changxin Storage is actively expanding HBM capacity, with HBM2 already in mass production and HBM3 mass production planned for next year [12][14]
- The time gap between Chinese manufacturers and the leading memory companies is expected to shrink to about 3-4 years by 2027 [14][15]
- The transition to HBM4 is expected to introduce new technical paths, making it difficult to replicate the catch-up achieved from HBM2 to HBM3 [20][25]

Group 5
- Future HBM will not be standardized; the trend is toward customization to reduce power consumption and performance loss [24][25]
- Nvidia's design of its own HBM base die is a critical development, with a 3nm-process base die expected to enter small-scale production in 2027 [35][36]
- The integration of storage and compute architectures is becoming a key factor for future AI chips, with HBM as a decisive element [34][44]
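To make the capacity-versus-demand point in Group 2 concrete, here is a minimal back-of-envelope sketch in Python. The per-GPU spec figures and the KV-cache model (layer count, head count, precision) are illustrative assumptions, not values from the article; they only show how a roughly 2.4x capacity gain can be outrun by a much larger jump in context length.

```python
# Rough back-of-envelope for the claim that HBM capacity/bandwidth grow
# more slowly than model memory demand. All figures are illustrative
# assumptions, not numbers taken from the article.

def ratio(new, old):
    return new / old

# Assumed per-GPU HBM specs (approximate public figures; treat as placeholders).
h100_capacity_gb, h100_bandwidth_tbs = 80, 3.0      # H100-class part with HBM3
gb200_capacity_gb, gb200_bandwidth_tbs = 192, 8.0   # GB200-class part (per GPU) with HBM3E

print(f"HBM capacity growth:  {ratio(gb200_capacity_gb, h100_capacity_gb):.1f}x")
print(f"HBM bandwidth growth: {ratio(gb200_bandwidth_tbs, h100_bandwidth_tbs):.1f}x")

# Hypothetical KV-cache demand for a large model: grows linearly with context
# length. Example configuration: 64 layers, 8 KV heads, head_dim 128, FP16
# (2 bytes per value), storing both K and V.
def kv_cache_gb(context_len, layers=64, kv_heads=8, head_dim=128, bytes_per=2):
    return context_len * layers * kv_heads * head_dim * bytes_per * 2 / 1e9

for ctx in (8_192, 131_072):
    print(f"KV cache @ {ctx:>7} tokens: {kv_cache_gb(ctx):.1f} GB per sequence")

# Context growing from 8K to 128K is a 16x jump in KV-cache memory, far
# outpacing the ~2.4x capacity increase: this mismatch is the "memory
# pressure" the article points to.
```

With these assumed numbers, the capacity ratio comes out near 2.4x and the bandwidth ratio near 2.7x, in line with the figures cited in the summary, while the per-sequence KV cache grows from about 2 GB at 8K tokens to over 34 GB at 128K tokens.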