HBM (High Bandwidth Memory)
The HBM War: China Accelerates Breaking Through the Wall as Nvidia Moves into Base Die Design
Hu Xiu· 2025-08-18 01:43
Group 1
- The core technology of HBM (High Bandwidth Memory) is becoming increasingly strategic for high-end AI chips and the computing power supply chain [1][2]
- Competition in AI computing power is intensifying, and the next generation of HBM technology is crucial for breakthroughs in large AI models and low-cost large-scale deployment [2][6]
- China's domestic replacement of HBM technology is accelerating, with the gap behind the leading companies shrinking from 8 years to 4 years [3][14]

Group 2
- China has begun mass production of HBM2 ahead of schedule, with HBM3 samples delivered to customers in June and mass-production verification expected by the end of the year [4][12]
- Major players are moving toward HBM4, which will bring a technological leap; Nvidia has started designing HBM base dies, with mass production expected to begin in 2027 [5][24]
- HBM capacity and bandwidth have grown significantly, with capacity up 2.4x and bandwidth up 2.6x from H100 to GB200, but model parameter counts and context lengths are growing even faster, increasing storage pressure [6][20]

Group 3
- The lack of domestic HBM is a significant challenge for China's AI competitiveness, especially under US restrictions on access to advanced HBM technologies [7][19]
- Mainstream domestic AI chips primarily use HBM2E from the three major memory suppliers, while global competitors have moved to HBM3E [8][9]
- Domestic HBM replacement is proceeding faster than previously expected, with companies like Changxin Storage and Wuhan Xinxin catching up rapidly [10][13]

Group 4
- Changxin Storage is actively expanding HBM capacity, with HBM2 already in mass production and HBM3 mass production planned for next year [12][14]
- The time gap between Chinese manufacturers and the leading memory companies is expected to shrink to about 3-4 years by 2027 [14][15]
- The transition to HBM4 is expected to introduce new technical paths, making the HBM2-to-HBM3 catch-up difficult to replicate [20][25]

Group 5
- The future of HBM will not be fully standardized, with a trend toward customization to reduce power consumption and performance loss [24][25]
- Nvidia designing its own HBM base die is a critical development, with a 3nm-process base die expected to begin small-scale production in 2027 [35][36]
- The integration of storage and compute architectures is becoming a key factor in the future of AI chips, with HBM as a decisive element [34][44]
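The generation-to-generation bandwidth jumps discussed above come from a wide, relatively slow interface multiplied across several stacks per package. A minimal back-of-the-envelope sketch of that arithmetic (the 1024-bit interface width and per-pin data rates below are typical published JEDEC/vendor figures, not taken from the article, and exact rates vary by vendor and speed bin):

```python
# Illustrative HBM bandwidth arithmetic; figures are typical public
# JEDEC/vendor numbers, not sourced from the article above.

def stack_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s: width x pin rate / 8."""
    return width_bits * pin_rate_gbps / 8

# (interface width in bits, per-pin data rate in Gb/s)
GENERATIONS = {
    "HBM2":  (1024, 2.4),   # ~307 GB/s per stack
    "HBM2E": (1024, 3.6),   # ~461 GB/s per stack
    "HBM3":  (1024, 6.4),   # ~819 GB/s per stack
    "HBM3E": (1024, 9.6),   # ~1229 GB/s per stack (top bins)
}

for gen, (width, rate) in GENERATIONS.items():
    per_stack = stack_bandwidth_gbs(width, rate)
    # A hypothetical accelerator with 6 stacks on the package:
    print(f"{gen}: {per_stack:.0f} GB/s per stack, "
          f"{6 * per_stack / 1000:.2f} TB/s with 6 stacks")
```

This is why moving an accelerator from HBM2E to HBM3E stacks roughly doubles package bandwidth at the same stack count, and why the package-level TB/s figures cited for recent GPUs follow directly from stack count times per-stack rate.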
Huazhuo Jingke (华卓精科) Launches a Series of High-End Equipment to Support Domestication of HBM Chip Manufacturing Equipment
国芯网· 2025-03-14 04:33
In the era of large models, pairing AI chips with HBM (High Bandwidth Memory) has become industry consensus. Amid exploding demand for AI computing power, HBM, with its 3D stacked architecture and ultra-high bandwidth, has become the "performance multiplier" for AI chips, data centers, and supercomputing. However, HBM's core manufacturing technologies have long been monopolized by overseas giants, and the high-end equipment involved has been almost entirely imported.

Beijing Huazhuo Jingke Technology Co., Ltd. (华卓精科, hereinafter "Huazhuo Jingke") has independently developed a series of high-end equipment targeting the core steps of HBM chip manufacturing, including: hybrid bonding equipment (UP-UMA HB300), fusion bonding equipment (UP-UMA FB300), chiplet (die-to-wafer) bonding equipment (UP-D2W-HB), laser lift-off equipment (UP-LLR-300), and laser annealing equipment (UP-DLA-300), breaking through the "chokepoint" predicament facing domestic HBM chips and injecting hard momentum into the self-reliance of China's memory industry.

HBM vertically stacks multiple layers of DRAM dies and interconnects them with a logic die, achieving a bandwidth leap to several TB per second, but its manufacturing faces multiple technical barriers:
1) Precision limits: die stacking requires sub-micron, even tens-of-nanometer, alignment accuracy; otherwise interconnects fail or signals degrade;
2) Process complexity: hybrid bonding ...