Next-Generation HBM: Three Technologies Will Make or Break It!
Micron Technology (US:MU) · Semiconductor Industry Observation (半导体行业观察) · 2025-04-03 01:23

Core Viewpoint
- SK Hynix emphasizes that commercializing the next generation of HBM (High Bandwidth Memory) requires technological advances across multiple fields, particularly in power efficiency, and closer collaboration with major foundries is expected [1][3].

Group 1: Development Focus of HBM
- SK Hynix's next-generation HBM development focuses on three main tasks: bandwidth, power, and capacity [3].
- Bandwidth is a critical measure of data transfer speed; the number of I/O ports for HBM4 is expected to double compared with HBM3E, reaching 2,048 [3].
- Future HBM is expected to improve in both power consumption and capacity, with the number of stacked DRAM dies rising from a maximum of 12 to 16 or even 20 layers [3][4].

Group 2: Challenges in HBM Production
- To stack more layers of next-generation HBM within the 775-micrometer package height limit, the spacing between DRAM dies must be reduced [4].
- SK Hynix is advancing hybrid bonding technology, which connects DRAM dies directly without bumps, reducing chip thickness and improving power efficiency [4][5].
- However, hybrid bonding still faces commercialization hurdles due to its high technical difficulty and unresolved mass-production and reliability issues [5].

Group 3: Competitive Landscape
- Samsung has successfully produced 16-layer stacked HBM3 memory using hybrid bonding and plans to mass-produce HBM4 with this technology [6][8].
- Micron Technology is on track with HBM4 development, expecting mass production to start in 2026, with HBM4E following shortly after [12].
- Micron's HBM4 will use 1β (5th-generation 10nm-class) DRAM technology, integrating up to 16 DRAM chips per stack, each providing 32 GB of capacity, with a peak bandwidth of 1.64 TB/s [12][13].

Group 4: Future Projections
- HBM4 and HBM4E are seen as crucial to the continued scaling of AI performance, with significant improvements expected in density and bandwidth [22].
- Nvidia's upcoming AI accelerators are projected to use HBM4, with the Rubin Ultra expected to feature up to 1 TB of memory capacity [20][22].
- Competition is intensifying, with both Samsung and SK Hynix planning to adopt advanced foundry processes for HBM4 production, aiming for better performance and efficiency [16][17].
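The headline figures above can be sanity-checked with simple arithmetic. The 2,048 I/O count, the 775-micrometer height limit, and the 16-layer stack come from the article; the 6.4 Gb/s per-pin data rate is an assumption (it is not stated in the text, though it is consistent with the quoted 1.64 TB/s peak).

```python
# Back-of-envelope checks for the HBM4 figures quoted above.
# Assumption (not in the article): a per-pin data rate of 6.4 Gb/s.

IO_PINS = 2048          # HBM4 I/O count, double HBM3E's 1,024 (per the article)
PIN_RATE_GBPS = 6.4     # assumed per-pin rate, Gb/s

# Peak bandwidth: pins * per-pin rate, converted from Gb/s to TB/s.
peak_tb_s = IO_PINS * PIN_RATE_GBPS / 8 / 1000
print(f"peak bandwidth = {peak_tb_s:.2f} TB/s")   # 1.64 TB/s, matching the article

# Stack-height budget: a 775 um package limit spread over 16 stacked dies
# leaves under ~49 um per die, including any bonding interface -- which is
# why bumpless hybrid bonding matters for taller stacks.
HEIGHT_LIMIT_UM = 775
LAYERS = 16
per_die_um = HEIGHT_LIMIT_UM / LAYERS
print(f"height budget per die = {per_die_um:.1f} um")
```

At 20 layers the same arithmetic leaves under 39 micrometers per die, which illustrates why the article treats hybrid bonding as a prerequisite for further stacking.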