HBM4: Ready to Launch

Core Viewpoint
- HBM has evolved from a niche product into a core component of the AI revolution, breaking through traditional memory bottlenecks and significantly improving bandwidth and data-transfer efficiency [2].

Market Competition Landscape
- SK Hynix and Samsung dominate the HBM market, together holding over 90% share in 2024 (SK Hynix 54%, Samsung 39%); Micron follows with a 7% share [2].
- Competition over the next generation, HBM4, is intensifying among the three, with each company promoting its advances as revolutionary [2].

SK Hynix's Strategy
- SK Hynix positions HBM as "Near-Memory" that sits closer to the compute core (CPU/GPU) than traditional DRAM, offering higher bandwidth and faster response times [4].
- The company highlights three structural advantages of HBM: high capacity through 3D TSV stacking, high bandwidth via wide-channel parallel transmission, and lower energy consumption per bit than traditional DRAM [4].

HBM Evolution and Performance
- HBM bandwidth has improved markedly across generations; HBM4 is expected to deliver a 200% increase over HBM3E, exceeding 2TB/s [5].
- HBM4 supports up to 36GB of capacity and is designed to process large language models (LLMs) efficiently, with a claimed 60% overall advantage in cost per unit of bandwidth, power consumption, and heat dissipation over previous generations [5].

Samsung's Approach
- Samsung's HBM roadmap shows a steady increase in bandwidth from HBM2 (307 GB/s) to HBM4 (a projected 2.048 TB/s by 2026) [6].
- The company emphasizes energy efficiency, with a notable decrease in energy consumption from HBM2 to HBM3E [8].

Micron's Position
- Micron, although a late entrant, is making strides in HBM: it skipped HBM3 and entered the market directly with HBM3E, which is a key component of NVIDIA's H200 GPU [11].
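The per-stack bandwidth figures cited for each generation follow from the interface width multiplied by the per-pin data rate. A minimal sketch of that arithmetic; the interface widths and per-pin rates below are representative public figures, not taken from the article:

```python
# Per-stack HBM bandwidth (GB/s) = interface width (bits) * per-pin rate (Gb/s) / 8.
# Widths and pin rates are representative published values, used here for illustration.
GENERATIONS = {
    # name: (interface width in bits, per-pin data rate in Gb/s)
    "HBM2":  (1024, 2.4),   # -> 307.2 GB/s per stack
    "HBM3E": (1024, 9.6),   # -> ~1.2 TB/s per stack
    "HBM4":  (2048, 8.0),   # doubled interface width -> 2.048 TB/s per stack
}

def bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s."""
    return width_bits * pin_rate_gbps / 8

for name, (width, rate) in GENERATIONS.items():
    print(f"{name}: {bandwidth_gbs(width, rate):.1f} GB/s")
```

Under these assumptions the output reproduces the figures in the text: 307.2 GB/s for HBM2 and 2048 GB/s (2.048 TB/s) for HBM4, with the headline gain coming from doubling the interface width rather than from pin speed alone.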
- Micron's HBM4 is expected to offer 36GB of capacity and over 2TB/s of bandwidth, with more than a 20% improvement in energy efficiency over its predecessor [11].

HBM Manufacturing Complexity
- HBM manufacturing is complex, involving many steps from silicon etching to packaging, with a focus on improving front-end processes to raise bandwidth and die density [14].
- Companies use different stacking technologies: SK Hynix is known for MR-MUF, while Samsung and Micron primarily use TC-NCF [14][15].

Market Growth Projections
- Global HBM revenue is projected to grow from $17 billion in 2024 to $98 billion by 2030, a compound annual growth rate (CAGR) of 33% [19].
- HBM's share of DRAM market revenue is expected to rise from 18% in 2024 to 50% by 2030, reflecting its high value and premium pricing relative to traditional DRAM [20].

Challenges Ahead
- Despite the promising outlook, the HBM market may face cyclical adjustments due to potential oversupply as major suppliers ramp up production [21].
- Surging AI-driven demand for HBM may intensify competition and trigger market corrections in the coming years [21].
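The revenue projection above can be sanity-checked with the standard CAGR formula; a quick sketch using only the two endpoint figures cited in the text:

```python
# CAGR check for the cited HBM revenue projection:
# $17B (2024) -> $98B (2030), i.e. 6 years of compounding.
start_billions = 17.0
end_billions = 98.0
years = 6

cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ≈ 33.9%, consistent with the article's ~33%
```

The endpoints reproduce a growth rate of roughly 34%, in line with the 33% CAGR stated in the projection.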