Semiconductors: NVIDIA's "Rubin" Confirmed to Carry HBM4; HBM Iteration and Upgrades Accelerate
Huajin Securities·2024-06-03 23:00

Investment Rating
- The report maintains an "Outperform" rating, expecting returns to exceed the CSI 300 Index by more than 10% over the next six months [3].

Core Insights
- NVIDIA's next-generation AI platform "Rubin" is set to integrate HBM4 memory, with a planned release in 2026, marking a significant upgrade in memory capability [1].
- The Blackwell chip has entered production; the enhanced Blackwell Ultra GPU is expected in 2025, and the Rubin architecture will power future data-center GPUs [1].
- Major memory manufacturers, including Micron, SK Hynix, and Samsung, are accelerating HBM3E production and advancing HBM4 development, indicating robust demand for high-bandwidth memory in AI applications [1][2].

Summary by Sections

Investment Highlights
- NVIDIA's H200, based on the Hopper architecture, was released in 2023, featuring 141GB of memory and 4.8TB/s of bandwidth [1].
- The Blackwell GPU, equipped with 192GB of memory and 8TB/s of bandwidth, has begun production [1].
- The upcoming Rubin GPU will use 8 HBM4 stacks, while the Rubin Ultra will integrate 12, marking NVIDIA's first use of 12 stacks in an AI semiconductor product [1].

Industry Trends
- The HBM industry is iterating rapidly, driven by AI chip makers such as NVIDIA, which is expected to create new demand and upgrade the supply chain [2].
- Samsung's HBM4 research indicates a maximum bandwidth of 2TB/s and 48GB of capacity per stack via 16-layer stacking, planned for release in 2025 [2].
- Investment opportunities in the HBM supply chain span packaging, equipment, and materials companies, such as Tongfu Microelectronics and Changdian Technology [2].
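The per-stack figures above can be combined into a rough upper bound on Rubin-class memory subsystems. The sketch below is a back-of-envelope illustration only, assuming Samsung's HBM4 targets (2TB/s bandwidth, 48GB capacity per stack) apply uniformly to the 8 stacks planned for Rubin and the 12 for Rubin Ultra; actual shipping specifications may differ.

```python
def hbm_totals(stacks: int, bw_per_stack_tb_s: float = 2.0,
               cap_per_stack_gb: int = 48) -> tuple[float, int]:
    """Aggregate (bandwidth in TB/s, capacity in GB) across identical HBM stacks.

    Defaults are Samsung's reported HBM4 per-stack targets from the report;
    real products may use different bins or stack heights.
    """
    return stacks * bw_per_stack_tb_s, stacks * cap_per_stack_gb

print(hbm_totals(8))   # Rubin (8 stacks):        (16.0, 384)
print(hbm_totals(12))  # Rubin Ultra (12 stacks): (24.0, 576)
```

For comparison, the same arithmetic applied to Blackwell's reported 192GB / 8TB/s aggregate implies roughly 1TB/s and 24GB per stack if it uses eight HBM3E stacks, which shows the generational step HBM4 would represent.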