Micron Reportedly Accelerating HBM4 Expansion, Monthly Capacity to Rise to 15,000 Wafers
Ge Long Hui APP · 2026-01-07 05:10
Gelonghui, January 7 | According to CNMO Tech, industry sources report that Micron plans to raise its HBM4 monthly capacity to 15,000 wafers in 2026, nearly 30% of its total HBM capacity (roughly 55,000 wafers per month), signaling a strategic all-in bet on the next-generation AI memory market. Micron has long been at a disadvantage in HBM, trailing its Korean competitors in capacity scale, but that situation is changing. Micron CEO Sanjay Mehrotra said at the December 2025 earnings call that the company will significantly ramp HBM4 output beginning in the second quarter of 2026, and expects its yield ramp to be faster than that of the previous-generation HBM3E. Industry analysts note that Micron has already begun equipment investment and is accelerating capacity build-out. ...
Samsung Memory: One Piece of Bad News, One Piece of Good News
半导体芯闻· 2025-06-13 09:41
Group 1
- Samsung Electronics is struggling with the mass-production strategy for its next-generation NAND, V10, with full-scale investment expected to be delayed until the first half of next year [1][2]
- The V10 NAND features a stacking layer count of 430 layers, surpassing the current V9 generation's 290 layers [1]
- Uncertainty in demand for high-stacking NAND and the introduction of new technologies are hindering Samsung's development [1][2]

Group 2
- Samsung is collaborating with major front-end equipment manufacturers such as Lam Research and TEL to evaluate low-temperature etching equipment for the V10 NAND [2]
- The assessment results indicate that low-temperature etching may not be immediately applicable to mass production, prompting a reevaluation of the equipment [2]
- The investment costs associated with the new equipment are a significant factor in Samsung's decision to postpone V10 NAND mass production [2]

Group 3
- Samsung has secured a supply agreement with AMD for fifth-generation 12-layer HBM3E memory, which will be used in the upcoming MI350 AI accelerator [3][4]
- The new 12-layer HBM3E offers over 50% improvement in performance and capacity compared to the previous 8-layer version, supporting bandwidth of up to 1,280GB/s [4]
- AMD's upcoming MI400 series is expected to use Samsung's HBM4, which is seen as a critical battleground for dominance in the AI memory market [5]

Group 4
- HBM4 is anticipated to give Samsung a significant advantage: while competitors are using fifth-generation 10nm-class technology, Samsung plans to adopt a more advanced sixth-generation process [5]
- The Helios server architecture, which includes 72 MI400 GPUs, will carry a total of 31TB of HBM4, significantly enhancing AI processing capability [5]