Samsung Unveils HBM4E and Deepens NVIDIA Partnership as the AI Compute "Memory Race" Accelerates
Hua Er Jie Jian Wen · 2026-03-16 22:23
Group 1
- The core focus of the article is the unveiling of Samsung's next-generation high-bandwidth memory chip, HBM4E, at NVIDIA's GTC conference, highlighting its significance in the AI computing landscape [1][2].
- HBM4E is positioned as an upgrade over HBM4, designed to provide higher bandwidth and lower latency for next-generation AI accelerators [2].
- The expected single-pin speed of HBM4E is 16Gbps, with a total bandwidth of approximately 4TB/s, aimed at supporting large-scale AI models and data center expansion [3].

Group 2
- HBM4E is seen as critical infrastructure for the data throughput required by trillion-parameter models and AI data centers [2].
- HBM technology uses 3D stacking to vertically interconnect multiple DRAM dies, significantly increasing memory bandwidth while reducing power consumption, making it a core component for AI GPUs and accelerators [2].
- The release of HBM4E intensifies competition in the HBM market, particularly between Samsung and SK Hynix [1].
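As a rough sanity check, the ~4TB/s figure follows from the quoted 16Gbps per-pin rate if one assumes a 2048-bit interface per stack (the width used by HBM4; the article does not state HBM4E's width, so this is an assumption):

```python
# Sanity-check of the HBM4E bandwidth figure quoted above.
# Assumption: 2048-bit I/O width per stack, as in HBM4; the article
# does not specify HBM4E's interface width.

PIN_SPEED_GBPS = 16      # per-pin data rate quoted in the article (Gbps)
INTERFACE_BITS = 2048    # assumed number of data pins per stack

total_gbps = PIN_SPEED_GBPS * INTERFACE_BITS   # 32768 Gbps per stack
total_tb_per_s = total_gbps / 8 / 1000         # bits -> bytes, G -> T

print(f"{total_tb_per_s:.3f} TB/s")  # → 4.096 TB/s, consistent with ~4TB/s
```

Under this assumption the per-stack figure lands at 4.096 TB/s, consistent with the "approximately 4TB/s" cited in the article.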