A Shift in the AI Storage Technology Roadmap
They Abandoned HBM
36Kr · 2025-11-03 00:47
The surging AI wave has pushed the storage market, long known for its cyclical swings, into an unprecedented "super boom cycle." Driven by both the training and the inference of large AI models, demand for computing power has exploded, and HBM has become a key component of AI servers. By stacking multiple layers of DRAM and coupling tightly with the GPU, HBM provides a faster data path for AI computing, making it the hottest "golden storage" of the AI era.

HBM's boom has in turn heated up the entire storage supply chain. The three global memory giants, Samsung Electronics, SK Hynix, and Micron Technology, have all seen earnings surge: Samsung's third-quarter net profit rose 21% year-on-year, SK Hynix posted the highest quarterly profit in its history, and Micron's net profit tripled year-on-year. SK Hynix also said its 2025 HBM capacity has been fully booked by customers.

Meanwhile, conventional DRAM and NAND chips are unexpectedly in demand as well. With memory makers concentrating their capacity expansion on HBM, supply of conventional memory has tightened and the market is rebalancing. Data center giants such as Amazon, Google, and Meta are buying conventional DRAM in bulk to expand their AI inference and cloud service capabilities. In fact, ordinary memory still plays an irreplaceable role in the AI inference stage, leaving the entire storage market "tight across the board."

The LPDDR5 boom

The first to catch fire was LPDDR, the memory found in nearly every smartphone. Qualcomm recently announced its new AI200 and AI250 data center accelerators, expected in 2026 ...
They Abandoned HBM!
半导体行业观察 · 2025-11-01 01:07
Group 1
- The core viewpoint of the article highlights the transformative impact of AI on the storage market, which has entered a "super boom cycle" driven by surging demand for computing power, particularly for HBM (High Bandwidth Memory) as a key component in AI servers [2]
- Major storage companies like Samsung, SK Hynix, and Micron are experiencing significant profit growth: Samsung's Q3 net profit rose 21%, SK Hynix achieved its highest quarterly profit ever, and Micron's net profit tripled year-on-year [2]
- Demand for traditional DRAM and NAND chips is also rising, as data center giants like Amazon, Google, and Meta ramp up purchases to expand their AI inference and cloud service capacity, tightening supply across the entire storage market [2]

Group 2
- Qualcomm's new AI200 and AI250 data center accelerators, set to launch in 2026 and 2027 respectively, are designed to compete with AMD and NVIDIA by offering higher efficiency and lower operating costs for large-scale generative AI workloads [4][5]
- The AI200 system will carry 768 GB of LPDDR memory and use direct liquid cooling, with power consumption of up to 160 kW per rack, a notable advance in power efficiency for inference systems (a back-of-envelope rack sketch follows this digest) [7]
- Qualcomm's choice of LPDDR, which is significantly cheaper than HBM, signals a shift in AI storage technology and suggests LPDDR could become a viable alternative for inference workloads [8][13]

Group 3
- The transition from HBM to LPDDR reflects a broader industry adjustment: inference workloads are expected to outnumber training workloads by a factor of 100 by 2030, making efficient data flow matter as much as raw computational power [11]
- LPDDR offers a reported 13x better cost-performance ratio than HBM, allowing large language model inference workloads to run directly in memory with faster response times and lower energy consumption (see the cost sketch after this digest) [13]
- LPDDR6, promising higher bandwidth and lower power consumption, is expected to further extend AI capabilities in mobile devices and edge computing [19][22]

Group 4
- Growing data center demand for LPDDR could trigger a supply crunch in consumer electronics, as major suppliers like Samsung, SK Hynix, and Micron may prioritize data center orders over smartphone production [16]
- Smartphone manufacturers could face higher memory costs and longer lead times, potentially forcing them to compromise on memory configurations or raise prices on mid-to-high-end devices [17]
- The scramble for LPDDR could leave data centers running on mobile memory while consumers face shortages and price hikes, a paradox in which technological progress benefits enterprise buyers at consumers' expense [27][28]
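To make the Group 2 figures concrete, the sketch below does the rack-level arithmetic. The 768 GB per card and the 160 kW rack ceiling come from the article; the number of cards per rack, and everything derived from it, are illustrative assumptions rather than Qualcomm specifications.

```python
# Back-of-envelope math for a hypothetical AI200-style rack.
# From the article: 768 GB of LPDDR per accelerator card, up to 160 kW per rack.
# Assumed (NOT from the article): the number of cards per rack.

LPDDR_PER_CARD_GB = 768   # per the article
RACK_POWER_KW = 160       # per the article (rack-level power ceiling)
CARDS_PER_RACK = 72       # assumption, for illustration only

rack_memory_tb = CARDS_PER_RACK * LPDDR_PER_CARD_GB / 1024
memory_per_kw = CARDS_PER_RACK * LPDDR_PER_CARD_GB / RACK_POWER_KW

print(f"Rack memory at {CARDS_PER_RACK} cards: {rack_memory_tb:.1f} TB")
print(f"Memory per kW of rack power: {memory_per_kw:.0f} GB/kW")
# With these assumptions: 54.0 TB per rack and ~346 GB/kW, i.e. enough
# capacity to keep multiple large-model weight sets resident in memory.
```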
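The 13x cost-performance claim in Group 3 can be sanity-checked with rough market prices. Every number in the sketch below is an illustrative assumption (HBM has historically sold at several times the per-GB price of LPDDR); only the 13x ratio itself comes from the article.

```python
# Rough sanity check of the "13x better cost-performance" claim for LPDDR vs. HBM.
# All prices and bandwidth densities below are illustrative assumptions,
# NOT figures from the article; only the 13x ratio is the article's claim.

ASSUMED = {
    # usd_per_gb: USD per GB of capacity; bw_per_gb: GB/s of bandwidth per GB of capacity
    "HBM3":    {"usd_per_gb": 8.0, "bw_per_gb": 33.0},
    "LPDDR5X": {"usd_per_gb": 1.5, "bw_per_gb": 4.3},
}

for name, m in ASSUMED.items():
    gb_per_usd = 1.0 / m["usd_per_gb"]              # capacity per dollar
    bw_per_usd = m["bw_per_gb"] / m["usd_per_gb"]   # bandwidth per dollar
    print(f"{name}: {gb_per_usd:.2f} GB/$, {bw_per_usd:.2f} (GB/s)/$")

# Under these assumptions LPDDR5X delivers ~5x more capacity per dollar, while
# HBM3 keeps a bandwidth-per-dollar edge. For capacity-bound inference, where
# model weights must simply fit in memory, capacity per dollar dominates,
# which is the direction of the cost-performance gap the article cites.
```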