High Bandwidth Flash (HBF)
Sandisk Joins Forces with SK Hynix to Push a New Type of HBM
Semiconductor Industry Observation · 2025-08-08 01:47
Core Viewpoint
- Sandisk and SK Hynix are collaborating to standardize High Bandwidth Flash (HBF) technology, which aims to give GPUs access to large NAND capacities and thereby accelerate AI training and inference workloads [1][3][6].

Group 1: Collaboration and Standardization
- The memorandum of understanding (MoU) between Sandisk and SK Hynix focuses on defining technical requirements and building an HBF technology ecosystem [3][4].
- Sandisk's CTO emphasized that the collaboration addresses the AI industry's urgent need for scalable memory, aiming to deliver innovative solutions for exponentially growing data demands [3][4].
- SK Hynix's expertise in HBM technology positions it well to contribute to the development of HBF, which is seen as crucial for unlocking the full potential of AI and next-generation data workloads [3][6].

Group 2: Technical Specifications and Advantages
- HBF aims to provide bandwidth comparable to HBM while offering 8-16 times the capacity at similar cost, potentially reaching up to 768 GB [4][6].
- HBF combines NAND flash with HBM-like stacking and bandwidth, trading some latency for a large increase in capacity [6][8].
- Unlike DRAM, NAND flash is non-volatile, enabling lower-energy persistent storage, which is critical as AI inference expands into energy-constrained environments [6][8].

Group 3: Market Implications and Future Developments
- The collaboration underscores the importance of a multi-supplier HBF market, ensuring customers are not locked into a single vendor and fostering competition that accelerates HBF development [4][10].
- Sandisk's HBF technology was recognized at the FMS 2025 event; first samples are expected in the second half of 2026, with AI inference devices anticipated in early 2027 [5][9].
- The integration of HBF technology could pave the way for heterogeneous memory stacks, allowing DRAM, flash, and new persistent memory types to coexist in AI accelerators, addressing rising HBM costs [10].
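As a rough illustration of the capacity figures cited above, the sketch below back-computes what HBM capacity the 8x and 16x multiples would imply against the 768 GB upper bound quoted for HBF. The helper function and the derived HBM figures are illustrative arithmetic only, not vendor-confirmed specifications.

```python
# Arithmetic implied by the figures quoted in the article:
# HBF targets 8-16x the capacity of HBM at similar cost, up to 768 GB.
# The HBM capacities printed here are back-computed from those multiples.

HBF_MAX_CAPACITY_GB = 768        # upper capacity figure cited for HBF
CAPACITY_MULTIPLES = (8, 16)     # claimed capacity advantage over HBM


def implied_hbm_capacity_gb(hbf_capacity_gb: int, multiple: int) -> float:
    """Return the HBM capacity that the claimed multiple would imply."""
    return hbf_capacity_gb / multiple


for m in CAPACITY_MULTIPLES:
    hbm_gb = implied_hbm_capacity_gb(HBF_MAX_CAPACITY_GB, m)
    print(f"{m}x multiple -> comparable HBM capacity of {hbm_gb:.0f} GB")
```

At the 16x multiple, 768 GB of HBF corresponds to 48 GB of HBM, which is in the neighborhood of current high-end HBM stack configurations, consistent with the article's framing of HBF as a capacity play at HBM-like bandwidth.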