Father of HBM Boldly Speculates: NVIDIA May Buy a Memory Company

Core Insights
- NVIDIA CEO Jensen Huang visited South Korea for the first time in 15 years, meeting with key figures from Samsung and Hyundai to strengthen collaboration on memory and AI megafactories [2]
- Memory is becoming more important in the AI era; experts suggest NVIDIA may consider acquiring memory companies such as Micron or SanDisk to maintain its leadership in AI [2][3]
- Memory bottlenecks are a critical issue for AI inference, and major companies are focusing on solutions [3][4]

Memory Demand and Types
- Memory requirements for AI fall into three tiers: HBM for real-time data, DRAM for short-term memory, and SSD for long-term data [4]
- Typical capacities: HBM from 10GB to hundreds of GB, DRAM from hundreds of GB to TB, and SSD from TB to PB [4]

AI Inference Mechanism
- AI inference uses an attention mechanism loosely analogous to human attention, storing important information as Key and Value vectors to speed up processing [5]
- The KV Cache lets AI models retain previously processed information instead of recomputing it, significantly improving response times in ongoing conversations [5]
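The KV Cache idea described above can be illustrated with a minimal sketch: each decoding step appends its Key/Value vectors to a growing cache, so attention at the next step reuses the stored history rather than recomputing it. This is a simplified single-head illustration (the class and function names are illustrative, not from any specific framework):

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention of one query over all cached keys/values.
    scores = q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class KVCache:
    """Append-only store of Key/Value vectors for incremental decoding (illustrative)."""
    def __init__(self, d):
        self.K = np.empty((0, d))
        self.V = np.empty((0, d))

    def append(self, k, v):
        # Keep this step's key/value so later steps can reuse them
        # instead of reprocessing the whole history.
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])

d = 4
cache = KVCache(d)
rng = np.random.default_rng(0)
for step in range(3):
    k, v, q = rng.normal(size=(3, d))
    cache.append(k, v)
    out = attention(q, cache.K, cache.V)  # attends over every cached step
```

The cache grows linearly with context length, which is why inference memory demand scales with conversation length and why HBM/DRAM capacity becomes the bottleneck the article describes.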