AI Server Memory Technology

Nvidia Adjusts Its Memory Technology Roadmap, Reportedly Pausing First-Generation SOCAMM Rollout to Focus on SOCAMM2 Development
Huan Qiu Wang Zi Xun · 2025-09-15 04:20
Core Insights
- Nvidia has decided to cancel promotion of the first-generation SOCAMM (Small Outline Compression Attached Memory Module) and shift its focus entirely to the next-generation SOCAMM2, aiming to optimize memory module performance for AI server applications [1][3]
- The initial SOCAMM technology was designed as a high-bandwidth, low-power memory solution for AI servers, targeting performance close to HBM (High Bandwidth Memory) at a significantly lower cost [1]
- SOCAMM2 is expected to raise the data rate from 8533 MT/s in the first generation to 9600 MT/s, with improved computational efficiency [3]

Technical Development
- Nvidia has begun sample testing of SOCAMM2 with the three major global memory suppliers, laying the groundwork for technical validation and subsequent mass production [3]
- SOCAMM2 may support the LPDDR6 memory specification, although this has not yet been confirmed by the suppliers involved; technical details are pending further disclosure [3]

Market Dynamics
- Micron took the lead as the only memory manufacturer shipping SOCAMM products to the AI server market, beginning as early as March this year, giving it a first-mover advantage [3]
- In contrast, Samsung and SK Hynix are progressing more slowly, with mass production of comparable products planned for the third quarter of 2025 [3]
- Nvidia's strategic shift to SOCAMM2 may give Samsung and SK Hynix an opportunity to catch up, potentially narrowing Micron's lead in the AI server memory market and intensifying industry competition [3]
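To put the quoted data rates in perspective, the jump from 8533 MT/s to 9600 MT/s can be converted into theoretical peak bandwidth. The sketch below is a back-of-envelope calculation only: the 128-bit per-module bus width is an assumption for illustration (it is not stated in the article; only the two MT/s figures are), so the absolute GB/s numbers are indicative, not vendor specifications.

```python
# Back-of-envelope comparison of SOCAMM vs. SOCAMM2 peak bandwidth.
# Only the data rates (8533 and 9600 MT/s) come from the article;
# the 128-bit module bus width is an illustrative assumption.

def peak_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int = 128) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s * bytes per transfer."""
    return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

socamm1 = peak_bandwidth_gbs(8533)   # first-generation SOCAMM
socamm2 = peak_bandwidth_gbs(9600)   # next-generation SOCAMM2

print(f"SOCAMM:  {socamm1:.1f} GB/s")   # ~136.5 GB/s under the assumed bus width
print(f"SOCAMM2: {socamm2:.1f} GB/s")   # ~153.6 GB/s under the assumed bus width
print(f"Uplift:  {(socamm2 / socamm1 - 1) * 100:.1f}%")  # ~12.5%
```

Whatever bus width a real module uses, the relative uplift between generations is fixed by the data-rate ratio alone: roughly 12.5%.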