Guotai Haitong: Suppliers Roll Out High-End AI Chips; Memory Upgrades Drive DRAM Growth in Both Volume and Price

Group 1
- Nvidia's next-generation Rubin CPX hardware disaggregates AI inference workloads, with memory upgrades enabling faster data transfer [1][2]
- Nvidia's new flagship AI server, the NVIDIA Vera Rubin NVL144 CPX, integrates 36 Vera CPUs, 144 Rubin GPUs, and 144 Rubin CPX GPUs, offering 100 TB of high-speed memory and 1.7 PB/s of memory bandwidth [2]
- On large-context-window workloads, Rubin CPX delivers up to 6.5 times the performance of the current flagship rack, the GB300 NVL72 [2]

Group 2
- Average server DRAM capacity is expected to grow 17.3% year-on-year in 2024, driven by rising AI server demand [4]
- High-end AI chips, including Nvidia's next-generation Rubin and cloud service providers' self-developed ASICs, are launching or entering mass production, lifting both the volume and price of DRAM products [4]
- Kaipu Cloud is acquiring a 30% stake in Nanning Taike from Shenzhen Jintaike and transferring its storage product business assets to Nanning Taike [3]