5x the Compute of Blackwell! Jensen Huang Unveils the Vera Rubin Computing Platform | Live from CES

Core Insights
- The article covers the unveiling of NVIDIA's Vera Rubin computing platform at CES 2026, aimed at accelerating AI training to bring next-generation models to market sooner [1][5]
- Because Moore's Law can no longer keep pace with the roughly tenfold annual growth of AI models, NVIDIA is pursuing extreme co-design across every layer of its chips and platforms [1][5]

Group 1: Vera Rubin Platform Features
- The Vera CPU features 88 custom NVIDIA Olympus cores, supports 176 threads, and offers 1.5TB of system memory, three times that of the previous generation [1][5]
- The Rubin GPU delivers 50 PFLOPS of NVFP4 inference performance, five times that of the previous Blackwell architecture, and contains 336 billion transistors, 1.6 times more than Blackwell [1][5]

Group 2: Additional Components
- The ConnectX-9 network card runs 800 Gb/s Ethernet with 200G PAM4 SerDes and has 23 billion transistors [2][6]
- The BlueField-4 DPU, designed for next-generation AI storage platforms, offers 800 Gb/s throughput and contains 126 billion transistors [2][6]
- The NVLink-6 switch chip connects 18 compute nodes, supporting up to 72 Rubin GPUs, and provides 3.6 TB/s of all-to-all communication bandwidth [2][6]

Group 3: Performance Enhancements
- The Vera Rubin NVL72 system delivers a fivefold increase in NVFP4 inference performance, reaching 3.6 EFLOPS, and a 3.5-fold increase in training performance, at 2.5 EFLOPS, compared with Blackwell [3][7]
- The system includes 54TB of LPDDR5X memory, three times the previous generation, and 20.7TB of HBM capacity, a 1.5-fold increase [3][7]
- HBM4 bandwidth reaches 1.6 PB/s, a 2.8-fold improvement, while Scale-Up bandwidth reaches 260 TB/s, double the previous generation [3][7]
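As a quick sanity check, the rack-level NVL72 figures follow from the per-chip specs quoted above. A minimal sketch; the 36-CPU-per-rack topology (2 Rubin GPUs per Vera CPU) is an assumption not stated in the summary:

```python
# Cross-check rack-level NVL72 figures against per-chip specs.
# Assumption (not in the article): an NVL72 rack pairs
# 72 Rubin GPUs with 36 Vera CPUs (2 GPUs per CPU).

GPUS_PER_RACK = 72
CPUS_PER_RACK = 36            # assumed topology

nvfp4_per_gpu_pflops = 50     # Rubin GPU, NVFP4 inference
mem_per_cpu_tb = 1.5          # Vera CPU system memory

rack_inference_eflops = GPUS_PER_RACK * nvfp4_per_gpu_pflops / 1000
rack_lpddr5x_tb = CPUS_PER_RACK * mem_per_cpu_tb

print(rack_inference_eflops)  # 3.6 EFLOPS, matching the quoted NVL72 figure
print(rack_lpddr5x_tb)        # 54.0 TB of LPDDR5X, matching the article
```

Both derived values land exactly on the article's rack-level numbers, which suggests the quoted specs are internally consistent.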