New Breakthrough in the Cloud Compute Race: Record-Setting NVIDIA GB200 Cluster Joins MLPerf Testing, More Than Doubling Performance
NVIDIA (US:NVDA) | 硬AI · 2025-06-05 10:32

Core Viewpoint
- The collaboration between CoreWeave, NVIDIA, and IBM produced the largest MLPerf Training v5.0 submission to date, showcasing significant advances in AI infrastructure capabilities [2][3].

Group 1: MLPerf Training v5.0 Test Results
- CoreWeave used 2,496 GB200 Grace Blackwell chips to assemble the largest NVIDIA GB200 NVL72 cluster in MLPerf history, 34 times larger than any previous submission [2][3].
- The GB200 NVL72 cluster completed training of the Llama 3.1 405B model in just 27.3 minutes, more than doubling the performance of similarly sized clusters [3].
- This performance leap highlights the capabilities of the GB200 NVL72 architecture and of CoreWeave's infrastructure in handling demanding AI workloads [3].

Group 2: Industry Participation and Growth
- MLPerf Training v5.0 drew a record number of submissions: 201 performance results from 20 different organizations, indicating a significant increase in industry participation [6].
- The introduction of Llama 3.1 405B as the largest model in the training suite attracted more submissions than earlier rounds based on GPT-3, reflecting the growing importance of large-scale training [5][6].
- New participants in MLPerf Training include AMD, IBM, and others, underscoring the expanding landscape of AI infrastructure providers [6].