NVIDIA GB200 GPUs

CoreWeave, NVIDIA and IBM Submit Largest-Ever MLPerf Results on NVIDIA GB200 Grace Blackwell Superchips
PR Newswire · 2025-06-04 15:08
Core Insights

- CoreWeave, in collaboration with NVIDIA and IBM, delivered the largest-ever MLPerf® Training v5.0 submission, using 2,496 NVIDIA Blackwell GPUs, a scale 34x larger than the next largest submission from a cloud provider [1][2]
- The submission trained the Llama 3.1 405B model in just 27.3 minutes, over 2x faster than submissions from similar-sized clusters, demonstrating the capabilities of the GB200 NVL72 architecture [2][3]
- CoreWeave's cloud platform is built for the demands of AI workloads, enabling faster model development cycles and an optimized Total Cost of Ownership, effectively cutting training time in half for its customers [3]

Company Overview

- CoreWeave, known as the AI Hyperscaler™, provides a cloud platform for accelerated computing for enterprises and AI labs, with a growing footprint of data centers across the US and Europe since its founding in 2017 [4]
- The company was named one of the TIME100 most influential companies and featured in the 2024 Forbes Cloud 100 ranking, underscoring its leadership in cloud computing [4]
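The headline figures imply a few derived numbers the article does not state directly. A minimal back-of-envelope sketch, using only the figures quoted above plus one outside assumption: the per-rack count of 72 GPUs comes from the GB200 NVL72 product designation, not from the article itself.

```python
# Figures quoted in the press-release summary above.
TOTAL_GPUS = 2496        # NVIDIA Blackwell GPUs in the submission
SCALE_FACTOR = 34        # "34x larger" than the next largest cloud-provider submission
TRAIN_MINUTES = 27.3     # Llama 3.1 405B training time
SPEEDUP = 2              # "over 2x faster" than similar cluster submissions

# Assumption (not in the article): an NVL72 rack-scale system holds 72 GPUs.
GPUS_PER_NVL72 = 72

# Implied size of the next largest cloud-provider submission: ~73 GPUs.
next_largest_gpus = TOTAL_GPUS / SCALE_FACTOR

# Implied number of NVL72 systems in the cluster: ~34.7.
nvl72_systems = TOTAL_GPUS / GPUS_PER_NVL72

# Implied training time of a comparable cluster at half the speed: ~54.6 min.
comparable_minutes = TRAIN_MINUTES * SPEEDUP

print(round(next_largest_gpus), round(nvl72_systems, 1), comparable_minutes)
```

These are illustrative consistency checks on the quoted claims, not figures from the MLPerf submission itself.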