
Core Viewpoint - CoreWeave is the first AI cloud provider to deploy NVIDIA's latest GB300 NVL72 systems and plans to scale these deployments significantly worldwide [1][5]

Performance Enhancements - Compared with the previous NVIDIA Hopper architecture, the NVIDIA GB300 NVL72 delivers a 10x boost in user responsiveness, a 5x improvement in throughput per watt, and a 50x increase in output for reasoning model inference [2]

Technological Collaboration - CoreWeave worked with Dell, Switch, and Vertiv on the initial deployment of the NVIDIA GB300 NVL72 systems, enhancing the speed and efficiency of its AI cloud services [3]

Software Integration - The GB300 NVL72 deployment is integrated with CoreWeave's cloud-native software stack, including CoreWeave Kubernetes Service (CKS) and Slurm on Kubernetes (SUNK), along with hardware-level data integration through the Weights & Biases platform [4]

Market Leadership - CoreWeave continues to lead in providing first-to-market access to advanced AI infrastructure, adding the new NVIDIA GB300 systems alongside its existing fleet [5]

Benchmark Achievement - In June 2025, CoreWeave set a record in the MLPerf® Training v5.0 benchmark, using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips to complete training of a complex model in just 27.3 minutes [6]

Company Background - CoreWeave, named one of the TIME100 most influential companies and included in the 2024 Forbes Cloud 100 ranking, has operated data centers across the US and Europe since 2017 [7]