NVIDIA Mission Control
Schneider Electric Announces New Reference Designs, Featuring Integrated Power Management and Liquid Cooling Controls, Supporting NVIDIA Mission Control and NVIDIA GB300 NVL72
Globenewswire · 2025-09-18 11:15
Core Insights
- Schneider Electric, in collaboration with NVIDIA, has announced new reference designs aimed at accelerating the deployment of AI infrastructure solutions for data centers [1][11]
- The designs focus on integrated power management and liquid cooling systems, enhancing interoperability with NVIDIA Mission Control and supporting the latest advancements in AI technology [2][6]

Reference Design Overview
- The first reference design is the industry's first critical framework for integrated power management and liquid cooling control systems, enabling seamless management of complex AI infrastructure components [2][4]
- The second reference design supports AI infrastructure deployment for AI factories with a maximum power capacity of 142 kW per rack, specifically designed for NVIDIA GB300 NVL72 racks [3][9]

Technical Specifications
- The reference designs cover four technical areas: facility power, facility cooling, IT space, and lifecycle software, and are compliant with both ANSI and IEC standards [3]
- The designs include advanced features such as redundant power and cooling systems, plus new guidance for measuring AI rack power profiles to ensure high uptime and reliability [8][14]

Strategic Importance
- Schneider Electric's reference designs are intended to help data center operators overcome deployment challenges associated with high-density, GPU-accelerated AI clusters, optimizing for cost, efficiency, and reliability [5][11]
- The collaboration with NVIDIA aims to provide a validated blueprint for AI factory digital twins, enabling operators to optimize their advanced computing infrastructure [6][10]

Future Readiness
- The new reference designs are described as future-ready and scalable, built to meet surging AI demand and to redefine data center architectures through integrated intelligence across power, cooling, and operations [6][7]
- Schneider Electric continues to develop a range of AI reference designs for various scenarios, demonstrating a commitment to energy efficiency and resilience in data center architecture [11][12]
NVIDIA Blackwell Ultra DGX SuperPOD Delivers Out-of-the-Box AI Supercomputer for Enterprises to Build AI Factories
GlobeNewswire News Room · 2025-03-18 19:20
Core Insights
- NVIDIA has launched the NVIDIA DGX SuperPOD, its most advanced enterprise AI infrastructure, utilizing NVIDIA Blackwell Ultra GPUs for enhanced AI reasoning capabilities [1][3]
- The new DGX systems, including DGX GB300 and DGX B300, are designed to provide out-of-the-box AI supercomputing solutions, significantly improving performance for AI applications [2][9]

Product Features
- The DGX GB300 systems feature 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs, designed for real-time responses in advanced reasoning models [3][7]
- DGX GB300 systems deliver up to 70 times more AI performance than previous NVIDIA Hopper systems, with 38 TB of fast memory for complex reasoning tasks [7]
- Each DGX GB300 system includes 72 NVIDIA ConnectX-8 SuperNICs, achieving networking speeds of up to 800 Gb/s, double the performance of the previous generation [8][11]

Market Strategy
- NVIDIA introduced the NVIDIA Instant AI Factory, a managed service that will first be offered by Equinix, providing preconfigured AI infrastructure in 45 global markets [5][10]
- The Instant AI Factory aims to meet the increasing demand for advanced AI infrastructure, allowing businesses to deploy AI capabilities without extensive pre-deployment planning [15][14]

Software and Ecosystem
- NVIDIA Mission Control software has been announced to automate the management of AI infrastructure, enhancing operational efficiency for enterprises [12][13]
- The DGX systems support the NVIDIA AI Enterprise software platform, which includes tools and frameworks for optimizing AI agent performance [13]