Eos: The AI Factory Powering NVIDIA AI’s Breakthroughs
NVIDIA· 2025-08-22 20:53
AI Infrastructure & Innovation
- NVIDIA's AI factory, Eos, ranks as the ninth-fastest supercomputer in the world [1]
- Eos is a large-scale NVIDIA DGX SuperPOD built for leading-edge AI innovation [2]
- Eos uses a full-stack architecture combining NVIDIA accelerated infrastructure, networking, and AI software [2]
- Each NVIDIA DGX H100 system within Eos contains eight NVIDIA H100 Tensor Core GPUs [2]
- The system is designed to train generative AI models at high speed [2]

Enterprise AI Strategy
- Enterprises can leverage AI factories like Eos to tackle demanding AI projects [3]
- AI factories enable enterprises to achieve their AI aspirations [3]
Teardown of NVIDIA's 1.6T Network Module
半导体行业观察· 2025-07-23 00:53
Core Viewpoint
- The article examines NVIDIA's transition from the A100 to the H100 series, highlighting the shift to PCIe Gen5 and the introduction of the Cedar module for greater network bandwidth and system efficiency [2][10].

Group 1: Technical Specifications
- The DGX H100 uses Cedar modules, each integrating four ConnectX-7 controllers for 1.6Tbps per module and 3.2Tbps of total system bandwidth [4][12].
- Each ConnectX-7 controller supplies 400Gbps of network bandwidth, a significant increase in data-transfer capability [22][12].
- The Cedar modules are more space-efficient than traditional PCIe cards, improving airflow and cooling within the chassis [10][35].

Group 2: Product Features
- The Cedar modules are customizable and can be adopted by other vendors, indicating broader application potential in AI systems [12][13].
- The DGX H100 accommodates both direct-attach copper cables and standard optical modules for flexible connectivity [7][10].
- Pairing BlueField-3 controllers with the Cedar modules offloads dedicated tasks such as storage access, improving overall system performance [10][12].

Group 3: Market Implications
- The Cedar modules are expected to meet the rising bandwidth demands of next-generation AI models, positioning NVIDIA favorably in the competitive landscape [12][13].
- The shift to a more compact module design may influence future hardware designs across the industry, favoring efficiency and performance [35][36].
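The bandwidth figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming (as the 1.6T module figure in the title and the 3.2Tbps system total imply) two Cedar modules per DGX H100:

```python
# Sanity-check the Cedar / ConnectX-7 bandwidth figures cited above.
# Assumption: two Cedar modules per DGX H100 system (inferred from the
# 1.6Tbps-per-module figure and the 3.2Tbps system total).

GBPS_PER_CONNECTX7 = 400       # each ConnectX-7 controller: 400 Gbps
CONTROLLERS_PER_CEDAR = 4      # four ConnectX-7 controllers per Cedar module
CEDAR_MODULES_PER_DGX = 2      # assumed module count per system

per_module_gbps = GBPS_PER_CONNECTX7 * CONTROLLERS_PER_CEDAR
system_gbps = per_module_gbps * CEDAR_MODULES_PER_DGX

print(f"Per Cedar module: {per_module_gbps / 1000} Tbps")   # 1.6 Tbps
print(f"Per DGX H100:     {system_gbps / 1000} Tbps")       # 3.2 Tbps
```

Note that 3.2Tbps also matches one 400Gbps ConnectX-7 per GPU across the system's eight H100 GPUs, consistent with the per-GPU networking ratio of the DGX design.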