Supernode and Scale-up Network Series on NVIDIA: An Industry Benchmark, with Its Lead Built on NVLink and NVLink Switch
Dongxing Securities · 2026-02-05 02:28
Investment Rating
- The report maintains a "Positive" outlook on the communication industry [2]

Core Insights
- The growth of large language model (LLM) parameter counts from hundreds of billions to trillions and even hundreds of trillions necessitates tensor parallelism (TP) across servers, making high-bandwidth, low-latency Scale-up networks a mainstream technical path in the industry [4][18]
- NVIDIA is positioned as a leader in the supernode space, with plans to launch multiple generations of supernodes from 2024 to 2026, including GH200 NVL72, GB200/GB300 NVL72, and VR200 NVL72 [5][43]
- The advantages of NVIDIA's supernodes are built on NVLink and NVLink Switch technologies, which provide the high-bandwidth, low-latency data transmission essential for AI training clusters [6][86]

Summary by Sections

1. High Bandwidth and Low Latency Requirements
- Training LLMs demands extremely high bandwidth and low latency, driving supernode innovation as a key direction in AI computing networks [18]
- The need for cross-server tensor parallelism (TP) and expert parallelism (EP) has led to the establishment of Scale-up networks [8]

2. NVIDIA's Leading Advantage
- NVIDIA's supernode solutions are based on NVLink and NVLink Switch, which have evolved from point-to-point connections to full-interconnect communication [33]
- The sixth generation of NVLink and NVLink Switch supports GPU-to-GPU communication bandwidth of 3.6 TB/s, with total aggregated bandwidth of 260 TB/s in the VR NVL72 system [33][75]

3. Supernode Specifications
- The GB200 NVL72 supernode delivers 180 PFLOPS of TF32 Tensor Core compute, 13.8 TB of memory, 576 TB/s of memory bandwidth, and a total exchange capacity of 129.6 TB/s [47][48]
- The VR200 NVL72 supernode, due for release in 2026, will double total exchange capacity to 259.2 TB/s relative to the GB200 NVL72 [70][75]

4. Investment Strategy
- From 2025 onward, supernodes will become a significant innovation direction in AI computing networks, with manufacturers worldwide entering the competition [9]
- NVIDIA currently holds the leading position, and attention should be paid to its supernode supply chain, including PCB backplanes, high-speed copper cables, optical modules, and cooling systems [9]
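The exchange-capacity figures cited above are internally consistent, and a quick arithmetic check makes the generational doubling concrete. A minimal sketch, assuming the aggregate exchange capacity is spread evenly across the 72 GPUs in an NVL72 rack (the per-GPU figures are derived here, not quoted from the report, except for the 3.6 TB/s sixth-generation NVLink number):

```python
# Sanity-check the NVL72 bandwidth figures cited in the report.
# Assumption: aggregate exchange capacity divides evenly across 72 GPUs.

GPUS_PER_RACK = 72  # both GB200 NVL72 and VR200 NVL72 house 72 GPUs

def per_gpu_bandwidth_tb_s(total_tb_s: float, gpus: int = GPUS_PER_RACK) -> float:
    """Back out per-GPU NVLink bandwidth (TB/s) from the rack aggregate."""
    return total_tb_s / gpus

# GB200 NVL72: 129.6 TB/s total exchange capacity -> 1.8 TB/s per GPU
gb200 = per_gpu_bandwidth_tb_s(129.6)

# VR200 NVL72: capacity doubles to 259.2 TB/s -> 3.6 TB/s per GPU,
# matching the sixth-generation NVLink GPU-to-GPU figure in the report
vr200 = per_gpu_bandwidth_tb_s(259.2)

print(round(gb200, 3), round(vr200, 3))  # 1.8 3.6
```

The doubling of per-GPU NVLink bandwidth (1.8 → 3.6 TB/s) is exactly what scales the rack-level figure from 129.6 TB/s to 259.2 TB/s (rounded to 260 TB/s in the report), since the GPU count per rack is unchanged.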