Core Viewpoint
- Starting from 2025, supernodes will become a significant direction of technological innovation in AI computing networks, with AI chip manufacturers competing increasingly on both chip performance and the Scale-up network [1][5].

Group 1: Supernode Development
- Nvidia has launched mature supernode solutions, with plans to release GH200 NVL72, GB200/GB300 NVL72, and VR200 NVL72 from 2024 to 2026 [1][3].
- The Blackwell architecture standardizes Scale-up: GB200 NVL72 fixes the scale at 72 GPUs per cabinet, consisting of 18 compute trays and 9 switch trays [2].
- The Rubin architecture will further raise bandwidth: the NVLink 6 Switch doubles per-GPU interconnect bandwidth from 1.8 TB/s to 3.6 TB/s [2].

Group 2: Nvidia's Competitive Advantage
- Nvidia maintains a leading position in the supernode market, with projected shipments of approximately 2,800 GB200/GB300 NVL72 units by 2025 [3].
- Future plans include the Vera Rubin NVL144 and Rubin Ultra NVL576, expanding the number of interconnected GPUs from 72 to 576 [3].
- Innovations such as NVLink and the NVLink Switch are crucial for achieving high bandwidth and low latency in AI training clusters; the NVLink 5 Switch supports a total bandwidth of 130 TB/s across 72 GPUs [4].

Group 3: Industry Landscape and Investment Strategy
- The global supernode competitive landscape has not yet been established, with Nvidia currently in the lead [6].
- The report suggests monitoring Nvidia's supernode supply chain, including components such as PCB backplanes, high-speed copper cables, optical modules, and cooling systems [6].
- Chinese manufacturers are actively participating in the supernode and Scale-up network sectors, with potential for domestic firms to gain a competitive edge [6].
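The per-GPU and aggregate bandwidth figures above can be cross-checked with simple arithmetic. A minimal sketch, assuming (this simplification is not stated in the report) that aggregate Scale-up bandwidth is per-GPU NVLink bandwidth multiplied by the GPU count; the `aggregate_bandwidth_tbps` helper is hypothetical:

```python
def aggregate_bandwidth_tbps(gpus: int, per_gpu_tbps: float) -> float:
    """Total interconnect bandwidth for an NVLink domain, assuming
    aggregate = per-GPU bandwidth x GPU count (an illustrative simplification)."""
    return gpus * per_gpu_tbps

# Blackwell / NVLink 5: 72 GPUs at 1.8 TB/s each
blackwell = aggregate_bandwidth_tbps(72, 1.8)  # 129.6 TB/s, consistent with the ~130 TB/s cited

# Rubin / NVLink 6: same 72-GPU cabinet at 3.6 TB/s each (illustrative; the
# report cites only the per-GPU figure, not a Rubin aggregate)
rubin = aggregate_bandwidth_tbps(72, 3.6)  # 259.2 TB/s
```

Under this assumption, doubling per-GPU bandwidth while holding the cabinet at 72 GPUs doubles the aggregate, which is why the Scale-up network, not just the chip, has become a competitive axis.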
Dongxing Securities: the global supernode competitive landscape has not yet been established; attention is recommended for cloud vendors releasing domestically developed supernodes, among others.