SuperPod: Why Has It Become the New Favorite of AI Computing Power?

21 Shi Ji Jing Ji Bao Dao (21st Century Business Herald) · 2025-07-31 00:38

Core Insights

- The rapid development of large models driven by the AI wave has created stringent demands for computing power, leading to the emergence of the "SuperPod" as a key solution in the industry [1][2]
- The transition from traditional computing architectures to SuperPod technology marks a shift toward high-performance, low-cost, and energy-efficient AI training solutions [1][2]

Industry Trends

- The SuperPod, proposed by NVIDIA, represents the leading solution for Scale Up architecture, integrating GPU resources into a single low-latency, high-bandwidth computing entity [2]
- Traditional air-cooled AI servers are reaching their power-density limits, prompting the adoption of advanced cooling technologies such as liquid cooling in SuperPod designs [2][5]
- The market outlook for SuperPods is optimistic, with many domestic and international server manufacturers adopting this next-generation solution [2][4]

Technological Developments

- Current mainstream SuperPod solutions include proprietary-protocol schemes (e.g., NVIDIA, Trainium, Huawei) and open-organization schemes, with copper connections becoming increasingly prevalent for internal communications [3][4]
- The ETH-X open SuperPod project, led by the Open Data Center Committee, exemplifies the integration of Scale Up and Scale Out networking strategies [4]

Company Initiatives

- Chinese tech companies are actively investing in the SuperPod space, with Huawei showcasing its Ascend 384 SuperPod, which features the industry's largest scale of 384-card high-speed bus interconnection [5]
- Other companies, such as Xizhi Technology and Muxi, have introduced innovative solutions, including distributed optical interconnects and liquid-cooled GPU modules, enriching the SuperPod technology landscape [5][6]
- Moore Threads has established a comprehensive AI computing product line, aiming to build a new generation of AI training infrastructure, referred to as a "super factory" for advanced model production [6]