Core Viewpoint
- The evolution of large AI models toward trillion-parameter scale, multimodal capabilities, and intelligent agents is driving a transition in computing infrastructure toward "super-node" architecture, which significantly raises training efficiency and inference throughput through high-bandwidth, low-latency interconnects [1]

Group 1: Architectural Transformation
- Traditional architectures are running into communication and power-consumption bottlenecks, necessitating the shift to super-node architecture [1]
- NVIDIA's NVL72 solution exemplifies this shift, improving both training efficiency and inference throughput [1]

Group 2: Market Demand and Growth
- The architectural transformation is reshaping how interconnect components scale per unit of compute, driving a multi-fold increase in demand for switching chips, optical modules, and high-speed cable modules (a rough scaling sketch appears after this summary) [1]
- Domestic AI computing investment in China still lags overseas levels considerably, implying substantial room for growth [1]

Group 3: Investment Recommendations
- Super-node architecture is essential for domestic computing infrastructure to catch up, with cloud vendors and equipment manufacturers accelerating adaptation of open interconnect protocols [1]
- The report recommends focusing on the value-reassessment opportunities created by rising interconnect density, particularly among manufacturers of high-speed connection modules, switching and interconnect components, optical modules, and AIDC facilities and supporting equipment [1]
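To make the interconnect-density point in Group 2 concrete, the sketch below estimates optical modules and switch chips per GPU for a roughly non-blocking fat-tree fabric. This is a generic back-of-the-envelope model, not the report's methodology; the GPU counts, NIC counts, tier counts, port counts, and the two-transceivers-per-link assumption are illustrative placeholders.

```python
# Back-of-the-envelope sketch of the interconnect-density argument.
# All numbers are hypothetical placeholders, not figures from the report:
# in a roughly non-blocking fat-tree, each additional switching tier adds
# another layer of inter-switch links, and each link terminates in a
# transceiver at both ends, so optical-module and switch-chip demand grows
# faster than GPU count when per-GPU bandwidth and fabric depth both rise.

def optical_modules_per_gpu(nics_per_gpu: int, tiers: int) -> int:
    """Rough estimate: ~one link per NIC per tier, two transceivers per link."""
    return nics_per_gpu * tiers * 2

def switch_chips(num_gpus: int, nics_per_gpu: int, tiers: int,
                 ports_per_switch: int) -> int:
    """Very rough estimate: ~two switch ports per host link per tier,
    then ceiling-divide by the port count of one switch chip."""
    fabric_ports = num_gpus * nics_per_gpu * tiers * 2
    return -(-fabric_ports // ports_per_switch)

scenarios = [
    # (label, GPUs, NICs per GPU, fabric tiers) -- illustrative only
    ("traditional 8-GPU servers, 2-tier fabric", 1024, 2, 2),
    ("super-node domains, 3-tier fabric",        1024, 8, 3),
]

for label, gpus, nics, tiers in scenarios:
    print(f"{label}: ~{optical_modules_per_gpu(nics, tiers)} optical modules/GPU, "
          f"~{switch_chips(gpus, nics, tiers, ports_per_switch=64)} switch chips "
          f"for {gpus} GPUs")
```

Under these placeholder assumptions, the per-GPU optical-module ratio grows several-fold between the two scenarios, which is the directional effect the report attributes to rising interconnect density.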
Domestic computing power construction accelerates
Zheng Quan Shi Bao Wang·2025-12-01 01:40