CITIC Securities: Alibaba and Tencent Bet on NPO; Watch Scale-up Network Development Opportunities
Sina Finance · 2026-02-27 01:07
Core Viewpoint
- The current AI computing network is undergoing a critical transition toward all-optical interconnection, with NPO technology emerging as an ideal compromise for overcoming physical bandwidth limitations. Major tech giants such as Alibaba and Tencent are accelerating the implementation and standardization of NPO architecture, indicating that the technology has entered the stage of large-scale commercialization [1][7].

Group 1: NPO Technology and Its Advantages
- NPO technology strikes a strong balance among signal integrity, power consumption, and maintainability, making it well suited to the extreme challenges posed by the evolution of large-model architectures in AI [2][8].
- Compared with traditional high-power pluggable optical modules and CPO (Co-Packaged Optics) solutions, NPO (Near-Package Optics) relocates the optical engine closer to the switching chip, reducing signal transmission distance and power consumption while preserving the independent replaceability of the optical engine [2][8].

Group 2: Major Players and Developments
- Leading CSP manufacturers are rapidly advancing NPO-related solutions. Alibaba has released the "UPN512 Technical Architecture White Paper," aiming to build a fully interconnected system of 512 xPUs, which is expected to cut optical interconnection costs by over 30% [3][9].
- Alibaba has successfully brought up a 3.2T NPO module supporting both silicon photonics and VCSEL technology, with a typical TDECQ of only 1.9 dB and power consumption of just 20 W. The module has already been deployed in a new generation of domestic four-chip switches [3][9].
- Tencent is also actively exploring silicon-photonics-based NPO evolution and has launched a standardization project with Alibaba, with prototype systems expected to be operational by Q3 2026 [3][9].
Group 3: Industry Transformation and Opportunities
- The push by internet giants to accelerate NPO signals its entry into large-scale commercial use, which will profoundly reshape business models and supply chain structures across the optical communication industry [4][10].
- NPO requires optical module manufacturers to possess strong silicon photonics integration capabilities, high-precision packaging processes, and deep collaboration with chip manufacturers, pushing them from merely supplying modules toward becoming "optical interconnection solution providers" [4][10].
- Leading domestic companies are seizing this historic opportunity: Zhongji Xuchuang has showcased OpenSocket NPO solutions expected to be deployed at scale by 2027, and Huagong Technology's 3.2T NPO optical engine is anticipated to reach large-scale commercialization by 2026 [4][10].
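As a rough sanity check, the cited figures for Alibaba's module (3.2T, i.e. 3200 Gb/s, at roughly 20 W) can be converted into an energy-per-bit number. The pJ/bit value below is derived from those two figures, not stated in the report:

```python
def energy_per_bit_pj(power_w: float, rate_gbps: float) -> float:
    """Convert module power and line rate into energy per bit (pJ/bit).

    power_w is in watts (J/s); rate_gbps * 1e9 gives bits/s,
    so the ratio is J/bit, scaled by 1e12 to picojoules.
    """
    return power_w / (rate_gbps * 1e9) * 1e12

# 3.2T NPO module at ~20 W, per the report's cited figures
npo_pj_per_bit = energy_per_bit_pj(20, 3200)
print(f"3.2T NPO module: {npo_pj_per_bit:.2f} pJ/bit")  # 6.25 pJ/bit
```

This kind of derived efficiency metric is what makes the 20 W figure notable relative to high-power pluggable modules at comparable line rates.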
Dongxing Securities: Global Supernode Competitive Landscape Not Yet Established; Recommend Watching Cloud Vendors Releasing Domestic Supernodes
Zhitong Finance · 2026-02-05 06:20
Core Viewpoint
- From 2025 onward, supernodes will become a major direction of technological innovation in the AI computing network, with AI chip manufacturers competing increasingly on both chip performance and Scale-up networking [1][5].

Group 1: Supernode Development
- Nvidia has launched mature supernode solutions, with GH200 NVL72, GB200/GB300 NVL72, and VR200 NVL72 planned for release from 2024 to 2026 [1][3].
- The Blackwell architecture standardizes Scale-up: GB200 NVL72 fixes the scale at 72 GPUs per cabinet, consisting of 18 Compute Trays and 9 Switch Trays [2].
- The Rubin architecture will raise bandwidth further: the NVLink 6 Switch increases single-GPU interconnect bandwidth to 3.6 TB/s, up from 1.8 TB/s [2].

Group 2: Nvidia's Competitive Advantage
- Nvidia holds a leading position in the supernode market, with projected shipments of roughly 2,800 GB200/300 NVL72 units in 2025 [3].
- Future plans include Vera Rubin NVL144 and Rubin Ultra NVL576, expanding the number of interconnected GPUs from 72 to 576 [3].
- Innovations such as NVLink and the NVLink Switch are crucial for achieving high bandwidth and low latency in AI training clusters; the NVLink 5 Switch supports a total bandwidth of 130 TB/s across 72 GPUs [4].

Group 3: Industry Landscape and Investment Strategy
- The global supernode competitive landscape is still taking shape, with Nvidia currently in the lead [6].
- The report suggests monitoring Nvidia's supernode supply chain, including PCB backplanes, high-speed copper cables, optical modules, and cooling systems [6].
- Chinese manufacturers are actively participating in the supernode and Scale-up network sectors, with domestic firms positioned to gain a competitive edge [6].
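The NVLink figures cited above are internally consistent: 72 GPUs at 1.8 TB/s each is 129.6 TB/s, matching the quoted ~130 TB/s NVLink 5 Switch total. A minimal arithmetic check (the NVLink 6 total for 72 GPUs is derived here, not stated in the report):

```python
def domain_bandwidth_tbs(per_gpu_tbs: float, num_gpus: int) -> float:
    """Aggregate Scale-up domain bandwidth: per-GPU bandwidth times GPU count."""
    return per_gpu_tbs * num_gpus

# NVLink 5 at 1.8 TB/s per GPU across an NVL72 domain
nvlink5_total = domain_bandwidth_tbs(1.8, 72)  # 129.6, i.e. the quoted ~130 TB/s
# NVLink 6 doubles per-GPU bandwidth to 3.6 TB/s (derived 72-GPU total)
nvlink6_total = domain_bandwidth_tbs(3.6, 72)  # 259.2 TB/s
print(nvlink5_total, nvlink6_total)
```

The same multiplication explains why expanding from NVL72 to NVL576 scales the interconnect fabric so aggressively: total domain bandwidth grows with both per-GPU link speed and GPU count.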