Core Viewpoint
- The report from Guohai Securities highlights new demand for high-speed interconnect protocols driven by Scale-Up in the era of large models, emphasizing the role of bus interconnects in advancing the development of AI models and applications and thereby creating a positive feedback loop from models to computing power [1]

Group 1: High-Speed Interconnect Protocols
- High-speed interconnect protocols serve Scale-Up needs in the era of large models; computer buses connect systems and components for data transmission, control, and operation [1]
- Mainstream interconnect protocols include NVLink, UALink, SUE, CXL, HSL, and UB, which are crucial for enhancing communication and expanding system bandwidth and the number of attachable devices [1]

Group 2: NVLink and Competitors
- NVLink leads the Scale-Up interconnect space, enabling high-speed communication between GPUs, while NVSwitch supports multi-GPU inference with low latency and high bandwidth [2]
- Fifth-generation NVLink offers a per-lane bandwidth of 200 Gbps, significantly higher than PCIe Gen5's 32 Gbps [2]
- Other protocols such as UALink and SUE are also emerging: UALink reaches a maximum data transfer rate of 200 GT/s, while SUE leverages Ethernet for efficient deployment [3]

Group 3: Open Source and Evolving Requirements
- NVLink Fusion is moving toward open collaboration with several companies, allowing customized chips to join Scale-Up fabrics to meet model training and inference needs [4]
- The evolution of computing power demands higher bandwidth and lower latency from interconnect technologies, as language-model performance improves with increases in model size, dataset size, and compute [4]
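The per-lane figures cited in Group 2 imply a 6.25x gap between NVLink 5 and PCIe Gen5. A back-of-envelope sketch of that comparison follows; the x16 lane count and the omission of encoding overhead are illustrative assumptions, not figures from the report.

```python
# Per-lane signaling rates cited in the report (Gbps).
NVLINK5_GBPS_PER_LANE = 200
PCIE_GEN5_GBPS_PER_LANE = 32

# Per-lane speedup implied by the cited figures.
ratio = NVLINK5_GBPS_PER_LANE / PCIE_GEN5_GBPS_PER_LANE
print(f"Per-lane speedup: {ratio:.2f}x")  # Per-lane speedup: 6.25x

# Aggregate one-direction bandwidth for a hypothetical x16 connection of
# each type, converted to GB/s (8 bits per byte, encoding overhead ignored).
lanes = 16  # assumption for illustration only
for name, gbps in [("NVLink 5", NVLINK5_GBPS_PER_LANE),
                   ("PCIe Gen5", PCIE_GEN5_GBPS_PER_LANE)]:
    total_gb_s = lanes * gbps / 8
    print(f"{name} x{lanes}: {total_gb_s:.0f} GB/s")
```

On these assumptions the raw PCIe Gen5 x16 figure comes out near the commonly quoted ~64 GB/s per direction, which is why Scale-Up fabrics favor wider, faster links for GPU-to-GPU traffic.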
Guohai Securities: Bus Interconnects Promote the Development of the AI Model and Application Industry; "Recommend" Rating Maintained for the Computer Sector