Core Viewpoint
- The UALink consortium has officially released the UALink 1.0 specification, an open GPU interconnect I/O architecture intended to compete with Nvidia's NVLink technology [1][2].

Group 1: UALink Alliance Formation and Purpose
- The UALink alliance was established in May 2024 by major companies including AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, and now counts more than 65 members [1].
- Its primary goal is an open standard for GPU accelerator interconnect I/O, providing high-speed, low-latency connections within AI servers and clusters [1][2].

Group 2: UALink 1.0 Specifications
- UALink 1.0 builds on the 200G Ethernet physical layer, offering per-lane transmission rates of 100 Gb/s or 200 Gb/s, with a raw signaling rate of 212.5 GT/s [5].
- It can interconnect up to 1,024 GPU accelerators, forming a single scalable AI Pod [6].

Group 3: Comparison with NVLink
- UALink 1.0 offers higher per-lane bandwidth and a larger interconnect domain than NVLink, which tops out at 576 directly connected GPUs [6][7].
- While UALink provides up to 800 Gb/s of total bandwidth per GPU, NVLink can deliver up to 1,800 GB/s per GPU across multiple links, so the trade-off is aggregate bandwidth versus interconnect scale [6][7].

Group 4: Market Implications and Future Outlook
- The first products supporting UALink 1.0 are expected between 2026 and 2027, by which time they may face competition from Nvidia's upcoming NVLink 6.0 [7].
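The bandwidth comparison above mixes Gb/s (UALink) and GB/s (NVLink), which makes the gap easy to misread. A minimal back-of-envelope sketch, assuming four 200 Gb/s lanes per UALink GPU station (the lane count is an assumption for illustration, not stated in the summary) and NVLink's cited 1,800 GB/s per-GPU figure:

```python
# Back-of-envelope per-GPU bandwidth comparison for the figures cited above.
# Assumptions (illustrative, not from the spec text): 4 lanes per GPU at
# 200 Gb/s each for UALink; 1,800 GB/s per GPU for NVLink as cited.

def gbps_to_gB_per_s(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8

UALINK_LANES = 4              # assumed lanes per GPU
UALINK_LANE_GBPS = 200        # 200 Gb/s per lane (UALink 1.0)
NVLINK_GB_PER_S = 1800        # NVLink per-GPU figure cited above

ualink_total_gbps = UALINK_LANES * UALINK_LANE_GBPS      # 800 Gb/s
ualink_gB_per_s = gbps_to_gB_per_s(ualink_total_gbps)    # 100 GB/s

print(f"UALink per GPU: {ualink_total_gbps} Gb/s = {ualink_gB_per_s:.0f} GB/s")
print(f"NVLink per GPU: {NVLINK_GB_PER_S} GB/s")
print(f"NVLink/UALink ratio: {NVLINK_GB_PER_S / ualink_gB_per_s:.0f}x")
```

Under these assumptions, NVLink's per-GPU bandwidth is roughly 18x UALink's, while UALink's 1,024-accelerator Pod is roughly 1.8x NVLink's 576-GPU maximum, which is the scale-versus-bandwidth trade-off the summary describes.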
UALink: can it put up a fight?