UALink
Goldman Sachs Communacopia & Technology Conference, Hardware: AI Drives Divergence in Enterprise Server and Networking Markets, with High-End Vendors Poised for Profitability
Zhi Tong Cai Jing · 2025-09-12 09:52
Group 1: AI Server Demand and Market Dynamics
- Demand for AI servers is diverging: Dell Technologies (DELL.US) is capturing high-end market share from Super Micro Computer (SMCI.US), while HPE (Hewlett Packard Enterprise) is achieving growth through cost control [1]
- The traditional enterprise server market is under short-term pressure as companies prioritize investment in AI infrastructure [1]
- Goldman Sachs predicts that by 2027 mature cloud service providers will gradually shift toward ODM custom or semi-custom server designs; Dell and HPE may see x86 unit sales decline but can keep revenue stable through high average selling prices and profit margins [1]

Group 2: Backend Network Market and Future Projections
- The backend scale-up network market is expected to reach a total size of $23 billion by 2029, with the incremental Ethernet scale-up market projected at roughly $8 billion to $10 billion [1]
- NVLink currently dominates the scale-up backend network, but Ethernet is expected to become a strong competitive alternative, while UALink and PCIe will hold niche market positions [1]

Group 3: Software Technology and AI Network Competitiveness
- AI networks remain competitive across the hyperscale, secondary cloud/neocloud, enterprise, and sovereign customer verticals [2]
- Hyperscalers are driven by strong demand for AI-led economic transformation, with brand suppliers continuing to dominate [2]
- In the secondary cloud segment, Spectrum-X bundled network/compute solutions are the best fit, while OEMs like Dell, HPE, and Cisco (CSCO.US) hold advantages in the enterprise market thanks to their broad customer bases and distribution capabilities [2]

Group 4: Communication Technology and Market Trends
- Companies like Cisco, HPE, and Juniper are actively participating in the AI network market, while market-share data from Arista Networks (ANET.US) and Celestica (CLS.US) indicates no trend of brand suppliers ceding ground to white-box switches in the scale-out segment [2]
Astera Labs (ALAB) 2025 Conference Transcript
2025-09-04 13:52
Summary of Astera Labs Conference Call

Company Overview
- **Company**: Astera Labs
- **Industry**: AI and Cloud Infrastructure Connectivity Solutions
- **IPO Date**: 2024
- **Founding Year**: 2017

Key Points and Arguments

Company Journey and Product Development
- Astera Labs was founded with the mission to provide connectivity solutions for cloud and AI infrastructure, recognizing that AI performance relies on effective GPU communication [3][4]
- The company has established a market leadership position with products like the Aries retimer, Taurus for Ethernet, and Leo for CXL memory expansion, all of which are in full production [4][5]
- Revenue growth has been significant: $116 million in 2023, $396 million in 2024, and approximately $350 million already in 2025 [6]

AI Cycle and Market Position
- The company believes it is still early in the AI cycle, with significant improvements needed in AI systems, suggesting potential for 10x to 100x enhancements in the future [11][12]
- Astera Labs aims to grow faster than the market regardless of market conditions [14]

Product Insights
- The Scorpio product line is designed for scale-up networking, allowing multiple GPUs to function as a single unit, which is crucial for AI workloads [17][21]
- Scorpio X has over 10 engagements with various customers, including hyperscalers, indicating strong market interest [23]
- The Scorpio P series is expected to drive revenue growth, particularly in applications involving NVIDIA's Grace Blackwell system [29]

Competitive Landscape
- Astera Labs identifies three major ecosystems: NVIDIA with NVLink, Broadcom with scale-up Ethernet, and UALink, which combines the benefits of PCI Express and Ethernet for AI workloads [47][50]
- UALink is positioned as an open ecosystem, providing flexibility and innovation opportunities for customers [51][54]

Retimer Business and Future Growth
- The Aries retimer has been a flagship product, widely deployed in AI applications, and is expected to grow over 60% this year [61]
- The transition to PCI Express Gen 6 is anticipated to lift average selling prices (ASPs) by about 20% due to higher performance requirements [59]

Market Opportunities and Projections
- The Scorpio product portfolio is projected to address a $5 billion market, with potential for Scorpio X to exceed initial expectations based on customer engagement [31]
- The company aims to increase content per GPU significantly, targeting over $1,000 per accelerator with the Scorpio X series [38]

Misunderstandings and Future Vision
- Astera Labs seeks to redefine its image beyond being a retimer company, aiming to provide comprehensive rack-level connectivity solutions for AI infrastructure [84]

Additional Important Insights
- The company emphasizes the importance of connectivity in maximizing the performance of high-cost GPUs, likening a GPU without connectivity to a race car without tires [81]
- Astera Labs is focused on maintaining a first-mover advantage in the competitive landscape, leveraging its software capabilities to address customer needs effectively [69]
Challenging NVLink: Huawei Unveils an Interconnect Technology, Soon to Be Open-Sourced
半导体行业观察· 2025-08-28 01:14
Core Viewpoint
- Huawei introduced its UB-Mesh technology at the Hot Chips 2025 conference, aiming to unify all interconnections within AI data centers under a single protocol, which will be made freely available to all users next month [1][5][27]

Summary by Sections

UB-Mesh Technology
- UB-Mesh is designed to replace multiple existing protocols (PCIe, CXL, NVLink, TCP/IP) to reduce latency, control costs, and enhance reliability in gigawatt-scale data centers [1][5]
- The technology allows any port to communicate with any other without protocol conversion, simplifying design and eliminating conversion delays [5][10]

SuperNode Architecture
- Huawei defines a SuperNode as an AI data center architecture that can integrate up to 1,000,000 processors (CPU, GPU, NPU), pooled memory, SSDs, NICs, and switches into a single system [7][26]
- The architecture aims to increase per-chip bandwidth from 100 Gbps to 10 Tbps (1.25 TB/s) and cut hop latency from microseconds to approximately 150 ns [7][10]

Reliability and Cost Efficiency
- Huawei acknowledges challenges in transitioning from copper cables to pluggable fiber links and proposes mechanisms to keep systems running even if individual links or modules fail [14][23]
- The cost of traditional interconnects grows linearly with the number of nodes, while UB-Mesh's cost scales sub-linearly, making it more cost-effective as capacity increases [23][27]

Industry Implications
- If successful, UB-Mesh could reduce Huawei's reliance on Western standards like PCIe and NVLink, positioning the company to offer a complete data center solution [26][27]
- Whether the industry will adopt UB-Mesh remains uncertain, as competitors like Nvidia and AMD are promoting their own interconnect technologies [27][28]
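The linear-versus-sub-linear cost contrast above can be sketched numerically. This is an illustrative model only: the sub-linear exponent below is an assumed placeholder for demonstration, not a figure Huawei has published.

```python
# Illustrative cost-scaling comparison (assumptions, not UB-Mesh data):
# a traditional interconnect whose cost grows in direct proportion to node
# count, versus one whose cost grows sub-linearly with an assumed exponent.

def linear_cost(nodes: int, cost_per_node: float = 1.0) -> float:
    """Traditional interconnect: total cost is proportional to node count."""
    return cost_per_node * nodes

def sublinear_cost(nodes: int, base_cost: float = 1.0, exponent: float = 0.8) -> float:
    """Sub-linear scaling: doubling the nodes less than doubles the cost.
    The 0.8 exponent is a hypothetical value chosen for illustration."""
    return base_cost * nodes ** exponent

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"{n:>9} nodes  linear={linear_cost(n):>12,.0f}  "
              f"sublinear={sublinear_cost(n):>12,.0f}")
```

At a million nodes the sub-linear curve is orders of magnitude cheaper than the linear one, which is the shape of the advantage the article attributes to UB-Mesh, whatever the true exponent turns out to be.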
Global Technology - AI Supply Chain: Taiwan OCP Takeaways; AI Factory Analysis; Rubin Schedule
2025-08-18 01:00
Summary of Key Points from the Conference Call

Industry Overview
- The conference focused on the AI supply chain, particularly developments in AI chip technology and infrastructure, at the Taiwan Open Compute Project (OCP) seminar held on August 7, 2025 [1][2][9]

Core Insights
- **AI Chip Technology**: AI chip designers are advancing scale-up technology, with UALink and Ethernet as the key competitors. Broadcom highlighted Ethernet's flexibility and low latency of 250 ns, while AMD emphasized UALink's latency specifications for AI workload performance [2][10]
- **Profitability of AI Factories**: Analysis indicates that a 100MW AI factory generating profit at a rate of US$0.2 per million tokens could yield annual profits of approximately US$893 million on revenues of about US$1.45 billion [3][43]
- **Market Shift**: The AI market is transitioning toward inference-dominated applications, which are expected to constitute 85% of future market demand [3]

Company-Specific Developments
- **NVIDIA's Rubin Chip**: The Rubin chip is on schedule, with first silicon expected from TSMC in October 2025, engineering samples anticipated in Q4 2025, and mass production slated for Q2 2026 [4][43]
- **AI Semi Stock Recommendations**: Morgan Stanley maintains an "Overweight" (OW) rating on several semiconductor companies, including NVIDIA, Broadcom, TSMC, and Samsung, indicating a positive outlook for these stocks [5][52]

Financial Metrics and Analysis
- **Total Cost of Ownership (TCO)**: The TCO for a 100MW AI inference facility is estimated at US$330 million to US$807 million annually, with upfront hardware investments between US$367 million and US$2.273 billion [31][45]
- **Revenue Generation**: The analysis suggests NVIDIA's GB200 NVL72 pod leads AI processors in performance and profitability, with a significant advantage in computing power and memory capability [43][47]

Additional Insights
- **Electricity Supply Constraints**: Electricity supply is a critical factor for AI data centers, with a 100MW capacity supporting approximately 750 server racks [18]
- **Growing Demand for AI Inference**: Major cloud service providers (CSPs) are seeing rapid growth in AI inference demand, with Google processing over 980 trillion tokens in July 2025, a significant increase from previous months [68]

Conclusion
- The AI semiconductor industry is poised for growth, driven by advances in chip technology and increasing demand for AI applications. Companies like NVIDIA and Broadcom are well positioned to capitalize on these trends, with robust profitability metrics and strategic developments in their product offerings [43][52]
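The AI-factory profitability figures above can be cross-checked with simple arithmetic. The US$0.2-per-million-token profit rate, the US$893 million annual profit, and the US$1.45 billion revenue all come from the summary; the implied token throughput and per-token revenue below are derived here, not figures from the report.

```python
# Back-of-the-envelope check of the quoted 100MW AI-factory economics.
# Inputs are the summary's figures; outputs are derived, not sourced.

PROFIT_PER_M_TOKENS = 0.2   # US$ profit per million tokens (from summary)
ANNUAL_PROFIT = 893e6       # US$ per year (from summary)
ANNUAL_REVENUE = 1.45e9     # US$ per year (from summary)

def implied_tokens_per_year(profit_usd: float, profit_rate: float) -> float:
    """Million tokens per year implied by total profit and per-token profit rate."""
    return profit_usd / profit_rate

m_tokens = implied_tokens_per_year(ANNUAL_PROFIT, PROFIT_PER_M_TOKENS)
revenue_per_m_tokens = ANNUAL_REVENUE / m_tokens

print(f"implied throughput: {m_tokens:,.0f} million tokens/year")
print(f"implied revenue:    ${revenue_per_m_tokens:.3f} per million tokens")
```

The numbers are internally consistent: roughly 4.5 billion million-tokens (about 4.5e15 tokens) per year, at an implied revenue of around US$0.32 per million tokens against US$0.2 of profit.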
APAC Technology: Open Compute Project (OCP) APAC Summit Takeaways - A Roadmap to Continue Upgrading the AI Data Center
2025-08-11 02:58
Summary of Key Points from the OCP APAC Summit

Industry Overview
- The Open Compute Project (OCP) is an industry consortium focused on redesigning hardware technology for data centers, emphasizing efficiency, scalability, and openness. Initiated by Meta in 2011, it has over 400 members as of 2025 [3][2]

Core Insights and Arguments

AI Data Center Innovations
- The OCP APAC Summit highlighted advancements in AI hardware, infrastructure, and networking, with participation from major tech companies like Google, Meta, Microsoft, TSMC, and AMD [2][7]
- Meta is aggressively building out its Hyperion data center, which is expected to significantly benefit server ODMs like Quanta and Wiwynn [4][29]
- AMD's UALink and Ultra Ethernet are set to enhance networking capabilities, enabling larger clusters and improved performance [9][11]

Power and Cooling Solutions
- The power consumption of AI servers is projected to double, with NVIDIA's GPUs expected to reach 3,600W by 2027, necessitating a shift to high-voltage direct current (HVDC) systems for efficiency [23][24]
- Liquid cooling is becoming essential for managing the thermal load of high-density AI racks, with designs evolving to accommodate this need [34][23]

Market Dynamics
- The AI hardware market is transitioning from proprietary solutions to a more open, collaborative environment, benefiting specialized hardware vendors [10][11]
- The back-end networking market for AI is projected to exceed $30 billion by 2028, driven by demand for high-bandwidth communication within AI clusters [18]

Important but Overlooked Content
- ASE's shift to panel-level processing is a critical innovation for manufacturing larger AI packages, improving area utilization and cost-effectiveness [13]
- Integrating retimers into cables is essential for maintaining signal integrity in high-density AI racks, addressing the limits of traditional passive copper cables [18]
- MediaTek is positioning itself as a leader in on-device AI integration, which is crucial as demand for edge computing grows [26][30]

Company-Specific Highlights
- **Delta**: Target price raised from $460 to $715 on strong growth momentum driven by AI power needs [21]
- **Google**: Engaging with OCP to upgrade AI infrastructure, including the introduction of the Mt. Diablo power rack for efficient power distribution [24][33]
- **Seagate**: Emphasized the complementary role of HDDs alongside SSDs for high-capacity storage in AI applications [39][41]
- **TSMC**: Focused on co-development of system-level standards to support higher-performance compute systems [40]

Conclusion
- The OCP APAC Summit underscored the rapid evolution of AI infrastructure, highlighting the importance of collaboration among tech giants to address the challenges of power, cooling, and networking in data centers. The insights from this event will shape the future landscape of AI technology and its supporting ecosystem.
PCIe: 20 Years at Full Throttle
半导体行业观察· 2025-08-10 01:52
Core Viewpoint
- The release of the PCIe 8.0 standard marks a significant milestone in the evolution of PCIe technology, doubling the data transfer rate to 256 GT/s and reinforcing its critical role in high-speed data transfer across computing environments [1][38]

Group 1: Evolution of PCIe Technology
- PCIe, introduced by Intel in 2001, evolved from the original PCI standard, which topped out at 133 MB/s of bandwidth, through a series of iterations that have consistently doubled data transfer rates [3][14]
- The transition from PCI to PCIe marked a shift from a parallel bus to a serial communication mechanism, significantly improving data transfer efficiency and reducing signal interference [9][11]
- The PCIe 1.0 standard launched the serial-interconnect revolution at 2.5 GT/s, and subsequent versions have raised rates substantially, culminating in the upcoming PCIe 8.0 [14][38]

Group 2: Key Features of PCIe
- PCIe's architecture rests on three core features: serial communication, point-to-point connections, and scalable bandwidth, which together enhance performance and reduce latency [9][11]
- Advanced signal-processing techniques, such as CTLE in PCIe 3.0 and PAM4 modulation in PCIe 6.0, have been pivotal in maintaining signal integrity at higher data rates [18][24]
- PCIe 8.0 is set to introduce new connector technologies and optimized latency and error-correction mechanisms, ensuring reliability and efficiency in high-bandwidth applications [42][38]

Group 3: Market Applications and Trends
- PCIe is used predominantly in cloud computing, which accounts for over 50% of its market, with adoption growing in the automotive and consumer electronics sectors [46][49]
- Demand for high-speed interconnects is driven by the growth of AI applications, high-performance computing, and data-intensive workloads, positioning PCIe as a foundational technology in these areas [45][51]
- Forecasts suggest the PCIe market in AI applications could reach $2.784 billion by 2030, a compound annual growth rate of 22% [51]

Group 4: Competitive Landscape and Challenges
- PCIe faces competition from proprietary interconnect technologies like NVLink and CXL, which offer higher bandwidth and lower latency for GPU communication [55][63]
- The UALink alliance was established to create open standards for GPU networking, challenging the dominance of proprietary solutions and improving interoperability [56]
- Despite its entrenched position, PCIe must navigate bandwidth limits and evolving market demands, requiring continuous innovation and adaptation [64][71]
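The rate-doubling history described above can be tabulated. A small sketch using the per-lane rates the article names (2.5 GT/s in PCIe 1.0 through 256 GT/s in PCIe 8.0); the encoding efficiencies (8b/10b, 128b/130b) are the commonly cited figures for each generation rather than values taken from the article, and the PAM4/FLIT generations are approximated at full efficiency.

```python
# Per-lane transfer rate and approximate one-direction x16 bandwidth by
# PCIe generation. Encoding efficiencies are commonly cited approximations.

PCIE_GENS = {
    1: (2.5,   8 / 10),     # 8b/10b encoding
    2: (5.0,   8 / 10),
    3: (8.0,   128 / 130),  # 128b/130b encoding
    4: (16.0,  128 / 130),
    5: (32.0,  128 / 130),
    6: (64.0,  1.0),        # PAM4 + FLIT; efficiency approximated as ~1.0
    7: (128.0, 1.0),
    8: (256.0, 1.0),
}

def x16_bandwidth_gbps(gen: int) -> float:
    """Approximate one-direction x16 bandwidth in GB/s for a generation."""
    rate_gt_s, efficiency = PCIE_GENS[gen]
    return rate_gt_s * 16 * efficiency / 8  # GT/s -> GB/s across 16 lanes

for gen, (rate, _) in PCIE_GENS.items():
    print(f"PCIe {gen}.0: {rate:>6.1f} GT/s per lane, "
          f"~{x16_bandwidth_gbps(gen):.1f} GB/s x16")
```

The table makes the doubling cadence (and its one exception, the 5 to 8 GT/s step in PCIe 3.0) easy to see, ending at roughly 512 GB/s per direction for a PCIe 8.0 x16 link.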
Astera Labs, Inc.(ALAB) - 2025 Q2 - Earnings Call Transcript
2025-08-05 21:32
Financial Data and Key Metrics Changes
- Astera Labs reported quarterly revenue of $191.9 million, up 20% from the previous quarter and 150% from Q2 of the previous year [7][19]
- Non-GAAP gross margin for Q2 was 76%, up 110 basis points from the previous quarter [20]
- Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter [21]
- Cash flow from operating activities for Q2 was $135.4 million, with cash, cash equivalents, and marketable securities totaling $1.07 billion at quarter end [21]

Business Line Data and Key Metrics Changes
- The Scorpio product line exceeded 10% of total revenue, making it the fastest-ramping product line in the company's history [8]
- The Taurus product family showed strong growth, driven by demand for AECs supporting the latest merchant GPUs and general-purpose compute platforms [9]
- The Aries product family continued to diversify across GPU- and custom-ASIC-based systems for various applications [8]

Market Data and Key Metrics Changes
- Astera Labs is engaged with over 10 unique AI platform and cloud infrastructure providers on their scale-up networking requirements [15]
- The transition to AI Infrastructure 2.0 is expected to create a market opportunity of nearly $5 billion by 2030 for Astera Labs [12]

Company Strategy and Development Direction
- The company aims to deliver a purpose-built connectivity platform spanning silicon, hardware, and software for rack-scale AI deployments [13]
- Astera Labs is focused on increasing its addressable dollar content in AI servers by expanding its product lines [13]
- The company is strategically crafting its roadmaps to lead the transition to AI Infrastructure 2.0, which emphasizes open, standards-based, AI rack-scale platforms [10][12]

Management's Comments on Operating Environment and Future Outlook
- Management expressed confidence in the business's strong momentum and the prospects for continued diversification and scale [12]
- The Scorpio X Series is expected to begin shipping for customized scale-up architectures in late 2025, with high-volume production expected in 2026 [15]
- Management highlighted the importance of partnerships and collaborations with major players like NVIDIA and AMD to support the evolving AI infrastructure [9][10]

Other Important Information
- The company is committed to supporting customers as they choose the architectures and technologies that best suit their AI performance goals [12]
- Astera Labs is actively involved in the UALink Consortium, promoting an open ecosystem for scale-up networking [63]

Q&A Session Summary

Question: What has been the biggest differentiator for the Scorpio family of switching products?
- Management highlighted three key factors: closeness to customers, execution track record, and the use of the Cosmos software suite to optimize product performance [28][30]

Question: What is the reception and interest level in UALink?
- Management noted tremendous interest in UALink due to its technical advantages and the open ecosystem it supports, with over 10 customers exploring its use [34][37]

Question: Can you discuss the profile of customers using Scorpio products?
- Management indicated a broad base of customers leveraging the Scorpio P Series for scale-out connectivity and the Scorpio X Series for scale-up networking, with significant interest in additional products [41][43]

Question: What is the expected tax rate for the upcoming quarters?
- The tax rate for Q3 is expected to be around 20% due to recent tax-law changes, normalizing to approximately 15% in Q4 and around 13% long term [46]

Question: How does Astera Labs view the competition from Ethernet in scale-up networking?
- Management emphasized that while Ethernet is effective for scale-out, it was not designed for scale-up, and solutions like UALink offer significant advantages in performance and ecosystem openness [95][96]
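As a quick sanity check, the growth rates quoted above pin down the comparison-period revenues. These baselines are derived from the reported $191.9 million and the stated growth percentages; they are not separately reported figures.

```python
# Back-computes the baseline revenues implied by the reported growth rates
# ($191.9M current quarter, +20% QoQ, +150% YoY). Derived, not reported.

def baseline(current: float, growth_pct: float) -> float:
    """Revenue in the comparison period implied by current revenue and growth."""
    return current / (1 + growth_pct / 100)

q2_revenue = 191.9  # US$ millions, as reported

print(f"implied prior quarter: ~${baseline(q2_revenue, 20):.1f}M")
print(f"implied year-ago Q2:   ~${baseline(q2_revenue, 150):.1f}M")
```

This puts the prior quarter near $160 million and the year-ago quarter near $77 million, consistent with the steep ramp the call describes.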
Astera Labs, Inc.(ALAB) - 2025 Q2 - Earnings Call Transcript
2025-08-05 21:30
Financial Data and Key Metrics Changes
- Astera Labs reported quarterly revenue of $191.9 million, up 20% from the previous quarter and 150% from Q2 of the previous year [6][20]
- Non-GAAP gross margin for Q2 was 76%, up 110 basis points from the previous quarter [22]
- Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter [22]
- Cash flow from operating activities for Q2 was $135.4 million, with cash, cash equivalents, and marketable securities totaling $1.07 billion at quarter end [23]

Business Line Data and Key Metrics Changes
- The Scorpio product line, particularly the Scorpio P-Series switches, exceeded 10% of total revenue, making it the fastest-ramping product line in the company's history [6][7]
- The Taurus product family showed strong growth driven by AEC demand, supporting the latest merchant GPUs and general-purpose compute platforms [9]
- The Aries product family continued to diversify across GPU- and custom-ASIC-based systems, contributing to overall revenue growth [8]

Market Data and Key Metrics Changes
- Astera Labs is engaged with over 10 unique AI platform and cloud infrastructure providers on their scale-up networking requirements [16]
- The company anticipates that the transition to AI Infrastructure 2.0 will create a market opportunity of nearly $5 billion by 2030 [12]
- The company is strategically positioned to support the AI infrastructure transformation, which is still in its early stages [12]

Company Strategy and Development Direction
- Astera Labs aims to deliver a comprehensive connectivity platform spanning silicon, hardware, and software for rack-scale AI deployments [13]
- The company is focused on increasing its addressable dollar content in AI servers by expanding its product lines [14]
- Astera Labs is committed to developing and commercializing a broad portfolio of UALink connectivity solutions, which is expected to be a long-term growth vector [18]

Management's Comments on Operating Environment and Future Outlook
- Management expressed confidence in the business's strong momentum and the prospects for continued diversification and scale [12]
- The transition to AI Infrastructure 2.0 is seen as a significant revenue opportunity, with Scorpio X Series revenue expected to outgrow Scorpio P Series revenue [16][17]
- The company is optimistic about UALink adoption, with multiple hyperscalers showing strong interest [38]

Other Important Information
- Non-GAAP operating expenses for Q2 were $17.7 million, up approximately $5 million from the previous quarter, reflecting continued investment in R&D [22]
- The company expects Q3 revenue of $200 million to $210 million, representing a 6% to 9% increase from Q2 [24]

Q&A Session Summary

Question: What has been the biggest differentiator for the Scorpio family of switching products?
- Management highlighted three key factors: closeness to customers, execution track record, and the use of the Cosmos software suite to optimize product performance [29][30]

Question: What is the reception and interest level in UALink?
- Management noted tremendous interest in UALink due to its technical advantages and the open ecosystem it supports, with over 10 customers looking to leverage these open standards [36][38]

Question: Can you discuss the profile of the types of customers using Scorpio?
- Management indicated a broad base of customers leveraging the Scorpio P Series for scale-out connectivity and the Scorpio X Series for scale-up networking, with significant interest in surrounding products [44][45]

Question: What is the expected tax rate for the upcoming quarters?
- The tax rate for Q3 is expected to be around 20% due to recent tax-law changes, normalizing to approximately 15% in Q4 with a long-term expectation of around 13% [48]

Question: How does the latency of Broadcom's Tomahawk Ultra switch compare to Astera's products?
- Management stated that Astera's products achieve lower latencies than Broadcom's offerings, emphasizing the importance of end-to-end latency in AI applications [106]
NVLink, UALink, NeuronLink, SUE, PCIe – Astera Labs Switch
2025-08-05 08:17
Summary of Astera Labs (ALAB US) Conference Call

Company Overview
- **Astera Labs** is a U.S.-listed company specializing in PCIe retimer and switch chips, with a focus on the upcoming custom Scorpio-X switch chip [1]

Key Industry Insights
- **Growth Drivers**: Astera Labs' growth is driven by two main products:
  - A custom **NeuronLink** switch chip for AWS's Trainium series, launching in the second half of this year
  - A custom **UALink** switch chip for AMD's MI400 series, expected in the second half of next year [2]

Technical Comparisons
- **UALink vs. NVLink**:
  - UALink uses SerDes with differential signaling, allowing longer-distance data transmission than NVLink's single-ended signaling, which saves chip area but limits reach [3][4]
  - UALink can connect up to 1,024 nodes, while NVLink is limited to 576 nodes [5]
- **UALink Protocol Versions**:
  - UALink has two versions, 128 Gbps and 200 Gbps, with the latter suitable for GPU-to-GPU connections only [6][9]
  - UALink 128G supports mixed connections and is compatible with PCIe Gen7, making it suitable for model inference [9]
- **Broadcom's SUE**:
  - SUE is a point-to-point protocol that draws on NVLink's logic but is more limited in heterogeneous expansion than UALink [10]

Product Development
- **AMD's Helios AI Rack**: The upcoming Helios AI rack will adopt the UALink 200G protocol, with Astera Labs developing a switch chip expected to tape out in Q1 2026 [11][31]
- **AWS Trainium Series**: Astera Labs is developing the Scorpio-X switch chip for AWS's Trainium rack, which will be software-programmable and meet high-performance transmission requirements [13]

Financial Projections
- **Revenue Estimates**:
  - For every one million Trainium 2.5 chips deployed, Astera Labs could capture content dollar value of approximately **$1.75 billion** across both large and small switch chips [22]
  - For Trainium 3 chips, the estimated content dollar value could reach **$3.3 billion** per million chips [26]
  - Additional revenue of **$150 million** is projected for every million Trainium 4 chips due to the collaboration with Alchip [28]
- **AMD's MI400 Series**: Astera Labs' content dollar value for every one million MI400 GPUs used in the Helios rack is estimated at **$576 million** [32]

Conclusion
- Astera Labs is positioned to capitalize on growing demand for advanced interconnect solutions in high-performance computing, particularly through its partnerships with AWS and AMD, with significant revenue potential from its switch chip technologies [1][2][22][26][32]
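The per-million-chip content figures above translate directly into content per accelerator. The totals come from the call summary; the per-chip division is our own back-of-the-envelope derivation, not a figure stated on the call.

```python
# Converts quoted "content dollar value per million chips" into per-accelerator
# content. Totals are the call summary's estimates; per-chip values are derived.

def content_per_chip(total_usd: float, chips: int = 1_000_000) -> float:
    """Dollar content per accelerator, given total content per deployed volume."""
    return total_usd / chips

estimates = {
    "Trainium 2.5": 1.75e9,  # US$ per million chips (from summary)
    "Trainium 3":   3.3e9,
    "AMD MI400":    576e6,
}

for platform, total in estimates.items():
    print(f"{platform:>12}: ${content_per_chip(total):,.0f} per accelerator")
```

That works out to roughly $1,750 per Trainium 2.5, $3,300 per Trainium 3, and $576 per MI400 GPU, which also puts the earlier "over $1,000 per accelerator" Scorpio X target in context.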
Broadcom's New Chip Lands a Punch on Nvidia
半导体行业观察· 2025-07-16 00:53
Core Viewpoint
- The article examines the competitive landscape in the GPU and high-performance computing (HPC) market, focusing on Broadcom's Tomahawk Ultra technology versus Nvidia's NVLink and the emerging UALink protocol [3][4][5]

Group 1: Technology Comparison
- AMD and other chip suppliers are narrowing the gap with Nvidia in GPU FLOPS, memory bandwidth, and HBM capacity, but they lack high-speed interconnects like NVLink and NVSwitch, limiting their scalability [3]
- Broadcom is promoting its Scale-Up Ethernet (SUE) technology, claiming it can support systems with at least 1,024 accelerators on any Ethernet platform, while Nvidia's NVLink supports up to 576 accelerators [4][5]
- Broadcom's Tomahawk Ultra switch offers 51.2 Tbps of bandwidth, versus 28.8 Tbps for Nvidia's fifth-generation NVLink switch, enabling a scale-up architecture with 128 accelerators [7]

Group 2: Performance and Features
- Tomahawk Ultra is designed for low latency, achieving as low as 250 nanoseconds, and is optimized for the smaller data packets common in HPC systems [6]
- The switch includes congestion-control mechanisms and supports collective operations, improving network efficiency relative to Nvidia's NVLink [6][7]
- Broadcom's Tomahawk Ultra ASIC has begun shipping to customers, and its compatibility with existing switch chassis is expected to ease adoption [7]

Group 3: Market Dynamics
- The UALink protocol, while still in development, is being integrated into AMD's Helios rack systems, which will use both UALink and Ethernet in their scale-up architecture [9]
- There are doubts about achieving UALink's target latency of 100-150 nanoseconds over Ethernet transport, which may hinder AMD's competitiveness against Nvidia's most advanced systems [10]
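The switch bandwidth figures above imply a per-accelerator budget under a simple even-split assumption, which is ours for illustration, not a topology claim by Broadcom or Nvidia.

```python
# Rough per-accelerator bandwidth implied by the quoted switch figures,
# assuming a switch's full capacity is divided evenly across the accelerators
# it serves. The even-split assumption is illustrative only.

def per_accelerator_gbps(switch_tbps: float, accelerators: int) -> float:
    """Bandwidth per accelerator in Gbps if switch capacity is split evenly."""
    return switch_tbps * 1_000 / accelerators

# Tomahawk Ultra: 51.2 Tbps across a 128-accelerator scale-up domain
print(per_accelerator_gbps(51.2, 128))  # → 400.0 Gbps each
# Raw capacity ratio vs. the 28.8 Tbps fifth-generation NVLink switch
print(round(51.2 / 28.8, 2))            # → 1.78
```

Under that assumption, one Tomahawk Ultra gives each of 128 accelerators a 400 Gbps slice, and the raw switch capacity is about 1.78x that of the NVLink switch cited; real deployments use multiple switch planes per accelerator, so actual per-GPU bandwidth is higher in both camps.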