Broadcom Quietly Dominates
半导体行业观察· 2025-06-28 02:21
Core Viewpoint
- The article emphasizes the importance of interconnect architecture in AI infrastructure, highlighting that while GPUs are crucial, the ability to train and run large models relies heavily on effective interconnect systems [1].

Group 1: Interconnect Architecture
- Interconnect architecture spans multiple levels, from chip-to-chip communication within a package to system-level networks that tie together thousands of accelerators [1].
- Nvidia's dominance in the industry is attributed to its expertise in developing and integrating these interconnect architectures [1].
- Broadcom has been quietly advancing a range of interconnect technologies, including Ethernet architectures for large-scale expansion and in-package chip interconnects [1][3].

Group 2: Ethernet Switch Technology
- Broadcom has introduced high-capacity switches, such as the 51.2 Tb/s Tomahawk 5 and the recently launched 102.4 Tb/s Tomahawk 6, which can significantly reduce the number of switches needed for large GPU clusters [3].
- The number of switches required falls as switch port count rises, allowing GPUs to be connected more efficiently; a back-of-the-envelope sketch follows this summary [3].
- Nvidia has also announced its own 102.4 Tb/s Ethernet switch, underscoring the competition in high-capacity switch technology [4].

Group 3: Scalable Ethernet Solutions
- Broadcom positions Tomahawk 6 as a shortcut for rack-level, scale-up architectures supporting between 8 and 72 GPUs, with future designs expected to support up to 576 GPUs by 2027 [6].
- Ethernet is being used for both scale-up and larger scale-out networks, and Intel and AMD also plan to adopt Ethernet for their systems [7].

Group 4: Co-Packaged Optics (CPO) Technology
- Broadcom has invested in co-packaged optics (CPO), which integrates components typically found in pluggable transceivers into the same package as the switch ASIC, significantly reducing power consumption [9][10].
- The efficiency of Broadcom's CPO technology is reported to be more than 3.5 times that of traditional pluggable devices [10].
- The third generation of CPO is expected to support up to 512 optical ports at 200 Gb/s each, with 400 Gb/s channels targeted for 2028 [11].

Group 5: Multi-Chip Architecture
- As Moore's Law slows, the industry is shifting toward multi-chip architectures, which improve yields and optimize costs by using smaller dies [14].
- Broadcom has developed 3.5D eXtreme Dimension System in Package (3.5D XDSiP) technology to ease the design of multi-chip processors, and it is open for licensing to other companies [15].
- The first products based on this design are expected to enter production by 2026, though the specific AI chips that use Broadcom's technology may remain undisclosed [15].
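To make the point about switch radix concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the article) that estimates how many switches a non-blocking two-tier leaf-spine fabric needs for a given GPU count, assuming each GPU uses one network port and each leaf splits its ports evenly between GPU-facing downlinks and spine-facing uplinks. The port counts and GPU count are hypothetical inputs chosen only to show how higher-radix switches shrink the fabric.

```python
import math

def two_tier_switch_count(num_gpus: int, ports_per_switch: int) -> dict:
    """Rough estimate of switches in a non-blocking two-tier leaf-spine fabric.

    Assumes each GPU uses one network port and each leaf splits its ports
    50/50 between GPU-facing downlinks and spine-facing uplinks.
    """
    downlinks_per_leaf = ports_per_switch // 2          # ports toward GPUs
    leaves = math.ceil(num_gpus / downlinks_per_leaf)   # leaf switches needed
    # Each leaf also has ports_per_switch // 2 uplinks; a spine terminates one
    # uplink per port, so spines = total uplinks / spine port count.
    spines = math.ceil(leaves * downlinks_per_leaf / ports_per_switch)
    return {"leaves": leaves, "spines": spines, "total": leaves + spines}

# Hypothetical comparison for 8,192 GPUs: a 64-port switch vs. a 512-port
# switch (radix values are illustrative, not vendor specs).
for radix in (64, 512):
    print(radix, two_tier_switch_count(8192, radix))
```

With these assumed numbers, the 64-port radix needs 384 switches while the 512-port radix needs 48, which is the flattening effect the summary describes.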
Broadcom Unveils Its 102.4 Tb/s Switch Chip
半导体行业观察· 2025-06-04 01:09
Source: compiled from nextplatform.

Even though flatter and faster networks become possible with each speed bump on the Ethernet roadmap, network scale keeps growing quickly enough that switch ASIC makers and switch vendors can make up the difference on volume and keep the switching business growing.

With the explosive growth of GenAI, the big AI players uniformly want to move away from the proprietary InfiniBand technology controlled by Nvidia and port all of InfiniBand's capabilities onto a freshly upgraded Ethernet that can scale further, and scale out across flatter networks, to build even larger AI clusters. The Ultra Ethernet Consortium's ambitious goal is 1 million GPU endpoints, and reaching it requires higher-capacity switch ASICs.

Now Broadcom, the merchant-silicon market leader, which faces fierce Ethernet competition from Cisco Systems and Nvidia, is bringing its "Tomahawk 6" StrataXGS Ethernet switch ASIC to a market that will be dominated by 102.4 Tb/s ASICs, while already looking ahead to 204.8 Tb/s ...
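As a rough illustration of the port math behind these capacity figures (my own sketch, not from the article), the snippet below shows how a fixed aggregate ASIC bandwidth can be carved into different port configurations. The 102.4 Tb/s figure comes from the article; the per-port speeds are common Ethernet rates used here only as assumptions.

```python
# Back-of-the-envelope port counts for a switch ASIC of a given aggregate
# bandwidth. ASIC capacity in Tb/s is taken from the article; port speeds
# in Gb/s are common Ethernet rates chosen for illustration.
ASIC_TBPS = 102.4

for port_gbps in (200, 400, 800, 1600):
    ports = int(ASIC_TBPS * 1000 // port_gbps)
    print(f"{port_gbps} GbE ports: {ports}")
```

Under these assumptions, a 102.4 Tb/s device works out to 512 ports at 200 GbE, 256 at 400 GbE, 128 at 800 GbE, or 64 at 1.6 TbE, which is why higher-capacity ASICs translate directly into flatter networks.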