TrendForce: NVIDIA (NVDA.US) GB300 chip to get multiple design-spec upgrades; full-rack system shipments expected to scale up gradually after 3Q25
Zhi Tong Cai Jing · 2025-03-18 07:40
In terms of TDP (thermal design power), NVIDIA's mainstream model in 2024 was the HGX AI Server, with per-rack TDP falling between 60 kW and 80 kW. The currently promoted GB200 NVL72 rack, owing to its greatly increased compute density, reaches a TDP of 125 kW to 130 kW per rack. TrendForce estimates that the GB300 rack system's power consumption will rise further, to between 135 kW and 140 kW, and that most vendors will continue to use Liquid-to-Air cooling to ensure effective heat dissipation.

As for cooling-component design, the GB200's Cold Plate is currently an integrated module pairing one CPU with two GPUs; with the GB300, this will change to each chip carrying its own independent Cold Plate, raising the value of Cold Plates within the Compute Tray. For QDs (quick disconnects), because the Cold Plate modules become independent, QD usage will increase substantially. The GB200's QD suppliers are mainly European and American vendors, and more suppliers are expected to join the supply chain for the GB300.

TrendForce notes that the ramp-up of GB200 and GB300 Rack solutions this year will be affected by several factors ...
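The per-rack TDP figures above can be put side by side to quantify the generation-over-generation power increase. A minimal sketch, using only the ranges cited in the article; the midpoint comparison is our own illustration, not a TrendForce calculation:

```python
# Per-rack TDP ranges (kW) as cited by TrendForce in the article above.
tdp_kw = {
    "HGX AI Server (2024)": (60, 80),
    "GB200 NVL72": (125, 130),
    "GB300 (est.)": (135, 140),
}

def midpoint(rng):
    """Midpoint of a (low, high) range."""
    lo, hi = rng
    return (lo + hi) / 2

gb200_mid = midpoint(tdp_kw["GB200 NVL72"])   # 127.5 kW
gb300_mid = midpoint(tdp_kw["GB300 (est.)"])  # 137.5 kW

# Relative increase of GB300 over GB200, comparing range midpoints.
increase_pct = (gb300_mid - gb200_mid) / gb200_mid * 100
print(f"GB300 vs GB200 midpoint TDP: +{increase_pct:.1f}%")  # → +7.8%
```

At the midpoints, the step from GB200 to GB300 is roughly an 8% increase in rack power, which is consistent with vendors retaining, rather than redesigning, their liquid-cooling approach.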
Report | NVIDIA GB300 chip to get multiple design-spec upgrades; full-rack system shipments expected to scale up gradually after 3Q25
TrendForce集邦 · 2025-03-18 07:02
Core Insights

- NVIDIA is expected to launch the GB300 chip ahead of schedule in Q2 2025, with improvements in computing performance, memory capacity, network connectivity, and power management compared to the GB200 chip [1]
- The GB300 chip and Compute Tray are projected to begin production in May 2025, with ODMs starting initial engineering sample designs [1]
- Demand for specialized products in the Chinese market has increased significantly due to the DeepSeek effect [1]

Summary by Sections

GB300 Specifications

- The GB300 NVL72 features upgraded networking specifications to meet higher bandwidth requirements, enhancing overall computing performance [2]
- The TDP of the GB300 system is expected to rise to between 135 kW and 140 kW, with most manufacturers continuing to use liquid cooling to ensure effective heat dissipation [2]

Cooling Component Design

- The cold-plate design for the GB300 will shift from an integrated module to individual per-chip installations, increasing the value of the cold plates in the Compute Tray [3]
- This design change will lead to a significant increase in the usage of quick disconnects (QDs), with more suppliers expected to join the market for the GB300 [3]

Market Dynamics

- The market performance of GB200 and GB300 Rack solutions will be influenced by several factors, including the ongoing impact of the DeepSeek effect and potential shifts in customer preference toward self-developed ASICs or simpler, cost-effective AI server solutions [3]