VR200 NVL72
Unknown institution: GF Securities Overseas Electronics & Communications, GTC 2026 Preview: LPX, CPO, and PCB Key Points - 2026-02-27
Unknown institution · 2026-02-27 02:50
Summary of Key Points from the Conference Call

Industry and Company Involved
- The conference call focuses on the semiconductor and electronics industry, specifically NVIDIA and its upcoming products and technologies.

Core Insights and Arguments
- **LPX Rack Enhancements**: The LPX (also known as LPU) is expected to use SRAM-based on-chip memory, providing rapid token generation and ultra-low latency and thereby strengthening NVIDIA's position in the inference domain [1][2]
- **Collaboration with Groq**: Under a non-exclusive licensing agreement reached with Groq in December 2025, the LPX design will feature 64 Groq LPUs interconnected via RealScale chips [1][2]
- **Future LPX Developments**: For GTC 2026, an enhanced LPX rack is anticipated with 256 LPUs, using a multi-layer (52L) M9 Q-glass PTH PCB, with an estimated PCB value of approximately $200 per LPU [2]
- **VR200 NVL72 Performance**: The Rubin architecture is expected to extend NVIDIA's product leadership, achieving a 5x/3.5x improvement in inference/training performance over GB300, aided by HBM4 technology [2]
- **CPX Chip Design Changes**: Due to GDDR7 shortages, the CPX chip design is likely to shift to HBM4, with a smaller capacity than the conventional Rubin [3]
- **NVL576 Architecture**: The NVL576 is expected to feature a hybrid CCL orthogonal backplane, with candidate stack-ups mixing layers of PTFE and Q-glass M9 to improve signal transmission [3]
- **Optical Interconnect Solutions**: NVIDIA plans to introduce Scale Up optical interconnect solutions for the NVL576 architecture in the second half of 2027 [4]

Additional Important Insights
- **Scale Out CPO Switches**: NVIDIA may launch a new generation of Scale Out CPO switches, expected to significantly improve thermal performance and the cost-performance ratio over previous generations [4]
- **Sales Projections**: The forecast for NVIDIA's Scale Out CPO switches has been revised upward to 20,000/100,000 units for 2026/2027, driven by aggressive promotion and bundling strategies [4]
- **Beneficiaries of Growth**: Key beneficiaries include suppliers of FAU and CW laser components and connector manufacturers such as Lumentum and Sumitomo, with Chinese FAU suppliers expected to capture significant market share [4]
- **Stock Recommendations**:
  1. NVIDIA (NVDA, Buy), on strong near-term quarterly performance and a positive outlook from OpenAI financing [5]
  2. Lumentum (LITE, Buy), for its leadership in CPO and CW laser market expansion [5]
  3. Other companies such as Browave (波若威) and Taiflex (台虹) are noted for their advances in CPO and PTFE-CCL development [5]
- **PCB Market Outlook**: The PCB market is expected to benefit significantly from increased backplane value, with cabinet value projected at $300,000 and backplane ASP at over $2 [5]
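The per-rack PCB content implied by the note's figures can be checked with simple arithmetic. This sketch assumes the ~$200-per-LPU estimate applies uniformly across all 256 LPUs (an assumption; the note does not break the estimate down further).

```python
# Back-of-the-envelope check of the PCB value cited for the anticipated
# GTC 2026 LPX rack. All inputs come from the note; uniform per-LPU
# pricing is an assumption for illustration.

lpus_per_rack = 256        # LPUs in the enhanced LPX rack (per the note)
pcb_value_per_lpu = 200    # USD of PCB content per LPU (note's estimate)

rack_pcb_value = lpus_per_rack * pcb_value_per_lpu
print(f"Implied PCB content per LPX rack: ${rack_pcb_value:,}")
```

Under these assumptions, the rack-level PCB content works out to roughly $51,200, which gives a sense of why PCB suppliers are flagged as beneficiaries.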
Dongxing Securities: the global supernode competitive landscape has yet to be established; recommends watching cloud vendors that have released domestic supernodes, among others
Zhitong Finance · 2026-02-05 06:20
Core Viewpoint
- Starting from 2025, supernodes will become a significant technological innovation direction in the AI computing network, with AI chip manufacturers competing on both chip performance and the Scale up network [1][5].

Group 1: Supernode Development
- Nvidia has launched mature supernode solutions, with GH200 NVL72, GB200/GB300 NVL72, and VR200 NVL72 released or planned from 2024 to 2026 [1][3].
- The Blackwell architecture standardizes Scale up, with GB200 NVL72 stabilizing the scale at 72 GPUs per cabinet, consisting of 18 Compute Trays and 9 Switch Trays [2].
- The Rubin architecture will enhance bandwidth, with the NVLink 6 Switch increasing single-GPU interconnect bandwidth from 1.8 TB/s to 3.6 TB/s [2].

Group 2: Nvidia's Competitive Advantage
- Nvidia maintains a leading position in the supernode market, with projected shipments of approximately 2,800 units of GB200/300 NVL72 by 2025 [3].
- Future plans include the Vera Rubin NVL144 and Rubin Ultra NVL576, expanding interconnected GPUs from 72 to 576 [3].
- Innovations such as NVLink and NVLink Switch are crucial for high bandwidth and low latency in AI training clusters, with the NVLink 5 Switch supporting a total bandwidth of 130 TB/s across 72 GPUs [4].

Group 3: Industry Landscape and Investment Strategy
- The global supernode competitive landscape is still taking shape, with Nvidia currently in the lead [6].
- The report suggests monitoring Nvidia's supernode supply chain, including components such as PCB backplanes, high-speed copper cables, optical modules, and cooling systems [6].
- Chinese manufacturers are actively participating in the supernode and Scale up network sectors, with potential for domestic firms to gain a competitive edge [6].
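The bandwidth figures quoted above are mutually consistent, which a quick arithmetic check makes visible: 72 GPUs at 1.8 TB/s each is essentially the 130 TB/s aggregate cited for the NVLink 5 Switch, and NVLink 6 doubles the per-GPU link. This is a sanity-check sketch using only the report's numbers.

```python
# Cross-check of the NVLink switch bandwidth figures in the report:
# per-GPU link rate times GPU count should match the quoted aggregate.

gpus = 72
nvlink5_per_gpu_tbps = 1.8   # TB/s per GPU, NVLink 5 (per the report)
nvlink6_per_gpu_tbps = 3.6   # TB/s per GPU, NVLink 6 (per the report)

agg5 = gpus * nvlink5_per_gpu_tbps
agg6 = gpus * nvlink6_per_gpu_tbps
print(f"NVLink 5 aggregate: {agg5:.1f} TB/s (report quotes ~130 TB/s)")
print(f"NVLink 6 aggregate: {agg6:.1f} TB/s")
```

The computed 129.6 TB/s rounds to the quoted ~130 TB/s, and doubling the per-GPU rate doubles the aggregate for the same 72-GPU cabinet.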
China Data Center Equipment: Power and Cooling Upgrades in Focus for Rubin
2026-01-15 06:33
Summary of Key Points from the Conference Call Transcript

Industry Overview
- **Industry**: China Data Center Equipment
- **Key Focus**: Power and cooling upgrades, particularly related to NVIDIA's Vera Rubin platform launch in 2026 [2][3]

Core Insights and Arguments
- **NVIDIA's Rubin Platform**:
  - The Rubin platform is set to double the rack power compared to the previous Blackwell platform, moving from 80% to 100% liquid cooling [2]
  - Rubin is currently in full production [2]
- **VR200 NVL72 Specifications**:
  - Delivers approximately 3.5x to 5x the training/inference AI computing power of GB300 NVL72, significantly increasing rack-level power demand [3]
  - Upgrades power shelves to a 3*3U 110 kW configuration, versus the GB300's 8*1U 33 kW shelves [3]
  - Introduces a 3+1 redundancy design for power shelves [3]
- **Future Developments**:
  - An anticipated transition to the next-generation Kyber rack design, which will support 800V HVDC to meet rising power requirements driven by AI compute scaling [3]
  - A potential Rubin Ultra launch in 2027 could further unleash HVDC demand [3]

Investment Opportunities
- **Stock Picks**:
  - **Kstar and Kehua**: Expected to benefit from stronger UPS demand and potential upgrades to HVDC and SST [4]
  - Kstar is anticipated to see strong order intake from US hyperscalers due to strategic partnerships [4]
  - Kehua's domestic GPU ramp-up and H200 shipment could enhance order intake from domestic hyperscalers [4]

Risks and Valuation
- **Downside Risks for the Data Center Equipment Sector**:
  - Slower-than-expected AI data center capacity growth [6]
  - Slower penetration of high-power-density products [6]
  - Challenges in gaining market share in the overseas AIDC equipment supply chain [6]
- **Kehua's Price Target and Risks**:
  - Price target based on DCF methodology; downside risks include slower IDC capacity expansion and lower overseas shipment expectations [7]
  - Upside risks include faster IDC capacity expansion and stronger relationships with hyperscalers [7]
- **Kstar's Price Target and Risks**:
  - Similar DCF methodology for the price target; downside risks include slower IDC capacity expansion and new market entrants [8]
  - Upside risks include faster IDC capacity expansion and higher overseas shipments [8]

Additional Important Information
- **Analyst Team**: The report was prepared by UBS Securities Asia Limited, with analysts Yishu Yan, Anna Yuan, and Ken Liu involved [5]
- **Valuation Methodology**: The report emphasizes DCF-based price targets and the importance of understanding risks before making investment decisions [7][8]

This summary encapsulates the critical insights and data from the conference call, focusing on developments in the China data center equipment industry, particularly NVIDIA's advances and the implications for investment in Kstar and Kehua.
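The power-shelf upgrade described above can be illustrated with simple arithmetic. This sketch assumes the cited ratings are per shelf (3 shelves of 110 kW vs. 8 shelves of 33 kW) and interprets "3+1 redundancy" as three shelves of usable capacity plus one spare; both readings are assumptions, since the transcript does not spell them out.

```python
# Rough comparison of the power-shelf configurations cited for
# VR200 NVL72 vs GB300 NVL72. Shelf ratings come from the transcript;
# the N+1 redundancy interpretation (N usable + 1 spare) is an assumption.

vr200_shelves, vr200_kw_per_shelf = 3, 110   # 3 * 3U shelves, 110 kW each
gb300_shelves, gb300_kw_per_shelf = 8, 33    # 8 * 1U shelves, 33 kW each

vr200_usable = vr200_shelves * vr200_kw_per_shelf            # 330 kW usable
vr200_installed = (vr200_shelves + 1) * vr200_kw_per_shelf   # 440 kW with spare
gb300_usable = gb300_shelves * gb300_kw_per_shelf            # 264 kW

print(f"VR200 usable shelf capacity:   {vr200_usable} kW")
print(f"VR200 installed (3+1 shelves): {vr200_installed} kW")
print(f"GB300 shelf capacity:          {gb300_usable} kW")
```

Under these assumptions the usable shelf capacity rises from 264 kW to 330 kW, consistent with the transcript's point that rack-level power demand increases significantly.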
X @郭明錤 (Ming-Chi Kuo)
AI server assembler Wiwynn’s recent 4Q25 gross margin miss (7.2% vs. consensus estimates of 8–8.3%) has triggered a share price correction and reignited investor concerns. While rising component costs can mechanically dilute reported gross margins, it is more critical to examine the underlying profitability trends in server assembly through the lens of structural changes in AI server design. Nvidia continues to increase the level of design integration in AI servers to boost token output per unit of space and ...
X @郭明錤 (Ming-Chi Kuo)
AI server assembler Wiwynn recently reported a below-expectation 4Q25 gross margin (7.2% vs. market consensus of 8-8.3%), triggering a share price decline and renewed investor attention. While component price increases also dilute reported gross margins, it is more important to examine the profitability trend of assembly through the fundamentals of AI server design. Nvidia continues to raise the level of design integration in AI servers to boost token output per unit of space and power, in response to the challenges of limited data center space and scarce electricity. Highly integrated designs also help improve production efficiency, which in turn benefits supply chain management and lowers repair costs. VR200 NVL72 is an example of highly integrated design: my industry survey indicates its compute tray adopts a cable-less design, reducing component line items by roughly 40% (vs. GB300 NVL72). But as integration rises, the share of Nvidia-specified materials increases and the room for customization shrinks, both of which are unfavorable to assemblers' profitability. My industry survey indicates that the share of component items in the VR200 NVL72 compute tray for which "customized specifications are not allowed" has risen to 20-22%, far above the 5-7% of GB300 NVL72, and VR200 NVL72 goes further by eliminating "ODM-designed" component items outright. In fact, the design trend toward higher integration began with GB300 NVL72 ...
X @郭明錤 (Ming-Chi Kuo)
Regarding recent discussions among CCL-focused investors on whether the key CCL spec for VR200 NVL72 could be downgraded from M9, below are my latest surveys and views:
1. Nvidia previously began testing 896K2 and 892K2 and prototyping PCBs, which likely gave rise to the market rumors. Test results are expected by the end of 1Q26.
2. Nvidia's current mass-production target remains 896K3. While this has not been finalized, the uncertainty alone is sufficient to impact stock prices that have already priced in th ...
X @郭明錤 (Ming-Chi Kuo)
Key Updates on NVIDIA Vera Rubin/VR200 NVL72 (integrating my latest supply chain checks and NVIDIA CEO Jensen Huang's CES 2026 keynote)
1. NVIDIA has renamed the AI server VR200 NVL144 (die-based) to VR200 NVL72 (package-based). My supply chain checks indicate VR200 NVL72 will be offered in two power profiles: Max Q and Max P.
➢ Jensen referenced NVL72, rather than the previously used NVL144 naming.
➢ Max Q and Max P share the same hardware design.
➢ Max Q GPU/rack power (TGP/TDP): ~1.8/190 (kW).
➢ Max P GPU/rack ...
X @郭明錤 (Ming-Chi Kuo)
Key updates on Nvidia Vera Rubin/VR200 NVL72, integrating my latest industry survey and Nvidia CEO Jensen Huang's CES 2026 keynote
1. Nvidia has renamed the AI server VR200 NVL144 (die-based) to VR200 NVL72 (package-based). My survey indicates VR200 NVL72 will be offered in two power profiles: Max Q and Max P.
➢ Jensen referred to NVL72 rather than the previously used NVL144
➢ Max Q and Max P share the same hardware design
➢ Max Q GPU/rack power (TGP/TDP): ~1.8/190 (kW)
➢ Max P GPU/rack power (TGP/TDP): ~2.3/230 (kW)
➢ Both are significantly higher than the GB300 NVL72's 1.4/140 (kW)
➢ The different power profiles increase installation flexibility for varying data center power specifications. This change also suggests Nvidia has begun to take the physical world's constraints on AI server deployment into account and to reflect them in product specs
2. The GPU thermal design of VR200 NVL72 Max Q and Max P is upgraded.
➢ Both adopt micro-channel cold plates (MCCP) paired with gold-plated heat ...
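The TGP/TDP figures in the post allow a simple decomposition of rack power. This sketch uses the post's numbers plus the NVL72 naming (72 GPUs per rack); treating "rack power minus GPU power" as non-GPU overhead (switch trays, CPUs, fans, conversion losses) is an assumption made only to illustrate what the figures imply.

```python
# Comparison of the VR200 NVL72 power profiles cited in the post against
# GB300 NVL72. GPU TGP and rack TDP come from the post; the 72-GPU count
# follows the NVL72 naming; "non-GPU" = rack minus GPUs (an assumption).

profiles = {
    "Max Q":       {"gpu_kw": 1.8, "rack_kw": 190},
    "Max P":       {"gpu_kw": 2.3, "rack_kw": 230},
    "GB300 NVL72": {"gpu_kw": 1.4, "rack_kw": 140},
}
gpus_per_rack = 72

for name, p in profiles.items():
    gpu_total = gpus_per_rack * p["gpu_kw"]     # aggregate GPU power
    non_gpu = p["rack_kw"] - gpu_total          # everything else in the rack
    print(f"{name}: GPUs draw {gpu_total:.1f} kW of a {p['rack_kw']} kW rack "
          f"(~{non_gpu:.1f} kW non-GPU)")
```

Under these assumptions, Max Q's 72 GPUs account for about 129.6 kW of the ~190 kW rack and Max P's for about 165.6 kW of ~230 kW, showing that most of the rack-power growth over GB300 NVL72 is driven by GPU TGP.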