TrendForce: Blackwell expected to account for over 80% of NVIDIA's (NVDA.US) high-end GPU shipments in 2025
Zhitong Finance · 2025-07-24 08:59
Liquid cooling will not only become standard equipment in high-performance AI data centers, it will also noticeably lift demand for thermal components and accelerate suppliers' shipment schedules. Fositek, for example, has officially begun shipping NV QD quick disconnects dedicated to the GB300 platform, paired with cold plates designed by its parent company AVC, for use in GB300 NVL72 rack systems. Supply-chain sources say Fositek is already mass-shipping the quick disconnects and floating-mount quick disconnects required for AWS ASIC liquid cooling, and estimate its share of quick-disconnect supply on that platform can rival Danfoss. Auras has likewise been building out its data-center liquid-cooling business in recent years, and the segment is steadily becoming a core growth driver; its main customers include mainstream server brands such as Oracle, Supermicro, and HPE, with products spanning cold plates and manifold modules. Auras has also begun shipping liquid-cooling products to Meta, laying the groundwork for entry into the GB200 platform liquid-cooling supply chain, and may go on to join the second wave of core cold-plate suppliers to Meta and AWS. Per Zhitong Finance APP, TrendForce's latest survey finds the overall server market has recently stabilized, with ODMs focused on AI server development; starting in the second quarter, shipments of NVIDIA (NVDA.US) GB200 Rac ...
Analysts: Blackwell expected to account for over 80% of NVIDIA's high-end GPU shipments in 2025 as liquid-cooling adoption keeps climbing
news flash · 2025-07-24 08:53
STAR Market Daily, July 24 - According to TrendForce's latest survey, the overall server market has recently stabilized and ODMs are focused on AI server development. Starting in the second quarter, shipments of NVIDIA's new Blackwell platforms such as the GB200 Rack and HGX B200 have gradually ramped, while the newer B300 and GB300 series have entered sampling and validation. TrendForce therefore estimates that Blackwell GPUs will account for more than 80% of NVIDIA's high-end GPU shipments this year. In addition, as GB200/GB300 rack shipments expand through 2025, adoption of liquid cooling for high-end AI chips continues to rise.
Research note | Blackwell expected to account for over 80% of NVIDIA's high-end GPU shipments in 2025, with liquid-cooling adoption climbing
TrendForce · 2025-07-24 08:46
July 24, 2025, Industry Insight. According to TrendForce's latest survey, the overall server market has recently stabilized and ODMs are focused on AI server development. Starting in the second quarter, NVIDIA's new Blackwell platforms such as the GB200 Rack and HGX B200 have gradually ramped, while the newer B300 and GB300 series have entered sampling and validation. TrendForce therefore estimates that Blackwell GPUs will account for more than 80% of NVIDIA's high-end GPU shipments this year. Looking at recent server ODM activity, North American CSP Oracle is expanding its AI data centers, which primarily drives order growth for Foxconn but also benefits players such as Supermicro and Quanta. Supermicro's main growth driver this year is expected to be AI servers, and it has recently won several GB200 Rack projects. Quanta, building on its relationships with large customers such as Meta, AWS, and Google, has successfully expanded its GB200/GB300 Rack business, and with Oracle orders added, it has stood out in the AI server space recently. Wiw ...
CoreWeave Becomes First Hyperscaler to Deploy NVIDIA GB300 NVL72 Platform
Prnewswire· 2025-07-03 16:14
Core Viewpoint
- CoreWeave is the first AI cloud provider to deploy NVIDIA's latest GB300 NVL72 systems, aiming for significant global scaling of these deployments [1][5]
Performance Enhancements
- The NVIDIA GB300 NVL72 offers a 10x boost in user responsiveness, a 5x improvement in throughput per watt compared to the previous NVIDIA Hopper architecture, and a 50x increase in output for reasoning model inference [2]
Technological Collaboration
- CoreWeave collaborated with Dell, Switch, and Vertiv on the initial deployment of the NVIDIA GB300 NVL72 systems, enhancing speed and efficiency for AI cloud services [3]
Software Integration
- The GB300 NVL72 deployment is integrated with CoreWeave's cloud-native software stack, including CoreWeave Kubernetes Service (CKS) and Slurm on Kubernetes (SUNK), along with hardware-level data integration through Weights & Biases' platform [4]
Market Leadership
- CoreWeave continues to lead in providing first-to-market access to advanced AI infrastructure, expanding its offerings with the new NVIDIA GB300 systems alongside its existing fleet [5]
Benchmark Achievement
- In June 2025, CoreWeave set a record in the MLPerf® Training v5.0 benchmark using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips, completing a complex model training run in just 27.3 minutes [6]
Company Background
- CoreWeave, named one of the TIME100 most influential companies and featured in the 2024 Forbes Cloud 100 ranking, has operated data centers across the US and Europe since 2017 [7]
AMD Advancing AI: MI350X and MI400 UALoE72, MI500 UAL256 (SemiAnalysis)
2025-06-15 16:03
June 1, 2025. AMD Advancing AI: MI350X and MI400 UALoE72, MI500 UAL256 // Software Improvement, Marketing RDFs, AMD Fostering Neocloud, MI355 is not Rack Scale, MI400 is UALoE, Not UALink ...
TechInsights: Semiconductor industry showed overall resilience in May, maintaining expected growth
Zhitong Finance · 2025-06-12 07:52
Group 1: AI Semiconductor Market Overview
- The AI semiconductor industry, led by Nvidia, reported a $4.5 billion asset write-down due to export restrictions, yet the overall semiconductor sector remains resilient with expected growth [1]
- The global market for AI-driven processor chips and accelerators is projected to reach $457 billion by 2030, with a compound annual growth rate (CAGR) of 23% [1]
Group 2: AI Data Center Chip Forecast
- GPU accelerators are expected to lead the market, while ASIC accelerators may gain attention from cloud service providers like Google and Amazon [2]
- Key challenges in the short term include increasing memory capacity, improving connection protocols, and addressing rising power consumption [2]
Group 3: NVIDIA GB100 Chip Analysis
- TechInsights analyzed the GPU chip GB102-A01 within the NVIDIA GB100-886N-A1 package, which was removed from a Supermicro SYS-A22GA-NBRT GPU super server [3]
Group 4: Autonomous Driving AI Models
- The development of autonomous driving systems involves end-to-end (E2E) or composite AI (CAIS) models, with CAIS offering a more efficient and safer alternative [4]
- CAIS architecture divides AI tasks into three components: Primary (P), Guardian (G), and Fallback (F), ensuring safe navigation [4]
- Adoption of CAIS is limited because original equipment manufacturers prefer developing their own E2E AI models, though some manufacturers like Volkswagen and Polaris have adopted CAIS [4]
Group 5: Advanced Packaging Technology
- The high-performance computing (HPC) and AI markets are driving advancements in packaging technology, leading to increased adoption of 2.5D and 3D packaging solutions [5]
- New interconnect technologies, such as ultra-low-pitch microbumps and through-insulator vias (TIV), are being developed to reduce cost and increase density [5]
Group 6: Semiconductor Capital Expenditure Stability
- The global semiconductor supply industry has shown resilience amid macroeconomic turmoil, with AI-driven demand being a key growth driver [6]
- Strong revenue growth is reported from major manufacturers like TSMC and MediaTek due to robust demand for 3nm and 5nm process technologies [6]
Group 7: Power Specifications Driven by AI
- The increasing power demands of AI workloads in data centers are pushing existing 54V distribution systems to their limits, prompting companies like Nvidia to explore high-voltage direct current (HVDC) architectures [7]
- Two main strategies are emerging: ±400V HVDC and 800V HVDC, with the latter improving efficiency and reducing wiring needs [7]
- Power semiconductor suppliers are preparing to benefit from this transition, emphasizing the need for scalable solutions and cross-market synergies [7]
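The case for the 800V HVDC transition described above comes down to simple circuit arithmetic: at fixed power, raising the distribution voltage cuts bus current proportionally and resistive loss quadratically. A minimal sketch, where the 1 MW rack-scale load and 0.1 mΩ bus resistance are assumed round numbers rather than figures from the article:

```python
# Why higher distribution voltage helps: at fixed power P = V * I, raising V
# lowers bus current I proportionally and I^2 * R loss quadratically.
# The 1 MW load and 0.1 milliohm bus resistance are assumed figures.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current drawn from the bus at the given power and voltage."""
    return power_w / voltage_v

def resistive_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in the distribution path."""
    i = bus_current(power_w, voltage_v)
    return i * i * resistance_ohm

P_W = 1_000_000   # 1 MW rack-scale load (assumption)
R_OHM = 0.0001    # 0.1 milliohm bus resistance (assumption)

for v in (54, 400, 800):
    print(f"{v:>4} V: {bus_current(P_W, v):>9.1f} A, "
          f"loss {resistive_loss(P_W, v, R_OHM):>8.1f} W")
# At 54 V the bus carries ~18,519 A; at 800 V only 1,250 A, so the
# I^2*R loss falls by a factor of (800/54)^2, roughly 220x.
```

The quadratic loss scaling, not the current reduction alone, is why suppliers treat 800V HVDC as the scalable option for rising AI rack power.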
NVIDIA (NVDA.US) GB200 rack shipments improving; Morgan Stanley rates the stock "Overweight"
Zhitong Finance · 2025-05-13 04:16
According to Zhitong Finance APP, the latest data show a marked improvement in NVIDIA (NVDA.US) GB200 rack shipments, which Morgan Stanley believes should reassure investors. Analyst Joseph Moore wrote in a note: "If 2,500 racks had shipped by April and monthly shipments hold at 1,500 for the rest of the year, total shipments would reach roughly 15,000." Moore rates NVIDIA "Overweight" and argues the data should dispel concerns about declining rack counts.
Shipments are on track for significant month-over-month improvement, implying a substantial lift to full-year volumes, and the data indicate NVIDIA's April-quarter shipments did not exceed demand. Moore said: "Our Taiwan ODM team found that rack shipments from three ODMs approached 1,500 units in April. This inflection point is consistent with our NVIDIA estimates and should ease investor concerns."
Note that rack counts reflect GB200 production capacity and delivery, not chip performance itself. A single rack may integrate many GB200 chips, so rising rack shipments mean more chips actually deployed in data centers, directly reflecting the strength of demand for AI compute.
In addition, recent industry checks suggest conditions may continue to improve after April, with further upside. Companies such as CoreWeave (CRWV.US) and xAI have requested HGX B20 ...
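Moore's rack arithmetic can be checked with a quick back-of-envelope sketch. One caveat: the eight-month "rest of the year" (May through December) is an assumption, since the report excerpt does not state the month count:

```python
# Back-of-envelope check of the cited GB200 rack-shipment scenario:
# 2,500 racks shipped through April, then 1,500 per month thereafter.
shipped_through_april = 2_500
monthly_rate = 1_500
remaining_months = 8  # May..December (assumption, not stated in the report)

full_year_total = shipped_through_april + monthly_rate * remaining_months
print(full_year_total)  # 14500, in line with the "roughly 15,000" figure
```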
Super Micro Computer(SMCI) - 2025 Q3 - Earnings Call Presentation
2025-05-07 01:11
Fiscal Q3 2025 Results May 6, 2025 Better Faster Greener © 2025 Supermicro DISCLOSURES Cautionary Statement Regarding Forward Looking Statements Statements contained in this press release that are not historical fact may be forward looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. These forward-looking statements can be identified by the use of forward-looking terminology such as "anticipate," "believe," "continue," "co ...
TechInsights Releases Initial Findings of its NVIDIA Blackwell HGX B200 Platform Teardown
GlobeNewswire News Room· 2025-04-14 14:00
Core Insights
- TechInsights released early-stage findings on NVIDIA's Blackwell HGX B200 platform, highlighting its advanced AI and HPC capabilities in data centers [1]
- The GB100 GPU features SK hynix's HBM3E memory and TSMC's advanced packaging architecture, marking significant technological advancements [1][2]
HBM3E Supplier
- The GB100 GPU incorporates eight HBM3E packages, each with eight memory dies in a 3D configuration, achieving a maximum capacity of 192 GB [2]
- The per-die capacity of 3 GB represents a 50% increase over the previous generation of HBM [2]
CoWoS-L Packaging Technology
- The GB100 GPU is built on TSMC's 4 nm process node and features the first instance of CoWoS-L packaging technology, which significantly enhances performance compared to the previous Hopper generation [3]
- The GB100's design includes two GPU dies, nearly doubling the die area compared to its predecessor [3]
HGX B200 Server Board
- Launched in March 2024, the HGX B200 server board connects eight GB100 GPUs via NVLink, supporting x86-based generative AI platforms [4]
- The board supports networking speeds up to 400 Gb/s through NVIDIA's Quantum-2 InfiniBand and Spectrum-X Ethernet platforms [4]
TechInsights Overview
- TechInsights provides in-depth intelligence on semiconductor innovations, aiding professionals in understanding design features and component relationships [6][7]
- The TechInsights Platform serves over 650 companies and 125,000 users, offering extensive reverse engineering and market analysis in the semiconductor industry [8]
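The teardown's memory figures are internally consistent, as a quick check shows. One caveat: the previous generation's 2 GB-per-die capacity is inferred from the stated 50% uplift, not given directly in the findings:

```python
# Sanity check of the HBM3E capacity reported in the teardown:
# eight HBM3E packages, each a 3D stack of eight 3 GB dies.
packages = 8
dies_per_package = 8
gb_per_die = 3

total_gb = packages * dies_per_package * gb_per_die
print(total_gb)  # 192, matching the stated maximum capacity in GB

# The 50% per-die uplift implies the prior generation held 2 GB per die.
prev_gb_per_die = 2  # inferred, not stated directly in the findings
uplift = (gb_per_die - prev_gb_per_die) / prev_gb_per_die
print(uplift)  # 0.5
```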