GB100 GPU
NVIDIA Eyes a New Packaging Type: Is It Abandoning CoWoS?
Semiconductor Industry Observation · 2025-07-31 01:20
Core Viewpoint
- NVIDIA is considering adopting CoWoP as the packaging solution for its upcoming Rubin GPU, signaling a potential shift away from the established CoWoS technology [3][7].

Group 1: CoWoP Technology Overview
- CoWoP (Chip-on-Wafer-on-Platform PCB) offers several advantages, including improved signal and power integrity, reduced substrate loss, and voltage regulation placed closer to the main GPU chip [4][5].
- The technology allows cooling solutions to contact the silicon die directly, eliminating the packaging lid and thereby reducing cost [4][5].
- NVIDIA has begun early testing of CoWoP with a sample based on the GB100 GPU, aiming to evaluate the manufacturing process and electrical functionality [4][7].

Group 2: Future Plans and Testing
- NVIDIA plans to begin testing a fully functional GB100 CoWoP device in August 2025; it retains the same dimensions as the current package, with the evaluation focused on manufacturability and thermal design [4][7].
- The GR100 CoWoP will serve as a testing platform for the GR150 "Rubin" solution, which is expected to enter production by late 2026, with market availability anticipated in 2027 [7].

Group 3: Market Dynamics and Competition
- Morgan Stanley predicts that NVIDIA will dominate CoWoS wafer demand in 2026, requiring an estimated 595,000 wafers, or about 60% of the global market [10][11].
- Competition for CoWoS capacity is intensifying, with TSMC expected to be the main beneficiary as global demand for CoWoS wafers is projected to grow from 370,000 in 2024 to 1 million in 2026 [9][11].
- Other tech giants such as AMD and Broadcom are also expected to secure significant shares of CoWoS capacity, underscoring the competitive landscape in the AI chip market [10][11].
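The forecast figures above hang together arithmetically; a minimal sketch checking them (wafer counts are from the article, the 60% share and implied growth rate are derived, and the two-year CAGR is my own illustrative calculation, not a figure from Morgan Stanley):

```python
# Sanity-check the CoWoS forecast figures cited above.
nvidia_2026 = 595_000    # wafers Morgan Stanley attributes to NVIDIA in 2026
global_2026 = 1_000_000  # projected global CoWoS wafer demand in 2026
global_2024 = 370_000    # global CoWoS wafer demand in 2024

nvidia_share = nvidia_2026 / global_2026                # NVIDIA's slice of 2026 demand
cagr = (global_2026 / global_2024) ** (1 / 2) - 1       # compound growth over two years

print(f"NVIDIA share of 2026 demand: {nvidia_share:.1%}")  # 59.5%, i.e. "about 60%"
print(f"Implied 2024-2026 CAGR:      {cagr:.1%}")
```

The 595,000-wafer estimate works out to 59.5% of the projected 1 million wafers, matching the "about 60%" claim, and the 2024-to-2026 trajectory implies demand compounding at roughly 64% per year.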
TechInsights Releases Initial Findings of its NVIDIA Blackwell HGX B200 Platform Teardown
GlobeNewswire News Room· 2025-04-14 14:00
Core Insights
- TechInsights released early-stage findings on NVIDIA's Blackwell HGX B200 platform, highlighting its advanced AI and HPC capabilities in data centers [1]
- The GB100 GPU features SK hynix's HBM3E memory and TSMC's advanced packaging architecture, marking significant technological advancements [1][2]

HBM3E Supplier
- The GB100 GPU incorporates eight HBM3E packages, each with eight memory dies in a 3D configuration, achieving a maximum capacity of 192 GB [2]
- The per-die capacity of 3 GB represents a 50% increase over the previous generation of HBM [2]

CoWoS-L Packaging Technology
- The GB100 GPU utilizes TSMC's 4 nm process node and features the first instance of CoWoS-L packaging technology, which significantly enhances performance compared to the previous Hopper generation [3]
- The GB100's design includes two GPU dies, nearly doubling the die area compared to its predecessor [3]

HGX B200 Server Board
- Launched in March 2024, the HGX B200 server board connects eight GB100 GPUs via NVLink, supporting x86-based generative AI platforms [4]
- The board supports networking speeds up to 400 Gb/s through NVIDIA's Quantum-2 InfiniBand and Spectrum-X Ethernet platforms [4]

TechInsights Overview
- TechInsights provides in-depth intelligence on semiconductor innovations, aiding professionals in understanding design features and component relationships [6][7]
- The TechInsights Platform serves over 650 companies and 125,000 users, offering extensive reverse engineering and market analysis in the semiconductor industry [8]
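The HBM3E figures in the teardown summary can be cross-checked with simple arithmetic; a minimal sketch (stack count, dies per stack, and per-die capacity are from the article; the 2 GB prior-generation per-die figure is my assumption, inferred from the stated 50% uplift):

```python
# Cross-check the GB100 memory figures from the teardown summary.
hbm3e_stacks = 8     # HBM3E packages on the GB100 (from the article)
dies_per_stack = 8   # memory dies per 3D stack (from the article)
gb_per_die = 3       # GB per HBM3E die (from the article)
prev_gb_per_die = 2  # assumed prior-generation per-die capacity

total_gb = hbm3e_stacks * dies_per_stack * gb_per_die  # total on-package HBM
uplift = gb_per_die / prev_gb_per_die - 1              # generational per-die gain

print(f"Total HBM3E capacity:    {total_gb} GB")   # 192 GB, as reported
print(f"Per-die capacity uplift: {uplift:.0%}")    # 50%, as reported
```

Eight stacks of eight 3 GB dies gives exactly the reported 192 GB maximum, and 3 GB per die over an assumed 2 GB baseline reproduces the stated 50% increase.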