Investment Rating
- The report maintains an "Outperform" rating for the industry [2]

Core Viewpoints
- NVIDIA unveiled the GB200 NVL4 superchip and the H200 NVL at the 2024 US Supercomputing Conference (SC24) [10]
- The GB200 NVL4 superchip integrates four Blackwell GPUs and two Grace CPUs, offering up to 2x performance for scientific computing, training, and inference; availability is expected in late 2025 [10][19]
- The H200 NVL, based on the Hopper architecture, is designed for low-power, air-cooled data centers; it offers 1.5x the memory and 1.2x the bandwidth of the H100 NVL, enabling large language model fine-tuning in hours and a 1.7x boost in inference performance [11][20]
- NVIDIA is collaborating with Foxconn to scale production in the US, Mexico, and Taiwan, leveraging the NVIDIA Omniverse platform to accelerate factory setup and reduce costs; Foxconn expects significant cost savings and an annual power reduction of over 30% at its Mexico plant [12][21]
- The report recommends continued monitoring of the AI server industry chain as the Blackwell series ramps up [12][22]

Industry Insights
- The GB200 NVL4 superchip is expected to significantly enhance performance for AI and scientific computing applications, with a focus on training and inference [10][19]
- The H200 NVL is tailored for data centers with flexible configurations, particularly those using air cooling, which accounts for about 70% of enterprise racks [11][20]
- Foxconn's use of NVIDIA Omniverse for virtual integration and testing is expected to optimize production efficiency and reduce costs, particularly at its Mexico facility [12][21]
Electronic Components Industry Commentary - SC24: NVIDIA Launches GB200 NVL4 Superchip and H200 NVL; Foxconn Is Scaling Up Blackwell Production
Haitong International · 2024-12-02 14:20