Chip Giants Vie for "Thousandfold Compute" at CES as the AI Race Moves to a System-Level Showdown
Jin Rong Jie·2026-01-09 06:34

Group 1: Nvidia's Rubin Platform
- Nvidia's new AI supercomputing platform, Rubin, has entered full production; its Rubin GPU delivers five times the performance of the previous Blackwell platform [1][2]
- The Rubin platform integrates six new chips, achieving a 3.5x increase in training performance and a 2.8x increase in HBM4 memory bandwidth while cutting the cost of generating tokens to about one-tenth that of the previous generation [2]

Group 2: AMD's Performance Enhancements
- AMD announced the MI455X AI chip, which offers a 10x performance improvement over the MI355X, and set an ambitious goal of a 1000x increase in AI performance over the next four years [3]
- The MI455X is built on 2nm/3nm process technology and packs 320 billion transistors; the flagship model is equipped with 432GB of HBM4 memory [3]

Group 3: Memory Technology Breakthroughs
- SK Hynix showcased its next-generation HBM4 product, a 16-layer, 48GB part aimed at training GPT-5-class models [3]
- HBM4 is crucial for AI chip makers: Nvidia's Rubin architecture pairs a Vera CPU with two Rubin GPUs, each with eight HBM4 interfaces [3]

Group 4: Intel's Process Comeback and PC Market Competition
- Intel launched Panther Lake, the third-generation Core Ultra processor and its first consumer-grade product built on the 18A process, achieving a 60% multi-threaded performance improvement over the previous generation [4]
- The first consumer laptops featuring third-generation Core Ultra processors open pre-sales on January 6, 2026, with global availability starting January 27, 2026 [4]

Group 5: System Integration as the New Battlefield
- The AI competition has evolved from a race in raw chip performance into a contest of whole-system efficiency spanning storage, interconnect, cooling, and software [5]
- AMD's Helios platform features a liquid-cooled, modular design, reaching 2.9 exaflops of AI compute and carrying 31TB of HBM4 memory [5][6]
- Nvidia likewise demonstrated system-level capabilities with its Alpamayo open inference-model series for autonomous-vehicle development, underscoring the shift toward system integration as the core competitive factor in AI infrastructure [6]
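For scale, AMD's stated goal of a 1000x AI performance gain over four years implies a compound annual multiplier of roughly 5.6x, since the yearly multiplier m must satisfy m^4 = 1000. A minimal sketch of that arithmetic (the yearly breakdown is an illustration, not an AMD roadmap):

```python
# AMD's stated target: 1000x AI performance over 4 years.
# The implied compound annual multiplier m satisfies m**4 == 1000.
target_gain = 1000.0
years = 4

annual_multiplier = target_gain ** (1 / years)
print(f"Implied annual multiplier: {annual_multiplier:.2f}x")  # ~5.62x

# Cumulative gain at the end of each year under constant annual scaling.
cumulative = [round(annual_multiplier ** y, 1) for y in range(1, years + 1)]
print(cumulative)  # [5.6, 31.6, 177.8, 1000.0]
```

Put differently, hitting the target requires sustaining well over 5x year-over-year gains, which is why the article frames the race as system-level (memory, interconnect, cooling, software) rather than chip-level alone.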