Jensen Huang Teases Chips "the World Has Never Seen," Says All Technologies Are Approaching Their Limits
21st Century Business Herald · 2026-02-19 12:48
Core Viewpoint
- Nvidia CEO Jensen Huang announced that "unprecedented" new chips will be unveiled at the upcoming GTC 2026 conference, which is expected to further solidify Nvidia's leadership in the AI infrastructure sector [1]

Group 1: Upcoming Products
- The new products are speculated to focus on two main directions: Rubin-series derivative chips, such as the previously leaked Rubin CPX, and the next-generation Feynman architecture chips, which are considered potentially revolutionary and may employ broader SRAM integration and 3D stacking technology [3][4]
- Nvidia has already launched the Vera Rubin AI series at CES 2026, with six chips entering full-scale production [3]

Group 2: Strategic Partnerships
- Nvidia emphasizes that extensive acquisitions and collaborations are key to maintaining its lead in the AI race, highlighting partnerships with strong collaborators and startups across the entire AI technology stack [3]
- A recently announced strategic partnership with Meta focuses on on-premises deployment, cloud, and AI infrastructure, and will support Meta's large-scale data centers optimized for training and inference [4]
Jensen Huang Teases "Never-Before-Seen" New Chips; Next-Generation Feynman Architecture May Take Center Stage
Hua Er Jie Jian Wen · 2026-02-19 07:34
Core Insights
- NVIDIA CEO Jensen Huang announced that the company will unveil new chip products "the world has never seen" at the upcoming GTC conference, sparking significant market interest in NVIDIA's next-generation product roadmap [1]
- The GTC keynote will take place on March 15 in San Jose, California, focusing on the next phase of the AI infrastructure race [1]

Potential New Products
- The new products are speculated to fall into two main categories:
  1. Derivative chips from the Rubin series, such as the previously leaked Rubin CPX, following the recent launch of the Vera Rubin AI series, which includes six chips now in full production [2]
  2. A potentially revolutionary Feynman architecture chip, which may use broader SRAM integration and possibly 3D stacking technology for Language Processing Units (LPUs), although this has not been officially confirmed [2]

Market Demand and Product Evolution
- NVIDIA is responding to shifting computational demands, with the center of gravity moving from pre-training to inference, as indicated by the introduction of Grace Blackwell Ultra and Vera Rubin [3]
- The Feynman architecture is expected to be deeply optimized for inference scenarios, addressing performance bottlenecks in latency and memory bandwidth, a shift that matters significantly to cloud service providers and enterprise customers that rely on AI inference [3]
- Huang emphasized broader partnerships and investment strategies, signaling NVIDIA's transition from chip supplier to AI ecosystem builder, as it aims to maintain its lead in the AI infrastructure competition through acquisitions and collaborations [3]