Core Viewpoint
- Meta and Nvidia have established a long-term partnership covering on-premises deployment, cloud, and AI infrastructure, marking a significant expansion of their technological collaboration [1][5].

Group 1: Partnership Details
- Meta will build ultra-large-scale data centers optimized for training and inference to support its long-term AI infrastructure roadmap [3].
- The collaboration involves deploying millions of Blackwell and Rubin GPUs, as well as Nvidia's Grace CPUs, with Nvidia's Spectrum-X Ethernet switches integrated into Facebook's open switching system [3].
- The partnership marks the first large-scale deployment of Nvidia's Grace CPUs and aims to improve the energy efficiency of AI computing [3][6].

Group 2: Strategic Implications
- Meta is expected to direct a significant portion of its projected capital expenditure of up to $135 billion this year toward expanding its Nvidia-powered data centers [7].
- The large-scale adoption of Nvidia's chips validates Nvidia's "full-stack" infrastructure strategy, which spans both CPUs and GPUs [7].
- Despite the partnership, Meta continues to explore alternatives, including the potential use of Google's Tensor Processing Units (TPUs) in its data centers by 2027 [7][8].

Group 3: Market Reactions
- Following the announcement, both Meta's and Nvidia's stock prices rose in after-hours trading, while AMD's stock fell more than 4% [3].
Millions of chips! Nvidia and Meta announce a major partnership