GB200/GB300 Rack

TrendForce: Total Capital Expenditure of the Eight Major CSPs Expected to Reach $420 Billion in 2025, Up 61% Year-on-Year
Zhi Tong Cai Jing · 2025-10-13 05:45
Core Insights
- Demand for AI servers is expanding rapidly, leading major cloud service providers (CSPs) to increase their procurement of NVIDIA GPU solutions and expand data center infrastructure. Their total capital expenditure is projected to exceed $420 billion in 2025, a 61% year-on-year increase that surpasses the combined spending of 2023 and 2024 [1]
- By 2026, total capital expenditure for the eight major CSPs is expected to exceed $520 billion, a 24% year-on-year increase (see the quick arithmetic check at the end of this summary), as the spending mix shifts toward servers and GPUs, prioritizing long-term competitiveness over short-term profits [1]

Group 1: AI Server Demand and Capital Expenditure
- The eight major CSPs (Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) are expected to see combined capital expenditure surpass $420 billion in 2025, driven by demand for AI server solutions [1]
- Demand for GB200/GB300 Rack AI solutions is anticipated to grow beyond expectations, with significant interest from North America's top four CSPs as well as companies such as Tesla and CoreWeave [4]
- The capital expenditure mix is shifting toward assets such as servers and GPUs, reflecting a focus on strengthening long-term market share and competitiveness [1]

Group 2: In-house Chip Development
- North America's top four CSPs are intensifying AI ASIC development to gain greater autonomy and cost control in generative AI and large language model computation [5]
- Google is collaborating with Broadcom on the TPU v7p, expected to ramp up in 2026 and replace the TPU v6e as its core AI acceleration platform [6]
- AWS is set to deploy the Trainium v2 by the end of 2025, with its in-house ASIC shipments projected to double in 2025, the highest growth rate among the major players [6]
- Meta is deepening its collaboration with Broadcom, with mass production of the MTIA v2 anticipated in Q4 2025, which should significantly improve inference performance [6]
- Microsoft plans to produce the Maia v2 with GUC's assistance, but its in-house chip shipment volume is expected to remain limited in the short term due to delays in Maia v3 development [6]
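
As a quick consistency check on the figures above, the short Python sketch below derives the 2024 spending level implied by the reported 61% growth and verifies that the 2026 projection matches the stated 24% growth. The $420 billion and $520 billion totals and both growth rates come from the article; the roughly $261 billion 2024 figure is back-of-envelope arithmetic, not a number reported by TrendForce.

# Back-of-envelope check of the capex figures cited above.
# Dollar amounts are in billions of USD. The 2025/2026 totals and growth
# rates are from the article; the 2024 level is implied, not reported.

capex_2025 = 420            # reported: > $420B total CSP capex in 2025
yoy_2025 = 0.61             # reported: +61% year-on-year

implied_2024 = capex_2025 / (1 + yoy_2025)
print(f"Implied 2024 capex: ~${implied_2024:.0f}B")   # ~$261B

capex_2026 = 520            # reported: > $520B projected for 2026
growth_2026 = (capex_2026 - capex_2025) / capex_2025
print(f"Implied 2026 growth: {growth_2026:.1%}")      # ~23.8%, consistent with the reported 24%

Since $261B + $420B for 2023 and 2024 would exceed $420B only if 2023 spending were above roughly $159B, the "surpasses the combined spending of 2023 and 2024" claim is plausible but cannot be verified from the figures given here alone.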