Analysts: Combined capex of the eight major CSPs expected to exceed US$420 billion this year, up 61% year-on-year
Zheng Quan Shi Bao Wang · 2025-10-13 11:00
On October 13, TrendForce released its latest survey report on CSP capital expenditure. The report shows that as AI server demand expands rapidly, the world's major cloud service providers (CSPs) are scaling up purchases of NVIDIA rack-scale GPU solutions, building out data centers and other infrastructure, and accelerating in-house AI ASIC development. This is projected to drive the combined 2025 capital expenditure of the eight major CSPs (Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) past US$420 billion, roughly equal to 2023 and 2024 capex combined, for year-on-year growth as high as 61%.

Meta is strengthening its collaboration with Broadcom and expects to mass-produce MTIA v2 in Q4 2025, improving inference performance and reducing latency. TrendForce estimates that MTIA shipments in 2025 will be deployed mainly on Meta's internal AI platforms and recommendation systems; once the HBM-equipped MTIA v3 launches in 2026, overall shipment volume is expected to more than double.

Microsoft plans for GUC to assist with mass production of Maia v2, expected to start in the first half of 2026. In addition, Maia v3's mass-production schedule has been delayed due to design changes, and in the near term Microsoft's in-house chip shipment volume is expected to ...
TrendForce: Total 2025 capex of the eight major CSPs to reach US$420 billion, up 61% year-on-year
Zhi Tong Cai Jing · 2025-10-13 05:45
Core Insights
- The demand for AI servers is rapidly expanding, leading major cloud service providers (CSPs) to increase procurement of NVIDIA GPU rack solutions and expand data center infrastructure, with projected 2025 capital expenditure exceeding US$420 billion, roughly equal to 2023 and 2024 capex combined and a 61% year-on-year increase [1]
- By 2026, total capital expenditure of the eight major CSPs is expected to surpass US$520 billion, a 24% year-on-year increase, as spending shifts from revenue-generating equipment toward servers and GPUs, prioritizing long-term competitiveness over short-term profits [1]

Group 1: AI Server Demand and Capital Expenditure
- The eight major CSPs (Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) are expected to see combined 2025 capital expenditure surpass US$420 billion, driven by demand for AI server solutions [1]
- Demand for the GB200/GB300 Rack AI solutions is anticipated to grow beyond expectations, with significant interest from North America's top four CSPs and other companies such as Tesla and CoreWeave [4]
- The capital expenditure structure is shifting toward assets such as servers and GPUs, indicating a focus on strengthening long-term market share and competitiveness [1]

Group 2: In-house Chip Development
- North America's top four CSPs are intensifying their AI ASIC development to enhance autonomy and cost control in generative AI and large language model computation [5]
- Google is collaborating with Broadcom on the TPU v7p, expected to ramp in 2026 and replace the TPU v6e as the core AI acceleration platform [6]
- AWS is set to deploy Trainium v2 by the end of 2025, with in-house ASIC shipments projected to double in 2025, the highest growth rate among the major players [6]
- Meta is enhancing its collaboration with Broadcom, anticipating mass production of MTIA v2 in Q4 2025, which will significantly improve inference performance [6]
- Microsoft plans to produce Maia v2 with GUC's assistance, but its in-house chip shipment volume is expected to be limited in the near term due to delays in Maia v3 production [6]
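As a quick sanity check on the figures above (an illustrative sketch, not part of the report), the stated 2025 total and the two growth rates can be cross-checked against each other; the 2023/2024 baselines below are inferred from the stated numbers, not quoted by TrendForce:

```python
# Sanity-check the capex figures quoted in the report.
# Stated: 2025 capex ~US$420B (+61% YoY), 2026 projected +24% YoY,
# and 2025 roughly equals 2023 + 2024 combined.

capex_2025 = 420.0        # USD billions, eight major CSPs (stated)
yoy_growth_2025 = 0.61    # 61% year-on-year growth (stated)
yoy_growth_2026 = 0.24    # 24% projected growth for 2026 (stated)

# Implied 2024 baseline from the stated 61% growth (inferred, not stated)
capex_2024 = capex_2025 / (1 + yoy_growth_2025)

# If 2025 spend roughly equals 2023 + 2024 combined, back out 2023 (inferred)
capex_2023 = capex_2025 - capex_2024

# Projected 2026 total from the stated 24% growth
capex_2026 = capex_2025 * (1 + yoy_growth_2026)

print(f"Implied 2024 capex: ~${capex_2024:.0f}B")
print(f"Implied 2023 capex: ~${capex_2023:.0f}B")
print(f"Projected 2026 capex: ~${capex_2026:.0f}B")
```

The projected 2026 total works out to roughly US$521 billion, consistent with the "over US$520 billion" figure cited in both articles.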
Research Report | 2026 CSP capital expenditure projected to reach US$520 billion, with GPU procurement and ASIC development the core drivers of the record high
TrendForce集邦 · 2025-10-13 04:08
Oct. 13, 2025 | Industry Insights

According to TrendForce's latest survey, as AI server demand expands rapidly, the world's major cloud service providers (CSPs) are scaling up purchases of NVIDIA rack-scale GPU solutions, building out data centers and other infrastructure, and accelerating in-house AI ASIC development. This is projected to drive the combined 2025 capital expenditure of the eight major CSPs (Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) past US$420 billion, roughly equal to 2023 and 2024 capex combined, for year-on-year growth as high as 61%.

TrendForce states that in 2026, with GB/VR-series AI rack solutions continuing to ramp, the total capital expenditure of the eight major CSPs is expected to set a new record, growing 24% year-on-year to more than US$520 billion. In addition, the spending structure has shifted from equipment that directly generates revenue toward assets such as servers and GPUs, meaning that consolidating medium- to long-term competitiveness and market share takes priority over improving short-term profitability.

In 2025, the GB200/GB300 Rack is the rack-scale AI solution on which CSPs are focusing their deployments; demand growth will ...
IP Design Services Outlook: 2026 ASIC Market Dynamics
2025-05-22 05:50
Summary of Conference Call Notes

Industry Overview
- The conference call focuses on ASIC (Application-Specific Integrated Circuit) market dynamics, particularly involving major players like AWS, Google, Microsoft, and META, with projections extending into 2026 and beyond [1][2][5]

Key Company Insights

AWS
- AWS has resolved issues with Trainium 3 and continues to secure orders from downstream suppliers. Development of Trainium 4 has commenced, with a contract signing expected soon [2][5]

Google
- Google is progressing steadily with its TPU series; the TPU v6p features advanced specifications including multiple compute and I/O dies. The company is anticipated to become a top customer for GUC due to its rapid ramp-up in CPU development [2][10]
- The specifications of Google's TPU chips are significantly higher than competitors', with the TPU v6p and TPU v7p expected to carry ASPs of US$8,000 and higher, respectively [2]
- Revenue from Google's 3nm server CPU is expected to contribute to GUC's revenue sooner than previously anticipated, moving from Q4 2025 to Q3 2025 [10]

Microsoft
- Microsoft is working on its Maia v2 ASIC, targeting a ramp of 500,000 chips in 2026. However, the project has faced delays, pushing the tape-out timeline from Q1 2025 to Q2 2025 [3][4]
- Chip allocation has shifted, with expectations of 40-60k chips for MSFT/GUC and 400k chips for Marvell in 2026 [3]

META
- META is transitioning from MTIA v2 to MTIA v3, with expectations of ramping 100-200k chips for MTIA v2 and 200-300k chips for MTIA v3 in 2026 [2]

Non-CSPs
- Companies like Apple, OpenAI, and xAI are entering the ASIC server market, with many expected to tape out in 2H25 and ramp in 2H26. These companies are likely to collaborate with Broadcom for high-end ASIC specifications [7][8][9]

Financial Projections
- GUC's FY25 revenue is expected to exceed previous forecasts, driven by contributions from Google and crypto projects. However, concerns remain about FY26 growth without crypto revenue, with a projected 50% YoY growth in MP revenue [10][11]
- Revenue contributions from ASIC projects in 2026 include significant figures such as US$16,756 million from the TPU v6p and US$2,616 million from Trainium 3 [18]

Additional Insights
- The competitive landscape for ASIC design services is intensifying, with Broadcom and MediaTek entering the fray alongside existing players like Marvell and GUC [4][15]
- The potential impact of geopolitical factors on HBM2E clients was discussed, highlighting the resilience of Faraday in the face of possible restrictions [14]

Conclusion
- The ASIC market is poised for significant growth, driven by advancements in technology and increasing demand from both CSPs and non-CSPs. Key players are adapting their strategies to navigate challenges and capitalize on emerging opportunities in the sector [1][5][7]
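A back-of-envelope reading of the figures above (an illustrative sketch, not from the call notes): since the notes give both a 2026 revenue figure for the TPU v6p and an ASP floor of US$8,000, an implied upper bound on unit volume can be derived. Trainium 3's ASP is not stated, so no volume is computed for it.

```python
# Implied TPU v6p unit volume from the quoted 2026 revenue and ASP floor.
# Both inputs are stated in the notes; the derived volume is an inference.

tpu_v6p_revenue_usd = 16_756e6  # US$16,756 million in 2026 (stated)
tpu_v6p_asp_usd = 8_000         # "ASPs of US$8,000 and higher" (stated floor)

# Dividing revenue by the ASP floor gives a volume ceiling: if actual
# ASPs are higher, actual unit volume is lower.
implied_units = tpu_v6p_revenue_usd / tpu_v6p_asp_usd

print(f"Implied TPU v6p 2026 volume at the US$8,000 ASP floor: "
      f"~{implied_units / 1e6:.1f}M units")
```

At the stated floor this works out to roughly 2.1 million units, an upper bound on shipments given that the ASP is quoted as "US$8,000 and higher".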
Research Report | AI chip self-sufficiency accelerates as cloud giants race to develop in-house ASICs
TrendForce集邦 · 2025-05-15 07:15
Core Insights
- The article discusses the accelerating trend of AI server demand driving major North American Cloud Service Providers (CSPs) to develop their own Application-Specific Integrated Circuits (ASICs) to reduce reliance on external suppliers like NVIDIA and AMD [1][2][3][4][5]

Group 1: North American CSP Developments
- Google has launched the TPU v6 Trillium, focusing on energy efficiency and optimization for large AI models, with plans to significantly replace the TPU v5 by 2025 [2]
- AWS is collaborating with Marvell on the Trainium v2, which supports generative AI and large language model training, and is expected to see substantial growth in ASIC shipments by 2025 [2]
- Meta is developing the next-generation MTIA v2 in partnership with Broadcom, emphasizing energy efficiency and a low-latency architecture for AI inference workloads [3]
- Microsoft is accelerating its ASIC development with the Maia series chips, optimized for Azure cloud services, and is collaborating with Marvell on the Maia v2 design [3]

Group 2: Chinese AI Supply Chain Autonomy
- Huawei is actively developing the Ascend chip series for the domestic market, targeting applications in LLM training and smart city infrastructure, which may challenge NVIDIA's market position in China [4]
- Cambricon's MLU AI chip series is aimed at cloud service providers for AI training and inference, with plans to advance its solutions in the cloud AI market by 2025 [4]
- Chinese CSPs such as Alibaba, Baidu, and Tencent are rapidly developing their own AI ASICs, with Alibaba's T-Head launching the Hanguang 800 AI inference chip and Baidu working on the Kunlun III chip [5]