Core Insights
- The article discusses the accelerating trend of AI server demand driving major North American Cloud Service Providers (CSPs) to develop their own Application-Specific Integrated Circuits (ASICs) to reduce reliance on external suppliers such as NVIDIA and AMD [1][2][3][4][5].

Group 1: North American CSP Developments
- Google has launched the TPU v6 Trillium, focusing on energy efficiency and optimization for large AI models, with plans to largely replace the TPU v5 by 2025 [2].
- AWS is collaborating with Marvell on the Trainium v2, which supports generative AI and large language model training, and is expected to see substantial growth in ASIC shipments by 2025 [2].
- Meta is developing the next-generation MTIA v2 in partnership with Broadcom, emphasizing energy efficiency and a low-latency architecture for AI inference workloads [3].
- Microsoft is accelerating its ASIC development with the Maia series chips, optimized for Azure cloud services, and is collaborating with Marvell on the Maia v2 design [3].

Group 2: Chinese AI Supply Chain Autonomy
- Huawei is actively developing the Ascend chip series for the domestic market, targeting applications in LLM training and smart city infrastructure, which may challenge NVIDIA's market position in China [4].
- Cambricon's MLU AI chip series is aimed at cloud service providers for AI training and inference, with plans to advance its solutions in the cloud AI market by 2025 [4].
- Chinese CSPs such as Alibaba, Baidu, and Tencent are rapidly developing their own AI ASICs, with Alibaba's T-head launching the Hanguang 800 AI inference chip and Baidu working on the Kunlun III chip [5].
Research Report | AI Chip Self-Sufficiency Accelerates as Cloud Giants Race to Develop In-House ASICs
TrendForce · 2025-05-15 07:15