Broadcom Far Ahead, Marvell Under Pressure
半导体行业观察·2026-01-30 02:43

Group 1

- The competition for custom AI chips is accelerating, with major cloud and AI providers rapidly expanding their deployment of AI server computing systems based on Application-Specific Integrated Circuits (ASICs) to handle specialized training and inference workloads [2]
- Counterpoint Research predicts that shipments of AI server computing ASICs from the top 10 hyperscale data center operators will double between 2024 and 2027, driven by demand for Google's Tensor Processing Units (TPUs), AWS Trainium clusters, and ramping production of Meta's MTIA and Microsoft's Maia chips [2][3]
- Despite competition from the growing Google-MediaTek alliance, Broadcom is expected to remain the top AI server computing ASIC design partner, capturing about 60% market share by 2027, while Marvell Technology Inc. is anticipated to see its design service share decline to around 8% [3]

Group 2

- The market for AI server computing ASICs is undergoing a structural transformation, shifting from a concentrated duopoly dominated by Google and AWS in 2024 to a more diversified landscape by 2027, with Meta and Microsoft contributing significantly as they accelerate internal chip projects [3]
- The broader strategy of hyperscale data center operators is to reduce reliance on commercial GPUs and use custom chips tailored to specific workloads to optimize performance per watt [4]
- TSMC continues to dominate manufacturing, serving as the preferred foundry for nearly all of the top 10 AI server computing ASIC designers and covering both front-end and most back-end production [4]