Investment Rating
- The report indicates strong growth potential for the ASIC market, with a projected market size of $42.9 billion by 2028, reflecting a CAGR of 45.4% [4][25].

Core Insights
- ASICs are tailored for specific algorithms and applications, providing high computational efficiency and energy savings, particularly in AI workloads [5][38].
- Demand for ASICs is driven by the rapid growth of AI applications and the need for optimized computing solutions in data centers [4][25].
- Major cloud service providers (CSPs) are significantly increasing their capital expenditures, with combined spending of $170.8 billion in 2024, a 56% year-over-year increase, reflecting intense competition for AI capabilities [13].

Summary by Sections

1. ASIC Chip Market Outlook
- The ASIC market is expected to grow from approximately $6.6 billion in 2023 to a forecast $42.9 billion by 2028, at which point it would represent 25% of the data center accelerated computing chip market [25][26].
- Rising demand for AI computing is expected to lift the ASIC share of this market from 16% in 2023 to 25% by 2028 [25].

2. Comparison of ASIC and GPU
- ASICs are designed for specific tasks and offer superior energy efficiency compared to general-purpose GPUs [32][38].
- The unit cost of computing power for ASICs is lower than that of GPUs: Google's TPU v5 and Amazon's Trainium 2 cost roughly 70% and 60% of NVIDIA's H100, respectively, per unit of compute [39][40].
- ASICs are used primarily in inference scenarios and are beginning to penetrate training, while GPUs remain dominant in training due to their flexibility and parallel processing capabilities [46][48].

3. Self-Developed AI ASICs by Major CSPs
- Google's TPU has evolved through multiple generations, with the latest TPU v6 expected to deliver significant performance improvements [58].
- Amazon's Trainium 2 chip has achieved 430 TFLOPS of FP16/BF16 performance, a 4x increase over its predecessor [76].
- Microsoft's Maia 100 chip is designed for AI workloads on Azure, delivering 3200 TFLOPS of performance and a high memory bandwidth of 1.8 TB/s [88].
- Meta's MTIA v2 chip, released in 2024, has significantly improved performance metrics, with dense and sparse computing capabilities reaching 354 TFLOPS and 708 TFLOPS, respectively [99].

4. Related Companies
- Broadcom is positioned as a leading player in the AI ASIC market, targeting over $10 billion in AI chip revenue in 2024, representing 35% of its total revenue [118].
- Marvell is recognized as a top-tier ASIC manufacturer, providing customized computing products for major North American cloud providers [5].
- The report highlights strategic collaborations between Broadcom and its clients, enabling rapid product development and deployment in the ASIC space [125].
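The report's headline growth figure can be checked directly from its own market-size numbers. A minimal sketch, using only the $6.6 billion (2023) and $42.9 billion (2028) figures quoted above:

```python
# Sanity-check the implied CAGR from the report's market-size figures.
size_2023 = 6.6    # USD billions, 2023 ASIC market size (from the report)
size_2028 = 42.9   # USD billions, 2028 forecast (from the report)
years = 2028 - 2023

# CAGR = (ending value / starting value)^(1/years) - 1
cagr = (size_2028 / size_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 45.4%
```

The result matches the 45.4% CAGR cited in the report, confirming the figures are internally consistent.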
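The relative unit-cost figures in the ASIC-versus-GPU comparison can also be restated as a compute-per-dollar advantage. A minimal sketch, assuming the report's ratios (TPU v5 at ~70% and Trainium 2 at ~60% of the H100's cost per unit of compute) and normalizing the H100 to 1.0:

```python
# Convert relative unit cost of compute into compute-per-dollar advantage
# versus NVIDIA's H100 (H100 normalized to 1.0). Ratios are from the report.
relative_cost = {"TPU v5": 0.70, "Trainium 2": 0.60}

for chip, cost in relative_cost.items():
    advantage = 1 / cost  # cheaper per unit of compute -> more compute per dollar
    print(f"{chip}: {advantage:.2f}x compute per dollar vs H100")
# TPU v5: 1.43x compute per dollar vs H100
# Trainium 2: 1.67x compute per dollar vs H100
```

In other words, at the report's quoted ratios these ASICs deliver roughly 1.4x to 1.7x more compute per dollar than the H100 baseline.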
Technology Outlook Special Topic: AI ASIC: The Next Chapter for Compute Chips
Southwest Securities·2024-12-16 13:22