1.15 Million Wafers Will Decide the 2026 "Chip War," as Apple, MediaTek, and OpenAI Join the Fray
36Kr·2026-01-09 12:15

Core Insights
- The article examines the ongoing competition between GPGPUs (general-purpose graphics processing units) and ASICs (application-specific integrated circuits) in the AI chip market, emphasizing the critical role of TSMC's CoWoS (Chip-on-Wafer-on-Substrate) advanced packaging capacity in shaping the future landscape of AI computing power [1][10][36]
- Jensen Huang predicts that data center revenues will reach $500 billion over the next six quarters, underscoring the financial stakes of this competition [1]

Group 1: Competition Dynamics
- AI demand for computing power keeps expanding, and advanced architectures, process technologies, and advanced packaging are the three key paths to progress [3][4]
- NVIDIA has established itself as the leader in GPGPUs through its CUDA ecosystem, while Google's TPU represents a successful ASIC approach, demonstrating the efficiency of custom architectures for specific algorithms [3][4]
- The GPGPU-versus-ASIC contest is not merely about raw performance; it also hinges on total cost of ownership (TCO) and the ability of large-scale users to optimize financial outcomes [27][28]

Group 2: CoWoS Capacity and Supply Chain
- TSMC's CoWoS capacity is projected to rise roughly tenfold, from about 12,000 wafers per month in December 2023 to an estimated 120,000 wafers per month by December 2026, equating to a total of about 1.15 million wafers for AI chips [12][13]
- Allocation of CoWoS capacity will be shaped by a complex interplay of technology, business, and geopolitical factors, with NVIDIA expected to secure nearly 60% of the capacity thanks to its early investments and strong demand [13][15]
- Of the CoWoS wafers distributed among major clients, NVIDIA is expected to receive around 660,000, while AMD and the ASIC camp will receive significantly less, underscoring NVIDIA's competitive advantage [16][20]

Group 3: Performance and Revenue Implications
- AI chip performance is closely tied to silicon interposer area: larger interposers accommodate more transistors and deliver higher performance [25][26]
- NVIDIA's GPUs are expected to command prices of $30,000 to $50,000 per unit, while ASICs such as Google's TPU are priced significantly lower, shaping revenue dynamics in the AI chip market [26]
- Leveraging its CoWoS capacity, NVIDIA is positioned to capture over 70% of AI acceleration chip market revenue and more than 90% of the profits, reinforcing its dominant market position [26][34]

Group 4: Future Outlook
- AI computing is likely to settle into a hybrid model combining GPGPU and ASIC technologies, with cloud giants using NVIDIA GPUs for cutting-edge model training and self-developed ASICs for cost-sensitive large-scale inference [35]
- The ongoing competition is characterized as a "boundary war" in which the GPGPU and ASIC ecosystems will coexist, with TSMC as the ultimate beneficiary thanks to its critical role in supplying CoWoS capacity [36]
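The capacity and allocation figures above can be cross-checked with simple arithmetic. The sketch below uses only the numbers cited in the article; treating the 1.15 million-wafer total as the full-year 2026 figure is an assumption made here for illustration.

```python
# Illustrative cross-check of the article's CoWoS figures.
# Assumption (not from TSMC): the 1.15M-wafer total covers calendar year 2026.

MONTHLY_DEC_2023 = 12_000    # wafers/month, per the article
MONTHLY_DEC_2026 = 120_000   # wafers/month, per the article
TOTAL_2026 = 1_150_000       # wafers for AI chips in 2026, per the article
NVIDIA_WAFERS = 660_000      # NVIDIA's reported allocation, per the article

# The ramp from Dec 2023 to Dec 2026 is a 10x increase in monthly output.
ramp = MONTHLY_DEC_2026 / MONTHLY_DEC_2023

# If 1.15M wafers span 2026, average monthly output is ~96k wafers,
# consistent with a ramp that ends the year at 120k/month.
avg_monthly_2026 = TOTAL_2026 / 12

# NVIDIA's 660k wafers out of 1.15M is ~57%, matching "nearly 60%".
nvidia_share = NVIDIA_WAFERS / TOTAL_2026

print(f"Monthly capacity ramp: {ramp:.0f}x over three years")
print(f"Implied average 2026 monthly output: {avg_monthly_2026:,.0f} wafers")
print(f"NVIDIA share of 2026 wafers: {nvidia_share:.0%}")
```

The three cited numbers are mutually consistent under this reading: a year averaging roughly 96,000 wafers per month and exiting at 120,000 per month totals about 1.15 million, of which NVIDIA's 660,000 is just under 60%.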