AI Chip Market & Custom Silicon
- Broadcom's AI chip run rate is almost $20 billion, with over half coming from Google's TPU [1]
- OpenAI aims to cut costs per gigawatt by 30-40% by partnering with Broadcom to ramp custom silicon, following the Google TPU playbook [2]
- Broadcom helps lower chip costs, the largest single component of a gigawatt buildout, by 30-40% [3]
- Custom silicon such as Google's TPU offers performance advantages that merchant silicon cannot match [9][10]

Hyperscaler Strategies & Competition
- Google's infrastructure costs are the lowest among hyperscalers thanks to the success of the TPUs it co-designs with Broadcom [4][6]
- Amazon is using Marvell for Trainium, and Microsoft is also working with Marvell, but neither has matched Google's success with Broadcom [5][6]
- OpenAI is pursuing custom silicon with Broadcom, mirroring Google's approach, to optimize both performance and cost [6][10]

Data Center Cost & Efficiency
- Chips account for 60-70% of data center costs, making chip cost optimization crucial (see the worked example after this list) [6]
- Custom silicon allows performance per watt to be optimized, addressing power constraints and minimizing latency [9]
- Companies are exploring ways to run large language models more efficiently, including tiny recursive models [8][9]
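To make the cost claims above concrete, here is a minimal back-of-the-envelope sketch, assuming the 30-40% reduction applies to the chip portion of the bill and that chips are 60-70% of a gigawatt buildout (the source could also mean 30-40% off the total, so treat this only as an illustration of how the figures combine):

```python
# Illustrative calculation, not from the source: if chips are 60-70% of a
# gigawatt buildout and custom silicon cuts that chip spend by 30-40%,
# the implied savings on the total buildout is the product of the two.

def total_savings(chip_share: float, chip_cost_reduction: float) -> float:
    """Fraction of total buildout cost saved by cheaper chips."""
    return chip_share * chip_cost_reduction

for chip_share in (0.60, 0.70):          # chips as a share of buildout cost
    for reduction in (0.30, 0.40):       # cost reduction on the chips themselves
        saved = total_savings(chip_share, reduction)
        print(f"chip share {chip_share:.0%}, chip cost cut {reduction:.0%} "
              f"-> ~{saved:.0%} off the total buildout")
```

Under these assumptions the implied total savings lands in roughly the 18-28% range per gigawatt, which is why chip cost is the lever every hyperscaler is pulling first.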
OpenAI Targets Custom Silicon in Broadcom Deal