OpenAI partners with Broadcom to build custom AI chips, adding to Nvidia and AMD deals
CNBC·2025-10-13 13:04

Core Insights
- OpenAI and Broadcom have officially announced a partnership to develop and deploy 10 gigawatts of custom AI accelerators, adding to OpenAI's existing infrastructure deals with Nvidia and AMD [2][3]
- Following the announcement, Broadcom's shares rose more than 10% in premarket trading [2]
- OpenAI has been collaborating with Broadcom for 18 months and plans to begin deploying OpenAI-designed chips late next year [3]

Group 1: Partnership Details
- The partnership aims to create a complete system spanning networking, memory, and compute, all tailored to OpenAI's workloads [5]
- By designing its own chips, OpenAI expects to reduce compute costs and optimize infrastructure spending [5]
- A 1-gigawatt data center is estimated to cost around $50 billion, of which roughly $35 billion typically goes to chips at current Nvidia pricing [5] (see the back-of-the-envelope sketch below)

Group 2: Market Impact
- Broadcom has been a major beneficiary of the generative AI boom, with its custom AI chips, known as XPUs, in high demand from major tech companies [8]
- Broadcom's stock is up 40% this year after more than doubling in 2024, and its market capitalization now exceeds $1.5 trillion [8]

Group 3: Future Projections
- OpenAI President Greg Brockman highlighted the use of AI models to make chip design more efficient, achieving significant reductions in chip area [9]
- Broadcom's CEO stressed that far more compute capacity is needed to develop better frontier models and pursue superintelligence [10]
- OpenAI currently runs on just over 2 gigawatts of compute capacity, which has been enough to scale ChatGPT and launch new services, but demand is rising rapidly [11]
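For a sense of scale, the per-gigawatt figures cited above can be extrapolated to the full 10-gigawatt commitment. The sketch below is not from the article; it simply assumes the ~$50 billion per gigawatt total-cost estimate and the ~$35 billion chip share scale linearly with capacity:

```python
# Back-of-the-envelope extrapolation of the cited cost estimates.
# Assumption (not stated in the article): costs scale linearly with capacity.

COST_PER_GW_BILLION = 50        # cited estimate: ~$50B per 1-GW data center
CHIP_SHARE_PER_GW_BILLION = 35  # cited estimate: ~$35B of that goes to chips
PLANNED_CAPACITY_GW = 10        # capacity announced in the Broadcom deal

total_cost = PLANNED_CAPACITY_GW * COST_PER_GW_BILLION
chip_cost = PLANNED_CAPACITY_GW * CHIP_SHARE_PER_GW_BILLION

print(f"Implied build-out cost: ~${total_cost}B")  # ~$500B
print(f"Implied chip spend:     ~${chip_cost}B")   # ~$350B
```

Under that linear assumption, the deal implies an overall build-out on the order of $500 billion, roughly $350 billion of it for chips, which helps explain why OpenAI sees designing its own accelerators as a lever on infrastructure spending.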