Inside OpenAI's In-House Chip: AI Began Optimizing the Chip Design 18 Months Ago, Outpacing Human Engineers
Broadcom (US:AVGO) | 量子位 (QbitAI) · 2025-10-14 05:39

Core Viewpoint
- OpenAI and Broadcom have announced a strategic collaboration to deploy AI accelerators at a 10 GW scale, marking a significant step in building the infrastructure needed to unlock AI's potential and meet computational demand [5][12][43]

Group 1: Collaboration Details
- OpenAI will design the AI accelerators and systems, while Broadcom will assist in their development and deployment, with full deployment expected by the end of 2029 [5][6]
- The 10 GW scale is equivalent to 10,000 MW, enough to power approximately 100 million 100-watt light bulbs, indicating the substantial power requirements of AI operations [10][11]
- OpenAI's CEO emphasized that this collaboration is crucial for creating infrastructure that benefits humanity and businesses, while Broadcom's CEO highlighted its significance in the pursuit of general artificial intelligence [12][13]

Group 2: Strategic Importance
- The collaboration underscores the importance of custom accelerators and Ethernet as core technologies in AI data centers, reinforcing Broadcom's leadership in AI infrastructure [13]
- For OpenAI, the partnership helps alleviate computational constraints, especially given ChatGPT's nearly 800 million weekly active users [14]

Group 3: Insights from Leadership
- OpenAI's President discussed the reasons for developing in-house chips, including a deep understanding of workloads, the necessity of vertical integration, and challenges faced in external collaborations [18][21]
- The decision to self-develop chips is driven by the need to address specific computational tasks that existing chips do not adequately cover, underscoring the importance of vertical integration [21][30]
- OpenAI's leadership has recognized that scaling is essential for achieving optimal results, as demonstrated in its past experience with reinforcement learning [27][28]

Group 4: Future Implications
- The self-developed chips are expected to improve efficiency, leading to better performance and cost-effectiveness in AI models [31]
- AI is playing a significant role in optimizing chip design, reportedly outperforming human engineers in speed and efficiency [32][34]
- OpenAI's "self-development + collaboration" strategy has been in the works for nearly two years, with ongoing efforts to design a dedicated inference chip [43]
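The 10 GW figure cited above is a unit conversion that can be checked with simple arithmetic; a minimal sketch (variable names are illustrative, not from the article):

```python
# Sanity-check the article's power figures:
# 10 GW of planned accelerator deployment, expressed in MW
# and as an equivalent number of 100-watt light bulbs.

gigawatts = 10
megawatts = gigawatts * 1_000        # 1 GW = 1,000 MW
watts = megawatts * 1_000_000        # 1 MW = 1,000,000 W

bulbs = watts // 100                 # each bulb draws 100 W

print(megawatts)  # 10000 (i.e. 10,000 MW)
print(bulbs)      # 100000000 (i.e. 100 million bulbs)
```

This confirms the article's equivalences: 10 GW is 10,000 MW, or the draw of roughly 100 million 100-watt bulbs.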