First use of non-NVIDIA chips: OpenAI rents Google TPUs to cut inference computing costs
Hua Er Jie Jian Wen·2025-06-28 03:29

Group 1
- OpenAI has begun renting Google's TPU chips, its first large-scale use of non-NVIDIA chips, aiming to reduce reliance on Microsoft's data centers and challenge NVIDIA's dominance of the GPU market [1]
- OpenAI's demand for computing power has surged: paid ChatGPT subscribers grew from 15 million at the beginning of the year to over 25 million, alongside hundreds of millions of free users [1]
- Companies such as Amazon, Microsoft, OpenAI, and Meta are developing their own inference chips to reduce dependence on NVIDIA and lower long-term costs [1]

Group 2
- OpenAI spent over $4 billion on NVIDIA server chips last year, split roughly evenly between training and inference, and its spending on AI chip servers is projected to approach $14 billion in 2025 [2]
- The shift to Google's TPUs was driven by the explosive popularity of ChatGPT's image-generation tool, which put immense pressure on OpenAI's inference servers hosted at Microsoft [2]
- Google has been developing TPU chips for about a decade and has offered them to cloud customers since 2017; other companies, including Apple and Meta, also rent Google's TPUs [2]

Group 3
- Google Cloud also rents out NVIDIA-based servers, since NVIDIA chips remain the industry standard, and these generate more revenue for Google than TPU rentals [3]
- Google has ordered over $10 billion worth of NVIDIA's latest Blackwell server chips and began providing them to select customers in February [3]
