OpenAI No Longer "All In" on NVIDIA (NVDA.US): Turning to Google (GOOGL.US) TPUs to Break the Chip Monopoly!
智通财经网· 2025-06-30 02:20
Core Insights
- OpenAI has begun using Google-made AI chips, specifically Tensor Processing Units (TPUs), for its products including ChatGPT, marking a significant shift from its previous reliance on NVIDIA chips [1][2]
- The collaboration signals OpenAI's strategy of diversifying its chip suppliers, having historically depended on NVIDIA both for training AI models and for running inference workloads [1]
- OpenAI expects that leasing TPUs through Google Cloud will reduce its inference-related costs, potentially positioning TPUs as a cheaper alternative to NVIDIA GPUs [1]

Group 1
- The partnership between OpenAI and Google is a surprising collaboration between two major AI competitors, aimed at meeting OpenAI's growing computational needs [1]
- Morgan Stanley has released a report favorable to Google, arguing that if the agreement is confirmed, it would reflect Google's confidence in its long-term search business and accelerate Google Cloud's growth, supporting a valuation multiple above 18 times [1]

Group 2
- For Google, the deal coincides with its push to make its self-developed TPUs, previously used mainly for internal projects, available to external customers [2]
- The effort has already drawn interest from other tech giants such as Apple and from ChatGPT competitors, indicating broader market demand for Google's TPU technology [2]
- However, Google has reportedly not leased its most powerful TPU models to OpenAI, suggesting a strategy of reserving its most advanced versions for internal projects, including its own Gemini large language model [2]