Core Viewpoint
- OpenAI has shifted some of its AI computing orders from NVIDIA to Google, using Google's Tensor Processing Units (TPUs) for its AI products, primarily to reduce operating costs amid rising demand and high compute prices [1][2].

Group 1: Shift in AI Computing Power
- OpenAI's recent transition to Google's TPUs for products like ChatGPT marks a significant change in its supply chain strategy, moving away from reliance on NVIDIA GPUs provided by partners such as Microsoft and Oracle [2][3].
- The decision to adopt Google's TPUs is driven by the high prices and supply constraints of NVIDIA's GPUs, making TPUs a more cost-effective option for OpenAI [2].

Group 2: Market Implications
- Google aims to open its TPU chips to more cloud infrastructure providers, potentially challenging NVIDIA's dominance in the high-performance AI chip market if successful [2].
- The collaboration between OpenAI and Google indicates a diversification of OpenAI's supply chain and partnerships, suggesting a shift in power dynamics within the AI computing market [3].

Group 3: Google's AI Ecosystem
- Google has built a vertically integrated AI ecosystem combining hardware (TPUs), software (Gemini models), and applications (such as Gmail and Google Search), strengthening its competitive position in the AI landscape [2].
NVIDIA's business, poached by Google