Morgan Stanley: What does OpenAI's embrace of TPUs mean for Google (GOOGL.US), NVIDIA (NVDA.US), and Amazon (AMZN.US)?

Core Insights

- Morgan Stanley reports that OpenAI plans to use Google's TPUs for AI inference tasks, a potential landmark event in the AI chip sector with significant implications for tech giants such as Google, NVIDIA, and Amazon [1][2]
- OpenAI is reportedly in talks with Google Cloud to rent TPUs for AI workloads, although a spokesperson denied any large-scale deployment plans, acknowledging only early testing of some TPUs [1]
- Despite the denial, Morgan Stanley views the collaboration as a significant endorsement of Google's AI infrastructure capabilities, highlighting its leadership in custom AI chips [1]

Company Analysis

- OpenAI's choice of TPUs may be driven by NVIDIA's GPU supply constraints: with NVIDIA's rack-level products sold out, Google's TPUs offer a viable alternative [2]
- While NVIDIA remains dominant in the AI training market, with its revenue from Google projected to grow to over $20 billion by 2025, surging demand for inference may push companies to explore other options [2]
- OpenAI's AI workloads are now distributed across Google Cloud, Microsoft Azure, Oracle, and CoreWeave; Amazon AWS is notably absent, suggesting potential capacity limitations at AWS [2]
- The decision to use Google's older TPUs rather than AWS's custom Trainium chips suggests a perception that AWS's offerings carry lower technical credibility [2]

Market Implications

- Morgan Stanley maintains an "Overweight" rating on Google with a price target of $185, arguing that the growth potential of Google Cloud's business is not fully reflected in the stock price [2]
- The commercialization of TPUs is expected to serve as a new catalyst for Google's valuation [2]