Core Insights

- Morgan Stanley indicates that OpenAI, backed by Microsoft, may use Google's Tensor Processing Units (TPUs) for its AI inference workloads, a significant endorsement of Google's hardware technology [1]
- Adopting Google's TPUs marks a diversification of OpenAI's chip suppliers; the company previously relied solely on NVIDIA chips for both training and inference [1][2]
- The partnership is expected to accelerate the growth of Google Cloud's business and strengthen market confidence in Google's AI chip capabilities [1]

Company and Industry Analysis

- OpenAI joins Apple, Safe Superintelligence, and Cohere as one of Google's most notable TPU customers, a testament to Google's decade-long investment in AI infrastructure [2]
- Although OpenAI reportedly cannot access Google's most advanced TPUs, its decision to work with Google underscores the latter's leading position in the broader Application-Specific Integrated Circuit (ASIC) ecosystem [2]
- The move may be driven in part by the constrained supply of NVIDIA GPUs amid high demand, and it could weigh on Amazon's AWS and its custom Trainium chips [2]
- The collaboration lets OpenAI run AI workloads across major cloud providers, including Google Cloud, Microsoft Azure, Oracle, and CoreWeave, with Amazon a notable absence [2]
Morgan Stanley: OpenAI partnership demonstrates Google's (GOOGL.US) AI chip strength