## Core Insights

- Amazon Web Services (AWS) and OpenAI have entered into a multi-year strategic partnership worth up to $38 billion, aimed at providing infrastructure support for core AI workloads [1][2]
- OpenAI will immediately begin using AWS computing resources, including hundreds of thousands of NVIDIA GPUs, to support a range of AI tasks, with full deployment expected by the end of 2026 [1]
- The collaboration enhances OpenAI's operational efficiency and scalability for its AI workloads, particularly in model training and inference [1][2]

## Company Summaries

- AWS will provide a newly designed AI computing cluster optimized for data center network bandwidth and GPU interconnect efficiency, enabling flexible resource scheduling for OpenAI [1]
- OpenAI CEO Sam Altman emphasized the importance of stable, efficient computing support for breakthroughs in frontier AI, while AWS CEO Matt Garman highlighted AWS's experience in large-scale AI infrastructure [2]
- The partnership builds on prior collaboration between AWS and OpenAI, in which OpenAI's open-source models were made available on AWS's Bedrock platform, expanding service offerings for enterprise clients [2]

## Industry Implications

- The partnership marks a major win for AWS in the global AI infrastructure competition and signals OpenAI's further expansion under a multi-cloud strategy, strengthening its long-term computing capabilities in the generative AI sector [2]
OpenAI Commits $38 Billion in Deal with Amazon: The Largest AI Compute Partnership to Date Begins