Core Insights
- Microsoft is opening a new chapter in its AI infrastructure by building a distributed "AI super factory" that links large data centers across several states, aiming to accelerate AI model training at unprecedented scale and speed [1][2]
- The company plans to double its data center footprint over the next two years to meet surging demand for computing power, underscoring its central position in the AI infrastructure sector [1][2]

Group 1: AI Super Factory Concept
- The "AI super factory" concept integrates geographically dispersed data centers into a single virtual supercomputer, departing from traditional data center designs [3]
- This distributed network will connect multiple sites, pooling tens of thousands of advanced GPUs, exabyte-scale storage, and millions of CPU cores to support training of future AI models with trillions of parameters [3][4]

Group 2: New Data Center Design and Technology
- The new "Fairwater" series of data centers is purpose-built for AI workloads, with the Atlanta facility spanning 85 acres and more than 1 million square feet [4]
- Key features include a high-density architecture, advanced chip systems built around NVIDIA's GB200 NVL72, efficient liquid cooling, and high-speed internal connectivity [4][5]

Group 3: AI WAN and Power Distribution Strategy
- Microsoft has deployed 120,000 miles of dedicated fiber optic cable to create an AI WAN, allowing data to travel between sites at near-light speed without congestion (see the latency sketch after this summary) [6]
- The decision to build across states rather than concentrating capacity in one location is driven by land and electricity supply considerations, ensuring that no single power grid is overburdened [6]

Group 4: Competitive Landscape
- Microsoft is not alone in this race; competitors such as Amazon, Meta Platforms, and Oracle are also making significant investments in data center infrastructure [7]
- By connecting data centers into a unified distributed system, Microsoft is positioning itself to meet the substantial demands of leading AI companies [7]
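The claim in Group 3 that data can move between sites "at near-light speed without congestion" can be made concrete with a rough latency estimate. The sketch below is a back-of-the-envelope calculation, not anything Microsoft has published: the site separation, the per-step gradient payload, and the usable cross-site bandwidth are all hypothetical figures chosen for illustration; only the fiber velocity factor (light travels at roughly two thirds of its vacuum speed in glass) is a standard physical value.

```python
# Back-of-the-envelope look at why a dedicated AI WAN matters for cross-state
# training. All workload figures are illustrative assumptions, not values from
# the article: the ~1,100 km site separation, the 2 GB per-step gradient
# payload, and the 400 Gbit/s of usable cross-site bandwidth are hypothetical.
SPEED_OF_LIGHT_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_VELOCITY_FACTOR = 0.68       # light in optical fiber travels at roughly 2/3 c
SITE_DISTANCE_KM = 1_100           # assumed distance between two Fairwater sites
PAYLOAD_GBIT = 16                  # assumed 2 GB gradient exchange per training step
WAN_GBIT_PER_S = 400               # assumed usable cross-site bandwidth

one_way_delay_s = SITE_DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR)
transfer_time_s = PAYLOAD_GBIT / WAN_GBIT_PER_S

print(f"one-way propagation delay: {one_way_delay_s * 1e3:.1f} ms")
print(f"per-step payload transfer: {transfer_time_s * 1e3:.1f} ms")
```

Under these assumptions the propagation delay comes out to only a few milliseconds, so the cost of synchronizing training steps across states is dominated by available bandwidth and congestion rather than distance, which is consistent with the article's emphasis on dedicated fiber rather than shared links.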
Microsoft's first "AI super factory" goes into operation: linking two data centers into a distributed network