Distribution of Data Centers in the U.S.
傅里叶的猫·2025-07-09 14:49

Core Insights
- The article provides a comprehensive overview of AI data centers in the U.S., detailing their locations, chip types, and operational statuses, and highlighting the growing investment in AI infrastructure by major companies [1][2].

Company Summaries
- Nvidia: Operates 16,384 H100 chips in the U.S. for its DGX Cloud service [1].
- Amazon Web Services (AWS): Plans to deploy over 200,000 Trainium chips for Anthropic and has existing GPU data centers in Phoenix [1].
- Meta: Plans to bring online over 100,000 chips in Louisiana by 2025 for training Llama 4; currently operates 24,000 H100 chips for Llama 3 [1].
- Microsoft/OpenAI: Investing in a facility in Wisconsin for OpenAI, with plans for 100,000 GB200 chips, while also operating data centers in Phoenix and Iowa [1].
- Oracle: Operates 24,000 H100 chips for training Grok 2.0 [1].
- Tesla: Has a partially completed cluster in Austin with 35,000 H100 chips, aiming for 100,000 by the end of 2024 [2].
- xAI: Has a partially completed cluster in Memphis with 100,000 H100 chips and plans for a new data center that could hold 350,000 chips [2].

Industry Trends
- Demand for AI data centers is rising, with several companies planning significant expansions in chip capacity [1][2].
- Newer chip types such as the GB200 are being adopted by major players like Oracle, Microsoft, and CoreWeave, indicating a shift in technology [5].
- The competitive landscape is intensifying as companies like Tesla and xAI ramp up their AI capabilities with substantial investments in chip infrastructure [2][5].