Musk: Within 3 Years, Space Will Be the Cheapest Place to Deploy AI
Cailian Press (财联社) · 2026-02-06 00:44

Core Viewpoint
- Elon Musk predicts that space will become the most cost-effective place to deploy AI, with its cost advantage emerging within the next 30 to 36 months [2][5].

Group 1: Space Data Centers
- Musk argues that the main driver for moving data centers to space is the mismatch between the exponential growth of chip production and the roughly flat growth of power supply [2].
- He predicts that by the end of this year a critical point will be reached where large chip clusters cannot be brought online for lack of electricity [2].
- Solar panels deployed in space are more efficient, need no accompanying battery systems, and avoid the complex regulatory approvals required for ground-based solar farms [2].

Group 2: Economic Viability of Space GPUs
- Musk states that solar panels in space generate roughly five times the power of their terrestrial counterparts, with no battery costs for nighttime operation [4].
- He asserts that deploying AI in space will become the most economically attractive option, with a clear cost advantage expected within 30 to 36 months [4][5].
- Maintenance of space-based GPUs is manageable: early-life faults can be screened out on Earth before launch, leaving hardware that is reliable after a burn-in phase [4].

Group 3: Challenges in Power Supply
- Musk is concerned that co-located power generation cannot scale because of supply-chain constraints, particularly gas turbines and high tariffs on imported solar panels [5].
- He notes that the bottleneck in gas-turbine production lies in the specialized manufacturing of turbine blades, which complicates any scale-up [5].
- Solar panels built for space are projected to cost one-fifth to one-tenth as much as terrestrial versions, since they need no weather-proofing [5].
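The "roughly five times the power" figure can be sanity-checked with a back-of-envelope capacity-factor comparison. The specific capacity factors below are illustrative assumptions, not numbers from the article: a panel in near-continuous sunlight versus a ground panel limited by night and weather.

```python
# Back-of-envelope check of the ~5x space-solar claim.
# Both capacity factors are assumed for illustration, not from the article.

SPACE_CAPACITY_FACTOR = 0.99   # assumed: near-continuous illumination in orbit
GROUND_CAPACITY_FACTOR = 0.20  # assumed: typical average for ground solar

# Energy delivered per watt of installed panel, space vs. ground
ratio = SPACE_CAPACITY_FACTOR / GROUND_CAPACITY_FACTOR
print(f"Space vs. ground energy per installed watt: ~{ratio:.1f}x")
```

Under these assumed inputs the ratio lands near 5x, in line with the figure Musk quotes; the real advantage depends on orbit choice and panel derating.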
Group 4: Energy Consumption in Data Centers
- Musk highlights the substantial energy overhead of running a data center, which must power not only the chips but also network hardware and cooling systems [6].
- At his Memphis data center, cooling alone adds about 40% to energy consumption, so roughly 1 gigawatt of capacity is needed to support 330,000 GB300 servers [6].
- He raises concerns about rising memory-chip prices, noting that securing enough memory for the logic chips is harder than producing the logic chips themselves [6].
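The Memphis figures above imply a rough per-server power draw. The arithmetic below uses only the numbers reported in the article (1 GW, 330,000 GB300 servers, ~40% cooling overhead); treating the 1 GW as inclusive of cooling is an assumption made here for the split.

```python
# Rough arithmetic from the article's reported figures.

TOTAL_POWER_W = 1e9        # ~1 gigawatt of capacity, per the article
SERVERS = 330_000          # GB300 servers, per the article
COOLING_OVERHEAD = 0.40    # cooling adds ~40% on top of IT load, per the article

# Average draw per server, cooling included
per_server_w = TOTAL_POWER_W / SERVERS

# Non-cooling (chips + network) share, assuming the 1 GW includes cooling
compute_share_w = TOTAL_POWER_W / (1 + COOLING_OVERHEAD)

print(f"Per-server draw: ~{per_server_w:.0f} W")
print(f"Non-cooling share of 1 GW (assumed split): ~{compute_share_w / 1e6:.0f} MW")
```

That works out to roughly 3 kW per server, with a bit over 700 MW of the gigawatt going to chips and networking under this assumed split.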