A Brutal Physical Truth About US-China AI That Nvidia Obscures
Nvidia (US:NVDA) 虎嗅APP · 2026-01-21 10:01

Core Viewpoint
- The article contrasts the energy challenges facing the US and China in AI development: China enjoys a large surplus of electricity supply but struggles to convert that energy into computing power efficiently, chiefly because of its semiconductor manufacturing limitations [4][18][22].

Group 1: Energy Supply and Demand
- By 2030, the incremental electricity demand from AI in China is projected to absorb only 1% to 5% of the generation capacity the country has added over the past five years; in the US, AI is projected to absorb 50% to 70% of the equivalent new capacity [6][7].
- In 2023, the US added approximately 51 GW of new power generation capacity, while China added about 429 GW, a gap of roughly eight times [9][10].

Group 2: Efficiency and Cost Challenges
- Although electricity is cheaper in China (about 0.08 USD per kWh versus 0.12 USD in the US), the energy cost of AI computation in China could still run about 140% higher than in the US because of lower chip efficiency [22][23]; a worked calculation after this summary illustrates the relationship.
- For the same computational output, Chinese AI infrastructure may consume roughly 100% more energy than its US counterparts, a significant efficiency gap [21].

Group 3: Strategic Responses
- The US is trying to innovate around its aging grid infrastructure, focusing on decentralized power solutions and a revival of nuclear energy [30][31].
- China is leveraging its advanced ultra-high-voltage (UHV) transmission technology to move surplus renewable energy from the west to eastern computing hubs, while working to integrate AI into its energy systems [32][33].

Group 4: Future Implications
- The AI competition is not solely about chip technology but also about energy infrastructure and efficiency; both countries face distinct constraints that will shape their technological trajectories over the next decade [47][48].
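To make the Group 2 relationship concrete, here is a minimal sketch of the arithmetic. The electricity prices (0.08 and 0.12 USD/kWh) are the ones quoted above; the energy multipliers are illustrative assumptions used only to show how an energy-per-compute gap translates into a cost gap despite cheaper power, not figures from the underlying article.

```python
# Sketch: how an energy-efficiency gap turns into a compute-cost gap,
# using the electricity prices quoted in the summary above.
# The energy multipliers below are illustrative assumptions.

PRICE_US = 0.12  # USD per kWh (from the summary)
PRICE_CN = 0.08  # USD per kWh (from the summary)

def cost_ratio(energy_multiplier: float) -> float:
    """Electricity cost of a fixed workload in China relative to the US,
    given that China uses `energy_multiplier` times the energy to run it."""
    return energy_multiplier * (PRICE_CN / PRICE_US)

# If China needs 2x the energy (the "100% more energy" case), cheaper power
# still leaves the workload about 33% more expensive than the US baseline.
print(f"2.0x energy -> cost ratio {cost_ratio(2.0):.2f}")  # ~1.33

# A "140% higher cost" (cost ratio 2.4) would imply a larger energy gap:
# roughly 3.6x the energy per unit of compute at these prices.
implied_multiplier = 2.4 * (PRICE_US / PRICE_CN)
print(f"cost ratio 2.4 implies ~{implied_multiplier:.1f}x energy per unit of compute")
```

Under the quoted prices, the 100%-more-energy figure and the 140%-higher-cost figure correspond to different assumed efficiency gaps; the sketch only shows how the two quantities relate, not which estimate is correct.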