Core Viewpoint
- The discussion around AGI (Artificial General Intelligence) is fundamentally flawed: it ignores the physical limits of computing resources and hardware, making AGI an unattainable goal [1][17].

Group 1: Hardware Limitations
- GPU performance peaked in 2018, and remaining headroom is limited; the significant optimizations still available are expected to be exhausted by 2027 [14][15].
- The cost of moving information increases exponentially with distance, which degrades the efficiency of computation [5].
- Current AI architectures, such as Transformers, are nearing the physical limits of hardware optimization, so further advances will be marginal [8].

Group 2: Resource Consumption
- Linear improvements in AI performance require exponential increases in resources, making continued scaling increasingly impractical [9][16].
- Collecting data from the physical world is prohibitively expensive, which complicates building an AGI capable of handling complex real-world tasks [18].
- The assumption that scaling up models will keep enhancing AI performance is flawed; diminishing returns on resource investment will soon become evident [16].

Group 3: Future of AI
- The future of AI lies in gradual improvement within physical constraints, focusing on practical applications that raise productivity rather than pursuing the elusive AGI [20].
- The U.S. approach concentrates heavy investment on achieving superintelligence, while China emphasizes practical applications and productivity gains through subsidies [21][22].
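The "linear gains need exponential resources" claim in Group 2 can be sketched numerically. A minimal illustration, assuming a power-law scaling relation of the form loss ≈ a · C^(−α) (a common empirical form; the constants a = 10.0 and α = 0.05 below are illustrative, not taken from the article):

```python
# Sketch: under a power-law scaling assumption, each equal-sized step of
# improvement (a fixed reduction in loss) demands a multiplicatively larger
# compute budget, i.e. compute grows exponentially for linear gains.
# Constants are hypothetical placeholders, chosen only for illustration.

def compute_for_loss(loss, a=10.0, alpha=0.05):
    """Invert loss = a * C**(-alpha) to get the compute C needed."""
    return (a / loss) ** (1.0 / alpha)

# Equal linear steps in loss...
losses = [3.0, 2.5, 2.0, 1.5]
computes = [compute_for_loss(l) for l in losses]

for l, c in zip(losses, computes):
    print(f"loss {l:.1f} -> compute {c:.3e}")

# ...require a growing compute multiplier at every step.
ratios = [computes[i + 1] / computes[i] for i in range(len(computes) - 1)]
print("compute multipliers per step:", [round(r, 1) for r in ratios])
```

Each successive multiplier is larger than the last, so the total compute bill grows super-exponentially in the number of equal improvement steps, which is the diminishing-returns point the summary attributes to the article.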
A CMU professor's 10,000-word reflection: Western-style AGI will never arrive
量子位 (QbitAI) · 2025-12-20 07:38