Is the Ever-Distant AGI Just Pie in the Sky? Two Professors "Get Into a Fight"
机器之心·2025-12-21 04:21

Core Viewpoint
- The article discusses the limits that physical and resource constraints place on achieving Artificial General Intelligence (AGI), emphasizing that scaling alone is not sufficient for significant advances in AI [3][20][32].

Group 1: Limitations of AGI
- Tim Dettmers argues that AGI will not happen because computation is fundamentally physical, and both hardware improvement and scaling laws face inherent limits [8][10][12].
- As transistors shrink, computation becomes cheaper but memory access becomes relatively more expensive, so processors increasingly sit idle waiting for data rather than computing; a rough sketch of this arithmetic appears after this outline [11][17].
- The notion of "superintelligence" is critiqued as flawed: improvements in intelligence require substantial resources, so any advances will be gradual rather than explosive [28][29][30].

Group 2: Hardware and Scaling Challenges
- GPU advances have plateaued, with significant improvements in performance per dollar largely ceasing around 2018, so hardware investments now yield diminishing returns [16][17].
- Scaling AI models has become increasingly costly: each further linear improvement requires an exponential increase in resources, indicating that the benefits of scaling are approaching a physical limit (see the scaling sketch below) [20][22].
- The economics of current AI infrastructure depend heavily on large user bases to amortize deployment costs, which poses risks for smaller players in the market [21][22].

Group 3: Divergent Approaches in AI Development
- The article contrasts the U.S. "winner-takes-all" approach to AI development with China's focus on practical applications and productivity gains, suggesting that the latter may be more sustainable in the long run [23][24].
- It emphasizes that the core value of AI lies in its utility and productivity enhancement rather than in raw model capability [24][25].

Group 4: Future Directions and Opportunities
- Despite these constraints, significant headroom remains for improving AI systems through better hardware utilization and innovative model designs [39][45][67].
- Training efficiency and inference optimization still have substantial room to improve, since current models do not yet fully utilize existing hardware (see the utilization sketch below) [41][43][46].
- The article concludes that the path to more capable AI systems is not singular; multiple avenues exist for achieving substantial improvements in performance and utility [66][69].
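To make the memory-access point in Group 1 concrete, here is a minimal sketch of the compute-versus-bandwidth trade-off for a matrix multiply. The hardware figures are rough public numbers for an NVIDIA A100 used purely as assumptions for illustration; they are not values from the article, and the conclusion (small operations are memory-bound, large ones compute-bound) holds under any similar FLOPs/bandwidth ratio.

```python
# Sketch of the "memory wall": compare the time a GPU spends computing
# vs. moving data for C[m,n] = A[m,k] @ B[k,n]. Hardware numbers are
# approximate A100 figures, assumed for illustration only.

PEAK_FLOPS = 312e12   # dense BF16 throughput, FLOP/s (approx.)
PEAK_BW    = 2.0e12   # HBM bandwidth, bytes/s (approx.)

def matmul_times(m, n, k, bytes_per_elem=2):
    """Ideal compute time and memory-traffic time for one matmul."""
    flops = 2 * m * n * k                           # multiply-adds
    traffic = bytes_per_elem * (m*k + k*n + m*n)    # read A, B; write C
    return flops / PEAK_FLOPS, traffic / PEAK_BW

for size in (128, 1024, 8192):
    t_compute, t_memory = matmul_times(size, size, size)
    bound = "compute" if t_compute > t_memory else "memory"
    print(f"{size}x{size} matmul: compute {t_compute*1e6:8.2f} us, "
          f"memory {t_memory*1e6:8.2f} us -> {bound}-bound")
```

For the 128x128 case the data movement dominates, so the arithmetic units idle; only at large sizes does compute dominate. As memory access improves more slowly than raw compute with each hardware generation, more workloads fall on the memory-bound side.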
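The "linear improvements need exponential resources" claim in Group 2 can be illustrated with a power-law scaling curve. The functional form L(C) = a · C^(-alpha) and the constants below are assumptions in the spirit of published scaling-law work, not numbers from the article.

```python
# Sketch of power-law scaling: equal additive reductions in loss require
# multiplicative (hence exponential) increases in compute. The constants
# a and alpha are assumed for illustration.

a, alpha = 10.0, 0.05   # assumed fit constants

def compute_for_loss(L):
    """Invert L = a * C**(-alpha) to get the compute C that reaches loss L."""
    return (a / L) ** (1 / alpha)

for L in (3.0, 2.8, 2.6, 2.4, 2.2):
    print(f"loss {L:.1f} -> compute {compute_for_loss(L):.3e}")
# Each step lowers loss by the same 0.2, but the required compute grows
# roughly 4-5x per step: equal progress costs exponentially more.
```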
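The claim in Group 4 that current models do not fully exploit existing hardware is commonly quantified with Model FLOPs Utilization (MFU), the ratio of achieved to peak throughput. The article does not give an MFU figure; the model size, token rate, and peak FLOPs below are hypothetical values chosen only to show the calculation.

```python
# Sketch of Model FLOPs Utilization (MFU). All inputs are hypothetical,
# assumed for illustration; they are not figures from the article.

def mfu(n_params, tokens_per_sec, peak_flops_per_sec):
    """Achieved / peak throughput, using the ~6*N training-FLOPs-per-token
    rule of thumb for a dense transformer with N parameters."""
    achieved = 6 * n_params * tokens_per_sec
    return achieved / peak_flops_per_sec

# Hypothetical 7B-parameter run on 8 GPUs, each with 312 TFLOP/s peak:
n_params   = 7e9
peak_total = 8 * 312e12
print(f"MFU: {mfu(n_params, 12_000, peak_total):.1%}")
# Prints roughly 20%: most of the machine's peak compute goes unused,
# which is exactly the optimization headroom the article points to.
```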