Jie Tang, Zhilin Yang, Junyang Lin, and Shunyu Yao: The Three Turning Points of AGI as They See Them
Huxiu APP · 2026-01-11 09:52

Core Insights
- The article surveys the evolving landscape of Artificial General Intelligence (AGI) and highlights three key trends shaping its future development in China and the U.S. [10]

Group 1: Trends in AGI Development
- Trend One: Beyond Scaling, a New Paradigm is Emerging. The debate around scaling has shifted from whether to keep enlarging models to whether the investment still pays off; efficiency has become the central concern as the marginal returns on additional compute diminish [14][15] (see the power-law sketch after this summary).
- Trend Two: Token Efficiency is Becoming a Decisive Factor. Token efficiency has emerged as a crucial variable in judging a large model's potential: how effectively a model spends its tokens is now seen as essential to reaching higher levels of intelligence and completing complex tasks [20][22][24] (see the token-efficiency sketch after this summary).
- Trend Three: Diverging Evolution Paths for Chinese and American Models. Large-model development in the U.S. increasingly targets productivity and enterprise applications, while development in China emphasizes cost sensitivity and stability, a divergence that reflects different market demands and R&D cultures [26][28][29].

Group 2: Key Discussions and Insights
- The AGI-Next summit gathered leading figures in AI to discuss the future of AGI, marking a shift from application-level debates to foundational questions about the direction of next-generation AGI [6][10].
- Researchers broadly agree that the next phase of AGI development will require reevaluating existing paradigms, with a focus on efficiency and on the role token utilization plays in model performance [10][11][20].
- Cultural differences between the U.S. and Chinese AI research environments reinforce these distinct paths: U.S. labs often pursue high-risk, high-reward projects, while Chinese labs focus on practical applications and efficiency [29].
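To make the diminishing-returns point in Trend One concrete, here is a minimal sketch assuming a Chinchilla-style power-law loss curve, L(C) = E + k * C^(-alpha). The article gives no formula; the functional form and every constant (E, k, alpha, the starting compute budget) are assumptions chosen purely for illustration.

```python
# A minimal sketch of diminishing returns under a power-law scaling curve.
# The functional form and all constants are illustrative assumptions, not
# figures from the summit: L(C) = E + k * C**(-alpha).

def loss(compute: float, E: float = 1.7, k: float = 12.0, alpha: float = 0.05) -> float:
    """Hypothetical pretraining loss as a function of training compute (FLOPs)."""
    return E + k * compute ** -alpha

c = 1e21            # hypothetical starting compute budget
prev = loss(c)
for _ in range(6):
    c *= 2          # double the compute budget each step
    cur = loss(c)
    print(f"compute={c:.2e}  loss={cur:.4f}  gain from doubling={prev - cur:.4f}")
    prev = cur
```

Under any curve of this shape, each doubling of compute buys a slightly smaller loss reduction than the one before, which is exactly the "marginal returns" worry the trend describes.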
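Trend Two can likewise be illustrated with a toy metric: intelligence delivered per token spent. The metric definition below (solved tasks per thousand generated tokens) and all numbers in it are hypothetical illustrations, not results reported by the speakers.

```python
# A minimal sketch of a token-efficiency comparison. The metric and every
# number below are hypothetical, chosen only to illustrate the trade-off.

from dataclasses import dataclass

@dataclass
class EvalRun:
    model: str
    tasks_solved: int
    tasks_total: int
    tokens_used: int  # total tokens generated across all tasks

def token_efficiency(run: EvalRun) -> float:
    """Solved tasks per 1k generated tokens; higher means more capability per token."""
    return run.tasks_solved / (run.tokens_used / 1_000)

runs = [
    EvalRun("model-a", tasks_solved=80, tasks_total=100, tokens_used=400_000),
    EvalRun("model-b", tasks_solved=85, tasks_total=100, tokens_used=1_200_000),
]

for r in runs:
    acc = r.tasks_solved / r.tasks_total
    print(f"{r.model}: accuracy={acc:.0%}, "
          f"efficiency={token_efficiency(r):.2f} tasks/1k tokens")
```

On this metric, a model that scores a few points lower on raw accuracy can still dominate if it reaches its answers with far fewer tokens; that trade-off is what the token-efficiency argument turns on.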