Scaling Law Questioned Again: Could "Degenerative AI" Be the Endgame?

Hu Xiu · 2025-08-04 12:14

Group 1
- The large-model industry is riding a "scaling law" wave, with tech companies and research institutions investing heavily on the premise that ever-larger data scales yield better model performance [1][2]
- Scholars P.V. Coveney and S. Succi warn that scaling is deeply flawed as a route to reducing the predictive uncertainty of large language models (LLMs): blindly expanding data may lead to "Degenerative AI", a catastrophic accumulation of errors and inaccuracies [2][4]
- The core mechanism underpinning LLM learning, which generates non-Gaussian outputs from Gaussian inputs, may be the fundamental cause of error accumulation and information catastrophe (see the toy simulation at the end of this piece) [5]

Group 2
- Current LLMs display impressive natural-language capabilities, but the research team argues that machine learning fundamentally operates as a "black box" with no grasp of the underlying physics, limiting its use in scientific and social domains [7][9]
- Only a handful of AI companies can train state-of-the-art LLMs; their energy demands are extremely high, yet the resulting performance gains appear limited [10][11]
- The research team identifies the low scaling exponent as a root cause of poor LLM performance: the capacity to improve with larger datasets is extremely limited [14]

Group 3
- Despite the hype around large models, even advanced AI chatbots produce significant errors, falling short of the precision standards required in most scientific applications [15][23]
- The research team shows that accuracy may fail to improve with additional computational resources, and can decline sharply once a certain threshold is crossed, pointing to "barriers" to scalability [16][17]
- The accuracy of machine-learning applications depends heavily on how homogeneous the training dataset is, and accuracy problems can arise even in homogeneous training scenarios [18][19]

Group 4
- The limitations of LLMs in reliability and energy consumption are evident, yet detailed technical discussion of them remains scarce [24]
- The tech industry is exploring large reasoning models (LRMs) and agentic AI to make outputs more credible, though these approaches still rest largely on empirical foundations [25][26]
- The research team suggests a more constructive direction: use LLMs for generative tasks, channeling their uncertainty into exploratory value [27][28]

Group 5
- "Degenerative AI" poses a significant risk, particularly for LLMs trained on synthetic data, where errors can accumulate catastrophically [29][30]
- The current scaling exponent is low but positive, meaning the industry has not yet entered a phase where more data yields less information; it is, however, in a stage of "extreme diminishing returns" (the arithmetic sketch below makes this concrete) [32]
- The research team emphasizes that relying solely on brute force and unsustainable computational expansion could make Degenerative AI a reality [33][34]
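What "extreme diminishing returns" means in practice can be made concrete with a minimal sketch. It assumes the standard power-law form of neural scaling laws, error(D) = ε₀ · D^(−α); the exponent values below are illustrative assumptions, not figures reported by Coveney and Succi.

```python
# Minimal sketch of what a low scaling exponent implies for data needs.
# Assumes the standard power-law form of neural scaling laws:
#     error(D) = eps0 * D ** (-alpha)
# The alpha values are illustrative assumptions, not values from the paper.

def data_factor_to_cut_error(alpha: float, reduction: float) -> float:
    """Growth factor k in dataset size D needed to divide the error by
    `reduction`, given error(D) = eps0 * D**(-alpha).

    Solving eps0 * (k * D)**(-alpha) = eps0 * D**(-alpha) / reduction
    yields k = reduction ** (1 / alpha).
    """
    return reduction ** (1.0 / alpha)

for alpha in (0.5, 0.25, 0.05):
    k = data_factor_to_cut_error(alpha, reduction=2.0)
    print(f"alpha = {alpha:<5}: {k:,.0f}x more data to halve the error")
```

Under these assumed exponents, halving the error needs 4x more data at α = 0.5 but roughly a million times more at α = 0.05 (2^(1/0.05) = 2^20 ≈ 1.05 × 10^6), and each doubling of the dataset then trims the error by only about 3.4% (2^(−0.05) ≈ 0.966): a low-but-positive exponent in exactly the "extreme diminishing returns" regime described above.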
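The non-Gaussian mechanism flagged in Group 1 can also be illustrated with a toy simulation (a hedged sketch for illustration, not the model analyzed in the paper): when many small, purely Gaussian errors compound multiplicatively, the composed error distribution becomes heavy-tailed and non-Gaussian, so a minority of outputs carry catastrophically large errors. All parameters below are illustrative.

```python
# Toy illustration: Gaussian inputs, non-Gaussian (heavy-tailed) output.
# Each processing step multiplies in a small Gaussian perturbation; the
# composed error is approximately lognormal, so rare samples end up with
# catastrophically large errors. All parameters are illustrative.
import random

random.seed(0)

N_SAMPLES = 50_000
N_STEPS = 50     # number of composed processing steps
SIGMA = 0.1      # per-step Gaussian error scale (small)

errors = []
for _ in range(N_SAMPLES):
    acc = 1.0
    for _ in range(N_STEPS):
        acc *= 1.0 + random.gauss(0.0, SIGMA)  # small Gaussian error
    errors.append(abs(acc - 1.0))

errors.sort()
median = errors[N_SAMPLES // 2]
mean = sum(errors) / N_SAMPLES
p999 = errors[int(0.999 * N_SAMPLES)]

print(f"median |error|   : {median:.2f}")
print(f"mean   |error|   : {mean:.2f}")
print(f"99.9th percentile: {p999:.2f}")
# The mean exceeds the median and the 99.9th percentile dwarfs both: the
# signature of a heavy-tailed distribution, even though every individual
# perturbation was Gaussian.
```

In such a regime, errors do not average out as more of the same data is added; they concentrate in extreme tail events, which is consistent with the "catastrophic accumulation of errors" the authors warn about.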