Who Says the Scaling Law Has Hit Its Limit? New Research: Tiny Per-Step Improvements Bring Exponential Growth
36Kr · 2025-09-16 07:46
Core Insights
- The Scaling Law is being questioned over perceived diminishing returns in model training, but recent research suggests that small improvements in per-step accuracy compound into exponential growth in the length of tasks a model can complete, which may carry more economic value in real-world applications [1][2][4]

Group 1: Research Findings
- A recent paper from Cambridge University indicates that while metrics like test loss show diminishing returns, the real-world value of large language models (LLMs) often comes from their ability to complete longer tasks [2][4] (a short worked example of this compounding arithmetic follows the summary)
- The paper highlights that long-horizon execution has been a persistent weakness of deep learning: LLMs still struggle with complex, lengthy tasks despite marked improvements in reasoning capabilities [4][6]
- The authors argue that failures on long tasks stem primarily from execution challenges rather than reasoning or planning limitations, and call for more research focus on execution capabilities [6][20]

Group 2: Experimental Insights
- The study measures LLMs' long-horizon execution by isolating execution from planning and knowledge retrieval, revealing that larger models can sustain significantly more successful execution rounds [6][23][25]
- It introduces the concept of self-conditioning, whereby a model conditions on its own previous errors and deteriorates further, so accuracy declines over successive rounds [8][26][30]
- Increasing model size improves task execution but does not alleviate the self-conditioning effect, which remains a challenge for LLMs on long-horizon tasks [27][30]

Group 3: Implications for Investment
- The findings suggest that short-task benchmarks may understate the economic value of LLMs, since the length of task a model can complete is a more reliable indicator of its potential [18][20]
- The paper encourages continued investment in scaling models, as gains in achievable task length could justify the spend even when short-horizon metrics suggest stagnation [10][18]
- It calls for new benchmarks that better assess models' execution depth, highlighting a potential area for future investment and development in the AI sector [10][18]
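The arithmetic behind the headline claim is worth making explicit. Below is a minimal sketch, assuming each step of a task succeeds independently with per-step accuracy p; this i.i.d. idealization is ours for illustration, and the self-conditioning findings summarized above show real models deviate from it:

```latex
\[
  P(\text{complete } T \text{ steps}) = p^{T},
  \qquad
  H(s) = \frac{\ln s}{\ln p} \approx \frac{\ln(1/s)}{1-p}
  \quad \text{as } p \to 1,
\]
% H(s) is the longest task completable while keeping success rate >= s.
% At s = 0.5: p = 0.99 gives H ~ 69 steps; p = 0.999 gives H ~ 693.
% A +0.9-point gain in step accuracy, nearly invisible on single-step
% benchmarks, multiplies the achievable task length by roughly 10x.
```

Under this model, linear-looking gains in single-step accuracy buy faster-than-linear gains in horizon length, which is the sense in which tiny per-step improvements "bring exponential growth."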
Who Says the Scaling Law Has Hit Its Limit? New Research: Tiny Per-Step Improvements Bring Exponential Growth
机器之心 (Synced) · 2025-09-16 04:01
Core Viewpoint
- The article discusses the ongoing debate over diminishing returns from scaling AI models, particularly large language models (LLMs). It presents a new perspective: even as single-step accuracy improves more slowly, those incremental gains can compound into exponential growth in the length of tasks a model can complete, which may hold greater economic value in real-world applications [1][3]

Group 1: Scaling Law and Economic Value
- While metrics like test loss show diminishing returns, the real-world value of LLMs often comes from completing longer tasks: larger models compound small improvements in single-step accuracy into exponential increases in achievable task length [3][6]
- The paper, titled "The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs," argues that an AI agent's economic value derives from the length of tasks it can complete, not from short-task benchmarks that may suggest progress has stalled [5][19]

Group 2: Long-Horizon Execution Challenges
- Long-horizon task execution has historically been a significant weakness of deep learning models; although LLMs have improved on complex reasoning tasks, they still struggle to execute long tasks reliably [6][11]
- The authors argue that long-horizon failures are often misattributed to reasoning or planning deficiencies when execution itself remains the critical, under-researched challenge [7][22] (a sketch of a minimal execution-only evaluation harness follows this summary)

Group 3: Self-Conditioning Effect
- The study identifies a self-conditioning effect: the per-step error rate on long tasks rises as the model conditions on its own earlier mistakes, compounding them. This contrasts with human performance, where practice typically brings improvement [9][30] (a toy simulation of the effect appears after the harness sketch)
- Larger models do not necessarily mitigate self-conditioning, which can still drag performance down over extended tasks [29][32]

Group 4: Impact of Thinking Models
- Recent thinking models have shown the ability to correct for the self-conditioning limitation, enabling much longer single-round task execution; the GPT-5 thinking version, for instance, can execute over 1,000 steps, far surpassing competitors [10][36]
- The research emphasizes reasoning before action: models that use thinking chains execute longer tasks better than those that do not [36][37]

Group 5: Experimental Insights
- The experiments show that increasing model size significantly raises the number of rounds a model can execute successfully, demonstrating a clear scaling trend [27][28]
- Even so, larger models still face self-conditioning challenges, which remain a critical area for future research [29][37]
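To make "isolating execution from planning and knowledge retrieval" concrete, here is a minimal harness sketch in the spirit of the setup the coverage describes: the knowledge (a key-value table) and the plan (retrieve, then add to a running total) are handed to the model up front, so only sustained execution is measured. `query_model`, the table format, and the prompt wording are hypothetical stand-ins, not the paper's actual harness:

```python
from typing import Callable, Mapping, Sequence

def measure_horizon(query_model: Callable[[str], str],
                    table: Mapping[str, int],
                    keys: Sequence[str]) -> int:
    """Count the turns executed correctly before the model's first error."""
    # The plan and the knowledge are given up front; only execution is tested.
    transcript = (
        "Key->value table: "
        + ", ".join(f"{k}={v}" for k, v in table.items())
        + ".\nEach turn you receive one key. Retrieve its value, add it to "
          "the running total (starting at 0), and reply with the new total "
          "only.\n"
    )
    total = 0
    for turn, key in enumerate(keys):
        transcript += f"Turn {turn + 1}: key = {key}\n"
        reply = query_model(transcript)      # the model sees its own history,
        transcript += reply + "\n"           # including any earlier mistakes
        total += table[key]
        if reply.strip() != str(total):      # first mistake ends the run
            return turn                      # horizon = turns completed
    return len(keys)
```

Because the model's own replies stay in the transcript, the same loop also exposes self-conditioning: one early mistake changes the context that every later step is conditioned on.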
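The self-conditioning effect is also easy to caricature numerically. The toy simulation below assumes, purely for illustration, that every past error shaves a fixed amount off the current step's accuracy; the base accuracy and decay values are invented, and only the qualitative shape (flat for the i.i.d. baseline, sagging once errors accumulate) matches what the coverage describes:

```python
import random

def accuracy_by_round(base_acc: float, decay: float, rounds: int = 200,
                      trials: int = 2000, seed: int = 0) -> list[float]:
    """Mean per-round accuracy when past errors remain in the context.

    Each earlier error lowers the current step's accuracy by `decay`
    (floored at 0.0); decay=0.0 recovers the i.i.d. baseline.
    """
    rng = random.Random(seed)
    correct = [0] * rounds
    for _ in range(trials):
        errors = 0
        for t in range(rounds):
            if rng.random() < max(0.0, base_acc - decay * errors):
                correct[t] += 1
            else:
                errors += 1  # the error stays in context and drags accuracy
    return [c / trials for c in correct]

if __name__ == "__main__":
    iid = accuracy_by_round(0.98, decay=0.0)
    cond = accuracy_by_round(0.98, decay=0.02)
    for t in (0, 49, 99, 199):
        print(f"round {t + 1:3d}: iid={iid[t]:.3f}  "
              f"self-conditioned={cond[t]:.3f}")
```

Under these made-up numbers the i.i.d. curve stays near 0.98 at every round while the self-conditioned curve decays steadily, mirroring the reported contrast between scaling (which lifts base accuracy) and self-conditioning (which scaling alone does not fix).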