Sparsified Decoding Tree
No training required, only an optimized decoding strategy: the DTS framework raises large-model reasoning accuracy by 6% and shortens reasoning length by 23%
机器之心 · 2025-11-21 02:04
Core Insights
- The article covers recent advances in Large Reasoning Models (LRMs) and introduces DTS (Decoding Tree Sketching), a new inference framework that targets the "overthinking" problem, in which models produce longer and often incorrect reasoning paths [2][8][26].

Group 1: Problem Identification
- The "overthinking" problem yields longer reasoning chains that are more prone to errors and self-repetition, lowering accuracy [8][11].
- Existing mitigation methods often depend on additional training or aggressive pruning, which can be costly and unstable [8][11].

Group 2: DTS Framework
- DTS combines two key strategies: branching only at high-uncertainty tokens during reasoning and stopping as soon as the first path completes, which together approximate the shortest correct reasoning path [2][8][26].
- The framework requires no additional training and no changes to model weights, making it a plug-and-play solution [8][26].

Group 3: Empirical Results
- On AIME2024/2025, DTS improved average accuracy by 6%, cut average reasoning length by roughly 23%, and reduced the rate of endless repetition by 10% [4][20].
- The empirical findings show a significant negative correlation between reasoning-chain length and accuracy: shorter reasoning chains often achieve higher correctness rates [9][11].

Group 4: Methodology
- The reasoning process is modeled as a decoding tree, where nodes represent generated tokens and root-to-leaf paths represent complete chains of thought (CoT) [12][13].
- DTS branches only at "key tokens" where uncertainty is high, avoiding unnecessary growth of the decoding tree [15][16]; a minimal sketch of this branching rule appears after this summary.

Group 5: Conclusion and Future Directions
- DTS offers a lightweight optimization route for reasoning models, letting them "think less but more accurately" [26][27].
- The approach is expected to integrate with multi-step reasoning, calibration, and uncertainty estimation, paving the way for more efficient and reliable reasoning in LRMs [27].
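
The article describes DTS only at a high level. As a concrete illustration, the following is a minimal Python sketch of the two strategies summarized in Groups 2 and 4: extend a single path greedily, branch into a few alternatives only at high-uncertainty "key tokens", and return the first path that terminates. Everything in the sketch is an assumption made for illustration — the function and parameter names (`dts_decode`, `next_token_probs`, `branch_entropy`, `branch_width`), the entropy threshold, and the best-first frontier are not taken from the paper's actual implementation.

```python
import heapq
import math
from typing import Callable, List, Sequence, Tuple


def entropy(probs: Sequence[float]) -> float:
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


def dts_decode(
    next_token_probs: Callable[[List[int]], List[float]],  # prefix -> distribution over vocab
    eos_id: int,
    branch_entropy: float = 1.0,  # branch only when uncertainty exceeds this (illustrative knob)
    branch_width: int = 2,        # children spawned at a high-uncertainty "key token"
    max_len: int = 256,
) -> List[int]:
    """Sketch of sparse decoding-tree search: greedy extension, branching only at
    high-entropy tokens, early stop at the first path that reaches EOS."""
    # Best-first frontier keyed on cumulative negative log-probability.
    frontier: List[Tuple[float, List[int]]] = [(0.0, [])]
    while frontier:
        neg_logp, path = heapq.heappop(frontier)
        while len(path) < max_len:
            probs = next_token_probs(path)
            if entropy(probs) > branch_entropy:
                # Key token: spawn the top-`branch_width` continuations as separate branches.
                ranked = sorted(range(len(probs)), key=lambda t: probs[t], reverse=True)
                for tok in ranked[:branch_width]:
                    if probs[tok] <= 0.0:
                        continue  # skip zero-probability continuations
                    child = path + [tok]
                    if tok == eos_id:
                        return child  # early stop: first completed path wins
                    heapq.heappush(frontier, (neg_logp - math.log(probs[tok]), child))
                break  # hand control back to the frontier
            # Low uncertainty: extend greedily without branching.
            tok = max(range(len(probs)), key=lambda t: probs[t])
            neg_logp -= math.log(probs[tok])
            path = path + [tok]
            if tok == eos_id:
                return path  # early stop on the first terminated path
        else:
            return path  # hit max_len without EOS; return the truncated path
    return []


if __name__ == "__main__":
    # Toy 3-token vocabulary with token 2 as EOS, purely to show the control flow.
    def toy_probs(prefix: List[int]) -> List[float]:
        # High uncertainty on the first token, then a confident push toward EOS.
        return [0.4, 0.4, 0.2] if not prefix else [0.05, 0.05, 0.9]

    print(dts_decode(toy_probs, eos_id=2))  # e.g. [0, 2]
```

In a real deployment, the `next_token_probs` callable would wrap an LRM's forward pass, and the entropy threshold would control how sparse the decoding tree becomes: a higher threshold means fewer branch points and behavior closer to plain greedy decoding, while a lower one explores more alternatives at the cost of extra compute.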