10 Lines of Code, 15% Gain on AIME24/25! Uncovering the Entropy Mechanism in LLM Reinforcement Learning
机器之心 · 2025-06-05 07:14
Core Insights

- The article examines the entropy collapse problem in reinforcement learning (RL) for large language models (LLMs) and proposes methods to preserve exploration capability during training [3][5][24].

Group 1: Entropy Collapse in Reinforcement Learning

- The central tension in RL is the trade-off between exploitation and exploration, and policy entropy is the standard gauge of a policy's remaining exploration potential [4].
- A key empirical finding is that policy entropy collapses to near zero within the first few training steps, signaling a loss of exploration ability and leading to performance stagnation [4][5].
- The relationship between policy entropy and downstream performance is analyzed quantitatively: absent any entropy intervention, performance is almost entirely determined by policy entropy through an exponential fit (see the worked equation after this list) [4][5].

Group 2: Mechanisms Behind Entropy Changes

- The study traces the driving force behind entropy change during RL to the covariance between the (log-)probabilities of sampled actions and their corresponding advantages [5][13].
- High-probability actions that also receive high advantage reduce policy entropy, while rare high-advantage actions increase it; a minimal sketch of this covariance signal follows the list below [13][17].

Group 3: Proposed Solutions for Enhancing Entropy

- The article introduces two simple yet effective entropy-preserving RL strategies, Clip-Cov and KL-Cov, each implementable in roughly ten lines of code (sketched below) [5][22].
- Experiments show sizable gains: a 6.4% average improvement on Qwen2.5-32B and up to 15% on challenging benchmarks such as AIME24/25 [22][24].
- The research argues that maintaining exploration capability is a prerequisite for scalable RL; merely adding compute yields limited benefit unless the entropy bottleneck is addressed [7][24].
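Group 1's quantitative claim can be written as an equation. The paper reports that downstream performance R and policy entropy H follow an exponential trade-off, with constants a and b fitted per model and task; the rendering below follows that reported form and is a reconstruction, not a verbatim excerpt:

```latex
% Empirical entropy-performance trade-off; a, b > 0 are fitted per model/task.
R = -a\, e^{H} + b
% As entropy H -> 0, performance saturates at its ceiling R = b - a,
% so a policy that has exhausted its entropy has already fixed its final score.
```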
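Group 2's mechanism is straightforward to instrument. The PyTorch sketch below (tensor names and shapes are assumptions, not the paper's code) computes the covariance between token log-probabilities and advantages over a batch; a positive value predicts the next update will lower entropy, a negative value that it will raise it:

```python
import torch

def entropy_drift_signal(logprobs: torch.Tensor, advantages: torch.Tensor) -> torch.Tensor:
    """Covariance between action log-probabilities and advantages.

    Per the entropy-mechanism analysis, a policy-gradient step changes
    entropy roughly in proportion to the NEGATIVE of this quantity:
    positive covariance (confident actions also rewarded) consumes entropy,
    while negative covariance (rare actions rewarded) restores it.
    """
    lp = logprobs - logprobs.mean()
    adv = advantages - advantages.mean()
    return (lp * adv).mean()

# Usage with illustrative data: log-probs of sampled tokens and their advantages.
logp = -torch.rand(1024)   # log-probabilities are <= 0
adv = torch.randn(1024)
print(entropy_drift_signal(logp, adv))  # > 0 => entropy about to drop
```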
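Group 3's two methods act on the same covariance statistic. The sketch below is a KL-Cov-style loss under stated assumptions: a flat token batch, illustrative hyperparameters `k_frac` and `kl_coef`, and a simple squared-gap proxy for the KL term. It reconstructs the idea rather than reproducing the authors' released code; Clip-Cov differs mainly in detaching the gradient on the selected tokens instead of penalizing them.

```python
import torch

def kl_cov_loss(logp: torch.Tensor, logp_old: torch.Tensor, adv: torch.Tensor,
                k_frac: float = 0.002, kl_coef: float = 1.0) -> torch.Tensor:
    """KL-Cov-style objective: restrain the few tokens that collapse entropy.

    1. Score each token by its contribution to Cov(log pi, A).
    2. On the top k_frac fraction (the entropy-consuming tokens), replace the
       policy-gradient term with a KL-like pull toward the old policy;
       all other tokens keep the ordinary surrogate loss.
    """
    ratio = torch.exp(logp - logp_old)                 # importance weight
    pg = -ratio * adv                                  # vanilla PG surrogate
    cov = (logp - logp.mean()).detach() * (adv - adv.mean())
    k = max(1, int(k_frac * cov.numel()))
    top = torch.topk(cov, k).indices                   # highest-covariance tokens
    kl = kl_coef * (logp - logp_old) ** 2              # squared-gap proxy for KL
    loss = pg.clone()
    loss[top] = kl[top]                                # swap in the penalty
    return loss.mean()
```

The design point is that entropy collapse is driven by a tiny fraction of high-covariance tokens, so intervening on only that fraction is enough to keep entropy, and hence exploration, alive without disturbing the rest of the update.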