The Mystery of LLM Reinforcement Learning Instability, Unraveled by the Qwen Team from a "First-Order Approximation" Perspective

Core Insights
- Reinforcement learning (RL) has become a key paradigm for enhancing the complex reasoning and problem-solving capabilities of large language models (LLMs) [2]
- The main challenge in RL for LLMs is the mismatch between sequence-level rewards and token-level optimization objectives, which raises concerns about theoretical soundness and training stability [2][5]
- A new RL formulation from Alibaba's Qwen team optimizes the expected sequence-level reward directly, treating the token-level surrogate objective as its first-order approximation [2][11] (a minimal sketch of this construction follows the sections below)

Methodology
- The team models an autoregressive LLM as a policy π_θ and works with sequence-level rewards, where a single scalar reward R(x, y) is assigned to the entire response y [6]
- Value-function (critic-based) methods are avoided because building a general, scalable, and reliable value model is difficult [7]
- Directly optimizing the expected sequence-level reward is complicated by numerical discrepancies between the training and inference engines, so the policy that generates rollouts differs slightly from the policy being updated [9]

Key Findings
- Extensive experiments were run with a 30-billion-parameter MoE model, consuming hundreds of thousands of GPU hours [4]
- On-policy training with importance-sampling correction achieved the highest training stability [10]
- In off-policy updates, both clipping and Routing Replay are essential for maintaining training stability; removing either leads to performance degradation [23]

Experimental Results
- The MiniRL algorithm, which incorporates importance sampling, showed the best performance and stability during training [22]
- Removing the importance-sampling correction caused rapid training collapse and a sharp drop in entropy, confirming its critical role in the first-order approximation [22]
- Different cold-start initializations yielded similar final performance, suggesting that attention should go to the RL method itself rather than initialization details [27]
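To make the surrogate-objective idea concrete, here is a minimal PyTorch sketch of a token-level loss that weights every token of a response by its sequence-level reward R(x, y), applies a per-token importance ratio to correct the sampling/training mismatch, and clips the ratio for off-policy stability. The function and tensor names (surrogate_loss, logp_new, logp_old, clip_eps) are illustrative assumptions; this is not the Qwen team's implementation or the MiniRL algorithm, only a sketch of the general construction the summary describes.

```python
# Minimal sketch of a token-level surrogate for a sequence-level reward,
# with importance-sampling correction and clipping. Names are illustrative,
# not the Qwen team's actual code.

import torch

def surrogate_loss(
    logp_new: torch.Tensor,   # [B, T] token log-probs under the current policy pi_theta
    logp_old: torch.Tensor,   # [B, T] token log-probs under the rollout (sampling) policy
    reward: torch.Tensor,     # [B]    scalar sequence-level reward R(x, y) per response
    mask: torch.Tensor,       # [B, T] 1.0 for response tokens, 0.0 for padding
    clip_eps: float = 0.2,
) -> torch.Tensor:
    """Token-level surrogate for the sequence-level objective E[R(x, y)].

    Every token's gradient is weighted by its whole sequence's reward; the
    per-token importance ratio corrects for the numerical gap between the
    inference engine used for sampling and the training engine.
    """
    # Per-token importance ratio pi_theta(y_t | x, y_<t) / pi_old(y_t | x, y_<t)
    ratio = torch.exp(logp_new - logp_old.detach())

    # Broadcast the sequence-level reward to every token of that sequence.
    # (In practice the reward is usually baselined/normalized into an advantage.)
    adv = reward.unsqueeze(-1).expand_as(ratio)

    # PPO-style clipping keeps off-policy updates from drifting too far.
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    per_token = torch.minimum(unclipped, clipped)

    # Average over valid response tokens; negate because optimizers minimize.
    return -(per_token * mask).sum() / mask.sum()
```

In the strictly on-policy case (logp_new equals logp_old, so the ratio is 1), the gradient of this loss reduces to the plain policy-gradient (REINFORCE) estimator of the sequence-level objective, which is the sense in which such a token-level surrogate acts as a first-order approximation of the expected sequence-level reward.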