Core Insights
- Recent advancements in reinforcement learning (RL) have significantly improved the reasoning capabilities of language models [1]
- Whether post-training genuinely expands a model's reasoning capabilities or merely uncovers potential that is already present remains unclear [2]
- A key challenge is the lack of controllability in modern training pipelines: large-scale pre-training corpora are opaque, and mid-training is often insufficiently studied [2]

Group 1: Research Framework and Methodology
- Researchers from Carnegie Mellon University developed a controllable synthetic data framework based on GSM-Infinite to quantitatively analyze the causal impact of pre-training, mid-training, and RL on model reasoning generalization [2][5]
- The framework decouples reasoning structure from surface context, enabling precise quantification of reasoning complexity and a direct test of whether models genuinely learn reasoning logic or merely memorize specific text patterns [10][12]

Group 2: Key Findings on Training Interactions
- The effectiveness of RL depends on the "capability margin": RL improves reasoning only when tasks are challenging yet still within the model's exploration range (see the selection sketch after this summary) [16][17]
- In the experiments, pre-training used 10 billion tokens focused on basic reasoning primitives, while mid-training serves as a bridge that aligns the model's internal representations for RL [20]
- Even a minimal amount of target-context data during pre-training can significantly enhance cross-context generalization during RL post-training [22]

Group 3: Training Efficiency and Performance
- Mid-training is crucial for computational efficiency; combining mid-training with RL yields better performance than RL alone [26][27]
- Introducing process-level rewards can mitigate reward hacking and improve reasoning fidelity, particularly on complex reasoning tasks (see the reward sketch below) [29][30]

Group 4: Practical Guidelines for Training
- RL data should target the model's capability margin, avoiding tasks that are either too easy or too hard [31]
- Pre-training data should cover at least 1% of atomic capabilities in long-tail domains to provide interfaces for RL to build on (see the mixture sketch below) [32]
- Computational resources should be allocated dynamically by task difficulty: more RL compute for tackling challenging problems, more mid-training for stability [33]
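The "capability margin" in Groups 2 and 4 can be read as a data-selection rule: roll out the current model on each candidate problem and keep only those it solves sometimes but not always. The Python sketch below is a minimal illustration under assumed interfaces (`model.generate`, `problem.prompt`, `problem.check`) and illustrative thresholds; it is not the paper's implementation.

```python
def estimate_pass_rate(model, problem, n_samples=8):
    """Estimate how often the current policy solves `problem`.

    `model.generate` and `problem.check` are hypothetical interfaces:
    any sampler plus answer verifier can stand in for them.
    """
    solved = sum(
        problem.check(model.generate(problem.prompt)) for _ in range(n_samples)
    )
    return solved / n_samples


def select_rl_problems(model, candidate_pool, low=0.1, high=0.9):
    """Keep only problems inside the model's capability margin.

    pass rate <= low  -> too hard: the policy rarely sees any reward signal
    pass rate >= high -> too easy: there is little left for RL to improve
    The 0.1 / 0.9 thresholds are illustrative, not values from the paper.
    """
    return [
        problem
        for problem in candidate_pool
        if low < estimate_pass_rate(model, problem) < high
    ]
```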
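The process-level reward in Group 3 can be understood as mixing a final-answer (outcome) reward with a per-step correctness score, so the policy cannot collect reward for a correct answer reached through degenerate intermediate reasoning. The sketch below assumes a hypothetical `step_verifier` callable and mixing weight `alpha`; neither comes from the paper.

```python
def process_level_reward(steps, final_answer, reference, step_verifier, alpha=0.5):
    """Blend an outcome reward with per-step correctness checks.

    `step_verifier` maps one reasoning step to True/False (hypothetical);
    `alpha` is an illustrative mixing weight. Scoring intermediate steps
    makes reward hacking via shortcut reasoning less profitable.
    """
    outcome = 1.0 if final_answer == reference else 0.0
    if steps:
        step_score = sum(bool(step_verifier(s)) for s in steps) / len(steps)
    else:
        step_score = 0.0
    return alpha * outcome + (1.0 - alpha) * step_score
```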
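Group 4's coverage guideline amounts to a constraint on the pre-training data mixture: reserve at least a small fixed fraction for long-tail atomic-capability domains so that RL later has an interface to amplify. A minimal sketch, with made-up domain names and weights:

```python
# Illustrative pre-training mixture enforcing the ~1% coverage floor for
# long-tail atomic-capability domains. Domain names and proportions are
# invented for the example, not taken from the paper.
MIN_LONG_TAIL_FRACTION = 0.01

mixture = {
    "general_web": 0.90,
    "code": 0.06,
    "reasoning_primitives": 0.03,
    "long_tail_atomic_capabilities": 0.01,
}

assert abs(sum(mixture.values()) - 1.0) < 1e-9, "mixture weights must sum to 1"
assert mixture["long_tail_atomic_capabilities"] >= MIN_LONG_TAIL_FRACTION, (
    "long-tail coverage below the floor RL needs to hook into"
)
```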
Is RL a "philosopher's stone" or an "excavator"? CMU gives the answer with controlled experiments
机器之心·2025-12-15 01:44