Just out: a Thinking Machines Lab blog post proposes on-policy distillation, with Qwen mentioned 38 times
机器之心·2025-10-28 00:41

Core Viewpoint
- Thinking Machines Lab (TML) has introduced a new training method, on-policy distillation, which combines the on-policy error correction of reinforcement learning (RL) with the dense reward signal of supervised fine-tuning (SFT), achieving superior performance at lower cost than either method alone [1][2][27].

Group 1: Methodology and Advantages
- On-policy distillation lets small models achieve strong domain performance and continuous-learning capability [1][2].
- Training is divided into three stages: pre-training for general capabilities, mid-training for domain knowledge, and post-training for eliciting target behaviors [6][7].
- On-policy training samples trajectories from the student model itself, giving the student direct feedback on its own errors, whereas off-policy training relies on trajectories from external sources [8][9][12] (see the loss sketch after these groups).

Group 2: Comparison with Other Methods
- On-policy distillation pairs the relevance and reliability of on-policy training with the dense, per-token reward signal of SFT, making it a cost-effective alternative to traditional RL [28][92].
- In experiments, on-policy distillation reached 74.4% on the AIME'24 benchmark at a far lower computational cost, whereas RL required 17,920 GPU hours to reach 67.6% [47][46].

Group 3: Applications and Future Directions
- The method has been applied to train models for mathematical reasoning and to build assistant models that combine domain knowledge with instruction following [26][27].
- TML plans to keep exploring new applications of on-policy distillation, better forms of teacher supervision, and improvements in data efficiency and continuous learning [92][93].
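
To make the training loop described in Group 1 concrete, here is a minimal PyTorch sketch of one on-policy distillation step: sample a trajectory from the student, score every generated token with the teacher, and minimize a per-token reverse KL between the two distributions. It assumes Hugging Face-style causal LMs sharing a tokenizer; the checkpoint names, the AdamW settings, and the use of a full-vocabulary reverse KL at each position are illustrative assumptions, not TML's exact recipe.

```python
# Illustrative sketch of one on-policy distillation step (not TML's code).
# Assumption: student and teacher are HF causal LMs with a shared tokenizer;
# the model names below are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

STUDENT_NAME = "Qwen/Qwen3-8B-Base"  # placeholder student checkpoint
TEACHER_NAME = "Qwen/Qwen3-32B"      # placeholder teacher checkpoint

tokenizer = AutoTokenizer.from_pretrained(STUDENT_NAME)
student = AutoModelForCausalLM.from_pretrained(STUDENT_NAME)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER_NAME).eval()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def on_policy_distill_step(prompt: str, max_new_tokens: int = 128) -> float:
    """Sample a trajectory from the student, then minimize the per-token
    reverse KL(student || teacher) on the student's own tokens."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    prompt_len = prompt_ids.shape[1]

    # 1) On-policy rollout: the trajectory comes from the student itself.
    with torch.no_grad():
        rollout = student.generate(prompt_ids, do_sample=True,
                                   max_new_tokens=max_new_tokens)

    # 2) Score the rollout with both models. Logits at position i predict
    #    token i+1, so slice the positions that predict generated tokens.
    student_logits = student(rollout).logits[:, prompt_len - 1:-1]
    with torch.no_grad():
        teacher_logits = teacher(rollout).logits[:, prompt_len - 1:-1]

    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)

    # 3) Dense per-token reverse KL: the teacher grades every token the
    #    student generated, not just the final answer.
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the trajectory is sampled by the student itself, every token receives a dense teacher signal about the student's own mistakes, which is exactly the combination of on-policy feedback and SFT-like reward density that the summary above attributes to the method.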