Supervised Fine-Tuning (SFT)
How can RL improve the generalization of embodied VLA large models? A Tsinghua University team's NeurIPS 2025 paper analyzes the generalization gap between RL and SFT
机器之心· 2025-10-12 02:41
Core Insights
- The article discusses the potential of Vision-Language-Action (VLA) large models in embodied intelligence, highlighting how poorly current supervised fine-tuning (SFT) methods generalize to new environments and tasks, and emphasizing the advantages of Reinforcement Learning (RL) in enhancing the generalization of VLA models [2][4].

Group 1: Research Findings
- A new evaluation benchmark was created to address the limited generalization of VLA models, comparing how RL and SFT improve model robustness across visual, semantic, and execution challenges [4].
- Experiments showed that RL algorithms such as Proximal Policy Optimization (PPO) significantly improved robustness in semantic understanding and task execution, while matching SFT in visually varied scenarios [4][11].

Group 2: RL Methodology
- The research team tested three RL algorithms: PPO, Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO). PPO outperformed DPO and GRPO on multi-step decision tasks, which the authors attribute to the partially observable Markov decision process (POMDP) nature of robotic tasks [9][11].
- To make PPO training on VLA models efficient, three key innovations were introduced: a shared Actor-Critic architecture that reduces memory usage by 45% and increases training speed by 35%; a warm-up strategy using 140 high-quality trajectories that improves convergence speed by 50%; and limiting PPO training to a single epoch, which cuts training time significantly [13][15]. (A minimal sketch of the shared Actor-Critic idea follows this summary.)

Group 3: Comparison of SFT and RL
- The team probed the data-scaling limits of SFT and found performance saturating at around 16,000 demonstration trajectories. In contrast, RL achieved a 42.6% performance improvement on out-of-distribution tasks, indicating stronger generalization [18][19].
- A comprehensive evaluation benchmark was constructed to dissect the generalization gap between SFT and RL along visual, semantic, and execution dimensions; RL showed clear advantages in semantic understanding and execution robustness [21][23].

Group 4: Practical Implications
- The study underscores the core value of RL for building truly generalizable embodied agents, which matters increasingly as robotic applications grow more complex and variable. The team has open-sourced RLinf, a large-scale RL framework for embodied intelligence, to support further research [25].
- Case-level visual analysis revealed deeper differences: RL maintains task stability under noise and handles unseen objects effectively, whereas SFT tends to get stuck in repetitive actions [26].
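The shared Actor-Critic design mentioned above can be pictured as a single VLA backbone with two lightweight heads, so the critic reuses the actor's representation rather than duplicating the model. Below is a minimal PyTorch sketch of that idea; the class name, dimensions, and dummy backbone are illustrative assumptions, not the team's actual implementation.

```python
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    """Sketch: one shared trunk feeds both the policy (actor) and the value
    function (critic), so the critic adds only a lightweight head instead of
    a second copy of the model (hypothetical names and shapes)."""

    def __init__(self, backbone: nn.Module, hidden_dim: int, action_dim: int):
        super().__init__()
        self.backbone = backbone                             # shared VLA trunk (stand-in)
        self.actor_head = nn.Linear(hidden_dim, action_dim)  # action logits
        self.critic_head = nn.Linear(hidden_dim, 1)          # scalar state value

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)   # one forward pass serves both heads
        return self.actor_head(h), self.critic_head(h).squeeze(-1)

# Toy usage with a dummy MLP standing in for the VLA backbone.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
model = SharedActorCritic(backbone, hidden_dim=64, action_dim=7)
logits, value = model(torch.randn(4, 32))   # logits: (4, 7), value: (4,)
```

Because the trunk is shared, a single forward/backward pass serves both heads, which is plausibly where the reported memory and speed savings come from; the single-epoch PPO schedule then simply means each collected batch of trajectories is replayed once before new data is gathered.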
Demystifying how LLMs "think": reasoning as "gradient descent"; a meta-learning framework deconstructs the training process and offers new ideas for optimization
量子位· 2025-06-10 04:05
Core Insights
- The article introduces the Reasoning as Meta-Learning (RaML) framework, which aims to reveal how large language models (LLMs) "think" by drawing a parallel between reasoning and gradient-descent optimization [1][2].
- RaML posits that the reasoning trajectory an LLM generates while solving a problem acts as a form of implicit parameter update, leading to improved model performance [2][4].

Group 1: RaML Framework and Mechanism
- RaML's core insight is that an LLM's reasoning trajectory resembles a "pseudo-gradient descent" process, in which each reasoning step adjusts the model's internal state toward a better solution [2].
- The framework decomposes LLM training into two levels: "inner-loop optimization" on a specific task and "outer-loop optimization" that learns strategies across many tasks [8][9]. (A schematic formulation of this bi-level view follows this summary.)
- The study emphasizes that longer reasoning trajectories typically lead to better optimization outcomes, akin to running more iterations of a traditional optimization algorithm [14].

Group 2: Empirical Validation and Performance
- On the AIME24 dataset, the QwQ-32B model's confidence in the correct answer increases as the reasoning trajectory is decoded, supporting the view of reasoning as implicit parameter updates [3][4].
- A comparison of supervised fine-tuning (SFT) and reinforcement learning (RL) showed that SFT-trained models outperform RL-trained ones on mathematical benchmarks, highlighting the benefit of guided learning [10][12].

Group 3: Reflection Tokens and Optimization
- "Reflection" tokens in reasoning trajectories help the model reassess its outputs and improve performance by escaping local optima [15][17].
- Contrasting "thinking" and "non-thinking" modes indicates that forcing reasoning to terminate early leads to suboptimal solutions, similar to stopping gradient descent prematurely [18][20].

Group 4: Generalization and Meta-Learning
- LLMs trained on specific reasoning tasks can generalize to unseen tasks by leveraging universal features learned across a variety of problems [21][23].
- The RaML framework suggests practical strategies for improving training, such as increasing the number of reasoning trajectories per problem, analogous to expanding the support set in meta-learning [25].

Group 5: Future Directions and Efficiency
- The article proposes extracting shorter, equivalent optimization trajectories from longer reasoning paths to reduce decoding overhead while maintaining performance [27][30].
- Initial experiments show that summarizing long reasoning trajectories can achieve comparable results at significantly lower computational cost, pointing to a promising area for future research [30][31].

Conclusion
- The RaML framework offers a novel perspective on understanding LLM reasoning and training, revealing the connections between reasoning, meta-learning, and gradient descent [32].
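Read as optimization, the RaML analogy above can be written as a bi-level scheme: decoding a reasoning trajectory plays the role of inner-loop pseudo-gradient steps on the current problem, while training across many problems acts as the outer loop. The rendering below is schematic; the symbols (eta, g, L, D) are illustrative and not the paper's exact notation.

```latex
% Inner loop: the reasoning trajectory t_1, ..., t_K on problem q acts like
% K pseudo-gradient steps on an implicit, problem-specific objective.
\theta_q^{(k)} = \theta_q^{(k-1)} - \eta \, g\!\left(t_k; q\right),
\qquad k = 1, \dots, K, \qquad \theta_q^{(0)} = \theta

% Outer loop: training (SFT or RL) shapes the shared initialization \theta
% so that "optimization by reasoning" works across problems q drawn from \mathcal{D}.
\min_{\theta} \; \mathbb{E}_{q \sim \mathcal{D}}
\left[ \mathcal{L}\!\left(\theta_q^{(K)}; q\right) \right]
```

In this picture, longer trajectories correspond to a larger K (more inner steps), and reflection tokens correspond to steps that let the inner loop back out of a poor local optimum, which matches the article's observations about trajectory length and forced early termination.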