Reinforcement Learning (RL)
RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training
具身智能之心· 2025-10-22 06:02
Core Insights
- The article presents RLinf-VLA, a unified and efficient framework for training Vision-Language-Action (VLA) models with Reinforcement Learning (RL), addressing the limitations of existing models that rely on supervised fine-tuning [2][53]
- The framework significantly improves training efficiency and generalization, achieving high success rates across simulation tasks and outperforming traditional supervised methods in real-world applications [5][53]

Framework Design
- RLinf-VLA integrates multiple simulators, algorithms, and VLA architectures, optimizing resource allocation through flexible execution modes and system-level enhancements [4][53]
- It supports three GPU allocation strategies: colocated, disaggregated, and hybrid; users can switch modes via configuration files, reducing system customization costs [10][11]

Model Compatibility
- The framework supports LoRA for parameter-efficient tuning, reducing memory consumption and accelerating training while maintaining performance [12]
- It is compatible with OpenVLA and its extension OpenVLA-OFT, both of which have shown strong performance on a range of robotic manipulation benchmarks [12][22]

Multi-Simulator Support
- The framework emphasizes the role of simulators in RL, using ManiSkill and LIBERO as primary simulators to cover diverse tasks [13]
- It provides a unified interface across simulators, easing task implementation and supporting multiple RL algorithms, initially PPO and GRPO [13][14]

Algorithm Design
- The framework incorporates advanced techniques for advantage and log-probability computation, allowing flexible combination of block-level and action-level definitions [14][15]
- It supports optimization strategies such as trajectory length normalization and effective action masking to improve training stability and performance (a minimal sketch follows this summary) [19][20]

Experimental Results
- RLinf-VLA delivered large performance gains, with success rates rising by 45% to 70% on various tasks compared with baseline models [22][24]
- On LIBERO tasks the framework reached an average success rate of 98.11%, demonstrating its capacity for large-scale multi-task reinforcement learning [28]

High Efficiency Performance
- Efficiency is evaluated by throughput, with substantial training-speed improvements across different GPU configurations [30][35]
- The hybrid allocation mode outperformed conventional setups, demonstrating the benefit of pipeline overlap for resource utilization [35][37]

Real-World Deployment
- RLinf-VLA was deployed in real-world environments and showed stronger zero-shot generalization than supervised fine-tuning strategies [51][53]
- Experiments indicated that RL-trained models adapt better to real-world tasks, achieving higher success rates in object manipulation [51]

Conclusion
- RLinf-VLA represents a significant advance in embodied intelligence, providing a robust foundation for future research and development in VLA training [53]
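The Algorithm Design bullets mention GRPO-style advantage computation combined with trajectory length normalization and action masking. The snippet below is a minimal, hypothetical sketch of how those two tricks are commonly wired together; the function name, tensor shapes, and normalization details are assumptions, not RLinf-VLA's actual implementation.

```python
import torch

def masked_grpo_advantages(rewards, action_mask, eps=1e-8):
    """Group-relative advantages with trajectory-length normalization.

    rewards:     (G,) scalar return per trajectory in a GRPO group.
    action_mask: (G, T) 1.0 for valid (executed) action tokens, 0.0 for padding.
    Returns per-token advantages of shape (G, T); padded positions get 0.
    """
    # Group-relative baseline: normalize returns within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + eps)          # (G,)

    # Broadcast the trajectory-level advantage to every valid action token.
    adv_tok = adv[:, None] * action_mask                              # (G, T)

    # Trajectory-length normalization: divide by each trajectory's valid length
    # so long rollouts do not dominate the policy-gradient estimate.
    lengths = action_mask.sum(dim=1, keepdim=True).clamp(min=1.0)     # (G, 1)
    return adv_tok / lengths


if __name__ == "__main__":
    g, t = 4, 6
    rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
    mask = torch.ones(g, t)
    mask[1, 4:] = 0.0  # second rollout terminated early
    print(masked_grpo_advantages(rewards, mask))
```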
GPT-5 ≈ o3.1! OpenAI Explains Its Thinking Mechanism in Detail for the First Time: RL + Pre-training Is the Right Path to AGI
量子位· 2025-10-20 03:46
Core Insights
- The article discusses the evolution of OpenAI's models, focusing on GPT-5 as an iteration of the o3 model and a significant advance in AI capabilities [1][4][23]

Model Evolution
- Jerry Tworek, OpenAI's VP of Research, views GPT-5 as an iteration of o3, emphasizing the need for a model that can think longer and interact autonomously with multiple systems [4][23]
- The transition from o1 to o3 marked a structural change in AI development, with o3 being the first truly useful model able to use tools and contextual information effectively [19][20]

Reasoning Process
- The reasoning process of models like GPT-5 is likened to human thought, involving calculation, information retrieval, and self-learning [11]
- The concept of "thinking chains" has become prominent since the release of the o1 model, allowing models to articulate their reasoning in human language [12]
- Longer reasoning times generally yield better results, but user feedback shows a preference for quicker responses, leading OpenAI to offer models with varying reasoning times [13][14]

Internal Structure and Research
- OpenAI's internal structure combines top-down and bottom-up approaches, concentrating on a few core projects while giving researchers freedom within them [31][33]
- The company advanced from o1 to GPT-5 in just one year thanks to its efficient operating structure and talented workforce [33]

Reinforcement Learning (RL)
- Reinforcement learning is central to OpenAI's models, with pre-training and RL combined to build effective AI systems [36][57]
- Jerry describes RL as training models through rewards and penalties, much like training a dog [37][38]
- DeepMind's introduction of Deep RL significantly advanced the field, leading to the development of meaningful intelligent agents [39]

Future Directions
- Jerry believes the future of AI lies in agents capable of independent thought on complex tasks, with a focus on aligning model behavior with human values [53][54]
- The path to AGI (Artificial General Intelligence) will require both pre-training and RL, with new components added over time [56][58]
Practice and Reflections from a Month of Intensive RL: How to Gain Points?
自动驾驶之心· 2025-10-19 23:32
Core Insights
- The article discusses recent advances and challenges in Reinforcement Learning (RL) for Vision-Language Models (VLMs), emphasizing the importance of foundational work and iterative improvement in achieving performance gains [2][4]

RL Goals
- The primary objectives for RL on VLMs are a 1-2 point overall gain over the SFT model version and gains exceeding 1-2 points on specific benchmarks such as mathematics and instruction following [5]

RL Overall Approach
- The essence of RL is to improve sampling efficiency rather than teach the base model new knowledge; given unlimited attempts, the base model can outperform the RL model in the probability of producing a correct response [7][8]

Challenges in VLM RL
- Key challenges include choosing an efficient RL algorithm, high infrastructure requirements, and RL's sensitivity to data quality and organization [10][12]

Data Organization
- Effective data organization is crucial, requiring a balanced task mix and high-quality inputs; output length is also closely tied to the RL algorithm used, so training-data characteristics must be considered carefully [13][14]

Key Findings and Conclusions
- Short responses hurt training effectiveness, and it is essential to construct response pairs with a clear distinction between acceptable and rejectable outputs (a minimal pair-construction sketch follows this summary); meticulous data checking matters, and there is no "silver bullet" [19][24]
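The final bullet stresses building clearly separated accept/reject response pairs. Below is a minimal, hypothetical sketch of filtering sampled responses into preference pairs by reward margin and response length; the thresholds, field names, and scoring scheme are assumptions rather than the article's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    response: str
    reward: float  # e.g. verifier or rubric score in [0, 1]

def build_preference_pairs(samples_by_prompt, min_margin=0.5, min_len=20):
    """Keep only pairs whose chosen/rejected gap is unambiguous."""
    pairs = []
    for prompt, samples in samples_by_prompt.items():
        ranked = sorted(samples, key=lambda s: s.reward, reverse=True)
        chosen, rejected = ranked[0], ranked[-1]
        # Discard pairs with a fuzzy reward gap or degenerate short outputs,
        # both of which the article flags as harmful to training.
        if chosen.reward - rejected.reward < min_margin:
            continue
        if len(chosen.response) < min_len:
            continue
        pairs.append({"prompt": prompt,
                      "chosen": chosen.response,
                      "rejected": rejected.response})
    return pairs
```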
A GPT-5 Core Member Explains RL in Detail: Pre-training Leads to AGI Only When Combined with RL
海外独角兽· 2025-10-18 12:03
Core Insights
- The article discusses the limitations of current large language models (LLMs) and argues that reinforcement learning (RL) is a more viable path toward artificial general intelligence (AGI) [2][3][50]
- It highlights the interplay between pre-training and RL, suggesting both are essential for developing advanced AI systems [16][50]

Group 1: Reinforcement Learning (RL) Insights
- Richard Sutton argues that the current LLM approach, which relies primarily on imitation, has fundamental flaws and is a "dead end" for achieving AGI, whereas RL lets models interact with their environment and learn from experience [2]
- Andrej Karpathy points out that traditional RL is inefficient and that future intelligent systems will not rely on RL alone [2]
- Jerry Tworek emphasizes that RL must be built on strong pre-training, and that the two processes are interdependent [3][16]

Group 2: Reasoning and Thought Processes
- Reasoning in AI is likened to human thinking: models must search for unknown answers rather than simply retrieve known ones [7][9]
- The concept of "chain of thought" (CoT) is introduced, in which language models express their reasoning steps in human language, improving their ability to solve complex problems [10][11]
- Balancing output quality against response time is crucial: longer reasoning generally yields better results, but users prefer quicker responses [12][13]

Group 3: Model Development and Iteration
- The evolution of OpenAI's models is described as a series of scaling experiments aimed at improving reasoning, with each iteration building on the last [13][15]
- The transition from the initial model (o1) to more advanced versions (o3 and GPT-5) reflects significant advances in reasoning and tool use [15][16]
- Integrating RL with pre-training is seen as a necessary strategy for developing more capable AI systems [16][19]

Group 4: Challenges and Future Directions
- RL is complex, requiring careful management of rewards and penalties to train models effectively [20][33]
- Online RL, in which models learn in real time from user interactions, is discussed as promising but carrying risks that must be managed [36][38]
- Achieving alignment, ensuring models understand right from wrong, remains a critical challenge in AI development [39][47]
How Much Innovation Do AI Agents Really Bring?
自动驾驶之心· 2025-10-18 04:00
Core Insights
- The article discusses the current limitations and challenges of AI agent technologies, particularly compared with traditional task bots, noting that the user experience has not improved significantly over the past decade [1][2]

Group 1: Planning Challenges
- The planning phase is time-consuming, and as the number of tools grows, the accuracy of turbo-class models declines, forcing the use of flagship models and further increasing latency [2][5]
- Planning quality is insufficient: workflows generated by models are less effective than those designed by humans, particularly in complex scenarios [2][8]
- The core cause of slow planning is underestimating the cost of tool discovery and parameter alignment, which turns dynamic tool selection into a hard optimization problem [5][21]

Group 2: Reflection Issues
- Reflection can spiral into self-reinforcing inefficiency because models lack fine-grained computable signals and clear stopping conditions [3][15]
- Current models rely on weak feedback mechanisms, which can reinforce incorrect assumptions rather than correct errors [15][20]
- Proposed solutions include structured reflection processes that let models learn from mistakes and improve via reinforcement learning [18][20]

Group 3: Engineering Solutions
- Planning quality can be improved by decomposing plans into milestones and local prompts, which improves stability and reusability [8][10]
- Executing tasks in parallel can cut overall processing time, with evidence of a 20% reduction for non-dependent tool calls (a minimal parallel-call sketch follows this summary) [6][21]
- Routing strategies streamline execution by sending simple tasks to specialized executors and reserving complex planning for stronger reasoning models [6][21]

Group 4: Future Directions
- The article emphasizes combining reinforcement learning with agent models to strengthen their reasoning and execution, indicating a trend toward end-to-end learning approaches [20][21]
- AI agents are seen as a potentially valuable real-world application of large language models (LLMs), with continued improvement expected as models evolve [21]
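The engineering point about running non-dependent tool calls in parallel can be illustrated with a short asyncio sketch. The tool functions and timings below are hypothetical placeholders; the point is only that independent calls are awaited together rather than sequentially.

```python
import asyncio

# Hypothetical tools: each simulates an independent, I/O-bound tool call.
async def search_web(query: str) -> str:
    await asyncio.sleep(1.0)          # stand-in for network latency
    return f"results for {query!r}"

async def query_database(sql: str) -> str:
    await asyncio.sleep(1.0)
    return f"rows for {sql!r}"

async def run_plan_step() -> list[str]:
    # The two calls have no data dependency, so launch them concurrently;
    # wall-clock time is roughly max(latencies) instead of their sum.
    return await asyncio.gather(
        search_web("agent planning latency"),
        query_database("SELECT * FROM tools"),
    )

if __name__ == "__main__":
    print(asyncio.run(run_plan_step()))
```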
NeurIPS 2025 | Tsinghua Team Analyzes How RL Will Improve VLA Generalization
具身智能之心· 2025-10-15 04:00
Core Insights
- The article discusses the potential of Vision-Language-Action (VLA) models in embodied intelligence and the limitations of current supervised fine-tuning (SFT) methods in achieving human-like generalization, emphasizing the advantages of Reinforcement Learning (RL) in enhancing the generalization of VLA models [1][3]

Group 1: Research Findings
- A new evaluation benchmark was created to address the limited generalization of VLA models, comparing how RL and SFT improve model robustness across visual, semantic, and execution challenges [3][19]
- Experiments revealed that RL algorithms such as Proximal Policy Optimization (PPO) significantly improved robustness in semantic understanding and task execution, while matching SFT in visually varied scenarios [3][12]

Group 2: Methodology
- The research used the open-source OpenVLA model, fine-tuned from Llama2-7b, with RGB images and action tokens for robotic control [6]
- Three RL methods were tested: PPO, Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO), with PPO showing clear advantages on multi-step decision tasks [8][15]

Group 3: PPO Training Innovations
- The research team proposed three key innovations for efficient PPO training (a minimal sketch of the shared Actor-Critic head follows this summary):
1. A shared Actor-Critic architecture that reduced memory usage by 45% and improved training speed by 35% [12][14]
2. A warm-up strategy using 140 high-quality trajectories that improved convergence speed by 50% [14]
3. Reducing PPO training to a single epoch, which was sufficient for performance without increasing training time [14]

Group 4: Comparison of SFT and RL
- SFT performance plateaued at around 16,000 demonstration trajectories, while RL achieved a 42.6% performance improvement on out-of-distribution tasks, indicating superior generalization [17][18]
- A comprehensive evaluation benchmark was developed to dissect the generalization differences between SFT and RL across visual, semantic, and execution dimensions [19][21]

Group 5: Practical Implications
- The research underscores the core value of RL in building truly generalizable embodied agents, which is increasingly important as robotic applications become more complex and varied [25]
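The shared Actor-Critic innovation can be sketched as a single backbone feeding two lightweight heads. The module below is an illustrative PyTorch skeleton under assumed dimensions; it is not the paper's actual OpenVLA integration.

```python
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    """One shared backbone feeds both the policy (action-token logits)
    and the value head, so the critic adds almost no extra memory."""

    def __init__(self, hidden_dim: int = 4096, n_action_tokens: int = 256):
        super().__init__()
        # Stand-in for the VLA backbone (e.g. a transformer trunk).
        self.backbone = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
        )
        self.policy_head = nn.Linear(hidden_dim, n_action_tokens)  # actor
        self.value_head = nn.Linear(hidden_dim, 1)                 # critic

    def forward(self, features: torch.Tensor):
        h = self.backbone(features)               # (B, hidden_dim)
        logits = self.policy_head(h)              # (B, n_action_tokens)
        value = self.value_head(h).squeeze(-1)    # (B,)
        return logits, value

if __name__ == "__main__":
    model = SharedActorCritic()
    logits, value = model(torch.randn(2, 4096))
    print(logits.shape, value.shape)
```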
How Will RL Improve the Generalization of Embodied VLA Models? A Tsinghua University Team's NeurIPS 2025 Paper Analyzes the Generalization Gap Between RL and SFT
机器之心· 2025-10-12 02:41
Core Insights
- The article discusses the potential of Vision-Language-Action (VLA) large models in embodied intelligence, highlighting the limitations of current supervised fine-tuning (SFT) methods in generalizing to new environments and tasks, and emphasizing the advantages of Reinforcement Learning (RL) in enhancing the generalization of VLA models [2][4]

Group 1: Research Findings
- A new evaluation benchmark was created to address the limited generalization of VLA models, comparing how RL and SFT improve model robustness across visual, semantic, and execution challenges [4]
- Experiments revealed that RL algorithms such as Proximal Policy Optimization (PPO) significantly improved robustness in semantic understanding and task execution, maintaining performance comparable to SFT in visually varied scenarios [4][11]

Group 2: RL Methodology
- Three RL algorithms were tested: PPO, Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO). PPO outperformed DPO and GRPO on multi-step decision tasks because robotic tasks have partially observable Markov decision process (POMDP) characteristics [9][11]
- Three key innovations made PPO training on VLA models efficient: a shared Actor-Critic architecture that reduced memory usage by 45% and increased training speed by 35%; a warm-up strategy using 140 high-quality trajectories that improved convergence speed by 50%; and reducing PPO training to a single epoch, which cut training time significantly (a minimal warm-up sketch follows this summary) [13][15]

Group 3: Comparison of SFT and RL
- The research explored the data-scale limits of SFT, finding that performance saturated at around 16,000 demonstration trajectories; in contrast, RL achieved a 42.6% performance improvement on out-of-distribution tasks, indicating superior generalization [18][19]
- A comprehensive evaluation benchmark was constructed to dissect the generalization differences between SFT and RL across visual, semantic, and execution dimensions, with RL showing clear advantages in semantic understanding and execution robustness [21][23]

Group 4: Practical Implications
- The study underscores the core value of RL in developing truly generalizable embodied agents, and the team has open-sourced RLinf, a large-scale RL framework for embodied intelligence, to facilitate further research [25]
- Case-level visual analysis revealed deeper differences, such as RL maintaining task stability under noise and handling unseen objects effectively, in contrast with SFT's tendency to get stuck in repetitive actions [26]
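The warm-up step, initializing the policy on a small set of high-quality trajectories before PPO starts, is essentially a short behavior-cloning phase. The loop below is a hypothetical minimal sketch; the dataset format, optimizer settings, and policy interface are assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def warm_up_policy(policy, demo_loader, epochs: int = 3, lr: float = 1e-5):
    """Behavior-cloning warm-up on expert trajectories before PPO starts.

    policy:      a model returning action-token logits of shape (B, T, V)
    demo_loader: yields (observations, action_token_ids) from demo trajectories
    """
    optimizer = torch.optim.AdamW(policy.parameters(), lr=lr)
    policy.train()
    for _ in range(epochs):
        for obs, action_ids in demo_loader:
            logits = policy(obs)                       # (B, T, V)
            loss = F.cross_entropy(                    # imitate expert actions
                logits.reshape(-1, logits.size(-1)),
                action_ids.reshape(-1),
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy
```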
Heard Everyone Is Going All-In on Post-Training? The Best Guide Is Here
机器之心· 2025-10-09 02:24
Core Insights
- The article emphasizes the shift in focus from pre-training to post-training in large language models (LLMs), highlighting the diminishing returns of scaling laws as model sizes reach hundreds of billions of parameters [2][3][11]

Group 1: Importance of Post-Training
- Post-training is recognized as a crucial phase for enhancing the reasoning capabilities of models such as OpenAI's model series, DeepSeek R1, and Google Gemini, marking it as a necessary step toward advanced intelligence [3][11]
- The article introduces innovative post-training methods such as Reinforcement Learning from Human Feedback (RLHF), Reinforcement Learning from AI Feedback (RLAIF), and Reinforcement Learning with Verifiable Rewards (RLVR) [2][3][12]

Group 2: Transition from Pre-Training to Post-Training
- Foundational models are pre-trained on large datasets to predict the next token but often lack practical utility in real-world applications, which motivates instruction fine-tuning [7][8]
- Post-training aims to align model behavior with user expectations, favoring quality over quantity: its datasets are typically smaller but more curated than pre-training corpora [11][24]

Group 3: Supervised Fine-Tuning (SFT)
- Supervised Fine-Tuning (SFT) transforms a pre-trained model into one that can follow user instructions effectively, relying on high-quality instruction-answer pairs [21][24]
- The quality of the SFT dataset is critical: even a small number of low-quality samples can hurt model performance [25][26]

Group 4: Reinforcement Learning Techniques
- Reinforcement Learning (RL) is a complex yet effective method for model fine-tuning, with reward mechanisms such as RLHF, RLAIF, and RLVR used to enhance model performance [39][41]
- Reward models are central to RLHF; they are trained on human preference data to guide model outputs (a minimal sketch of the pairwise reward-model loss follows this summary) [44][46]

Group 5: Evaluation of Post-Training Models
- Evaluating post-trained models is multifaceted, requiring a combination of automated and human assessments to capture various quality aspects [57][58]
- Automated evaluations are cost-effective and quick, while human evaluations provide a more subjective quality measure, especially for nuanced tasks [59][60]
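The RLHF reward model described above is typically trained with a pairwise (Bradley-Terry) loss over human preference data. The snippet below is a generic sketch of that loss, not code from the guide; the reward-model interface and toy inputs are assumptions.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_model, chosen_inputs, rejected_inputs):
    """Bradley-Terry objective: the chosen response should score higher
    than the rejected one for the same prompt."""
    r_chosen = reward_model(chosen_inputs)      # (B,) scalar rewards
    r_rejected = reward_model(rejected_inputs)  # (B,)
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    # Toy reward model over pre-computed feature vectors, for illustration only.
    reward_model = torch.nn.Sequential(torch.nn.Linear(16, 1), torch.nn.Flatten(0))
    chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)
    print(pairwise_reward_loss(reward_model, chosen, rejected))
```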
The R1 Paper Penned by Liang Wenfeng Lands on the Cover of Nature! A First Response to Three Major Outside Doubts
AI前线· 2025-09-18 02:28
Core Viewpoint
- The article highlights the breakthrough of DeepSeek's AI model, DeepSeek-R1, which has passed peer review and is recognized as the first large language model to achieve this milestone, marking a notable advance for domestic AI research on the global stage [3][8]

Summary by Sections

Model Development and Features
- DeepSeek-R1 uses reinforcement learning (RL) to develop reasoning capabilities without relying on extensive human-annotated data, a novel approach to AI model training [3][12]
- The model was built on DeepSeek-V3 Base, rewarding correct predictions to encourage longer and more logical responses (a minimal reward-rule sketch follows this summary) [3][12]
- Training DeepSeek-R1 cost approximately $294,000, significantly less than competitors that often spend tens of millions [6][12]

Peer Review Process
- The peer review process for DeepSeek-R1 involved eight external experts over five months, producing a review document three times the length of the original paper [9][12]
- The review addressed originality, methodology, and robustness, leading to improvements in the final published version [9][12]

Data and Safety Measures
- The pre-training data for DeepSeek-V3 Base was sourced entirely from the internet, with substantial cleaning to avoid contamination, removing around 6 million potentially polluted samples [6][12]
- DeepSeek-R1 has implemented external risk-control mechanisms and real-time audits, demonstrating stronger safety performance than other mainstream models such as Claude-3.7-Sonnet and GPT-4o [6][12]

Impact and Future Directions
- The innovative use of pure reinforcement learning in DeepSeek-R1 is expected to influence future research on large language models, with many researchers looking to apply similar methods to enhance reasoning across various domains [12][14]
- Despite some concerns about the transparency of the training-data composition, the model has shown competitive performance in balancing accuracy and cost on scientific task challenges [14][12]
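The "reward correct predictions" idea corresponds to a rule-based, verifiable reward: answers that can be checked programmatically earn 1.0, everything else earns 0.0. The function below is a generic illustration of that pattern, not DeepSeek's actual reward implementation; the answer-extraction format and regex are assumptions.

```python
import re

def verifiable_reward(model_output: str, reference_answer: str) -> float:
    """Rule-based reward: 1.0 if the flagged final answer matches the
    reference after normalization, else 0.0. Format and regex are illustrative."""
    match = re.search(r"<answer>(.*?)</answer>", model_output, re.DOTALL)
    if match is None:
        return 0.0                      # no parseable answer -> no reward
    prediction = match.group(1).strip().lower()
    return 1.0 if prediction == reference_answer.strip().lower() else 0.0

if __name__ == "__main__":
    out = "Reasoning... therefore <answer>42</answer>"
    print(verifiable_reward(out, "42"))   # 1.0
```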
Chinese-Founded AI Recruiting Startup Tops $10M ARR in Two Years; Mercor's Annualized Revenue Has Reached $500M
投资实习所· 2025-09-16 05:38
Core Insights
- The article discusses the shift in demand from generalist AI tutors to specialist AI tutors across fields such as STEM, finance, medicine, and security, as evidenced by Mercor's rapid growth [2][6]
- Mercor's annual revenue run rate has surged from $1 million to $500 million in just 17 months, indicating a sharp acceleration in growth [2][3]
- The emergence of new knowledge-based jobs focused on training AI agents is highlighted, suggesting a transformation of the job market [3][12]

Group 1: Industry Trends
- Overall demand in the AI industry is shifting toward specialized roles, reflecting a broader trend of the economy becoming a reinforcement learning environment [2][6]
- Fear of job losses from technological advances is contrasted with the creation of new job categories, particularly in training AI agents [6][12]
- Historically, previous technological revolutions also led to new job categories despite initial fears of unemployment [6][12]

Group 2: Company Performance
- Mercor's reported annual revenue run rate of over $500 million reflects rapidly growing demand for AI recruitment services [2][3]
- The company currently pays out over $1 million per day to experts across various fields, indicating a robust recruitment model [3][14]
- Mercor positions itself as an AI recruitment platform supplying talent to AI companies, particularly for reinforcement learning work [14][15]

Group 3: Future of Work
- The future of work is expected to center on training AI agents, with a significant market for human labor in creating and validating AI training environments [11][12]
- As long as there are tasks humans can perform but AI cannot, there will be a need for human involvement in training and evaluation [11][12]
- The article introduces the concept of an "experience era," in which models learn to optimize rewards in real-world scenarios, requiring human feedback and guidance [13]