Online Reinforcement Learning
New NAVSIM SOTA! Masked Diffusion, a new end-to-end autonomous driving framework
自动驾驶之心· 2025-12-26 03:32
Source | 机器之心. A republication of the 机器之心 article on WAM-Diff, the masked-diffusion end-to-end driving framework from Fudan University and Yinwang Intelligence; see the 机器之心 entry below for the full summary.
New NAVSIM SOTA: Fudan proposes a new end-to-end autonomous driving framework
具身智能之心· 2025-12-26 00:55
Edited by | 机器之心. Another republication of the same 机器之心 article on WAM-Diff, which introduces discrete masked diffusion into VLA-based end-to-end driving planning; see the 机器之心 entry below for the full summary.
New NAVSIM SOTA: Fudan and Yinwang propose Masked Diffusion, a new end-to-end autonomous driving framework
机器之心· 2025-12-25 03:12
With the rise of VLA (Vision-Language-Action) models, end-to-end autonomous driving is shifting from a modular paradigm toward a unified one. However, once perception, reasoning, and planning are compressed into a single model, the mainstream auto-regressive generation paradigm starts to show its limits. Auto-regressive models are forced into a left-to-right temporal generation order, which differs fundamentally from how human drivers think: in complex traffic, experienced drivers tend to work backward from the goal, first fixing a long-horizon driving intention (merging onto a ramp, yielding to a pedestrian, pulling over) and only then deriving the immediate short-term control actions. In addition, imitation-learning models easily fall into the "average driver" trap: they fit the mean of the data distribution, yielding mediocre policies that struggle to switch flexibly between assertive negotiation and conservative avoidance.

To address these pain points, Fudan University and Yinwang Intelligence (引望) jointly propose the WAM-Diff framework. The work introduces discrete masked diffusion into VLA driving planning and combines it with a sparse Mixture-of-Experts (MoE) architecture and online reinforcement learning (GSPO), building a generative planning system that is no longer bound to one-directional temporal generation. On the widely used NAVSIM benchmark, WAM-Diff delivers excellent performance; on NAVSIM-v1 ...
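As a rough illustration of how a discrete masked-diffusion planner can decode trajectory tokens without a left-to-right order, the sketch below fills all masked positions in parallel and re-masks the least confident ones under a cosine schedule (MaskGIT-style confidence-based unmasking). The `model` callable, mask token id, and schedule are assumptions for illustration; this is not WAM-Diff's implementation.

```python
# Minimal sketch of confidence-based iterative unmasking for a discrete
# masked-diffusion planner. `model(tokens, cond)` is assumed to return
# per-position logits over a trajectory-token vocabulary.
import math
import torch

def masked_diffusion_decode(model, cond, seq_len, mask_id, steps=8):
    """Iteratively unmask a token sequence instead of decoding left to right."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)  # start fully masked
    for t in range(steps):
        logits = model(tokens, cond)                # (1, seq_len, vocab)
        conf, pred = logits.softmax(-1).max(-1)     # per-position confidence and argmax
        still_masked = tokens == mask_id
        tokens = torch.where(still_masked, pred, tokens)          # fill only masked slots
        # cosine schedule: how many positions to re-mask for the next round
        n_remask = int(seq_len * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_remask == 0:
            break
        conf = conf.masked_fill(~still_masked, float("inf"))      # never re-mask committed tokens
        remask_idx = conf[0].topk(n_remask, largest=False).indices
        tokens[0, remask_idx] = mask_id
    return tokens
```

Because every position can be revised until it is committed, long-horizon tokens (the "driving intention") and near-term tokens can be settled in whichever order the model is most confident about, rather than strictly in time order.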
HUST & Xiaomi jointly propose MindDrive: the first VLA framework to verify the effectiveness of online reinforcement learning ...
自动驾驶之心· 2025-12-17 00:03
Paper authors | Haoyu Fu et al. Editor | 自动驾驶之心. MindDrive, a new work from HUST and Xiaomi, proposes a VLA framework based on online reinforcement learning. It improves substantially over RecogDrive and ORION and performs well on a Qwen2-0.5B backbone. Current VLA work in autonomous driving relies mainly on imitation learning, which brings inherent challenges such as distribution shift and causal confusion. Online reinforcement learning, which learns through trial and error, offers a promising way to address these issues, but applying it to driving vision-language-action models runs into inefficient exploration in the continuous action space. To overcome this limitation, the HUST and Xiaomi team propose MindDrive, a vision-language-action framework built around a large language model (LLM) equipped with two distinct sets of LoRA parameters: one set turns the LLM into a decision expert responsible for scene reasoning and driving decisions, while the other acts as an action expert that dynamically maps language-level decisions into drivable trajectories. By feeding trajectory-level rewards back into the reasoning space, MindDrive can perform trial-and-error learning over a finite, discrete set of language driving decisions rather than directly in the continuous action ...
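The "two sets of LoRA parameters on one backbone" idea can be pictured with a toy layer that carries two named low-rank updates on a frozen base weight and switches between them per forward pass. This is only a minimal sketch of the general mechanism, with made-up layer sizes, rank, and adapter names; it is not MindDrive's code.

```python
# Toy dual-LoRA layer: one adapter plays the decision expert, the other the
# action expert, sharing one frozen backbone weight. Illustration only.
import torch
import torch.nn as nn

class DualLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, adapters=("decision", "action")):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)      # frozen backbone weight
        self.lora_A = nn.ParameterDict({a: nn.Parameter(torch.randn(rank, d_in) * 0.01)
                                        for a in adapters})
        self.lora_B = nn.ParameterDict({a: nn.Parameter(torch.zeros(d_out, rank))
                                        for a in adapters})
        self.active = adapters[0]

    def set_adapter(self, name):
        self.active = name

    def forward(self, x):
        delta = self.lora_B[self.active] @ self.lora_A[self.active]  # low-rank weight update
        return self.base(x) + x @ delta.T

layer = DualLoRALinear(512, 512)
x = torch.randn(2, 512)
layer.set_adapter("decision")   # scene reasoning / driving-decision pass
y_decision = layer(x)
layer.set_adapter("action")     # map the language decision to a trajectory
y_action = layer(x)
```

Only the active adapter's parameters receive gradients during a given pass, which is what lets one backbone serve two roles without the adapters interfering with each other.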
How does online reinforcement learning fine-tune π0 and π0.5, and why can it improve performance by more than 50%?
具身智能之心· 2025-11-10 03:30
Core Viewpoint
- The article discusses the introduction of the πRL framework, which enhances flow-based vision-language-action (VLA) models through online reinforcement learning (RL) fine-tuning, significantly improving their performance and generalization capabilities [5][7].

Group 1: Introduction to VLA Models
- VLA models enable robots to understand and execute complex tasks through multimodal inputs, but large-scale RL applications face challenges due to the difficulty in handling action log-likelihood during the iterative denoising process [5].

Group 2: πRL Framework
- The πRL framework, developed by teams from Tsinghua University and Peking University, addresses the challenges of applying large-scale RL to flow-based VLA models by training them in parallel simulations [6].

Group 3: RL Algorithms in πRL
- πRL implements two RL algorithms (a generic sketch of the ODE-to-SDE step follows after this summary):
  1. FlowNoise models the denoising process as a discrete-time Markov Decision Process (MDP) using a learnable noise network for precise log-likelihood calculations [7].
  2. Flow-SDE combines the denoising process with agent-environment interaction, constructing a dual-layer MDP that transitions from ODE to SDE for efficient RL exploration [7].

Group 4: Performance Evaluation
- In benchmark tests, πRL significantly improved the performance of few-shot SFT models π0 and π0.5 from 57.6% to 97.6% and from 77.1% to 98.3% on the LIBERO dataset, respectively [7].
- In the ManiSkill benchmark, πRL demonstrated scalable multi-task RL capabilities across 4,352 grasping and placing tasks using 320 parallel environments [7].

Group 5: Conclusion
- Overall, πRL shows substantial performance enhancements and stronger generalization compared to SFT models, validating the effectiveness of online RL in flow-based VLA models [7].
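As a generic illustration of the ODE-to-SDE idea named under Group 3 (injecting noise into each denoising step so that the step has a tractable Gaussian log-likelihood for RL), here is a minimal Euler-Maruyama-style sketch. The `velocity_net` callable and the fixed noise scale are assumptions; this is not πRL's FlowNoise or Flow-SDE code.

```python
# One stochastic denoising step: the mean follows the learned velocity field,
# and added noise both enables exploration and makes the per-step transition
# a Gaussian whose log-likelihood can be used by an RL objective.
import math
import torch

def sde_denoise_step(velocity_net, x, t, dt, sigma=0.1):
    v = velocity_net(x, t)                       # learned velocity field v(x, t)
    mean = x + v * dt
    std = sigma * math.sqrt(dt)
    noise = torch.randn_like(x)
    x_next = mean + std * noise                  # the stochastic "action" taken at this step
    # Gaussian log-likelihood of the sampled transition, summed over action dims
    log_prob = (-0.5 * ((x_next - mean) / std) ** 2
                - math.log(std) - 0.5 * math.log(2 * math.pi)).sum(dim=-1)
    return x_next, log_prob
```

Chaining such steps gives a trajectory of (state, action, log-prob) tuples, which is exactly the interface a policy-gradient method needs from the denoising process.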
How Figma beat Adobe, and five other pieces | 42章经 AI Newsletter
42章经· 2025-10-26 13:42
Group 1: Figma vs Adobe
- Figma's success is attributed to its focus on "collaboration" as a core feature, contrasting with Adobe's file-centric approach [2][3]
- Adobe's collaboration is based on file transfer, while Figma allows real-time editing on a shared canvas, enabling true synchronous collaboration [3]
- Existing giants like Adobe struggle to adapt due to their historical success paths and internal resistance to change [3]

Group 2: Online Reinforcement Learning
- Cursor's use of online reinforcement learning (RL) optimizes its code completion feature, Tab, by treating user interactions as feedback signals for real-time training (a generic sketch of this feedback loop follows after this list) [6][10]
- The model's suggestion volume has decreased by 21%, while the acceptance rate has increased by 28%, indicating improved performance [6]

Group 3: Plaud's Success
- Plaud's success is rooted in recognizing the value of context, viewing conversations as a form of intelligence and a significant data source [12][14]
- The company designs its hardware and software to effectively capture and analyze user context, positioning itself as a context collector rather than just a recording device [15]
- Plaud's approach emphasizes a "reverse thinking" strategy, focusing on how AI can serve users by prompting them for context rather than the other way around [16][18]

Group 4: Creating Delight in Products
- Delight in products is defined as a combination of joy and surprise, with three main strategies: exceeding expectations, anticipating needs, and removing friction [25][27]
- A systematic approach to creating delight involves redefining user categories based on motivations, transforming those motivations into opportunities, and ensuring that delight becomes an organizational capability [28][30]

Group 5: Evaluating AI Product Retention
- A16Z suggests that AI companies should measure retention starting from the third month (M3) to better understand their true user base, as early data may include many transient users [34][35]
- The new metric M12/M3 is proposed to assess long-term retention quality, indicating how many users remain after a year compared to the third month [36][39]

Group 6: Palantir's FDE Model
- The Forward Deployed Engineer (FDE) model involves engineers embedded at client sites to bridge the gap between product capabilities and client needs, focusing on product exploration [42][46]
- FDE teams consist of Echo (consulting analysts) and Delta (deployment engineers), each with distinct roles to ensure effective client engagement and product development [49][50]
- The FDE model is particularly relevant in the AI era, where high-value contracts justify deep client integration and where product-market fit is often unclear [53][54]
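To make the "user interactions as feedback signals" idea in Group 2 concrete, here is a deliberately generic sketch of an online policy update driven by accept/reject rewards: a tiny logistic show-or-not policy trained with REINFORCE. It is not Cursor's system; every feature, name, and number is made up.

```python
# Generic online update from accept/reject feedback (illustration only).
import math
import random

weights = [0.0, 0.0]        # tiny linear policy over two context features
lr = 0.1

def show_probability(features):
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))          # probability of showing a suggestion

def online_update(features, shown, accepted):
    """One REINFORCE step; for simplicity we only update when a suggestion was shown."""
    if not shown:
        return
    reward = 1.0 if accepted else -1.0         # accept -> +1, reject -> -1
    p = show_probability(features)
    grad = [(1.0 - p) * f for f in features]   # d log pi(show) / d w for a logistic policy
    for i, g in enumerate(grad):
        weights[i] += lr * reward * g

# Simulated interaction stream: feature[0] correlates with acceptance.
for _ in range(200):
    features = [random.uniform(-1, 1), random.uniform(-1, 1)]
    shown = random.random() < show_probability(features)
    accepted = features[0] > 0                 # pretend users accept when feature[0] is positive
    online_update(features, shown, accepted)
print(weights)   # the weight on feature[0] drifts positive as acceptances accumulate
```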
AI "learns by doing" with online reinforcement learning: a Stanford team sends a small 7B model's performance soaring, even past GPT-4o
36氪· 2025-10-24 12:45
Core Insights
- AgentFlow introduces a new paradigm for online reinforcement learning, enhancing the reasoning capabilities of agent systems through real-time optimization and collaboration among specialized agents [1][11][14].

Performance Metrics
- AgentFlow, based on the Qwen-2.5-7B-Instruct model, shows significant improvements across various benchmark tests: 14.9% in search tasks, 14.0% in agentic reasoning tasks, 14.5% in mathematical reasoning, and 4.1% in scientific reasoning [4][19][21].
- The performance of AgentFlow surpasses that of larger models, including GPT-4o and Llama3.1-405B, demonstrating that effective system design can outperform sheer model size [21][25].

System Architecture
- The architecture of AgentFlow consists of four specialized agents: a planner for task analysis and tool selection, an executor for tool invocation, a verifier for evaluating intermediate results, and a generator for synthesizing final outputs (a schematic sketch of this loop follows after this summary) [11][13][14].
- The system employs a shared memory design that facilitates collaboration and reduces error propagation in multi-step reasoning processes [7][14].

Learning Mechanism
- The on-policy optimization of the planner within the agent interaction flow is crucial for adapting to environmental changes and feedback, leading to a robust and self-evolving reasoning process [13][14][22].
- The Flow-GRPO algorithm addresses the challenges of multi-turn credit assignment in reinforcement learning, enhancing training efficiency and stability in complex reasoning tasks [15][19].

Research Findings
- The study reveals that online learning in real interaction environments is essential for achieving efficient reasoning, as opposed to offline supervised learning, which can lead to performance declines [22][25].
- AgentFlow's training allows the system to autonomously discover new tool combinations and usage patterns, enhancing its problem-solving capabilities [25][29].

Future Implications
- AgentFlow represents a shift from seeking a single comprehensive model to enabling agents to adapt and learn continuously within a system, highlighting the potential of collaborative intelligence in addressing complex tasks [29].
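The planner-executor-verifier-generator loop with shared memory described under System Architecture can be sketched schematically as below. The stub agents, stopping rule, and turn limit are placeholders for illustration, not AgentFlow's actual implementation.

```python
# Schematic planner / executor / verifier / generator loop with shared memory.
from typing import Callable, Dict, List

def agent_flow_step(query: str,
                    planner: Callable, executor: Callable,
                    verifier: Callable, generator: Callable,
                    max_turns: int = 5) -> str:
    memory: List[Dict] = []                      # shared memory visible to every agent
    for turn in range(max_turns):
        plan = planner(query, memory)            # pick a sub-goal and a tool
        result = executor(plan)                  # call the chosen tool
        verdict = verifier(query, plan, result)  # judge the intermediate result
        memory.append({"plan": plan, "result": result, "verdict": verdict})
        if verdict == "sufficient":              # verifier decides when to stop
            break
    return generator(query, memory)              # synthesize the final answer

# Toy usage with stub agents (illustration only).
answer = agent_flow_step(
    "What is 2 + 2?",
    planner=lambda q, m: {"tool": "calculator", "input": "2 + 2"},
    executor=lambda plan: 4,
    verifier=lambda q, plan, r: "sufficient",
    generator=lambda q, m: f"The answer is {m[-1]['result']}.",
)
print(answer)   # -> "The answer is 4."
```

In this framing, only the planner is the trainable policy; the other roles shape the context and reward it learns from, which is why the summaries emphasize on-policy optimization of the planner specifically.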
AI "learns by doing" with online reinforcement learning: a Stanford team sends a small 7B model's performance soaring, even past GPT-4o
量子位· 2025-10-24 03:53
Core Insights
- The article discusses the introduction of AgentFlow, a new paradigm in online reinforcement learning that enhances the reasoning capabilities of intelligent systems, outperforming models like GPT-4o and Llama3.1-405B [1][4][23].

Group 1: AgentFlow Overview
- AgentFlow consists of a team of specialized agents including a planner, executor, verifier, and generator, which collaborate through shared memory to optimize decision-making in real-time [1][14][18].
- The Flow-GRPO method allows for on-policy optimization of the planner agent, enabling adaptive decision-making based on environmental changes and feedback from other agents (a generic sketch of the group-relative credit-assignment idea follows after this summary) [19][16].

Group 2: Performance Metrics
- AgentFlow, based on the Qwen-2.5-7B-Instruct model, shows significant improvements across various benchmark tests: 14.9% in search tasks, 14.0% in agentic reasoning, 14.5% in math reasoning, and 4.1% in scientific reasoning [3][25][27].
- The model's performance surpasses that of larger models, demonstrating that effective system design and training methods can be more impactful than simply increasing model size [27].

Group 3: Learning Mechanisms
- The article emphasizes the importance of "learning in the flow," indicating that online learning in real interactive environments is crucial for achieving efficient reasoning [28][29].
- AgentFlow's architecture allows for rapid error correction and improved task planning through real-time training, enhancing overall system performance [30][29].

Group 4: Innovations and Findings
- The system autonomously discovers new solution paths, such as combining different search tools to enhance information retrieval, showcasing its ability to adapt and innovate [33].
- AgentFlow maintains performance improvements without significantly increasing the average reasoning steps, indicating efficient handling of complex tasks [35].

Group 5: Future Implications
- The article concludes that AgentFlow presents a novel approach to intelligent agent training, advocating for systems that adapt and learn continuously rather than relying on a single comprehensive model [37][38].
- Despite the distance from research to practical application, the potential for Agentic AI remains significant, suggesting a promising future for intelligent systems [39].
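As a hedged illustration of the group-relative, trajectory-level credit assignment that GRPO-style methods such as Flow-GRPO build on, the snippet below normalizes each rollout's final reward against its sampling group and reuses that single advantage for every turn of the trajectory. The rewards and epsilon term are illustrative; this is not the Flow-GRPO implementation.

```python
# Group-relative advantages: one outcome reward per trajectory, normalized
# within the sampled group and broadcast to every turn of that trajectory.
from statistics import mean, pstdev
from typing import List

def group_relative_advantages(group_rewards: List[float]) -> List[float]:
    """Normalize each trajectory's final reward against its sampling group."""
    mu, sigma = mean(group_rewards), pstdev(group_rewards)
    return [(r - mu) / (sigma + 1e-8) for r in group_rewards]

# Four rollouts of the same query, each with one final outcome reward.
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)
# Every turn inside trajectory i reuses advantages[i] when weighting the
# planner's log-probabilities, which sidesteps per-turn credit assignment.
print(advantages)   # -> approximately [1.0, -1.0, 1.0, -1.0]
```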
A new paradigm for GUI agent training: semi-online reinforcement learning lets a 7B model rival GPT-4o
量子位· 2025-09-23 11:01
Core Viewpoint
- The article discusses the introduction of a new training paradigm called Semi-online Reinforcement Learning (Semi-online RL) by Zhejiang University and Tongyi Laboratory's Mobile-Agent team, which enhances the performance of models in dynamic multi-turn tasks without relying on real environment interactions [1][2][4].

Group 1: Methodology
- The Semi-online RL framework combines the stability of offline training with the long-term optimization capabilities of online learning, significantly improving model performance in dynamic tasks [2][10].
- The framework utilizes offline data to simulate online interactions, allowing the model to experience contextual changes from its own actions during training [12][15].
- A patching mechanism is introduced to adaptively correct sampling biases when the model deviates from expert trajectories, enhancing the learning process [17][19].

Group 2: Key Technologies
- The Semi-online RL framework consists of three core technologies:
  1. A semi-online mechanism that simulates online interactions using offline data [12].
  2. A Patching Module that adaptively repairs sampling biases [17].
  3. Long-term reward modeling that extends advantage estimation from the step level to the trajectory level (a generic sketch of this mixing follows after this summary) [20].

Group 3: Evaluation and Results
- The new evaluation metric SOP (Semi-online Performance) is proposed to better reflect the model's performance in multi-turn tasks, aligning closely with real online performance [22][23].
- Experimental results show that the UI-S1-7B model outperforms baseline models, achieving a task success rate of 34.0% in the AndroidWorld task, closely approaching the performance of top proprietary models [25][26].
- The model maintains a +7.1% gain in single-turn tasks, indicating that the semi-online training does not sacrifice local accuracy while optimizing for long-term performance [28].

Group 4: Component Analysis
- The patching mechanism significantly enhances data utilization and maintains training stability, allowing for effective error correction and promoting policy diversity [30][37].
- Ablation studies confirm that the combination of trajectory-level and step-level advantage functions, along with multi-frame historical observations, positively impacts the model's decision-making capabilities in complex GUI interactions [44].
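The "step-level to trajectory-level" reward modeling mentioned under Key Technologies can be illustrated with a generic mix of a discounted per-step return and a trajectory-level bonus shared by all steps, as below. The weights, rewards, and discount are assumptions for illustration, not the paper's exact formulation.

```python
# Generic mix of step-level and trajectory-level credit: each step gets a
# discounted return-to-go, plus one terminal bonus broadcast to every step.
from typing import List

def mixed_advantages(step_rewards: List[float], traj_reward: float,
                     gamma: float = 0.95, w_step: float = 0.5, w_traj: float = 0.5) -> List[float]:
    # step-level: discounted return-to-go from each step
    returns, g = [], 0.0
    for r in reversed(step_rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    # trajectory-level: one scalar shared by every step
    return [w_step * ret + w_traj * traj_reward for ret in returns]

# A 4-step GUI episode: small step rewards, task completed at the end.
print(mixed_advantages([0.1, 0.1, 0.0, 0.1], traj_reward=1.0))
```

The trajectory-level term rewards steps that contribute to eventual task success even when their immediate step reward is zero, which is the long-horizon behavior the multi-turn benchmarks are meant to capture.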
SOTA on two global leaderboards! Minglue Technology's proprietary model Mano opens a new era of intelligent GUI operation
机器之心· 2025-09-21 05:26
Core Viewpoint
- Minglue Technology's proprietary GUI model, Mano, has achieved record-breaking SOTA results in the recognized benchmarks Mind2Web and OSWorld, establishing a new paradigm for GUI intelligent agents through innovations in online reinforcement learning and automatic data collection [1][14][23].

Group 1: Performance Achievements
- Mano achieved a success rate of 40.1% in the OSWorld-Verified benchmark, surpassing other models such as Qwen and GUI-Owl [10][19].
- In the Mind2Web benchmark, Mano demonstrated superior performance across various metrics, including element accuracy and step success rate, significantly outperforming all other SOTA methods [18][15].
- The model's success rate in OSWorld-Verified reached 41.6±0.7%, marking an approximate 7 percentage point improvement over competitors [21][19].

Group 2: Innovations and Methodology
- Mano introduces online reinforcement learning as a novel training paradigm in the GUI interaction field, enhancing its performance in dynamic environments [22][23].
- The model's architecture consists of three main components: an exploration module, a processing flow, and an optimization process, which collectively improve its reasoning and adaptability (a schematic sketch of this loop follows after this summary) [25][26].
- The automatic data collection method developed by the technical team significantly enhances the efficiency and accuracy of data acquisition, allowing for the generation of high-quality interaction trajectory data [48][49].

Group 3: Market Context and Future Directions
- The demand for AI agents is expected to surge by 2025, positioning Mano as a key player in differentiated competition by accessing data sources that other agents cannot reach [59][63].
- Minglue Technology plans to continue exploring areas such as data collection, training integration, and CAPTCHA handling to further optimize Mano for real-world applications [66].
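The exploration / processing / optimization split described under Innovations and Methodology can be pictured as a simple online RL loop, sketched below with stub functions. This is a generic schematic under assumed interfaces, not Mano's pipeline.

```python
# Generic explore -> process -> optimize loop for an online-RL GUI agent.
from typing import Callable, Dict, List

def online_rl_loop(policy: Dict, rollout: Callable, score: Callable,
                   update: Callable, iterations: int = 3) -> Dict:
    for it in range(iterations):
        trajectories: List[Dict] = [rollout(policy) for _ in range(8)]   # exploration
        scored = [{**t, "reward": score(t)} for t in trajectories]       # processing
        policy = update(policy, scored)                                  # optimization
    return policy

# Toy usage with stubs standing in for a real GUI environment and trainer.
policy = online_rl_loop(
    policy={"version": 0},
    rollout=lambda p: {"actions": ["click", "type"], "task_done": True},
    score=lambda t: 1.0 if t["task_done"] else 0.0,
    update=lambda p, batch: {"version": p["version"] + 1},
)
print(policy)   # -> {'version': 3}
```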