Reinforcement Learning
Major share! VR-Robo: real2sim2real enables robot navigation and locomotion control in real-world scenes
具身智能之心· 2025-07-08 09:31
Core Viewpoint
- The article discusses the limitations of legged robots in real-world applications caused by the gap between simulation and reality, particularly in high-level tasks requiring RGB perception. It introduces a "Real-to-Sim-to-Real" framework that enhances visual navigation and locomotion control through a digital-twin simulation environment [2].

Group 1
- Locomotion control for legged robots benefits from combining reinforcement learning with physics simulation, but is hindered by the lack of realistic visual rendering in standard simulators [2].
- The proposed "Real-to-Sim-to-Real" framework performs 3D Gaussian splatting (3DGS) scene reconstruction from multi-view images, producing a simulation environment that combines photorealistic rendering with faithful physical interaction [2].
- Experiments in the simulator demonstrate that the method supports transferring reinforcement learning policies from simulation to reality using pure RGB input, enabling rapid adaptation and efficient exploration in new environments [2].

Group 2
- The framework shows potential applications in home and factory settings, indicating its relevance for practical deployment in various environments [2].
- The paper, "VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and Locomotion," is linked for further reading [3].
- Additional project details can be found via the project link [3].
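To make the Real-to-Sim-to-Real idea concrete, below is a minimal sketch of how an RGB-only policy could be trained inside a reconstructed 3DGS scene before deployment on the real robot. The environment API, network shape, and plain REINFORCE update are illustrative assumptions, not VR-Robo's actual training recipe, which the article describes only at a high level.

```python
import torch
import torch.nn as nn

class RGBPolicy(nn.Module):
    """Tiny CNN policy mapping raw RGB frames to discrete locomotion commands."""
    def __init__(self, num_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(num_actions),
        )

    def forward(self, rgb: torch.Tensor) -> torch.distributions.Categorical:
        # rgb: (B, 3, H, W), values in [0, 1], rendered by the 3DGS scene
        return torch.distributions.Categorical(logits=self.net(rgb))

def train_in_digital_twin(env, policy, optimizer, episodes=1000, gamma=0.99):
    """REINFORCE loop inside the reconstructed scene (hypothetical env API:
    reset() -> obs tensor; step(a) -> (obs, reward, done))."""
    for _ in range(episodes):
        obs, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            dist = policy(obs.unsqueeze(0))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done = env.step(action.item())
            rewards.append(reward)
        # discounted returns; no baseline, for brevity
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns = torch.tensor(list(reversed(returns)))
        loss = -(torch.stack(log_probs).squeeze(-1) * returns).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The appeal of the 3DGS digital twin is that the observations this loop trains on are photorealistic renders of the real scene, so the RGB-conditioned policy transfers with less of the usual visual sim-to-real gap.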
Multimodal models learn to "search on demand": ~30% fewer searches, higher accuracy! New ByteDance & NTU research optimizes multimodal model search strategies
量子位· 2025-07-08 07:30
Contributed by the MMSearch-R1 team
量子位 | WeChat official account QbitAI

Multimodal models have learned to "search on demand"! In new research from ByteDance & NTU on optimizing multimodal model search strategies, the team builds a web search tool, constructs a multimodal search dataset, and designs a simple yet effective reward mechanism, making the first attempt at training multimodal models for autonomous search via end-to-end reinforcement learning.

The trained model can autonomously decide when to search and what to search for, process the search results, and carry out multi-round on-demand searches in the real internet environment.

Experiments show that on knowledge-intensive Visual Question Answering (VQA) tasks, MMSearch-R1 exhibits clear advantages: it not only outperforms same-scale models under the traditional retrieval-augmented generation (RAG) workflow, but also matches the performance of larger-scale models doing traditional RAG while reducing the number of searches by about 30%.

The following sections detail the research method and experimental findings.

How exactly is this achieved?

In recent years, with improvements in both the scale and quality of vision-language training data, large multimodal models (LMMs) have demonstrated excellent performance on cross-modal understanding tasks, with significantly stronger alignment between textual and visual knowledge. However, real-world information is highly dynamic and complex, and a single ...
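The summary describes the reward design only as "simple yet effective." As one plausible, hedged reading, a reward can combine answer accuracy with a small per-search penalty so that the policy searches only when its internal knowledge is insufficient; the function name, coefficients, and format bonus below are illustrative assumptions, not the paper's exact formulation.

```python
def mmsearch_style_reward(answer_correct: bool,
                          num_searches: int,
                          format_valid: bool,
                          search_penalty: float = 0.1,
                          format_bonus: float = 0.1) -> float:
    """Accuracy-dominated reward that mildly taxes each search call.

    A correct answer earns 1.0 minus a small cost per search, so answering
    correctly from internal knowledge scores higher than answering correctly
    after several searches -- the pressure behind "on-demand" searching.
    """
    reward = 1.0 if answer_correct else 0.0
    reward -= search_penalty * num_searches
    if format_valid:
        reward += format_bonus
    return reward
```

Under a reward like this, cutting search calls by ~30% without losing accuracy is exactly the behavior the optimizer is paid to find.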
Pushing the boundaries of omni-modal AI understanding: contextual reinforcement learning lifts omni-modal models' "intent" reasoning to new heights
量子位· 2025-07-08 07:30
Core Viewpoint
- The article emphasizes the increasing need for deep understanding and analysis of human intent in the context of multimodal large language models (MLLMs), and highlights the challenges of applying reinforcement learning (RL) effectively to complex multimodal data and formats [1][4].

Group 1: Challenges in Multimodal Reasoning
- Insufficient global context understanding leads to incorrect answers when models fail to identify, or misinterpret, multimodal evidence and contextual information [3].
- The shortcut problem arises when models overlook key clues and answer without fully considering multimodal information, producing suboptimal or partial results [4].

Group 2: Innovations and Advantages
- HumanOmniV2 introduces a mandatory context summarization step before reasoning, ensuring models do not skip critical multimodal input and providing comprehensive global background support [12].
- A multidimensional reward mechanism is implemented, combining a context reward, a format reward, and an accuracy reward, to guide models toward accurately understanding multimodal context [13][14].
- The model encourages complex logical reasoning by evaluating whether the reasoning process successfully integrates multimodal information and employs advanced logical analysis techniques [15].

Group 3: Model Design and Training Strategies
- The model is based on Qwen2.5-Omni-Thinker, with improvements to the Group Relative Policy Optimization (GRPO) method to enhance training efficiency, fairness, and robustness (a hedged sketch follows this summary) [19][20].
- A token-level loss is introduced to address the imbalance in long-sequence training, ensuring balanced optimization for every token [19].
- Removing the question-level normalization term promotes consistent optimization across problems of different difficulty [19].
- A dynamic KL divergence is used to enhance exploration and training stability throughout the training cycle [20].

Group 4: High-Quality Datasets and Benchmarks
- A comprehensive multimodal reasoning training dataset has been created, incorporating image, video, and audio understanding tasks with rich contextual information [23].
- IntentBench, a new multimodal benchmark, evaluates models' ability to understand human behavior and intent in complex scenarios, featuring 633 videos and 2,689 related questions [23].

Group 5: Experimental Results
- HumanOmniV2 achieved breakthrough results across multiple benchmark datasets, attaining 58.47% on Daily-Omni, 47.1% on WorldSense, and 69.33% on the newly introduced IntentBench, outperforming existing open-source multimodal models [24].
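The token-level loss in Group 3 is the most mechanical of the GRPO changes, so here is a minimal sketch contrasting it with sequence-level aggregation. The tensor layout is assumed, and clipping, the KL term, and the rest of the GRPO objective are omitted; this is not the HumanOmniV2 implementation.

```python
import torch

def grpo_policy_loss(ratio: torch.Tensor,
                     advantage: torch.Tensor,
                     mask: torch.Tensor,
                     token_level: bool = True) -> torch.Tensor:
    """ratio: (B, T) per-token importance ratios; advantage: (B,) group-relative
    advantages, one per sampled response; mask: (B, T), 1 on valid tokens."""
    per_token = ratio * advantage.unsqueeze(1) * mask
    if token_level:
        # Token-level loss: every token in the batch weighs equally, so long
        # responses are no longer down-weighted relative to short ones.
        return -per_token.sum() / mask.sum().clamp(min=1)
    # Sequence-level baseline: average within each response, then across the
    # batch, which dilutes the gradient on tokens of long sequences.
    per_seq = per_token.sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return -per_seq.mean()
```

The same aggregation choice also interacts with the normalization change above: dropping per-question normalization keeps the scale of the advantage comparable across easy and hard prompts.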
A summer-night gathering for the RL community! A 12-person fireside chat: when reinforcement learning meets large-model agents
机器之心· 2025-07-08 04:09
Core Viewpoint
- The article promotes an event titled "Reinforcement Learning New Paradigm Exploration Night," emphasizing the integration of reinforcement learning (RL) with large-model agents and its significance in the current technological landscape [2][3].

Event Details
- The event is scheduled for July 26, 2025, from 19:00 to 21:10, near the Shanghai Expo Exhibition Center, and is capped at an intimate 12 participants to facilitate deep discussion [3][4].
- Three main topics will be covered: the synergy between reinforcement learning and large-model agents; the dilemma of exploration versus stability in training strategies; and the challenges of aligning and evaluating intelligent agents [4].

Target Audience
- The event is designed for individuals from academia, industry, and entrepreneurship, who are encouraged to bring their latest research, practical experience, and product challenges for collaborative discussion [5][6].
- The focus is on fostering a lively exchange of ideas rather than formal presentations, aiming for a dynamic and engaging atmosphere [6][7].

Participation Information
- Interested participants can scan a QR code to indicate their background (academic, industry, or entrepreneurial) and the specific RL challenges they wish to discuss; spots are limited [8].
- The article emphasizes the value of meaningful technical discussion and debate, positioning the event as a unique opportunity for networking and collaboration [9].
A review of AI in China and abroad, with notes on Hang Seng Tech
小熊跑的快· 2025-07-07 09:45
Market Overview
- After April 7, both the US and Chinese stock markets rallied, with the Nasdaq rising 32.9%, the Hang Seng Tech Index ETF (513180) gaining 11.57%, and the Shanghai Composite Index up 12.16% [1]

AI Chip Market Dynamics
- The focus has shifted from training GPUs to AI inference ASIC chips, driven by a slowdown in the iteration of foundation models under the transformer architecture [3][5]
- Rental prices for training chips such as the H100 and H200 have declined since February, influenced by the industry's pivot toward reinforcement learning (RL) [5][6]
- The upcoming GPT-5 model is expected to emphasize RL, which has smaller compute demand than the pre-training phase [5]

Data Source Considerations
- A significant portion of GPT-5's training data is synthetic, raising concerns about the quality and sourcing of training data for future models [6]
- Competition in the coding domain, particularly between Claude 4 and Cursor, highlights the need for models to specialize in industry-specific data to maintain value [6]

Token Usage Growth
- Microsoft reported token volume exceeding 100 trillion in Q1 2025, a fivefold increase year-on-year, while Google's monthly token processing surged from 9.7 trillion to 480 trillion, roughly 50x growth [7]
- Domestic AI models such as Doubao saw daily token usage exceed 16.4 trillion in May, more than 4x growth compared with the end of 2024 [7]

ASIC Chip Outlook
- The current market environment favors the development of inference ASIC chips, as existing models are already accurate enough for application [8][9]
- The anticipated return of ASIC chips in Q3 is expected to ease the supply issues of the first two quarters [9][10]
- Overall sentiment toward the Hang Seng Tech Index is cautiously optimistic, with expectations of a rebound in capital expenditure (capex) [10]

Future Projections
- The ASIC chip market is projected to grow significantly from 2025 to 2027, coinciding with the next major architectural shift in foundation models [10]
- Companies like Microsoft and Amazon are expected to continue their ASIC chip design efforts, with no immediate acknowledgment of failures in early generations [10]
Dominating six major benchmarks! TW-GRPO raises the video reasoning ceiling, with CLEVRER accuracy surpassing 50.4%!
机器人大讲堂· 2025-07-06 05:23
Core Viewpoint
- The rapid development of multimodal large language models (MLLMs) is significantly enhancing video reasoning capabilities, with reinforcement learning (RL) serving as a key engine of this technological shift [1]

Group 1: TW-GRPO Framework Introduction
- The TW-GRPO framework is proposed to address challenges in reasoning quality and reward granularity in video reasoning tasks, inspired by the traditional GRPO framework [2]
- TW-GRPO integrates focused thinking with multi-level soft reward mechanisms for multi-choice QA tasks [3]

Group 2: Key Improvements in TW-GRPO
- The framework enhances information weighting and reward-mechanism design, carrying a soft reward mechanism over from video localization to video reasoning tasks (a hedged sketch of one possible soft reward follows this summary) [4]
- A dynamic weighting mechanism prioritizes high-information-density tokens, improving reasoning accuracy and efficiency by focusing on key content [4]
- The multi-level reward mechanism redefines rewards to allow partial correctness in answers, improving training stability and efficiency [5]

Group 3: Data Augmentation and Training Efficiency
- TW-GRPO introduces a question-answer inversion (QAI) data augmentation technique that converts single-choice tasks into multi-choice formats, effectively expanding the training data pool [6]
- This approach breaks with the traditional equal treatment of tokens, enhancing training efficiency and reasoning performance through differentiated information processing [6]

Group 4: Experimental Validation
- Extensive experiments demonstrate TW-GRPO's effectiveness in video reasoning and general understanding tasks, outperforming Video-R1 by 18.8%, 1.8%, and 1.6% on various benchmarks [12][15]
- The framework shows faster convergence and a more stable learning process than traditional GRPO, with shorter output sequences indicating more efficient reasoning [11][17]

Group 5: Qualitative Analysis of Reasoning Paths
- A qualitative comparison of reasoning paths between T-GRPO and TW-GRPO illustrates significant improvements in accuracy and efficiency on dynamic visual-cue reasoning tasks [22]
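One plausible instantiation of the multi-level soft reward referenced in Group 2 is Jaccard-style partial credit over answer-option sets, sketched below; TW-GRPO's exact formula may differ.

```python
def soft_multichoice_reward(predicted: set, ground_truth: set) -> float:
    """Partial-credit reward for multi-choice QA: 1.0 for an exact match,
    a graded score for partial overlap, 0.0 for a disjoint or empty guess."""
    if not predicted or not ground_truth:
        return 0.0
    overlap = len(predicted & ground_truth)
    union = len(predicted | ground_truth)
    return overlap / union

# Example: with gold = {"A", "C"}, guessing {"A"} earns 0.5 instead of the
# hard 0 a binary correct/incorrect reward would assign. The denser reward
# signal is what stabilizes early RL training on multi-choice tasks.
```

This also shows why the QAI augmentation in Group 3 matters: converting single-choice items into multi-choice ones creates exactly the kind of answer sets on which graded rewards are informative.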
HKU's reinforcement-learning-driven embodied navigation method for continuous environments: VLN-R1
具身智能之心· 2025-07-04 09:48
Core Viewpoint
- The article presents the VLN-R1 framework, which uses large vision-language models (LVLMs) for continuous navigation in real-world environments, addressing the limitations of earlier discrete navigation methods [5][15]

Research Background
- The VLN-R1 framework processes first-person video streams to generate continuous navigation actions, making navigation tasks more realistic [5]
- The VLN-Ego dataset is constructed with the Habitat simulator, providing rich visual and language information for training LVLMs [5][6]
- Vision-language navigation (VLN) is emphasized as a core challenge in embodied AI, requiring real-time decision-making driven by natural-language instructions [5]

Methodology
- The VLN-Ego dataset includes natural-language navigation instructions, historical frames, and future action sequences, designed to balance local detail with overall context [6]
- Training proceeds in two phases: supervised fine-tuning (SFT) to align action predictions with expert demonstrations, followed by reinforcement fine-tuning (RFT) to optimize model performance (a sketch of the SFT data format follows this summary) [7][9]

Experimental Results
- On the R2R task, VLN-R1 achieved a success rate (SR) of 30.2% with the 7B model, significantly outperforming traditional models while using neither depth maps nor navigation maps [11]
- The model showed strong cross-domain adaptability, outperforming fully supervised models on the RxR task with only 10K samples used for RFT [12]
- Predicting future actions proved crucial for performance, with the best results obtained when predicting six future actions [14]

Conclusion and Future Work
- VLN-R1 integrates LVLMs with reinforcement-learning fine-tuning, achieving state-of-the-art performance in simulated environments and showing that small models can match larger ones [15]
- Future research will validate the model's generalization in real-world settings and explore applications to other embodied AI tasks [15]
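To illustrate the future-action prediction that the ablations single out (six actions performing best), here is a hedged sketch of how an SFT example might pair an instruction and recent egocentric frames with the next expert actions. The action vocabulary, history window, and field names are assumptions for illustration, not the actual VLN-Ego schema.

```python
# Assumed discrete action vocabulary for continuous-environment VLN.
ACTIONS = ["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"]

def build_sft_example(instruction: str,
                      frame_history: list,
                      expert_actions: list,
                      horizon: int = 6) -> dict:
    """Pack the instruction and recent first-person frames into a prompt whose
    supervision target is the next `horizon` expert actions (6 worked best)."""
    return {
        "prompt": {
            "instruction": instruction,      # natural-language route description
            "frames": frame_history[-8:],    # assumed history window size
        },
        "target": " ".join(expert_actions[:horizon]),  # tokens the LVLM emits
    }

example = build_sft_example(
    "Walk past the sofa and stop at the kitchen door.",
    frame_history=["frame_t-3.png", "frame_t-2.png",
                   "frame_t-1.png", "frame_t.png"],
    expert_actions=["MOVE_FORWARD", "MOVE_FORWARD", "TURN_LEFT",
                    "MOVE_FORWARD", "MOVE_FORWARD", "STOP"],
)
```

SFT teaches the model to imitate these targets token by token; RFT then refines the same output head with a navigation-quality reward rather than exact-match supervision.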
Xiaomi experienced & campus hiring | Algorithm researcher in embodied AI for autonomous driving and robotics (VLA direction)
具身智能之心· 2025-07-03 13:36
Job Description

We are looking for an outstanding researcher/scientist to join our frontier exploration team and help define and build the next-generation "brain" for autonomous driving and robotics. You will pursue breakthrough research on an Embodied Foundation Model that deeply fuses vision-language-action (VLA) capabilities with strong spatial perception and spatial reasoning.

Core responsibilities include:
- Frontier algorithm research and development: design and implement leading embodied multimodal large models. Your research will go beyond existing VLA frameworks to explore building World Models that can understand the complex three-dimensional world and plan long-horizon, multi-step tasks.
- Core model capability breakthroughs: lead breakthroughs in the following key capabilities:
- Multimodal scene understanding: fuse vision, language, radar, and other multi-source information to achieve deep understanding and spatial perception of dynamic, open environments.
- Learning and adaptation mechanisms: research reinforcement learning (RL), imitation learning (IL), and self-supervised learning methods so the model can continuously learn and evolve from massive data and from interaction with the environment.
- Technical vision and roadmap: lead the construction of a generalizable, efficient embodied foundation model that underpins the next 1-3 years of technical evolution, and explore its potential for unified application across autonomous driving and general robotics.
- Complex semantic reasoning and decision-making: enable the model to understand vague, abstract human instructions and combine them with ...
Has a bug you only later realized was fatal ever plagued you for more than a week?
自动驾驶之心· 2025-07-03 12:41
Core Insights
- The article discusses the challenges and experiences of training AI models with reinforcement learning, highlighting the importance of reward design and the pitfalls that can arise in the process [1][2]

Group 1: Reinforcement Learning Challenges
- The author shares experiences from a project in which a robot was trained to run, illustrating how different reward structures led to unexpected behaviors, such as jumping too far and falling [1]
- The design of learning objectives is crucial, as poorly defined goals can produce models that do not behave as intended, such as generating repetitive outputs or failing to learn effectively [2]

Group 2: AI Model Training Insights
- The robustness of neural networks lets them keep iterating despite bugs in the code, which can lead to unexpected improvements when the bugs are eventually removed [2]
- The article emphasizes the collaborative nature of deep learning projects, where even introduced bugs can inspire creative solutions from team members [2]

Group 3: Community and Learning Resources
- The article mentions a community of nearly 4,000 members, including over 300 companies and research institutions in the autonomous driving sector, providing a platform for learning and sharing knowledge [3]
- Various technical areas related to autonomous driving are covered, including perception, mapping, and control, indicating a comprehensive approach to education in this field [3]
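The running-robot anecdote in Group 1 is a textbook reward-hacking failure. Below is a toy sketch of the pitfall and one common mitigation; the state type and weights are hypothetical, purely to make the failure mode concrete.

```python
from dataclasses import dataclass

@dataclass
class LocomotionState:
    """Hypothetical per-step observation summary for a running robot."""
    forward_velocity: float
    torso_is_upright: bool
    energy_used: float

def naive_reward(s: LocomotionState) -> float:
    # Rewards raw forward progress only: an agent can exploit this by taking
    # one enormous leap, crashing, and still banking the distance covered.
    return s.forward_velocity

def shaped_reward(s: LocomotionState,
                  w_upright: float = 0.5,
                  w_energy: float = 0.01) -> float:
    # Same progress term, plus a stay-upright bonus and an energy cost, so
    # that falling spectacularly is no longer the highest-return strategy.
    upright_bonus = w_upright if s.torso_is_upright else 0.0
    return s.forward_velocity + upright_bonus - w_energy * s.energy_used
```

The general lesson from the thread holds here: the optimizer maximizes the reward you wrote, not the behavior you meant, and such mismatches can masquerade as ordinary bugs for a week or more.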
OpenAI researcher Noam Brown: Mid-training is the new pre-training
海外独角兽· 2025-07-02 11:03
Core Insights
- The article discusses the emergence of reasoning capabilities in AI models, highlighting a shift from mere pattern matching to complex cognitive reasoning, which is essential for scientific discovery and decision-making [4][5]

Group 1: Reasoning as an Emergent Capability
- Reasoning is an emergent ability that models can only benefit from once pre-training reaches a certain level [5][11]
- The analogy of "fast thinking and slow thinking" explains the relationship between non-reasoning and reasoning models: the former corresponds to intuitive responses, the latter to deliberate reasoning [8][11]
- Model performance on multimodal tasks depends on the ability to integrate complex information with logical reasoning [12][13]

Group 2: Need for a Universal Reasoning Paradigm
- Achieving superintelligence requires a universal reasoning paradigm, as merely scaling pre-training is insufficient [20][21]
- OpenAI's leadership recognized the need for a shift toward reasoning paradigms and reinforcement learning, leading to significant resource allocation in these areas [21][24]

Group 3: Efficient Data Utilization through Reinforcement Learning
- Reinforcement learning can raise the efficiency of data usage, which is crucial as data becomes scarcer than compute [25]
- Current machine learning models require far more samples than humans to learn new concepts, highlighting the need for better sample efficiency [25][26]

Group 4: Non-Consensus Views on Reasoning Ability
- Reasoning is not limited to tasks with clear reward functions; it can also excel in subjective fields where results are harder to quantify [33]
- Aligning AI with user preferences is critical, and reasoning capabilities can help achieve this alignment while mitigating ethical risks [34][35]

Group 5: Bottlenecks in Test-Time Compute Development
- Test-time compute faces cost limits similar to those encountered in pre-training scaling, where larger models mean exponentially rising costs [36]
- Hard limits on model response time slow the pace of experimental iteration, hurting research efficiency [37][38]

Group 6: Mid-Training as a New Pre-Training Phase
- Mid-training is introduced as a phase that adds new capabilities to a model before pre-training completes, improving generalization and practicality [40][41]
- OpenAI has adopted mid-training strategies in its model training processes to improve alignment and safety [41][42]

Group 7: Insights from The Bitter Lesson for Multi-Agent Systems
- Multi-agent systems may give rise to an "AI civilization" through long-term collaboration and competition among AI agents [44]
- Noam's team is exploring a principled research path that contrasts with traditional heuristic-based approaches to multi-agent research [45][46]