Reinforcement Learning

A Summer Night for the RL Community! A 12-Person Salon: When Reinforcement Learning Meets Large Model Agents
机器之心· 2025-07-08 04:09
Core Viewpoint
- The article promotes an event titled "Reinforcement Learning New Paradigm Exploration Night," emphasizing the integration of reinforcement learning (RL) with large model agents and highlighting its significance in the current technological landscape [2][3].

Event Details
- The event is scheduled for July 26, 2025, from 19:00 to 21:10, located near the Shanghai Expo Exhibition Center, aiming for an intimate gathering of only 12 participants to facilitate deep discussions [3][4].
- The event will cover three main topics: the synergy between reinforcement learning and large model agents, the dilemma of exploration versus stability in training strategies, and the challenges of aligning and evaluating intelligent agents [4].

Target Audience
- The event is designed for individuals from academia, industry, and entrepreneurship, encouraging participants to bring their latest research, practical experiences, and product challenges for collaborative discussions [5][6].
- The focus is on fostering an environment for lively exchanges of ideas rather than formal presentations, aiming for a dynamic and engaging atmosphere [6][7].

Participation Information
- Interested participants are encouraged to scan a QR code to indicate their background (academic, industry, or entrepreneurial) and the specific RL challenges they wish to discuss, with limited spots available [8].
- The article emphasizes the importance of engaging in meaningful technical discussions and debates, suggesting that the event will provide a unique opportunity for networking and collaboration [9].
A Review of AI at Home and Abroad, with Notes on Hang Seng Tech
小熊跑的快· 2025-07-07 09:45
Market Overview
- After April 7, both the US and Chinese stock markets experienced a rally, with the Nasdaq rising by 32.9%, the Hang Seng Tech Index ETF (513180) increasing by 11.57%, and the Shanghai Composite Index gaining 12.16% [1]

AI Chip Market Dynamics
- The focus has shifted from training GPUs to AI inference ASIC chips, driven by a slowdown in the iteration of foundational models under the transformer architecture [3][5]
- The rental prices for training chips like H100 and H200 have declined since February, influenced by the industry's pivot towards reinforcement learning (RL) [5][6]
- The upcoming GPT-5 model is expected to emphasize RL, which has a smaller compute demand than the pre-training phase [5]

Data Source Considerations
- A significant portion of the training data for GPT-5 is synthetic, raising concerns about the quality and sourcing of training data for future models [6]
- The competition in the coding domain, particularly between Claude4 and Cursor, highlights the necessity for models to specialize in industry-specific data to maintain value [6]

Token Usage Growth
- Microsoft reported a token volume exceeding 100 trillion in Q1 2025, a fivefold increase year-on-year, while Google's monthly token processing surged from 9.7 trillion to 480 trillion, a growth of approximately 50 times [7]
- Domestic AI models, such as Doubao, saw daily token usage exceed 16.4 trillion in May, marking a growth of over 4 times compared to the end of 2024 [7]

ASIC Chip Outlook
- The current market environment favors the development of inference ASIC chips, as existing models are sufficiently accurate for application [8][9]
- The anticipated return of ASIC chips in Q3 is expected to alleviate supply issues faced in the first two quarters [9][10]
- The overall sentiment towards the Hang Seng Tech Index is cautiously optimistic, with expectations of a rebound in capital expenditures (capex) [10]

Future Projections
- The ASIC chip market is projected to see significant growth from 2025 to 2027, coinciding with the next major architectural shift in foundational models [10]
- Companies like Microsoft and Amazon are expected to continue their ASIC chip design efforts, with no immediate acknowledgment of failures in early generations [10]
Sweeping 6 Major Benchmarks! TW-GRPO Raises the Ceiling for Video Reasoning, with CLEVRER Accuracy Breaking 50.4%!
机器人大讲堂· 2025-07-06 05:23
Core Viewpoint
- The rapid development of multi-modal large language models (MLLMs) is significantly enhancing video reasoning capabilities, driven by reinforcement learning (RL) as a key engine for this technological revolution [1]

Group 1: TW-GRPO Framework Introduction
- The TW-GRPO framework is proposed to address challenges in reasoning quality and reward granularity in video reasoning tasks, inspired by the traditional GRPO framework [2]
- TW-GRPO integrates focused thinking and multi-level soft reward mechanisms for multi-choice QA tasks [3]

Group 2: Key Improvements in TW-GRPO
- The framework enhances information weighting and reward mechanism design, applying a soft reward mechanism from video localization to video reasoning tasks [4]
- A dynamic weighting mechanism prioritizes high information density tokens, improving reasoning accuracy and efficiency by focusing on key content [4]
- The multi-level reward mechanism redefines rewards, allowing for partial correctness in answers and thus improving training stability and efficiency (a toy sketch of this partial-credit idea follows this summary) [5]

Group 3: Data Augmentation and Training Efficiency
- TW-GRPO introduces a question-answer inversion (QAI) data augmentation technique to convert single-choice tasks into multi-choice formats, effectively expanding the training data pool [6]
- This approach disrupts the traditional equal treatment of tokens, enhancing training efficiency and reasoning performance through differentiated information processing [6]

Group 4: Experimental Validation
- Extensive experiments demonstrate TW-GRPO's effectiveness in video reasoning and general understanding tasks, outperforming Video-R1 by 18.8%, 1.8%, and 1.6% on various benchmarks [12][15]
- The framework shows faster convergence and more stable learning than traditional GRPO, with shorter output sequences indicating more efficient reasoning [11][17]

Group 5: Qualitative Analysis of Reasoning Paths
- A qualitative comparison of reasoning paths between T-GRPO and TW-GRPO illustrates significant improvements in accuracy and efficiency on dynamic visual cue reasoning tasks [22]
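To make the multi-level soft reward concrete, here is a minimal sketch of a partial-credit score for multi-choice QA. This is an illustrative reconstruction, not the authors' code: the paper's exact scoring formula may differ, and the set-IoU form used below is an assumption.

```python
# Toy sketch of a multi-level (partial-credit) soft reward for multi-choice QA,
# in the spirit of TW-GRPO. The set-IoU formula is an illustrative assumption.

def soft_multi_choice_reward(predicted: set, gold: set) -> float:
    """Return a reward in [0, 1] instead of a binary exact-match score."""
    if not predicted:
        return 0.0
    if predicted == gold:
        return 1.0                      # fully correct answer set
    overlap = len(predicted & gold)     # options the model got right
    union = len(predicted | gold)       # all options mentioned by either side
    return overlap / union              # partial credit for partially correct answers

# Gold answer is {A, C}; predicting only {A} earns 0.5 rather than 0.
print(soft_multi_choice_reward({"A"}, {"A", "C"}))        # 0.5
print(soft_multi_choice_reward({"A", "C"}, {"A", "C"}))   # 1.0
print(soft_multi_choice_reward({"B"}, {"A", "C"}))        # 0.0
```

Compared with an all-or-nothing reward, a graded signal of this kind is what the summary above credits for more stable and efficient training.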
HKU's Reinforcement Learning-Driven Embodied Navigation Method for Continuous Environments: VLN-R1
具身智能之心· 2025-07-04 09:48
Core Viewpoint
- The article presents the VLN-R1 framework, which utilizes large vision-language models (LVLM) for continuous navigation in real-world environments, addressing limitations of previous discrete navigation methods [5][15].

Research Background
- The VLN-R1 framework processes first-person video streams to generate continuous navigation actions, enhancing the realism of navigation tasks [5].
- The VLN-Ego dataset is constructed using the Habitat simulator, providing rich visual and language information for training LVLMs [5][6].
- Visual-language navigation (VLN) is emphasized as a core challenge in embodied AI, requiring real-time decision-making based on natural language instructions [5].

Methodology
- The VLN-Ego dataset includes natural language navigation instructions, historical frames, and future action sequences, designed to balance local details and overall context [6].
- The training method consists of two phases: supervised fine-tuning (SFT) to align action predictions with expert demonstrations, followed by reinforcement fine-tuning (RFT) to optimize model performance (a minimal SFT sketch follows this summary) [7][9].

Experimental Results
- In the R2R task, VLN-R1 achieved a success rate (SR) of 30.2% with the 7B model, significantly outperforming traditional models without depth maps or navigation maps [11].
- The model demonstrated strong cross-domain adaptability, outperforming fully supervised models on the RxR task with only 10K samples used for RFT [12].
- The design of predicting future actions proved crucial for performance, with the best results obtained by predicting six future actions [14].

Conclusion and Future Work
- VLN-R1 integrates LVLM and reinforcement learning fine-tuning, achieving state-of-the-art performance in simulated environments and showing the potential for small models to match larger ones [15].
- Future research will focus on validating the model's generalization capabilities in real-world settings and exploring applications in other embodied AI tasks [15].
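The "predict the next K future actions" design can be illustrated with a small supervised-fine-tuning sketch in PyTorch. The discrete action vocabulary, the K=6 horizon (taken from the ablation above), and the tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal SFT sketch for predicting the next K future navigation actions,
# echoing VLN-R1's finding that predicting six future actions worked best.
# The action vocabulary, prediction head, and shapes below are assumptions.

import torch
import torch.nn.functional as F

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]   # assumed discrete action set
K = 6                                                      # future actions per prediction

def sft_action_loss(action_logits: torch.Tensor, expert_actions: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between predicted future-action logits and expert demonstrations.

    action_logits:  (batch, K, num_actions) scores for each of the K future steps
    expert_actions: (batch, K) integer indices of the expert's actions
    """
    return F.cross_entropy(
        action_logits.reshape(-1, len(ACTIONS)),
        expert_actions.reshape(-1),
    )

# Toy usage: a batch of 2 trajectories with random logits and expert labels.
logits = torch.randn(2, K, len(ACTIONS))
expert = torch.randint(0, len(ACTIONS), (2, K))
print(sft_action_loss(logits, expert).item())
```

The RFT phase described above would then replace this dense imitation signal with a reward computed on the executed trajectory.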
Xiaomi Experienced & Campus Hiring | Algorithm Researcher for Autonomous Driving and Robot Embodied Intelligence (VLA Track)
具身智能之心· 2025-07-03 13:36
Job Description
We are looking for an outstanding researcher/scientist to join our frontier exploration team and jointly define and build the "brain" of next-generation autonomous driving and robotics. You will work on breakthrough research into an Embodied Foundation Model that deeply fuses vision-language-action (VLA) capabilities and possesses exceptional spatial perception and spatial reasoning abilities.

Core Responsibilities
Frontier algorithm research and construction: design and implement leading embodied multi-modal large models. Your research will not be limited to existing VLA frameworks, but will also explore how to build a World Model that can understand the complex three-dimensional world and perform long-horizon, multi-step task planning.
Core model capability breakthroughs: lead breakthroughs in the following key capabilities:
Multi-modal scene understanding: fuse vision, language, radar, and other multi-source information to achieve deep understanding and spatial perception of dynamic, open environments.
Learning and adaptation mechanisms: conduct in-depth research on reinforcement learning (RL), imitation learning (IL), and self-supervised learning methods, so that the model can continuously learn and evolve from massive data and interaction with the environment.
Technical vision and roadmap: lead the construction of a generalizable, highly efficient embodied intelligence foundation model, provide core support for the technology evolution of the next 1-3 years, and explore its unified application potential across autonomous driving and general robotics.
Complex semantic reasoning and decision-making: enable the model to understand vague, abstract human instructions and, combined with ...
Have You Ever Been Stuck for More Than a Week by a Bug That Only Later Turned Out to Be Fatal?
自动驾驶之心· 2025-07-03 12:41
Core Insights
- The article discusses the challenges and experiences in training AI models using reinforcement learning, highlighting the importance of reward design and the pitfalls that can arise during the process [1][2].

Group 1: Reinforcement Learning Challenges
- The author shares experiences from a project where a robot was trained to run, illustrating how different reward structures led to unexpected behaviors, such as jumping too far and falling (a toy reward-shaping sketch follows this summary) [1].
- The design of learning objectives is crucial, as poorly defined goals can lead to models that do not perform as intended, such as generating repetitive outputs or failing to learn effectively [2].

Group 2: AI Model Training Insights
- The robustness of neural networks allows them to continue iterating despite bugs in the code, which can lead to unexpected improvements when the bugs are eventually removed [2].
- The article emphasizes the collaborative nature of deep learning projects, where introducing bugs can inspire creative solutions from team members [2].

Group 3: Community and Learning Resources
- The article mentions a community of nearly 4,000 members, including over 300 companies and research institutions in the autonomous driving sector, providing a platform for learning and sharing knowledge [3].
- Various technical areas related to autonomous driving are covered, including perception, mapping, and control, indicating a comprehensive approach to education in this field [3].
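The reward-design pitfall described in Group 1 is easy to reproduce in miniature. The numbers below are not from the article; they only illustrate how a displacement-only reward can favor a single crash-landing leap, while a small stability term restores the intended incentive.

```python
# Toy illustration (not from the article) of how a displacement-only reward
# can be gamed by a single long jump, and how a small stability penalty
# changes the incentive. All quantities and weights are made-up values.

def naive_reward(forward_distance: float) -> float:
    # Rewards displacement only: a huge leap followed by a fall scores well.
    return forward_distance

def shaped_reward(forward_distance: float, fell_over: bool, torso_height: float) -> float:
    # Same displacement term, plus a fall penalty and a small upright bonus,
    # so steady running beats a single crash-landing leap.
    reward = forward_distance
    if fell_over:
        reward -= 5.0                 # assumed penalty weight
    reward += 0.1 * torso_height      # assumed upright bonus weight
    return reward

# A 3 m leap ending in a fall vs. 2 m of steady upright running:
print(naive_reward(3.0), naive_reward(2.0))                           # 3.0 vs 2.0: the leap wins
print(shaped_reward(3.0, True, 0.2), shaped_reward(2.0, False, 1.0))  # about -1.98 vs 2.1: running wins
```

The point is the one the article makes: the agent optimizes exactly the objective it is given, so a subtly wrong reward or a long-unnoticed bug quietly shapes the behavior that emerges.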
OpenAI Researcher Noam Brown: Mid-training Is the New Pre-training
海外独角兽· 2025-07-02 11:03
Core Insights
- The article discusses the emergence of reasoning capabilities in AI models, highlighting a shift from mere pattern matching to complex cognitive reasoning, which is essential for scientific discovery and decision-making [4][5].

Group 1: Reasoning as an Emergent Capability
- Reasoning is an emergent ability that models can only benefit from once pre-training reaches a certain level [5][11].
- The analogy of "fast thinking and slow thinking" is used to explain the relationship between non-reasoning and reasoning models, where the former corresponds to intuitive responses and the latter to deliberate reasoning [8][11].
- The performance of models on multi-modal tasks depends on their ability to integrate complex information and logical reasoning [12][13].

Group 2: Need for a Universal Reasoning Paradigm
- Achieving superintelligence requires a universal reasoning paradigm, as merely scaling pre-training is insufficient [20][21].
- OpenAI's leadership recognized the need for a shift towards reasoning paradigms and reinforcement learning, leading to significant resource allocation in these areas [21][24].

Group 3: Efficient Data Utilization through Reinforcement Learning
- Reinforcement learning can enhance the efficiency of data usage, which is crucial as data becomes scarcer than computational power [25].
- Current machine learning models require significantly more samples than humans to learn new concepts, highlighting the need for improved sample efficiency [25][26].

Group 4: Non-Consensus Views on Reasoning Ability
- Reasoning is not limited to tasks with clear reward functions; it can also excel in subjective fields where results are harder to quantify [33].
- The alignment of AI with user preferences is critical, and reasoning capabilities can help achieve this alignment while mitigating ethical risks [34][35].

Group 5: Bottlenecks in Test-Time Compute Development
- Test-time compute faces cost limitations similar to those encountered during pre-training scaling, where increased model size leads to exponentially rising costs [36].
- The absolute time constraints on model responses hinder the speed of experimental iterations, impacting research efficiency [37][38].

Group 6: Mid-Training as a New Pre-Training Phase
- Mid-training is introduced as a phase that adds new capabilities to models before the completion of pre-training, enhancing their generalization and practicality [40][41].
- OpenAI has adopted mid-training strategies in its model training processes to improve alignment and safety [41][42].

Group 7: Insights from The Bitter Lesson for Multi-Agent Systems
- The concept of multi-agent systems may lead to the emergence of an "AI civilization" through long-term collaboration and competition among AI agents [44].
- Noam's team is exploring a principled research path that contrasts with traditional heuristic-based approaches in multi-agent research [45][46].
Xiaomi Experienced & Campus Hiring | Algorithm Researcher for Autonomous Driving and Embodied Intelligence (VLA/Embodied Track)
自动驾驶之心· 2025-07-01 12:58
Job Description
We are looking for an outstanding researcher/scientist to join our frontier exploration team and jointly define and build the "brain" of next-generation autonomous driving and robotics. You will work on breakthrough research into an Embodied Foundation Model that deeply fuses vision-language-action (VLA) capabilities and possesses exceptional spatial perception and spatial reasoning abilities.
Multi-modal scene understanding: fuse vision, language, radar, and other multi-source information to achieve deep understanding and spatial perception of dynamic, open environments.
Complex semantic reasoning and decision-making: enable the model to understand vague, abstract human instructions and, combined with spatial reasoning about the physical world, generate safe, reasonable, and interpretable action sequences.
Learning and adaptation mechanisms: conduct in-depth research on reinforcement learning (RL), imitation learning (IL), and self-supervised learning methods, so that the model can continuously learn and evolve from massive data and interaction with the environment.
Technical vision and roadmap: lead the construction of a generalizable, highly efficient embodied intelligence foundation model, provide core support for the technology evolution of the next 1-3 years, and explore its unified application potential across autonomous driving and general robotics.
Academic impact and collaboration: collaborate with top universities and research institutions worldwide on long-term topics such as representation learning, causal reasoning, and world models. At CVPR, ...
Xiaomi Experienced & Campus Hiring | Algorithm Researcher for Autonomous Driving and Robot Embodied Intelligence (VLA Track)
具身智能之心· 2025-07-01 12:07
Job Description
We are looking for an outstanding researcher/scientist to join our frontier exploration team and jointly define and build the "brain" of next-generation autonomous driving and robotics. You will work on breakthrough research into an Embodied Foundation Model that deeply fuses vision-language-action (VLA) capabilities and possesses exceptional spatial perception and spatial reasoning abilities.

Core Responsibilities
Frontier algorithm research and construction: design and implement leading embodied multi-modal large models. Your research will not be limited to existing VLA frameworks, but will also explore how to build a World Model that can understand the complex three-dimensional world and perform long-horizon, multi-step task planning.
Core model capability breakthroughs: lead breakthroughs in the following key capabilities:
Multi-modal scene understanding: fuse vision, language, radar, and other multi-source information to achieve deep understanding and spatial perception of dynamic, open environments.
Complex semantic reasoning and decision-making: enable the model to understand vague, abstract human instructions and, combined with spatial reasoning about the physical world, generate safe, reasonable, and interpretable action sequences.
Learning and adaptation mechanisms: conduct in-depth research on reinforcement learning (RL), imitation learning (IL), and self-supervised learning methods, so that the model can continuously learn and evolve from massive data and interaction with the environment.
Technical vision and roadmap: lead the construction of a generalizable, highly efficient embodied intelligence foundation model, providing core support ...
Performance Gains of 84%-166%! L-Zero Unlocks Large Models' Ability to Explore the World Using Reinforcement Learning Alone | Now Open-Source
量子位· 2025-07-01 00:53
Contributed by 招商局狮子山人工智能实验室 | 量子位 official account QbitAI

Can large models stop relying on human coaching and truly "teach themselves"? A new study, using only RLVR (reinforcement learning with verifiable rewards), successfully lets a model autonomously evolve general exploration, verification, and memory capabilities, so the model learns to "self-study"!

Today's mainstream LLM agents still rely heavily on prompt engineering, complex system orchestration, and even static rule tables, which makes it difficult for them to achieve genuine evolution of intelligent behavior on complex tasks.

The research team from 招商局狮子山人工智能实验室 argues that the RLVR paradigm is a key breakthrough point on the path toward more general and autonomous agents. They therefore built an end-to-end agent training pipeline, the L0 system, at two levels:

Agent architecture level: a structured agent framework, NB-Agent, extends the classic "Code-as-Action" architecture so that the agent can operate on its own memory/context, gaining human-like abilities for memory storage, information summarization, and self-reflection.

Learning paradigm level: explores a core question: can the RLVR paradigm alone guide an agent, starting from scratch, to learn how to plan, search, verify, and remember, and ultimately solve complex multi-turn reasoning tasks? (A minimal sketch of the verifiable-reward idea follows this summary.)

The L0 system's framework, models, and training set are all open-sourced; see the links at the end of the article. ...
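For readers unfamiliar with the RLVR acronym, the sketch below shows the core idea of a verifiable reward: a deterministic checker scores the rollout, with no learned reward model or human labels in the loop. The <answer> tag convention is an assumption for illustration, not L0's actual protocol.

```python
# Minimal sketch of the "verifiable reward" idea behind RLVR (RL with
# Verifiable Rewards): the reward comes from a deterministic checker rather
# than a learned reward model. The answer-extraction format is an assumption.

import re

def verifiable_reward(agent_output: str, ground_truth: str) -> float:
    """Give reward 1.0 only if the tagged final answer matches the reference."""
    match = re.search(r"<answer>(.*?)</answer>", agent_output, re.DOTALL)
    if match is None:
        return 0.0                                   # no parsable answer, no reward
    predicted = match.group(1).strip().lower()
    return 1.0 if predicted == ground_truth.strip().lower() else 0.0

# Example rollout: the checker needs only string matching, no human labels.
rollout = "I searched, verified the sources, and conclude <answer>Paris</answer>."
print(verifiable_reward(rollout, "Paris"))           # 1.0
```

This kind of programmatic, outcome-level check is the signal the summary above says L0 relies on when asking whether planning, searching, verification, and memory can be learned from scratch.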