Reinforcement Learning
Self-Search Reinforcement Learning (SSRL): Agentic RL's Sim2Real Moment
机器之心· 2025-09-02 01:27
Core Insights
- The article discusses the development and effectiveness of SSRL (Self-Search Reinforcement Learning) in enhancing the training efficiency and stability of search agents built on large language models (LLMs) [6][28]
- SSRL outperforms traditional methods that rely on external search engines, achieving effective transfer from simulation to real-world applications (Sim2Real) [6][28]

Group 1
- SSRL uses structured prompts and format rewards to extract world knowledge from models, improving performance across benchmarks and reducing hallucination [2][6]
- The research highlights the high costs and inefficiencies of current RL training methods for search agents, which span full-real and semi-real search approaches [7][13]
- SSRL increases training efficiency by an estimated 5.6 times while training rewards continue to rise without collapse [31][32]

Group 2
- Experiments show that models trained with SSRL outperform those relying on external engines, particularly in real-world search scenarios, underscoring the importance of integrating real-world knowledge [28][31]
- The findings suggest that combining self-generated knowledge with real-world knowledge can further improve model performance, particularly through entropy-guided search strategies [34]
- Integrating SSRL with TTRL (Test-Time Reinforcement Learning) improves generalization and effectiveness, yielding up to a 67% performance increase on certain tasks [38][39]
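The format reward mentioned above can be sketched as a simple checker over structured output tags. Note the tag names (`think`, `search`, `information`, `answer`) and the reward weighting below are illustrative assumptions for this digest, not the paper's exact specification:

```python
import re

# Hypothetical tag set; the paper's actual format spec may differ.
REQUIRED_TAGS = ["think", "search", "information", "answer"]

def format_reward(response: str) -> float:
    """Return 1.0 if every required tag appears as a well-formed
    <tag>...</tag> pair, else 0.0."""
    for tag in REQUIRED_TAGS:
        if not re.search(rf"<{tag}>.*?</{tag}>", response, re.DOTALL):
            return 0.0
    return 1.0

def total_reward(response: str, correct: bool, w_format: float = 0.2) -> float:
    """Blend a correctness reward with the format reward (weighting assumed)."""
    return (1.0 - w_format) * float(correct) + w_format * format_reward(response)
```

The format term gives the policy a dense signal for emitting the structured search trace even before its answers become correct.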
Maintaining the Small-Cap Growth Recommendation; Style Selection Has Been Correct for Consecutive Periods
2025-09-02 00:42
Summary of Key Points from the Conference Call

Industry or Company Involved
- The conference call primarily discusses the investment strategies and market outlook of CICC (China International Capital Corporation), focusing on small-cap growth stocks and various asset classes.

Core Insights and Arguments
- CICC maintains a positive outlook on the small-cap growth style for September, despite a slight decline in overall indicators; market conditions, sentiment, and macroeconomic factors support its continued outperformance in the coming month [1][2]
- In asset allocation, CICC is optimistic about domestic equity assets, neutral on commodities, and cautious on bonds. The macro expectation gap indicates a bullish stance on stocks, particularly small-cap and dividend stocks, and a bearish stance on growth stocks [3][4]
- The industry rotation model for September recommends sectors such as comprehensive finance, media, computing, banking, basic chemicals, and real estate, based on price and volume information; the previous month's recommended sectors gained 2.4% [5]
- The "growth trend resonance" strategy performed best in August with an 18.1% return, outperforming the mixed equity fund index for six consecutive months [7]
- Year-to-date (YTD) performance of CICC's strategies is strong, with an overall return of 43%, surpassing the Tian Gu Hang operating index by 15 percentage points; the XGBoost growth selection strategy has a YTD return of 47.1% [8]

Other Important but Possibly Overlooked Content
- The small-cap strategy underperformed expectations due to extreme market conditions led by large-cap stocks, which created a positive feedback loop for index gains, suggesting a potential phase of inefficacy for the strategy [6]
- The active quantitative stock selection strategies include stable growth and small-cap exploration, with the latter showing mixed results in August; despite positive absolute returns, small-cap exploration lagged other indices [8]
- CICC's quantitative team has developed models based on techniques such as reinforcement learning and deep learning, with notable performance in stock selection; the Attention GRU model, for instance, has shown promising results both market-wide and within specific indices [10]
A 10,000-Word Summary of End-to-End Autonomous Driving: Breaking Down the Three Technical Routes (UniAD/GenAD/Hydra MDP)
自动驾驶之心· 2025-09-01 23:32
Core Viewpoint
- The article reviews the current state of end-to-end autonomous driving algorithms, comparing them with traditional pipelines and highlighting their advantages and limitations [3][5][6].

Group 1: Traditional vs. End-to-End Algorithms
- Traditional autonomous driving stacks follow a perception–prediction–planning pipeline, where each module has distinct inputs and outputs [5][6].
- The perception module takes sensor data as input and outputs bounding boxes for the prediction module, which in turn outputs trajectories for the planning module [6].
- End-to-end algorithms instead take raw sensor data as input and directly output path points, simplifying the stack and reducing error accumulation across modules [6][10].

Group 2: Limitations of End-to-End Algorithms
- End-to-end algorithms face challenges such as lack of interpretability, absence of safety guarantees, and causal confusion [12][57].
- Their reliance on imitation learning limits how well they handle corner cases, since rare scenarios may be misinterpreted as noise [11][57].
- Noise inherent in ground-truth data can lead to suboptimal learning, as human driving data does not always represent the best possible actions [11][57].

Group 3: Current End-to-End Algorithm Implementations
- ST-P3 is highlighted as an early end-to-end system focused on spatiotemporal learning, with three core modules: perception, prediction, and planning [14][15].
- ST-P3's innovations include an egocentric aligned accumulation technique in perception, a dual-pathway prediction mechanism, and a planning module that incorporates prior information for trajectory optimization [15][19][20].

Group 4: Advanced Techniques in End-to-End Algorithms
- The UniAD framework takes a multi-task approach, incorporating five auxiliary tasks to boost performance and address the limitations of traditional modular stacking [24][25].
- It employs a full-Transformer architecture for planning, integrating several interaction modules to improve trajectory prediction and planning accuracy [26][29].
- The VAD (Vectorized Autonomous Driving) method uses vectorized representations to better express the structural information of map elements, improving computational speed and efficiency [32][33].

Group 5: Future Directions and Challenges
- Further research is needed to overcome the limitations of current end-to-end algorithms, particularly in optimizing the learning process and handling exceptional cases [57].
- Multi-modal planning and multi-model learning approaches aim to improve trajectory-prediction stability and performance [56][57].
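The modular-pipeline structure contrasted above can be made concrete with toy stand-ins. Every function body below is an illustrative placeholder, not a real model; the point is the interface: each stage consumes only the previous stage's output, so a perception error propagates unchecked into planning.

```python
from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]

@dataclass
class BBox:
    x: float
    y: float

def perceive(frame: List[float]) -> List[BBox]:
    # toy detector: one object per positive sensor reading
    return [BBox(v, 0.0) for v in frame if v > 0]

def predict(objects: List[BBox]) -> List[List[Waypoint]]:
    # toy motion model: each detected object advances one unit
    return [[(o.x, 0.0), (o.x, 1.0)] for o in objects]

def plan(trajs: List[List[Waypoint]]) -> List[Waypoint]:
    # toy planner: swerve if any predicted object starts within 2 units
    nearest = min((t[0][0] for t in trajs), default=10.0)
    straight = [(0.0, 1.0), (0.0, 2.0)]
    swerve = [(1.0, 1.0), (1.0, 2.0)]
    return swerve if nearest <= 2.0 else straight

def modular_drive(frame: List[float]) -> List[Waypoint]:
    # errors compound: a missed detection in perceive() silently
    # biases predict() and then plan()
    return plan(predict(perceive(frame)))
```

An end-to-end model collapses the three calls into a single learned map from `frame` to waypoints, which removes the hand-designed interfaces but also the inspectable intermediate outputs.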
Back to School: Your First Lesson for Getting Started with AI
机器之心· 2025-09-01 08:46
Core Viewpoint
- The article emphasizes the importance of understanding AI and its underlying principles, suggesting that newcomers start by grasping fundamental concepts and practical skills.

Group 1: Understanding AI
- AI is defined through its learning methods, including supervised learning, unsupervised learning, and reinforcement learning, which let machines learn from data rather than rigid programmed rules [9][11][12].
- The core idea of modern AI is machine learning, particularly deep learning, which enables machines to learn from vast amounts of data and make predictions [12].

Group 2: Essential Skills for AI
- Three essential skills for entering the field are mathematics, programming, and practical experience; mathematics provides the foundations, while programming, particularly in Python, is crucial for implementing AI concepts [13][19].
- Key mathematical areas include linear algebra, probability and statistics, and calculus, all vital for understanding AI algorithms and models [13].

Group 3: Practical Application and Tools
- Python is the primary programming language for AI due to its simplicity and extensive ecosystem, including libraries like NumPy, Pandas, scikit-learn, TensorFlow, and PyTorch [20][21].
- Hands-on projects, such as data analysis or machine learning tasks, are encouraged to solidify understanding and build a portfolio [27][46].

Group 4: Career Opportunities in AI
- Career paths include machine learning engineer, data scientist, and algorithm researcher, each focusing on different aspects of AI development and application [38][40].
- AI skills can enhance many fields, creating interdisciplinary opportunities in finance, healthcare, the arts, and beyond [41][43].

Group 5: Challenges and Future Directions
- The rapid evolution of AI technology demands continuous learning and adaptation [34][37].
- The article closes by encouraging readers to embrace uncertainty and find their passion within the AI landscape, highlighting the enduring value of human creativity and empathy [71][73].
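In the spirit of the hands-on projects the article recommends, a classic first exercise ties the three skills together: fit a line y = w·x + b by gradient descent using nothing but the calculus of the mean-squared-error gradient. This is a generic beginner sketch, not from the article itself:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # partial derivatives of (1/n) * sum((w*x + b - y)^2)
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# recover the true line y = 2x + 1 from four noiseless samples
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The same loop, swapped for NumPy arrays and a richer model, is the conceptual core of the deep learning frameworks listed above.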
Explainer: Deconstructing LLM Post-Training in One Article — The Past and Present of GRPO and Its Successors
36Kr· 2025-09-01 04:38
Group 1
- The article centers on the evolution of post-training methods for large language models, with the GRPO algorithm as a significant advance in reinforcement learning paradigms [2][46].
- GRPO has emerged as a general-purpose reinforcement learning algorithm applicable to a wide range of post-training tasks, with notable improvements over predecessors like PPO [2][48].
- Post-training is important for enhancing the adaptability and flexibility of models, addressing the limitations of pre-training alone [5][46].

Group 2
- The transition from PPO to GRPO reduces computational cost and memory requirements, making GRPO a more efficient alternative [18][14].
- GRPO uses the rewards of a group of sampled responses as the baseline for advantage estimation, eliminating the need for a separate value function [16][14].
- Despite its advantages, GRPO still faces stability issues, prompting improved algorithms such as DAPO and GSPO [19][48].

Group 3
- DAPO, developed by ByteDance and Tsinghua AIR, builds on GRPO with enhancements such as Clip-Higher and dynamic sampling to improve training efficiency [20][21].
- GSPO shifts the focus from token-level to sequence-level importance sampling, which improves training stability [28][30].
- GFPO addresses GRPO's limitations by allowing simultaneous optimization of multiple response attributes, improving overall model performance [33][34].
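GRPO's replacement of the value function can be shown in a few lines: each sampled response's reward is normalized against its group's mean and standard deviation. A minimal sketch (some GRPO variants omit the std term; the epsilon guard is an implementation convenience, not from any specific paper):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each sampled response's reward
    against the group's statistics instead of a learned critic."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)      # population std of the group
    return [(r - mu) / (sigma + 1e-8) for r in rewards]

# a group of 4 rollouts for one prompt: two correct (1.0), two wrong (0.0)
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Because the baseline is free (just the group statistics), the critic network that PPO must train and store disappears, which is the memory and compute saving the article describes.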
RLinf Goes Open Source! The First Large-Scale Reinforcement Learning Framework Unifying Rendering, Training, and Inference for Embodied Intelligence
具身智能之心· 2025-09-01 04:02
Core Viewpoint
- The article covers the launch of RLinf, a large-scale reinforcement learning framework for embodied intelligence, highlighting an innovative design that supports AI's transition from perception to action [2][5].

Group 1: Framework Overview
- RLinf is a flexible and scalable framework for embodied intelligence, integrating rendering, training, and inference components to optimize performance [5].
- The "inf" in the name signifies both "infrastructure" and "infinite" scaling, emphasizing the adaptable system design [7].
- RLinf's hybrid execution model achieves over 120% system speedup compared with traditional frameworks, with VLA model performance improvements of 40%-60% [7][12].

Group 2: Execution Modes
- RLinf supports three execution modes: collocated, disaggregated, and hybrid, letting users configure components to their needs [17][15].
- The hybrid mode combines the advantages of shared and separated execution, minimizing system idle time and improving efficiency [12][15].

Group 3: Communication and Scheduling
- The framework includes an adaptive communication library designed for reinforcement learning, optimizing data exchange between components [19][22].
- An automated scheduling module minimizes resource idleness and dynamically adapts to the user's training flow, enabling rapid scaling [23][24].

Group 4: Performance Metrics
- RLinf has shown significant improvements on embodied intelligence tasks, reaching 80%-90% success rates in specific scenarios versus 30%-50% for previous models [24][26].
- It has also achieved state-of-the-art (SOTA) results on mathematical reasoning tasks across multiple datasets, showcasing its versatility [29][30].

Group 5: Documentation and Community Engagement
- Comprehensive documentation and API support are provided to ease adoption and understanding of the framework [32][34].
- The RLinf team invites collaboration and exploration of the framework, and is recruiting for research and engineering positions [33][34].
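The trade-off among the three execution modes can be illustrated with a toy selection rule. This is a hypothetical sketch for intuition only, not RLinf's actual API or scheduling policy (which, per the article, profiles components automatically):

```python
def choose_mode(n_gpus: int, renderer_gpus: int, trainer_gpus: int) -> str:
    """Toy heuristic: pick an execution mode from available GPU budget.
    All names and thresholds here are illustrative assumptions."""
    if n_gpus >= renderer_gpus + trainer_gpus:
        # enough devices to dedicate GPUs to each component and
        # pipeline them so neither waits on the other
        return "disaggregated"
    if max(renderer_gpus, trainer_gpus) < n_gpus:
        # components fit individually: share some GPUs, dedicate the rest
        return "hybrid"
    # too few devices: time-share everything on the same GPUs
    return "collocated"
```

The hybrid mode the article highlights sits between the two extremes: it keeps the pipelining benefit of disaggregation where the budget allows while falling back to time-sharing elsewhere, which is why it minimizes idle time on mid-sized clusters.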
Sequoia US: The Five AI Tracks We Are Focusing On in the Coming Year
创业邦· 2025-09-01 03:48
Core Insights
- Sequoia Capital views the AI revolution as a transformative event comparable to the Industrial Revolution, presenting a $10 trillion opportunity in the service industry, of which only $20 billion is currently automated by AI [1][7][13].

Investment Themes
- **Theme 1: Persistent Memory** — Persistent memory spans both long-term memory, so AI can retain shared context, and the identity of AI agents, so they maintain their unique characteristics over time. The area remains largely unsolved, presenting a significant opportunity [30].
- **Theme 2: Seamless Communication Protocols** — Standardized communication protocols among AI agents are critical for seamless collaboration, much like TCP/IP during the internet revolution, and could transform business models by letting AI agents interact autonomously [32].
- **Theme 3: AI Voice** — AI voice technology is maturing, with improvements in fidelity and latency enabling real-time conversation; applications span consumer and enterprise sectors, including logistics and trading [35].
- **Theme 4: AI Security** — There is a substantial opportunity in AI security across the development and consumer spectrum, ensuring safe technology development and usage, and protecting both users and AI agents from vulnerabilities [37].
- **Theme 5: Open Source AI** — Open-source AI is at a pivotal moment, with the potential to compete with proprietary models; this is essential for a more open and accessible AI landscape with broader participation in AI development [40].
Explainer: Deconstructing LLM Post-Training in One Article — The Past and Present of GRPO and Its Successors
机器之心· 2025-09-01 02:49
Core Viewpoint
- The article traces the evolution and significance of the Group Relative Policy Optimization (GRPO) algorithm for large language models and reinforcement learning, highlighting its advantages and limitations relative to predecessors like Proximal Policy Optimization (PPO) [4][38].

Summary by Sections

Development of Large Language Models
- The rapid advancement of large language models has produced a variety of post-training methods, with GRPO a notable innovation in reinforcement learning paradigms [3][5].

Post-Training and Reinforcement Learning
- Post-training refines a model's capabilities in specific domains, improving adaptability and flexibility for diverse applications [12][11].
- Reinforcement learning, particularly from human feedback (RLHF), plays a vital role in post-training, optimizing model outputs toward user preferences [14][19].

GRPO and Its Advantages
- GRPO eliminates the separate critic model, significantly reducing memory and computational costs compared with PPO's dual-network setup [30][35].
- GRPO uses the rewards of a group of sampled responses as the baseline for evaluating model improvements, simplifying the training process [34][35].

Comparison of GRPO and PPO
- GRPO substantially improves memory requirements and training speed, making it the more efficient choice for large-model training [37].
- Despite these advantages, GRPO still faces stability issues similar to PPO's, particularly in smaller-scale reinforcement learning tasks [39].

Recent Innovations: DAPO, GSPO, and GFPO
- DAPO enhances GRPO with techniques such as Clip-Higher and dynamic sampling to address practical training challenges [41][42].
- GSPO shifts importance sampling from the token level to the sequence level, significantly improving training stability [48][49].
- GFPO enables simultaneous optimization of multiple response attributes, addressing GRPO's limitations around scalar feedback and multi-round reasoning tasks [61][63].

Conclusion
- The evolution from PPO to GRPO and beyond shows a clear trajectory in optimizing large language models, with GRPO a pivotal point for further advances [81][82].
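The token-level versus sequence-level distinction behind GSPO is compact enough to write out. A minimal sketch (clipping and the surrounding objective are omitted; the length-normalized sequence ratio follows the GSPO formulation as this digest describes it):

```python
import math

def token_level_ratios(logp_new, logp_old):
    """PPO/GRPO-style: one importance ratio per token,
    exp(logpi_new(t) - logpi_old(t)). A single outlier token can explode."""
    return [math.exp(n - o) for n, o in zip(logp_new, logp_old)]

def sequence_level_ratio(logp_new, logp_old):
    """GSPO-style: one ratio for the whole response, from the
    length-normalized sequence log-likelihoods."""
    T = len(logp_new)
    return math.exp((sum(logp_new) - sum(logp_old)) / T)
```

Averaging the log-ratios before exponentiating means one anomalous token dilutes across the sequence length instead of dominating the update, which is the stability gain the article attributes to GSPO.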
RLinf: The First Large-Scale Reinforcement Learning Framework Built for Embodied Intelligence! Open-Sourced by Tsinghua, Beijing Zhongguancun Academy, Infinigence AI, and Others
机器之心· 2025-09-01 02:49
Core Viewpoint
- The article discusses the launch of RLinf, a large-scale reinforcement learning framework designed for embodied intelligence, emphasizing its flexible and scalable architecture that integrates training, rendering, and inference [5][7].

Group 1: Development of the RL Framework
- The shift in artificial intelligence from "perception" to "action" highlights the importance of embodied intelligence, now drawing attention in both academia and industry [2][4].
- RLinf is developed jointly by Tsinghua University, Beijing Zhongguancun Academy, and Infinigence AI to address the limitations of existing frameworks in supporting embodied intelligence [5][7].

Group 2: Features of RLinf
- RLinf's architecture has six layers: user, task, execution, scheduling, communication, and hardware, enabling a hybrid execution mode that achieves over 120% system speedup [7][12].
- A Macro-to-Micro Flow (M2Flow) mechanism allows flexible construction of training pipelines while preserving programming flexibility and ease of debugging [14][15].

Group 3: Execution Modes
- RLinf supports three execution modes: collocated, disaggregated, and hybrid, letting users configure components for optimal resource utilization [19][20].
- Low-intrusion multi-backend integration caters to the diverse needs of researchers in the embodied intelligence field [16][20].

Group 4: Communication and Scheduling
- An adaptive communication library designed for reinforcement learning optimizes data exchange between components, improving system efficiency [22][28].
- An automated scheduling module minimizes resource idling by profiling component performance and selecting the best execution mode, significantly improving training stability [24][25].

Group 5: Performance Metrics
- RLinf demonstrates superior performance on embodied intelligence tasks, with over 120% efficiency improvement compared with existing frameworks in specific tests [27][33].
- Models trained with the framework show significant success-rate gains across tasks, reaching up to 97.3% success in specific scenarios [31][35].

Group 6: Future Development and Community Engagement
- The team follows open-source principles, providing comprehensive documentation and support to ease adoption and collaboration [40][41].
- The team is actively recruiting for various positions to further develop and maintain RLinf, inviting community engagement and feedback [42][43].
R-Zero Deep Dive: How Does AI Self-Evolve Without Human Data?
机器之心· 2025-08-31 03:54
Core Viewpoint
- The article analyzes the R-Zero framework, which lets AI models self-evolve from "zero data" through the co-evolution of two roles, Challenger and Solver, aiming to overcome traditional large language models' reliance on extensive human-annotated data [2][3].

Group 1: R-Zero Framework Overview
- R-Zero allows an AI to generate its own learning tasks and improve its reasoning without human intervention [11].
- The framework consists of two independent yet collaborating agents: a Challenger (Qθ) and a Solver (Sϕ) [6].
- The Challenger acts as a curriculum generator, creating tasks at the edge of the Solver's current ability and focusing on tasks with high information gain [6].

Group 2: Iterative Process
- In each iteration, the Challenger is trained against a frozen Solver to generate questions that maximize the Solver's uncertainty [8].
- After each iteration, the improved Solver becomes the new target for the Challenger's training, producing a spiral of increasing capability on both sides [9].

Group 3: Implementation and Results
- Pseudo-labels come from a self-consistency strategy: the Solver produces multiple candidate answers per question, and the most frequent answer is taken as the pseudo-label [17].
- A filtering mechanism retains only questions whose answer accuracy falls within a specific range, improving the quality of the training signal [18].
- Experiments show significant reasoning gains: the Qwen3-8B-Base model's average score on mathematical benchmarks rose from 49.18 to 54.69 after three iterations (+5.51) [18].

Group 4: Generalization and Efficiency
- The model generalizes well: average scores on general reasoning benchmarks such as MMLU-Pro and SuperGPQA improved by 3.81 points, indicating stronger core reasoning rather than mere memorization of specific knowledge [19].
- R-Zero can serve as an efficient intermediate training stage, maximizing the value of human-annotated data used for subsequent fine-tuning [22].

Group 5: Challenges and Limitations
- A key challenge is declining pseudo-label accuracy, which fell from 79.0% in the first iteration to 63.0% in the third, indicating noisier supervision as task difficulty rises [26].
- The framework's reliance on domains with objective, verifiable answers limits its applicability to areas with subjective evaluation criteria, such as creative writing [26].
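The self-consistency pseudo-labeling and accuracy-band filtering described in Group 3 reduce to a majority vote plus a consistency check. A minimal sketch (the band thresholds below are illustrative; the paper's exact range may differ):

```python
from collections import Counter

def pseudo_label(candidate_answers):
    """Self-consistency: the Solver's most frequent answer becomes the
    pseudo-label; its empirical frequency approximates confidence."""
    counts = Counter(candidate_answers)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(candidate_answers)

def keep_question(candidate_answers, lo=0.25, hi=0.75):
    """Informative-band filter: drop questions the Solver always agrees on
    (too easy) or never agrees on (too hard/noisy); keep those near the
    edge of its ability. Thresholds here are assumed, not from the paper."""
    _, consistency = pseudo_label(candidate_answers)
    return lo <= consistency <= hi

# 4 sampled Solver answers for one Challenger question
label, confidence = pseudo_label(["42", "42", "41", "42"])
```

This also makes the framework's key failure mode visible: as the Challenger raises difficulty, the majority answer is more often wrong even when the vote is decisive, which is exactly the pseudo-label accuracy decline (79.0% to 63.0%) reported in Group 5.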