The Evolution of RL Infra Architecture, Seen Through Today's Mainstream RL Libraries
自动驾驶之心· 2025-09-25 23:33
Core Viewpoint
- Reinforcement Learning (RL) is transitioning from a supporting technology to a core driver of model capabilities, with the focus shifting to multi-step, interactive agent training on the path toward Artificial General Intelligence (AGI) [2][6]

Group 1: Modern RL Infrastructure Architecture
- The core components of modern RL infrastructure are a Generator, which interacts with the environment to produce trajectories and compute rewards, and a Trainer, which updates model parameters from the trajectory data (a minimal sketch follows this summary) [6][4]
- The generator-trainer architecture, combined with a distributed coordination layer such as Ray, has become the "gold standard" for RL systems [6][4]

Group 2: Primary Development
- Primary development frameworks serve as the foundation for building RL training pipelines, providing core algorithm implementations and integration with the underlying training/inference engines [8][7]
- TRL (Transformer Reinforcement Learning) is a user-friendly RL framework from Hugging Face that supports a wide range of algorithms [9][10]
- OpenRLHF, developed collaboratively by teams including ByteDance and NetEase, aims to provide an efficient and scalable RLHF and agentic RL framework [11][14]
- veRL, developed by ByteDance's Seed team, is one of the most comprehensive frameworks, with extensive algorithm support [16][19]
- AReaL (Asynchronous Reinforcement Learning) targets large-scale, high-throughput RL training with a fully asynchronous architecture [20][21]
- NeMo-RL, launched by NVIDIA, integrates into the broader NeMo ecosystem and focuses on production-grade RL [24][28]
- ROLL, an Alibaba open-source framework, emphasizes asynchronous and agentic capabilities for large-scale LLM RL [30][33]
- slime, developed by Tsinghua and Zhipu, is a lightweight framework focused on seamlessly integrating SGLang with Megatron [34][36]

Group 3: Secondary Development
- Secondary development frameworks are built on top of primary frameworks and target specific downstream applications such as multimodal, multi-agent, and GUI automation scenarios [44][3]
- Agentic RL frameworks such as verl-agent optimize asynchronous rollout and training, addressing the core challenge of multi-round interaction with external environments [46][47]
- Multimodal RL frameworks such as VLM-R1 and EasyR1 focus on training vision-language reasoning models, tackling data-processing and loss-function design challenges [53][54]
- Multi-agent RL frameworks such as MARTI integrate multi-agent reasoning and reinforcement learning for complex collaborative tasks [59][60]

Group 4: Summary and Trends
- RL infrastructure is evolving from a "workshop" model toward a "standardized pipeline," with framework design becoming increasingly modular [65]
- Asynchronous architectures are becoming essential for addressing the computational asymmetry between rollout and training [66]
- High-performance inference engines such as vLLM and SGLang significantly accelerate the rollout process [66]
- The evolution from RLHF to agentic RL reflects the growing complexity of tasks the new frameworks must support [66]
- The choice of distributed training framework, such as Megatron-LM or DeepSpeed, is critical for large-scale model training [66]
- Scenario-driven secondary development frameworks are addressing unique challenges in vertical domains [66]
- The importance of an orchestrator for managing the distributed components of an RL system is becoming widely recognized [66]
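To make the generator-trainer split concrete, the following is a minimal, framework-agnostic Python sketch; it is not taken from any of the libraries above, and the names `generator`, `trainer`, and `trajectory_buffer` are hypothetical stand-ins for the rollout workers, learner, and trajectory queue that frameworks such as veRL or AReaL implement with distributed actors (typically coordinated by Ray).

```python
import queue
import random
import threading
import time

# Hypothetical illustration of the generator-trainer split described above:
# a generator produces trajectories asynchronously while a trainer consumes
# them to update parameters. Real frameworks add distributed coordination,
# replay buffers, and periodic weight synchronization.

trajectory_buffer = queue.Queue(maxsize=64)  # decouples rollout speed from training speed


def generator(policy_version, stop_event):
    """Interact with a toy environment, compute a reward, enqueue trajectories."""
    while not stop_event.is_set():
        prompt = f"task-{random.randint(0, 999)}"
        response = f"response-by-policy-v{policy_version['v']}"  # stand-in for an LLM rollout
        reward = random.random()                                 # stand-in for a reward model / verifier
        trajectory_buffer.put({"prompt": prompt, "response": response, "reward": reward})
        time.sleep(0.01)  # rollout is typically the slow, generation-bound phase


def trainer(policy_version, num_steps):
    """Consume trajectory batches and 'update' the policy."""
    for step in range(num_steps):
        batch = [trajectory_buffer.get() for _ in range(8)]
        mean_reward = sum(t["reward"] for t in batch) / len(batch)
        policy_version["v"] += 1  # stand-in for a gradient step plus weight sync to the generator
        print(f"step {step}: policy v{policy_version['v']}, mean reward {mean_reward:.3f}")


if __name__ == "__main__":
    version = {"v": 0}
    stop = threading.Event()
    threading.Thread(target=generator, args=(version, stop), daemon=True).start()
    trainer(version, num_steps=5)
    stop.set()
```

The buffer between the two loops is the crux of the asynchronous designs mentioned in Group 4: because generation and training run at different speeds, decoupling them through a queue (or a distributed equivalent) keeps both sides busy instead of forcing lock-step alternation.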
Observations from the Bund Conference: China's "Tech Tigers" Sketch a New Technology Landscape
Huan Qiu Wang· 2025-09-11 10:23
Group 1
- The article highlights the emergence of a new generation of young innovators in China, referred to as the "Tech Tigers," who are reshaping the technology landscape with an average age of under 30 [1][11]
- The 2025 Inclusion Bund Conference in Shanghai serves as a platform for these young researchers, developers, and entrepreneurs, featuring events such as the AI Innovation Competition and technology exhibitions [1][11]
- The AI Innovation Competition attracted nearly 20,000 participants, more than half of them from the post-2000 generation, showcasing the significant involvement of youth in technological advancement [1][11]

Group 2
- Young researchers like Lian Hui from the Hefei Institute of Physical Science are making strides in clean energy through controlled nuclear fusion technology, with implications for AI computing and industrial applications [2]
- Zhang Fan, a professor at the University of Electronic Science and Technology, is recognized for his work in digital medicine, significantly reducing MRI imaging time and thereby saving critical time for patients [2]
- Cheng Haonan, a post-95 researcher, developed a platform to combat deepfake technology, demonstrating the innovative spirit of young researchers in addressing contemporary challenges [3]

Group 3
- Young entrepreneurs like Wu Chenglin and Zhu Zheqing are leading AI startups focused on innovative applications of AI technology, emphasizing a shift from traditional business models toward more dynamic, technology-driven solutions [9][10]
- The article emphasizes the importance of open-source communities in fostering collaboration and innovation among young engineers, as seen in the contributions of figures like Fan Wendong and Xiang Jinyu [6][9]
- The narrative illustrates a broader cultural shift among young innovators, who focus not only on technological advances but also on redefining the creative process and democratizing art through AI [10][11]
After a Year and a Half Building Agents at OpenAI, He Returned to China and Released the First Open-Source Agent Training Framework! Yet This 30-Year-Old Tsinghua Prodigy Says: A Startup's Fate Is Not Decided by Technology Alone
AI前线· 2025-08-23 05:32
Core Viewpoint
- The article highlights the journey and achievements of Wu Yi, a prominent figure in AI and reinforcement learning, emphasizing his contributions to the field and the unique positioning of his startup, BianSai Technology, which focuses on the AReaL framework for training large models [2][4][8]

Group 1: Career and Achievements
- Wu Yi has a distinguished background as a medalist at the ACM-ICPC World Finals and a coach for the IOI team, with significant experience at Facebook, ByteDance, and OpenAI [2][4]
- His startup, BianSai Technology, was acquired by Ant Group in 2024, and the team has developed a unique asynchronous reinforcement learning framework called AReaL, which has gained traction on GitHub with 2.4k stars [2][4][8]

Group 2: Insights from the OpenAI Experience
- Wu Yi's decision to join OpenAI was somewhat serendipitous: he initially aimed for Google Brain but found OpenAI more accommodating due to its non-profit structure [4][5]
- He emphasizes the importance of evidence-driven decision-making in AI development, advocating a flexible approach that allows rapid adjustment based on new findings [5][13]

Group 3: Reinforcement Learning and Competitions
- Wu Yi discusses the differing performance of AI models in competitions such as IOI and CCPC, attributing failures to the readiness of the models rather than to inherent limitations of AI [6][7]
- He believes AI's role in competitive programming is akin to sports, where psychological factors and skill both play significant roles [6][7]

Group 4: AReaL Framework and Market Position
- AReaL is positioned as a unique framework for training agent models, with Wu Yi asserting that it currently has no direct competitors in this space [2][33][36]
- The framework aims to enable faster and more effective training of agent models, with a focus on user-friendliness and performance [36][37]

Group 5: Future Directions and Challenges
- Wu Yi anticipates that multi-agent systems will become increasingly important as the complexity of agent workflows grows, presenting new opportunities for algorithm development [41][42]
- He expresses confidence that agent technology will evolve into a mainstream form of interaction in AI, moving toward more autonomous and proactive roles [42]
A Professor from Tsinghua's Institute for Interdisciplinary Information Sciences Walks You Through Training Agents with Reinforcement Learning
机器之心· 2025-08-19 02:43
Core Viewpoint
- The article discusses the significance of Agentic Reinforcement Learning (Agentic RL) in training general intelligent agents, highlighting the ASearcher project as a key initiative by the AReaL team to develop an end-to-end search agent using this technology [1][2]

Summary by Sections
Agentic RL Challenges
- The main difficulty in Agentic RL is long-horizon tool usage, which requires complex interactions across varied environments [11]
ASearcher Project
- ASearcher leverages fully asynchronous RL to unlock long-horizon tool usage for agents, allowing up to 128 complex environment interactions [2][11]
AReaL-Lite
- AReaL-Lite is introduced as a lightweight development framework that enables rapid training of Agentic RL and simplifies the coding process [11]
Hands-on Training
- The article mentions a hands-on session in which participants learn to implement multi-turn search-agent training in a Jupyter Notebook, noting the need for a GPU server with at least 4 cards (a minimal sketch of such a loop follows this summary) [11]
Guest Speakers
- The session features notable speakers including Professor Wu Yi of Tsinghua University and key members of the AReaL and ASearcher projects, highlighting their expertise in the field [11]
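As a rough illustration of what a multi-turn search-agent rollout looks like, here is a framework-agnostic Python sketch; it is not AReaL-Lite code, and `llm_generate`, `search_tool`, and `compute_reward` are hypothetical stand-ins for the policy model, the external search tool, and the outcome verifier.

```python
# Hypothetical sketch of a long-horizon, multi-turn search-agent rollout:
# the policy alternates between issuing tool calls and, eventually, an answer.
# Requires Python 3.9+ for str.removeprefix/removesuffix.

MAX_TURNS = 128  # matches the interaction budget described above


def llm_generate(context: str) -> str:
    """Stand-in for a policy-model generation call."""
    return "<search>example query</search>" if "Observation" not in context else "<answer>42</answer>"


def search_tool(query: str) -> str:
    """Stand-in for an external search/retrieval call."""
    return f"results for: {query}"


def compute_reward(answer: str, gold: str) -> float:
    """Stand-in for an outcome-based verifier."""
    return 1.0 if gold in answer else 0.0


def rollout(question: str, gold: str) -> dict:
    """Run one trajectory: generate, execute tool calls, stop when an answer appears."""
    context = f"Question: {question}"
    for turn in range(MAX_TURNS):
        action = llm_generate(context)
        if action.startswith("<answer>"):
            return {"trajectory": context + "\n" + action,
                    "reward": compute_reward(action, gold),
                    "turns": turn + 1}
        query = action.removeprefix("<search>").removesuffix("</search>")
        context += f"\n{action}\nObservation: {search_tool(query)}"
    return {"trajectory": context, "reward": 0.0, "turns": MAX_TURNS}


print(rollout("What is the answer?", gold="42"))
```

In a real training pipeline, many such rollouts would run asynchronously and the resulting (trajectory, reward) pairs would feed the trainer, which is exactly the generation-heavy workload that motivates the fully asynchronous design described above.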
From OpenAI Back to Tsinghua, Wu Yi Reflects on His Path in Reinforcement Learning: "It Was a Random Pick," and Joking About "Not Understanding Equity Back Then" | 50 People in AGI Technology
AI科技大本营· 2025-06-19 01:41
Core Viewpoint
- The article highlights the journey of Wu Yi, a prominent figure in the AI field, emphasizing his contributions to reinforcement learning and the development of open-source systems like AReaL, which aims to enhance the reasoning capabilities of AI models [1][6][19]

Group 1: Wu Yi's Background and Career
- Wu Yi, born in 1992, excelled in computer science competitions and was mentored by renowned professors at Tsinghua University and UC Berkeley, leading to significant internships at Microsoft and Facebook [2][4]
- After completing his PhD at UC Berkeley, Wu joined OpenAI, where he contributed to notable projects including the "multi-agent hide-and-seek" experiment, which showcased complex behaviors emerging from simple rules [4][5]
- In 2020, Wu returned to China to teach at Tsinghua University, focusing on integrating cutting-edge technology into education and research while exploring industrial applications [5][6]

Group 2: AReaL and Reinforcement Learning
- AReaL, developed in collaboration with Ant Group, is an open-source reinforcement learning framework designed to enhance reasoning models, providing efficient and reusable training solutions [6][19]
- The framework addresses the need for models to "think" before generating answers, a concept that has gained traction in recent AI development [19][20]
- AReaL differs from traditional RLHF (Reinforcement Learning from Human Feedback) by focusing on improving the intelligence of models rather than merely making them compliant with human expectations [21][22]

Group 3: Challenges in AI Development
- Wu Yi discusses the significant challenges of entrepreneurship in the AI sector, emphasizing the critical nature of timing and the risks of missing key opportunities [12][13]
- The growth in model size presents new challenges for reinforcement learning, as modern models can have billions of parameters, requiring adaptations in training and inference processes [23][24]
- The article also highlights the importance of data quality and system efficiency in training reinforcement learning models, asserting that these factors matter more than algorithmic advances [30][32]

Group 4: Future Directions in AI
- Wu Yi expresses optimism about future breakthroughs in AI, particularly in areas such as memory expression and personalization, which remain underexplored [40][41]
- The article suggests that while multi-agent systems are valuable, they may not be essential for all tasks, since advances in single models could render multi-agent approaches unnecessary [42][43]
- The ongoing pursuit of scaling laws in AI development indicates that improvements in model performance will remain a focal point for researchers and developers [26][41]