From Models to Ecosystem: A Preview of the "Open Source Models and Frameworks" Track at the 2025 Global Machine Learning Technology Conference
AI科技大本营 · 2025-09-26 05:49
Core Insights
- The article discusses the growing divide between open-source and closed-source AI models, highlighting that the performance gap narrowed from 8% to 1.7% as of 2025, a sign that open-source models are catching up [1][12].

Open Source Models and Frameworks
- The 2025 Global Machine Learning Technology Conference will feature a special track on "Open Source Models and Frameworks," inviting creators and practitioners to share their insights and experiences [1][12].
- A range of open-source projects is under development, including mobile large language model inference, reinforcement learning frameworks, and efficient inference serving, all aimed at making open-source technology more accessible to developers [2][7].

Key Contributors
- Wang Zhaode, a technical expert at Alibaba's Taotian Group, focusing on mobile large language model inference [4][23].
- Chen Haiquan, an engineer at ByteDance, contributing to the verl project for flexible and efficient reinforcement learning programming [4][10].
- Jiang Yong, a senior architect at Dify, involved in the development of open-source tools [4][23].
- You Kaichao, the core maintainer of vLLM, which provides low-cost large model inference serving (a minimal usage sketch follows this summary) [4][7].
- Li Shenggui, a core developer of SGLang and currently a PhD student at Nanyang Technological University [4][23].

Conference Highlights
- The conference will feature discussions on the evolution of AI competition, which now spans data, models, systems, and evaluation, with major players such as Meta, Google, and Alibaba vying for dominance in the AI ecosystem [12][13].
- Attendees will hear from leading experts, including Lukasz Kaiser, a co-inventor of GPT-5 and the Transformer, on where AI technology is heading [12][13].

Event Details
- The conference takes place soon, with a focus on the latest technical insights and industry trends, and developers are encouraged to participate and share their experiences [12][13].
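Since vLLM is mentioned above only in passing, here is a minimal, hedged sketch of its offline batched-generation API, the usage pattern behind the "low-cost inference serving" description; the model name and sampling settings are illustrative placeholders, not details from the article.

```python
# Minimal vLLM offline-inference sketch (model name and sampling
# parameters are illustrative placeholders, not from the article).
from vllm import LLM, SamplingParams

prompts = [
    "Explain reinforcement learning in one sentence.",
    "What is tensor parallelism?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# vLLM batches requests and manages KV-cache memory internally,
# which is the main source of its throughput and cost advantage.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
outputs = llm.generate(prompts, sampling_params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```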
The Evolution and Trends of Reinforcement Learning Frameworks
自动驾驶之心 · 2025-08-18 23:32
Group 1
- The article discusses the transition from Supervised Fine-Tuning (SFT) to Reinforcement Learning (RL) as a model-training paradigm, noting that RL is becoming increasingly critical for enhancing model capabilities [3][4][8].
- RL algorithms continue to evolve, with newer methods such as GRPO, RLOO, and DAPO focusing on improving stability and sample efficiency (a sketch of GRPO's group-relative advantage appears after this summary) [4].
- The RL training process consists of three main modules: Rollout (policy generation), Reward Evaluation, and Policy Update, each playing a vital role in the training framework [5][6][7].

Group 2
- Designing RL training frameworks is challenging partly because the Rollout and training modules must be coordinated, especially as model scale grows and training moves to distributed multi-GPU setups [12][13].
- The diversity of underlying training and inference frameworks complicates parameter synchronization and inference scheduling [14].
- Performance optimization strategies include data parallelism, tensor parallelism, and pipeline parallelism, each with distinct advantages and limitations (the tensor-parallel sketch below illustrates the core idea) [22][24].

Group 3
- The article stresses efficient data-transfer mechanisms and parameter synchronization between training frameworks and inference engines, which call for flexible communication strategies (see the weight-broadcast sketch below) [32][39].
- The SLIME and ROLL frameworks are introduced, showcasing their approaches to managing data transfer and parameter synchronization effectively [42][46].
- Ray's role in distributed computing is discussed, in particular how it manages resource allocation and communication in complex RL tasks (a minimal Ray actor sketch follows) [48][53].

Group 4
- The article concludes with a comparison of RL frameworks such as SLIME, ROLL, and verl, each catering to different needs and offering distinct features for specific applications [61].
- Rapid technological change makes simplicity and high maintainability essential in framework design [58].
- The article emphasizes the significance of open-source frameworks in advancing RL technology, particularly in the context of China's leading technical strength and understanding [60].
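As referenced in Group 1, here is a short sketch of the group-relative advantage that distinguishes GRPO: each prompt gets a group of sampled responses, and every response's reward is normalized against its own group's mean and standard deviation, removing the need for a learned value function. Shapes and numbers are illustrative.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one per response."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Normalize within each group: responses are scored relative to
    # their siblings sampled from the same prompt.
    return (rewards - mean) / (std + eps)

rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0],   # group for prompt 1
                        [0.0, 0.0, 1.0, 1.0]])  # group for prompt 2
print(grpo_advantages(rewards))
```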
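For the parallelism strategies in Group 2, the following single-process snippet illustrates the arithmetic behind tensor parallelism: a linear layer's weight is split column-wise across two hypothetical devices, each computes a partial output, and concatenation recovers the full result. It demonstrates the math only, not a real multi-GPU implementation.

```python
import torch

x = torch.randn(4, 8)        # (batch, hidden) activations
w = torch.randn(8, 16)       # full linear-layer weight
w0, w1 = w.chunk(2, dim=1)   # column split across two "devices"

full = x @ w
# Each shard computes its slice of the output independently;
# gathering the slices reproduces the unsharded result exactly.
sharded = torch.cat([x @ w0, x @ w1], dim=1)
print(torch.allclose(full, sharded))  # True
```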
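For the parameter synchronization discussed in Group 3, one common pattern (shown here generically, not as SLIME's or ROLL's actual API) is for the trainer to broadcast updated weights to rollout workers over a shared torch.distributed process group. The sketch assumes the process group is already initialized via dist.init_process_group and that the receiving inference engine can consume tensors in this layout; resharding between training and inference parallelism layouts, which the article's frameworks handle, is omitted.

```python
import torch
import torch.distributed as dist

def sync_weights_to_rollout(model: torch.nn.Module, src_rank: int = 0) -> None:
    """Broadcast every parameter tensor from the trainer (src_rank) to all
    other ranks in the already-initialized default process group."""
    for tensor in model.state_dict().values():
        # broadcast() writes in place, so receiving ranks end up with
        # the trainer's freshly updated weights.
        dist.broadcast(tensor, src=src_rank)
```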
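And for the Ray integration in Group 3, a minimal actor-based sketch of the resource-allocation pattern: rollout workers and a trainer run as separate GPU-pinned actors. Class and method names are hypothetical, and the snippet assumes a Ray cluster with available GPUs.

```python
import ray

ray.init()

@ray.remote(num_gpus=1)
class RolloutWorker:
    def generate(self, prompts):
        # A real framework would call an inference engine
        # (e.g. vLLM or SGLang) hosted inside this actor.
        return [p + " -> response" for p in prompts]

@ray.remote(num_gpus=1)
class Trainer:
    def update(self, samples):
        return len(samples)  # placeholder for a gradient step

workers = [RolloutWorker.remote() for _ in range(2)]
trainer = Trainer.remote()

# Rollouts run in parallel across workers; the trainer consumes the batch.
batches = ray.get([w.generate.remote(["q1", "q2"]) for w in workers])
samples = [s for batch in batches for s in batch]
print(ray.get(trainer.update.remote(samples)))
```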
A Multimodal Large-Model RL Training Framework: EasyR1 Code Walkthrough (GRPO)
自动驾驶之心 · 2025-07-15 12:30
Core Insights
- The article explores the EasyR1 framework for multimodal reinforcement learning, focusing on its implementation and configuration for training models such as Qwen2.5-VL [1][4][6].

Group 1: Framework Overview
- EasyR1 is derived from the verl framework, which was originally designed for language-only reinforcement learning, and extends it to multimodal training [1][6].
- The code version referenced dates from roughly June 10, reflecting ongoing updates and improvements [1].

Group 2: Configuration Details
- The configuration file is organized into four main categories, data, algorithm, worker, and trainer, with specific parameters outlined for each (a schematic example follows this summary) [6][11].
- Data configuration covers paths to training and validation files, maximum prompt and response lengths, and batch sizes for training iterations [9][10].
- Algorithm configuration specifies the advantage estimator, discount factors, and KL-divergence settings [11][13].

Group 3: Training Workflow
- Training is launched from a main script that sets up the data loaders and starts the training loop [42][43].
- The workflow prepares data, generates sequences, and computes rewards, with specific attention to balancing batch sizes across distributed processes [46][50][64].
- The article emphasizes handling multimodal data so that training accommodates varied input types [65][66].

Group 4: Data Handling
- The dataset must include specific keys such as problem, answer, and images, formatted as JSON for compatibility with the loading functions (see the example record after this summary) [40][41].
- Data loading supports multiple file formats and forms a seamless pipeline for training [41][32].

Group 5: Model Update Mechanism
- The article details the mechanism for updating the actor model: how the policy loss is computed and how gradients are managed during training (a hedged sketch of a GRPO-style loss follows) [82][86].
- It highlights the role of KL divergence in training, particularly relative to the reference model [71][80].
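As referenced in Group 2, here is a schematic rendering of the four-category configuration, written as a Python dict for illustration. The top-level keys (data, algorithm, worker, trainer) follow the article; the field names and values inside are plausible placeholders, not EasyR1's exact schema.

```python
# Illustrative configuration skeleton; field names are hypothetical.
config = {
    "data": {
        "train_files": "path/to/train.json",
        "val_files": "path/to/val.json",
        "max_prompt_length": 2048,    # prompt token budget
        "max_response_length": 2048,  # response token budget
        "rollout_batch_size": 512,    # prompts per training iteration
    },
    "algorithm": {
        "adv_estimator": "grpo",  # advantage estimator
        "gamma": 1.0,             # discount factor
        "kl_coef": 1e-2,          # KL-divergence penalty weight
    },
    "worker": {
        # actor / rollout / reference-model settings (model path,
        # parallelism degrees, offloading, etc.)
    },
    "trainer": {
        "total_epochs": 1,
        "n_gpus_per_node": 8,
    },
}
```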
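For the dataset format in Group 4, a minimal example record with the required keys (problem, answer, images). The `<image>` placeholder token and the exact field semantics are inferred from the summary, not taken from EasyR1's source.

```python
import json

# One hypothetical training record: prompt text, ground-truth answer for
# the rule-based reward, and image paths for the vision encoder.
record = {
    "problem": "<image> How many objects are in the picture?",
    "answer": "4",
    "images": ["images/sample_0001.png"],
}

with open("train.json", "w") as f:
    json.dump([record], f, ensure_ascii=False)

with open("train.json") as f:
    data = json.load(f)
print(data[0]["problem"], "->", data[0]["answer"])
```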
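For the update mechanism in Group 5, a hedged sketch of a GRPO-style token-level policy loss with a KL penalty against the reference model, assembled from the standard published formulas; it illustrates the computation the article walks through rather than reproducing EasyR1's code.

```python
import torch

def grpo_policy_loss(logp, old_logp, ref_logp, advantages, mask,
                     clip_eps=0.2, kl_coef=0.01):
    """All inputs are (batch, seq_len) token-level tensors; `advantages`
    holds the group-normalized advantage broadcast over each response."""
    # Importance ratio between current and rollout-time policies.
    ratio = torch.exp(logp - old_logp)
    # PPO-style clipped surrogate objective.
    surrogate = torch.min(
        ratio * advantages,
        torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages,
    )
    # Low-variance "k3" KL estimator against the reference policy.
    kl = torch.exp(ref_logp - logp) - (ref_logp - logp) - 1
    # Maximize surrogate, penalize divergence; average over valid tokens.
    loss = -(surrogate - kl_coef * kl)
    return (loss * mask).sum() / mask.sum()
```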