GRPO

How to prepare for RL-related interview questions?
自动驾驶之心· 2025-09-12 16:03
Author | Abel chen  Editor | 自动驾驶之心  Original link: https://zhuanlan.zhihu.com/p/1948681769332240910

1. Is GRPO on-policy or off-policy? Why?
Short answer: GRPO, as originally designed and in its common implementations, is on-policy; however, it can be extended to off-policy, and existing work has specifically studied this extension and its trade-offs.
Why it is on-policy (explanation)
Why some argue it can be made off-policy (extension)
Recent work has generalized the GRPO idea to off-policy settings (for example, estimating advantages from data generated by other policies or from older batches, with appropriate corrections), reporting potential gains and trade-offs in sample efficiency and stability. In other words, although GRPO is fundamentally built on an on-policy surrogate objective, techniques such as importance sampling, in-batch normalization, and clipping can, both mathematically and in engineering terms, turn it into an off-policy variant (a minimal sketch of such a correction appears below).
Practical advice (brief) ...
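A minimal sketch of the importance-sampling correction mentioned above, assuming per-completion log-probabilities are available from both the current policy and the behavior policy that generated the data; the function name, tensor shapes, and clipping constant are illustrative assumptions, not taken from any specific GRPO implementation.

```python
import torch

def grpo_off_policy_loss(logp_current, logp_behavior, advantages, clip_eps=0.2):
    """Sketch of a GRPO-style surrogate with an importance-sampling ratio.

    logp_current:  (batch,) summed log-probs of each sampled completion under the policy being updated
    logp_behavior: (batch,) the same quantities under the policy that generated the data
    advantages:    (batch,) group-relative advantages (reward minus group mean, optionally divided by std)
    """
    # Importance ratio corrects for the mismatch between the data-generating
    # policy and the policy being optimized (the off-policy ingredient).
    ratio = torch.exp(logp_current - logp_behavior)
    # PPO-style clipping keeps the update close to the behavior policy,
    # which is what makes the off-policy extension workable in practice.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    surrogate = torch.minimum(ratio * advantages, clipped * advantages)
    return -surrogate.mean()
```

The clipping mirrors PPO's trust-region heuristic; without it, large importance ratios from stale data can destabilize training, which is the main trade-off the off-policy extensions have to manage.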
From RLHF and PPO to GRPO and on to training reasoning models: the reinforcement learning primer you need
机器之心· 2025-06-22 04:26
Core Insights
- Reinforcement Learning (RL) has become an essential technology in the AI field, particularly for large language models (LLMs) [1]
- The Unsloth team has released a comprehensive reinforcement learning tutorial covering concepts from RLHF to GRPO, accessible to beginners and advanced users alike [2][3]

Group 1: Understanding Reinforcement Learning
- The goal of reinforcement learning is to increase the likelihood of "good" outcomes while reducing the chances of "bad" outcomes [8][10]
- Key components of RL include the environment, the agent, actions, and reward functions, which together define the learning process [9][14]
- RLHF (Reinforcement Learning from Human Feedback) gained popularity largely through OpenAI's implementation, which trains agents to generate outputs that humans judge useful [16][19]

Group 2: GRPO and Its Advantages
- GRPO (Group Relative Policy Optimization) is a method developed to train reasoning models; it differs from PPO (Proximal Policy Optimization) by removing the value model and relying on custom reward functions [22][24]
- GRPO estimates advantages by sampling multiple outputs for a given question and comparing each output's reward against the group average, which drives the policy update [27][28]
- The approach yields significant memory savings and can improve tasks beyond coding and mathematics, such as email automation and legal applications [30]

Group 3: Training with Unsloth
- Unsloth provides a detailed guide for training reasoning models with GRPO, requiring as little as 5GB of VRAM for local training of models up to 1.5 billion parameters [44]
- The training process generates multiple answer variants for each question, scores them with a reward function, and updates the model weights accordingly [45][57]
- Effective training requires a well-designed reward function and sufficient data, with at least 500 rows recommended for good results [49][50]

Group 4: Reward Functions and Validators
- Reward functions and validators play complementary roles in evaluating model outputs: the former assigns scores based on correctness and quality, while the latter verifies that outputs are accurate [46][56]
- Examples of reward functions include ones that reward correct answers and penalize incorrect or overly verbose responses; a minimal sketch follows below [61]
- Reward-function design is critical, since poorly constructed functions can inadvertently degrade model performance [57]
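To make the group-relative advantage estimate and the reward-function design concrete, here is a minimal sketch in PyTorch; the scoring rule (a bonus for a correct final answer, a penalty for overly long responses) and the helper names are illustrative assumptions, not the Unsloth implementation.

```python
import re
import torch

def reward_fn(completion: str, reference_answer: str, max_len: int = 400) -> float:
    """Toy reward: reward a correct final answer, penalize verbosity.
    The scoring rule here is illustrative, not Unsloth's."""
    # Take the last number in the completion as the model's final answer.
    numbers = re.findall(r"-?\d+\.?\d*", completion)
    predicted = numbers[-1] if numbers else None
    score = 2.0 if predicted == reference_answer else -1.0
    # Mild penalty for overly long responses.
    if len(completion) > max_len:
        score -= 0.5
    return score

def group_relative_advantages(rewards: list[float]) -> torch.Tensor:
    """GRPO-style advantage: normalize each sampled completion's reward
    against the mean (and std) of its own group, so no value model is needed."""
    r = torch.tensor(rewards, dtype=torch.float32)
    return (r - r.mean()) / (r.std() + 1e-4)

# Usage: sample several completions for one question, score them, then normalize.
completions = ["... the answer is 42", "... so 41", "... therefore 42" * 50]
rewards = [reward_fn(c, "42") for c in completions]
advantages = group_relative_advantages(rewards)  # one advantage per sampled completion
```

In the GRPO training loop, these per-completion advantages would then weight the log-probabilities of the corresponding tokens, replacing the value-model baseline that PPO would otherwise require.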