The GRPO Algorithm
DeepSeek-V3.2 Devours Tokens, and GRPO Turns Out to Be the Culprit
36Ke· 2025-12-04 10:38
Whenever DeepSeek releases a model, it draws intense attention and broad discussion in the industry, but some small bugs inevitably surface as well. For instance, when users ask questions in English, the model sometimes switches back to "mysterious Eastern characters" mid-reasoning. DeepSeek models' fondness for Chinese characters is nothing new; the「极」character bug is a classic example. This time, with the release of the new DeepSeek-V3.2, people have found another area where DeepSeek needs optimization: its long-reasoning version (Speciale) shows poor token efficiency. According to feedback from multiple researchers, DeepSeek-V3.2 Speciale exhibits clearly abnormal token consumption on complex tasks. Concretely: on the same task, Gemini consumed only 20,000 tokens while DeepSeek-V3.2 Speciale used 77,000, meaning it needs more than three times as many tokens to produce results of comparable quality. In addition, the Speciale version produces long, rambling output that still ends up wrong; this is not a new problem but an inherent flaw of the GRPO algorithm itself. In fact, DeepSeek-V3.2's abnormal token consumption has already been noticed by many users and researchers. Community members point out that Spe ...
DeepSeek-V3.2 Devours Tokens, and GRPO Turns Out to Be the Culprit
机器之心· 2025-12-04 08:18
Core Insights
- The article discusses the release of the DeepSeek-V3.2 model, highlighting its performance issues, particularly in token consumption and output verbosity, which have raised concerns among users and researchers [1][2][6].

Token Consumption and Efficiency
- DeepSeek-V3.2 Speciale exhibits inefficient token usage, consuming 77,000 tokens for tasks where Gemini only requires 20,000, indicating over three times the token expenditure for results of similar quality [1][6].
- Users have noted that the generation speed of DeepSeek-V3.2 Speciale is approximately 30 tokens per second, and that an increase to around 100 tokens per second would significantly improve usability and experience [6].

Output Quality and Verbosity
- The Speciale version tends to produce lengthy and verbose outputs that are often still incorrect, which is attributed to inherent flaws in the GRPO algorithm [2][15].
- In benchmark tests the model posts a median score of 76.38, with a median difference of 11.07% compared to other models, indicating a notable gap in efficiency [7].

Comparison with Other Models
- In benchmark comparisons, DeepSeek-V3.2 Speciale's token consumption during inference is reported to be significantly higher than its predecessor's: 86 million tokens versus 62 million for the previous version [7][10].
- The model's performance metrics show it lagging behind competitors such as Gemini-3.0 Pro in output token latency and efficiency [10][12].

Algorithmic Limitations
- The GRPO algorithm, which underpins DeepSeek, has been criticized for introducing biases that lead to longer and often incorrect responses, a problem that persists in the latest model [16][20]. (A minimal sketch of GRPO's core computation follows this summary.)
- Length bias, a significant issue in the GRPO algorithm, causes the model to generate longer responses even when they are incorrect, and has been identified as a primary reason for the high token consumption of DeepSeek-V3.2 Speciale [20][23].

Future Directions
- The developers acknowledge improved token efficiency as a critical area for future research, aiming to balance performance and cost in subsequent model iterations [14][23].
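Since these summaries keep pointing back to GRPO's reward handling, a minimal sketch of the group-relative advantage computation that gives GRPO its name may help: sample a group of responses per prompt and standardize each response's reward against the group's mean and standard deviation. This is an illustrative snippet based on that standard description, not DeepSeek's training code.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """Standardize each sampled response's reward against its group's
    mean and standard deviation, as commonly described for GRPO.
    Illustrative sketch only, not DeepSeek's implementation."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# One prompt, four sampled responses; reward 1 = correct, 0 = wrong.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # ~[ 1., -1., -1.,  1.]
```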
A Bug Has Been Found in DeepSeek-V3.2: It Burns Through Tokens and Can Still Get the Answer Wrong; Researchers Say the Old GRPO Problem Was Never Fixed
36Ke· 2025-12-04 02:21
DeepSeek-V3.2 is strong and hugely popular, but as the discussion has deepened, a bug has been found. And it is an old problem: wasted tokens. (Image source: x@Hangslin)

Many users note that DeepSeek-V3.2's long-reasoning variant Speciale has, as an open-source model, genuinely put pressure on the closed-source leaders again, but the problem is equally obvious: on complex tasks it consumes noticeably more tokens and may even produce answers that are "long and wrong." For example, to solve the same problem, Gemini used only 20,000 tokens while Speciale needed 77,000. What is going on?

The uncorrected "length bias"

Researchers point out that this is a bug that has existed in the DeepSeek series ever since DeepSeek-R1-Zero. In short, the problem lies in the GRPO algorithm, whose per-token importance ratio is

$$\tau_{i,t}(\theta)=\frac{\pi_{\theta}(o_{i,t}\mid q,o_{i,<t})}{\pi_{\mathrm{old}}(o_{i,t}\mid q,o_{i,<t})}\tag{6}$$

Researchers from Sea AI Lab, the National University of Singapore, and other institutions argue that GRPO carries two "hidden biases."

Length bias: the longer a wrong answer, the lighter the penalty. When GRPO computes the reward, it takes the ans ...
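The length bias described here arises because the GRPO objective divides each response's summed token terms by that response's own length, so every token of a long wrong answer carries a smaller penalty than a token of a short wrong answer. A minimal numeric sketch of that effect, assuming a constant negative advantage per token and ignoring the importance ratio in Eq. (6) and clipping:

```python
def per_token_weight(advantage, response_length):
    """With GRPO's per-response 1/|o_i| normalization, each token of a
    response effectively contributes advantage / |o_i| to the objective.
    Sketch only: constant advantage, importance ratio and clipping ignored."""
    return advantage / response_length

# A wrong answer has a negative advantage; stretching it out dilutes the
# per-token penalty, so there is little pressure to keep wrong answers short.
print(per_token_weight(-1.0, 100))     # -0.01
print(per_token_weight(-1.0, 10_000))  # -0.0001
```

This is exactly the "the longer the wrong answer, the lighter the penalty" behavior the researchers describe.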
A Bug Has Been Found in DeepSeek-V3.2: It Burns Through Tokens and Can Still Get the Answer Wrong; Researchers Say the Old GRPO Problem Was Never Fixed
量子位· 2025-12-03 09:05
Core Viewpoint
- DeepSeek-V3.2 has gained significant attention but has been found to have issues, particularly with token consumption during complex tasks, leading to longer and potentially incorrect answers [1][4][5].

Group 1: Token Consumption Issues
- DeepSeek-V3.2's Speciale version consumes more tokens than competitors, using 77,000 tokens on certain tasks where Gemini uses only 20,000 [5].
- The model's reliance on the GRPO algorithm has led to a "length bias," where longer incorrect answers are penalized less, resulting in the generation of "long and wrong" responses [10][11].

Group 2: Hidden Biases in the GRPO Algorithm
- The GRPO algorithm has two hidden biases: length bias and difficulty bias. The length bias means longer incorrect answers are favored, while the difficulty bias causes the model to focus excessively on overly simple or overly difficult questions, neglecting the medium-difficulty questions that are crucial for skill improvement [10][12]. (A numeric sketch of the difficulty bias follows this summary.)
- Despite attempts to address these biases, the length bias remains a challenge, as acknowledged in DeepSeek's technical report [15][13].

Group 3: Cost and Resource Considerations
- DeepSeek-V3.2's output cost is significantly lower than GPT-5's, at only 1/24 of the price, which may make it more acceptable despite its token-efficiency issues [17].
- The model's 128K context length has not been updated for a long time, which may be related to limited GPU resources [18].
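The difficulty bias can be illustrated in the same spirit: because GRPO divides each reward's deviation by the group's standard deviation, prompts on which the sampled answers almost all agree (nearly all correct or nearly all wrong, hence a small std) get their advantages inflated relative to medium-difficulty prompts. A small numeric sketch, again assuming binary rewards and nothing else from the real training stack:

```python
import statistics

def std_scaled_advantages(rewards, eps=1e-6):
    """GRPO-style scaling of group advantages by the group's reward std
    (illustrative sketch of the 'difficulty bias' only)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [round((r - mean) / (std + eps), 2) for r in rewards]

# Medium-difficulty prompt (half right, half wrong): std is large, scaling is mild.
print(std_scaled_advantages([1, 1, 0, 0]))  # [1.0, 1.0, -1.0, -1.0]
# Very easy prompt (one slip-up): std is small, so the outlier's update is inflated.
print(std_scaled_advantages([1, 1, 1, 0]))  # [0.58, 0.58, 0.58, -1.73]
```

The effect is that very easy (or very hard) prompts contribute outsized gradient signal compared with the medium-difficulty prompts the summary says matter most.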
A Reinforcement Learning Training Framework for Multimodal Large Models: An EasyR1 Code Walkthrough (GRPO)
自动驾驶之心· 2025-07-15 12:30
Core Insights
- The article explores the EasyR1 framework for multimodal reinforcement learning, focusing in particular on its implementation and configuration for training models such as Qwen2.5-VL [1][4][6].

Group 1: Framework Overview
- EasyR1 is derived from the verl framework, which targets language-only reinforcement learning, and extends it to multimodal training [1][6].
- The code version referenced dates from around June 10, reflecting ongoing updates and improvements [1].

Group 2: Configuration Details
- The configuration file is structured into four main categories: data, algorithm, worker, and trainer, with specific parameters outlined for each [6][11].
- Data configurations include paths for training and validation files, maximum prompt and response lengths, and batch sizes for training iterations [9][10].
- Algorithm configurations specify parameters for the advantage estimator, discount factors, and KL-divergence settings [11][13].

Group 3: Training Workflow
- The training process is launched from a main script that sets up the data loaders and starts the training loop [42][43]. (A toy skeleton of this loop is sketched after this summary.)
- The workflow includes steps for preparing data, generating sequences, and computing rewards, with specific attention to balancing batch sizes across distributed processes [46][50][64].
- The article emphasizes the importance of handling multimodal data and ensuring that the training process accommodates various input types [65][66].

Group 4: Data Handling
- The dataset must include specific keys such as problem, answer, and images, formatted as JSON for compatibility with the loading functions [40][41].
- The data-loading process supports multiple file formats and is designed to create a seamless pipeline for training [41][32].

Group 5: Model Update Mechanism
- The article outlines the mechanism for updating the actor model, detailing how the policy loss is computed and how gradients are managed during training [82][86].
- It highlights the significance of the KL divergence against the reference model during training [71][80].
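To tie the configuration categories and workflow steps above together, here is a self-contained toy skeleton of the loop the walkthrough describes: prepare data, generate a group of responses, score them, compute group-relative advantages, then update the actor. All names, the dataset record, and the toy reward are illustrative stand-ins, not EasyR1's actual API or configuration schema.

```python
"""Toy skeleton of the GRPO-style loop described in the walkthrough.
Names and the toy reward are illustrative stand-ins, not EasyR1's API."""
import random
import statistics

def load_dataset():
    # The walkthrough notes each JSON record needs problem / answer / images keys.
    return [{"problem": "2 + 2 = ?", "answer": "4", "images": []}]

def generate_group(problem, group_size=4):
    # Stand-in for the rollout worker that samples several candidate responses.
    return [random.choice(["4", "5", "I think it is 4"]) for _ in range(group_size)]

def reward(gold_answer, response):
    # Stand-in for a rule-based reward: exact match on the final answer.
    return 1.0 if response.strip() == gold_answer else 0.0

def group_advantages(rewards, eps=1e-6):
    mean, std = statistics.fmean(rewards), statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

for record in load_dataset():
    responses = generate_group(record["problem"])
    rewards = [reward(record["answer"], r) for r in responses]
    advantages = group_advantages(rewards)
    # A real trainer would now compute the clipped policy loss with a KL
    # penalty against the reference model and step the actor's optimizer.
    for resp, rew, adv in zip(responses, rewards, advantages):
        print(f"{resp!r:22} reward={rew:.1f} advantage={adv:+.2f}")
```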
Is the GRPO Used by DeepSeek Really That Special? A Long-Form Analysis of Four High-Quality Papers
机器之心· 2025-05-24 03:13
Core Insights
- The article discusses recent advancements in reasoning models, particularly focusing on GRPO and its improved algorithms, highlighting the rapid evolution of AI in the context of reinforcement learning and reasoning [1][2][3].

Group 1: Key Papers and Models
- Kimi k1.5 is a newly released reasoning model that employs reinforcement learning techniques and emphasizes long-context extension and improved policy optimization [10][17].
- OpenReasonerZero is the first complete reproduction of reinforcement learning training on a foundational model, showcasing significant results [34][36].
- DAPO explores improvements to GRPO to better adapt it to reasoning training, presenting a large-scale open-source LLM reinforcement learning system [48][54].

Group 2: GRPO and Its Characteristics
- GRPO is closely related to PPO (Proximal Policy Optimization) and shares similarities with RLOO (REINFORCE Leave One Out); many leading research works do not use GRPO at all [11][12][9].
- The core takeaway is that current RL algorithms are highly similar in implementation, with GRPO being popular but not fundamentally revolutionary [15][6].
- GRPO includes clever modifications aimed specifically at reasoning training rather than traditional RLHF scenarios, focusing on generating multiple answers per reasoning task [13][12].

Group 3: Training Techniques and Strategies
- Kimi k1.5's training involves supervised fine-tuning (SFT) and emphasizes behavior patterns such as planning, evaluation, reflection, and exploration [23][24].
- Its training curriculum starts with simpler tasks and gradually increases complexity, akin to human learning processes [27][28].
- The paper discusses the importance of data distribution and prompt quality for effective reinforcement learning [22][41].

Group 4: DAPO Improvements
- DAPO introduces two distinct clipping hyperparameters to enhance the learning dynamics and efficiency of the model [54][60].
- It also employs dynamic sampling, removing samples with flat rewards from the batch to improve learning speed [63].
- It proposes token-level loss rather than per-response loss to better manage learning dynamics and avoid issues with long responses [64][66]. (A sketch combining these changes follows this summary.)

Group 5: Dr. GRPO Modifications
- Dr. GRPO modifies GRPO to achieve stronger performance with shorter generated lengths, improving learning dynamics [76][79].
- The modifications include normalizing advantages across all tokens in a response, which helps manage the learning signal effectively [80][81].
- The paper highlights the importance of high-quality data engineering in absorbing the effects of these changes, emphasizing the need for a balanced distribution of problem difficulty [82][89].
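As a rough illustration of what Groups 4 and 5 describe, the sketch below combines two of those changes: separate lower/upper clipping ranges (DAPO-style "clip-higher") and aggregating the clipped objective over all tokens in the batch rather than averaging each response by its own length. The eps values and all names are illustrative, not the papers' reference implementations; DAPO's dynamic sampling would additionally drop groups whose rewards are all identical before this loss is computed.

```python
import numpy as np

def token_level_clipped_loss(ratios, advantages, eps_low=0.2, eps_high=0.28):
    """Token-level clipped policy objective sketching two of the changes
    summarized above: asymmetric clip ranges ('clip-higher') and averaging
    over all tokens in the batch instead of per response.
    The eps defaults are illustrative, not the papers' exact settings.

    ratios, advantages: flat arrays covering every token of every response.
    """
    ratios = np.asarray(ratios, dtype=np.float64)
    advantages = np.asarray(advantages, dtype=np.float64)
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - eps_low, 1.0 + eps_high) * advantages
    per_token = np.minimum(unclipped, clipped)  # pessimistic PPO-style surrogate
    return -per_token.mean()                    # mean over tokens, not per response

# Toy batch: tokens from one correct response (+1 advantage) and one wrong
# response (-1 advantage), with made-up importance ratios.
print(token_level_clipped_loss(
    ratios=[1.1, 0.9, 1.3, 1.0, 0.7],
    advantages=[+1.0, +1.0, +1.0, -1.0, -1.0],
))
```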