SFT
The essence of SFT is that it actually optimizes a lower bound of the RL objective...
自动驾驶之心· 2025-10-22 00:03
Core Insights
- The article establishes that under sparse rewards, the training objective of supervised fine-tuning (SFT) is a loose lower bound of the reinforcement learning (RL) objective, and introduces a bridge distribution to tighten this lower bound while maintaining training stability [1][9][23]

Group 1: Relationship Between SFT and RL
- The training objective of RL policy gradient algorithms is defined, and its derivation is what links SFT and RL [4][3]
- SFT operates on a fixed set of labeled data, in contrast with RL's online sampling, which optimizes the policy model against reward values [5][9]
- The article shows that SFT's optimization goal can be viewed as a lower bound of the RL objective, which explains why SFT training is effective at all [9][23]

Group 2: Importance Sampling and Adjustments
- Importance sampling is applied to move the RL training objective from online to offline sampling [6][11]
- A key finding is that the SFT lower bound may become looser as training progresses, so adjustments are needed to tighten it [9][11]
- An auxiliary distribution is introduced to adjust the SFT training objective, yielding a tighter lower bound while preserving training stability [11][12]

Group 3: Properties of iw-SFT
- The iw-SFT formulation incorporates a freely adjustable weight coefficient that tightens the lower bound [11][13]
- The choice of auxiliary distribution is critical; it should stay close to the reference distribution so the bound remains tight without destabilizing training [13][14]
- Two methods for constraining importance weights are proposed: clipping the weights and smoothing them to reduce variance (a hedged derivation sketch follows this summary) [14][15]

Group 4: Practical Implications
- A multi-armed bandit example illustrates the advantages of iw-SFT, showing how it can exploit negative-sample information to improve policy convergence [18][19][20]
- The overall conclusion emphasizes understanding the relationship between SFT and RL and how these adjustments can improve training outcomes [23]
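For concreteness, below is a minimal LaTeX sketch of the kind of derivation the summary describes: how importance sampling and Jensen's inequality turn the RL objective into an SFT-style lower bound, plus one possible clipped importance-weighted (iw-SFT) variant. The binary-reward assumption, the support-coverage assumption on the data distribution, the auxiliary distribution q, and the PPO-style clipping range (1 - eps, 1 + eps) are illustrative assumptions, not necessarily the article's exact formulation.

```latex
% Sketch only: sparse binary reward r(x,y) in {0,1}; the curated SFT data
% distribution \pi_{data} is assumed to contain only r = 1 samples and to
% cover the reward-bearing responses; q is the auxiliary (bridge) distribution.
\begin{align*}
J(\theta)
  &= \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot\mid x)}\big[r(x,y)\big]
   = \mathbb{E}_{y \sim \pi_{\mathrm{data}}}\!\left[\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{data}}(y\mid x)}\, r(x,y)\right]
   && \text{(importance sampling)}\\
\log J(\theta)
  &\ge \mathbb{E}_{y \sim \pi_{\mathrm{data}}}\!\left[\log \frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{data}}(y\mid x)}\right]
   = \underbrace{\mathbb{E}_{y \sim \pi_{\mathrm{data}}}\big[\log \pi_\theta(y\mid x)\big]}_{\text{SFT objective}} + \mathrm{const}
   && \text{(Jensen's inequality)}\\
\mathcal{L}_{\text{iw-SFT}}(\theta)
  &= \mathbb{E}_{y \sim \pi_{\mathrm{data}}}\!\left[\operatorname{clip}\!\left(\frac{q(y\mid x)}{\pi_{\mathrm{data}}(y\mid x)},\, 1-\epsilon,\, 1+\epsilon\right)\log \pi_\theta(y\mid x)\right]
   && \text{(clipped importance weights)}
\end{align*}
```

The first two lines are the standard argument for why plain SFT already optimizes a (possibly loose) lower bound; the last line is one plausible instance of reweighting the per-sample log-likelihood with clipped importance weights, in the spirit of Group 3.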
The "separation" and "unification" of post-training: is a grand unification of SFT & RL the right answer?
机器之心· 2025-09-14 01:30
Group 1
- The article discusses the limitations of the traditional "SFT followed by RL" paradigm in post-training, and suggests a unified approach that combines both methods [7][9][10]
- It highlights the role of post-training in aligning a model's capabilities with human values and preferences, and addresses the "catastrophic forgetting" and overfitting problems associated with SFT [8][11][12]
- The emerging trend in the industry is to explore a unified post-training framework that leverages the strengths of both SFT and RL rather than treating them as separate stages (an illustrative sketch of such a combined objective follows this summary) [10][15][17]

Group 2
- The article evaluates the competitive landscape of AI hardware among major players such as Meta, OpenAI, Apple, and Google, asking whether AI hardware will become a new essential or merely a passing trend [2]
- It raises questions about the user experience of AI hardware, such as whether it will truly replace traditional devices or simply serve as an additional feature [2][3]
- The potential for novel AI hardware form factors to integrate seamlessly into daily life is explored, along with the implications for user interaction and technology adoption [2][3]

Group 3
- The article examines the role of generative AI in search, debating whether it will replace traditional search engines or act as a growth engine that expands user queries and intents [3]
- It discusses how multimodal interaction and conversational AI are redefining how users complete tasks, potentially enhancing the value of advertising and commercial opportunities [3]
- Google's strategy of gradually integrating AI capabilities into its products, rather than waiting for full technological maturity, reflects a proactive approach to product development and market positioning [3]
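As a purely illustrative companion to Group 1, here is a minimal Python sketch of what a unified post-training objective could look like when an SFT negative log-likelihood term and a simple REINFORCE-style policy-gradient term share one optimizer step. The function name, the mixing weight alpha, and all tensor names are assumptions made for the example; this is not the article's proposed method.

```python
# Illustrative sketch of a unified post-training loss that mixes an SFT
# (negative log-likelihood) term with a simple REINFORCE-style policy-gradient
# term in a single objective. All names and the mixing weight are assumptions.
import torch


def unified_post_training_loss(
    sft_logprobs: torch.Tensor,      # log pi_theta(y|x) for curated SFT demonstrations
    rollout_logprobs: torch.Tensor,  # log pi_theta(y|x) for on-policy rollouts
    rewards: torch.Tensor,           # scalar reward per rollout
    alpha: float = 0.5,              # hypothetical SFT/RL mixing weight
) -> torch.Tensor:
    # SFT term: maximize likelihood of demonstrations -> minimize NLL.
    sft_loss = -sft_logprobs.mean()

    # RL term: REINFORCE with a mean-reward baseline to reduce variance.
    advantages = rewards - rewards.mean()
    rl_loss = -(advantages.detach() * rollout_logprobs).mean()

    # Unified objective: one optimizer step sees both signals at once,
    # instead of running SFT to convergence and only then starting RL.
    return alpha * sft_loss + (1.0 - alpha) * rl_loss


if __name__ == "__main__":
    # Toy shapes: 4 demonstrations, 8 rollouts.
    torch.manual_seed(0)
    sft_lp = torch.randn(4, requires_grad=True)
    roll_lp = torch.randn(8, requires_grad=True)
    rew = torch.rand(8)
    loss = unified_post_training_loss(sft_lp, roll_lp, rew)
    loss.backward()
    print(float(loss))
```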
Is there real technical depth in fine-tuning large models, and if so, how much?
自动驾驶之心· 2025-08-10 23:32
Core Viewpoint
- The article emphasizes that in work on large language models (LLMs), particularly fine-tuning and data quality, the technical depth of the work depends heavily on the individual's approach and level of engagement [5][16]

Data Work
- Method 1: inherit training data from colleagues without checking data quality, which may lead to suboptimal results [7]
- Method 2: download open-source data to build a "system + query + answer" dataset [8]
- Method 3: generate data with GPT-4, emphasizing prompt diversity and data quality checks [8]
- Method 4: let user interaction logs drive data construction, analyzing user feedback to improve answer quality [9]
- Method 5: decompose complex tasks at the data level to improve model performance (a hedged dataset-construction sketch follows this summary) [9]

Training Code
- Method 1: inherit training code and make only minimal modifications [11]
- Method 2: develop a thorough understanding of the training code's parameters and their implications [11]
- Method 3: question and improve the training code, for example by optimizing speed and framework choices [12]

Experimental Analysis
- Method 1: run prepared evaluation sets and address data quality issues based on the results [14]
- Method 2: analyze bad cases from models to identify underlying issues and design experiments to validate the findings [14]
- Method 3: examine the relationship between model results, data quality, and training methods through comprehensive analysis of training logs and evaluation results [15]

Community and Collaboration
- The article highlights a large community focused on autonomous driving technology, including large models and multi-sensor fusion, with nearly 4,000 members and over 300 companies and research institutions involved [18]
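As a small illustration of the "system + query + answer" data work described above, here is a hedged Python sketch that assembles such a dataset and applies a crude quality filter. The field names, output file path, and filtering thresholds are hypothetical choices for the example, not the article's pipeline.

```python
# Illustrative sketch of "system + query + answer" data construction with a
# simple quality filter. Field names, file paths, and thresholds are
# hypothetical examples.
import json


def build_sft_dataset(raw_records, system_prompt, min_answer_chars=20):
    """Convert raw (query, answer) records into SFT samples, dropping
    low-quality rows (empty queries, very short answers)."""
    samples = []
    for rec in raw_records:
        query = (rec.get("query") or "").strip()
        answer = (rec.get("answer") or "").strip()
        if not query or len(answer) < min_answer_chars:
            continue  # crude quality check; real checks are usually much richer
        samples.append({
            "system": system_prompt,
            "query": query,
            "answer": answer,
        })
    return samples


if __name__ == "__main__":
    raw = [
        {"query": "What is SFT?", "answer": "Supervised fine-tuning trains a model on curated (prompt, response) pairs."},
        {"query": "", "answer": "orphan answer"},   # dropped: empty query
        {"query": "Too short?", "answer": "yes"},   # dropped: answer too short
    ]
    data = build_sft_dataset(raw, system_prompt="You are a helpful assistant.")
    with open("sft_data.jsonl", "w", encoding="utf-8") as f:
        for row in data:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
    print(f"kept {len(data)} of {len(raw)} records")
```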
OpenThoughts: Data Recipes for Reasoning Models — Ryan Marten, Bespoke Labs
AI Engineer· 2025-07-19 21:10
Open Thoughts Project Overview
- Bespoke Labs released Open Thoughts 3, aiming to create the best open-source reasoning dataset [1][9]
- The Open Thoughts project focuses on data recipes for reasoning, addressing the key missing link in building strong reasoning models [6][9]
- Open Thoughts 3 outperforms the DeepSeek R1 Qwen 7B model across science, code, and math [13]

Dataset Creation and Optimization
- The dataset pipeline includes question sourcing, mixing, filtering, answer generation, and answer filtering [17]
- Experiments created more than 5,000 datasets and nearly 3,000 models to rigorously evaluate the different choices at each step of the pipeline [18]
- Sampling multiple reasoning traces per question works well: at a fixed number of questions, performance does not degrade, allowing the data to be scaled up 16x [19][20]
- Synthetic questions are scalable and can further improve accuracy [22]
- Question filtering selects high-quality questions by having a language model assess question difficulty and answer length [23]

Key Learnings and Findings
- A small number of high-quality data sources beats a large number of diverse sources [25]
- For SFT and knowledge distillation, filtering on answers or verifying answers does not appear to help [26]
- A model that is stronger on evaluation benchmarks is not necessarily a better teacher; for example, Qwen 32B is a better teacher model than DeepSeek R1 [21]
- Through knowledge distillation, a model can surpass its teacher in certain domains, for example legal reasoning [35][36][37]

Practical Recommendations
- Adapt the data recipe to the specific domain, iterating from the Open Thoughts recipes [29]
- For different domains such as code, science, and math, study each pipeline step separately [29][30]
- If domain-specific data is scarce, convert existing data into questions and use in-context examples to generate more [32]
- Evaluation is critical; use open-source libraries such as Evalchemy to verify that model changes are real improvements (a hedged pipeline sketch follows this summary) [33][34]
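To make the pipeline shape concrete, here is a hedged Python skeleton of the question sourcing, difficulty-based filtering, and multi-trace answer-generation steps summarized above. The helper callables llm_rate_difficulty and teacher_generate are hypothetical placeholders for a judge model and a teacher model; none of this is the Open Thoughts code.

```python
# Illustrative skeleton of a question -> filter -> multi-trace answer-generation
# pipeline. The judge and teacher callables are hypothetical placeholders.
from typing import Callable, Dict, List


def build_reasoning_dataset(
    questions: List[str],
    llm_rate_difficulty: Callable[[str], float],  # judge model: higher = harder
    teacher_generate: Callable[[str], str],       # teacher model: one reasoning trace
    min_difficulty: float = 0.5,
    traces_per_question: int = 4,                 # multiple traces per kept question
) -> List[Dict[str, str]]:
    dataset = []
    for q in questions:
        # Question filtering: keep only questions the judge rates as hard enough.
        if llm_rate_difficulty(q) < min_difficulty:
            continue
        # Answer generation: sample several reasoning traces for the same question,
        # which scales the dataset without needing more unique questions.
        for _ in range(traces_per_question):
            dataset.append({"question": q, "reasoning_trace": teacher_generate(q)})
    return dataset


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    demo_questions = ["2 + 2 = ?", "Prove there are infinitely many primes."]
    rate = lambda q: 0.9 if "Prove" in q else 0.1
    generate = lambda q: f"<think>working on: {q}</think> answer placeholder"
    data = build_reasoning_dataset(demo_questions, rate, generate)
    print(len(data), "samples")  # 4 traces for the one question that passed the filter
```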