ZeroSearch
Self-Search Reinforcement Learning (SSRL): The Sim2Real Moment for Agentic RL
机器之心· 2025-09-02 01:27
Core Insights
- The article discusses the development and effectiveness of SSRL (Self-Search Reinforcement Learning) in improving the training efficiency and stability of search agents built on large language models (LLMs) [6][28]
- SSRL outperforms traditional methods that rely on external search engines, achieving effective transfer from simulation to real-world applications (Sim2Real) [6][28]

Group 1
- SSRL uses structured prompts and format rewards to effectively extract world knowledge from models, improving performance across benchmarks and reducing hallucination [2][6]
- The research highlights the high cost and inefficiency of current RL training methods for search agents, which span full-real and semi-real search approaches [7][13]
- SSRL yields a significant increase in training efficiency, estimated at roughly 5.6x, while training rewards keep rising without collapse [31][32]

Group 2
- Experiments show that models trained with SSRL outperform those relying on external engines, particularly in real-world search scenarios, underscoring the value of integrating real-world knowledge [28][31]
- The findings suggest that combining self-generated knowledge with real-world knowledge can further improve model performance, particularly through entropy-guided search strategies [34]
- Integrating SSRL with TTRL (Test-Time Reinforcement Learning) improves generalization and effectiveness, achieving up to a 67% performance gain on certain tasks [38][39]
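The "format rewards" mentioned above can be illustrated with a small sketch. The tag set (`think`/`search`/`information`/`answer`) and the partial-credit scoring below are assumptions chosen for illustration, not the paper's exact specification:

```python
import re

# Hypothetical tag set following the structured-prompt convention the
# article describes; the exact tags and scoring are assumptions.
TAGS = ["think", "search", "information", "answer"]

def format_reward(rollout: str) -> float:
    """Score a rollout by how many required tag pairs appear, in order.

    Returns 1.0 when every tag appears as a matched <tag>...</tag> pair in
    the expected order; otherwise the fraction of tags satisfied before the
    first missing one.
    """
    score = 0.0
    pos = 0
    for tag in TAGS:
        m = re.search(rf"<{tag}>(.*?)</{tag}>", rollout[pos:], re.DOTALL)
        if m is None:
            return score / len(TAGS)  # stop at the first missing tag
        score += 1.0
        pos += m.end()
    return score / len(TAGS)
```

A reward like this is cheap to compute during RL rollouts and nudges the policy toward the structured trace the trainer expects, without needing any external judge.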
Costs slashed by 88%! Tongyi Lab and Peking University release ZeroSearch, activating LLM retrieval capability without search
机器之心· 2025-05-29 04:53
Method: a search-free reinforcement learning framework

The authors are from Tongyi Lab and Peking University. The first author, Sun Hao, is a PhD student at Peking University's School of Intelligence whose research focuses on RAG and agents; he has published multiple papers at top international venues including NeurIPS, ACL, and EMNLP, and is advised by Professor Zhang Yan. The work was completed during an internship with the RAG team at Alibaba's Tongyi Lab.

Information-retrieval capability is crucial to the reasoning performance of large language models (LLMs). Recent work introduces reinforcement learning (RL) frameworks to activate LLMs' ability to actively gather information, but existing methods face two core challenges during training.

To address these problems, we propose the ZeroSearch framework: no real search is required; a large language model directly simulates the search engine, and a curriculum learning strategy is introduced. This cuts costs by 88% while outperforming methods that depend on real search engines across multiple tasks.

Traditional training must interact frequently with a real search engine during the rollout phase, incurring large API costs. Yet LLMs accumulate rich world knowledge during pretraining and can return relevant information for a query. ZeroSearch therefore introduces an LLM as a simulated search engine (Simulation LLM) that generates retrieval documents for the policy model without any real search, greatly reducing training cost:

$$\max_{\pi_\theta}\ \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x;\, \pi_\psi)}\left[ r_\phi(x, y) \right] \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\left[ \pi_\theta(y \mid x;\, \pi_\psi) \,\|\, \pi_{\mathrm{ref}}(y \mid x;\, \pi_\psi) \right]$$

where $\pi_\theta$ is the policy model, $\pi_{\mathrm{ref}}$ is the reference model, $r_\phi$ is the reward function, and $\pi_\psi$ is the simulation LLM.
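The Simulation-LLM idea above can be sketched in a few lines. `generate` stands in for any chat-completion call, and the prompt wording is an illustration, not the paper's actual fine-tuning template:

```python
# Minimal sketch of "an LLM as a simulated search engine": a fine-tuned
# model is prompted to produce either useful or noisy documents for a
# query. The prompt text and the `generate` callable are assumptions.
def simulate_search(generate, query: str, noisy: bool, k: int = 3) -> list[str]:
    style = "plausible-looking but unhelpful (noisy)" if noisy else "useful, relevant"
    prompt = (
        f"You are a search engine. For the query below, write {k} short "
        f"{style} documents, one per line.\n\nQuery: {query}"
    )
    return generate(prompt).strip().splitlines()[:k]
```

During rollout, the policy model's search calls are routed to this function instead of a paid API, which is where the cost savings come from.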
New research from Tongyi Lab: large models "play" the search engine themselves, improving reasoning without a search API
量子位· 2025-05-17 03:50
By Wen Le | QbitAI (WeChat official account: QbitAI)

Reinforcement learning (RL) plus a real search engine can effectively improve a large model's retrieval-and-reasoning ability. But two problems arise:

On one hand, the quality of documents returned by a search engine is unpredictable, introducing noise and instability into training.

On the other hand, RL training requires frequent rollouts, generating heavy API costs that severely limit scalability.

Now Alibaba Tongyi Lab's solution is public: the open-source ZeroSearch, an RL framework that requires no interaction with a real search engine.

Experiments show that ZeroSearch needs only a 3B-parameter LLM as the retrieval module to effectively improve search capability, saving expensive API costs.

ZeroSearch lets the LLM "self-sufficiently" evolve its search ability

The research team uses a simulated search environment plus progressive noise-resistant training so the LLM no longer depends on costly search-engine APIs.

Lightweight fine-tuning: turning the LLM into a "search-engine simulator"

A small amount of labeled data fine-tunes the LLM so that, on instruction, it can generate two kinds of documents: useful results and noisy distractors. By collecting data from interactions with a real search engine, ZeroSearch applies lightweight supervised fine-tuning to the LLM. In the process, the model learns to generate documents stylistically similar to a real engine's output, and to produce either relevant or noisy documents depending on the prompt. This lets the model adjust document quality dynamically during training, better simulating real retrieval conditions.
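The "progressive noise-resistant training" described above amounts to a schedule that raises the probability of serving a noisy document as training proceeds. A minimal linear version is sketched below; the endpoints and the linear shape are assumptions for illustration, as the paper defines its own curriculum:

```python
# Curriculum noise schedule (sketch): early in training the simulated
# engine mostly returns useful documents; later, noisy documents become
# more likely, so retrieval gets progressively harder. Endpoints and the
# linear interpolation are illustrative assumptions.
def noise_prob(step: int, total_steps: int,
               p_start: float = 0.0, p_end: float = 0.5) -> float:
    frac = min(step / total_steps, 1.0)  # clamp past the final step
    return p_start + (p_end - p_start) * frac
```

At each rollout the trainer would draw `noisy = random() < noise_prob(step, total)` before asking the simulator for documents.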
Disrupting the Google Search API: costs cut by 88%. Alibaba open-sources the RL framework ZeroSearch, redefining AI search!
AI科技大本营· 2025-05-09 09:35
ZeroSearch is not about making search disappear; it is about making search truly part of intelligence itself.

Compiled by Meng Yidan · Produced by AI 科技大本营 (ID: rgznai100)

For just $70.80, running a 14-billion-parameter model on four A100 GPUs, you can obtain AI search capability that rivals or even surpasses Google Search!

Recently, Alibaba's Tongyi team open-sourced a brand-new solution, ZeroSearch: a generative search-engine framework driven by large models whose training calls no external search API at all. Fully "self-sufficient," it builds retrieval capability at low cost and high performance.

Calling a traditional search engine usually means uncontrollable document quality and high API costs. To solve these problems, ZeroSearch introduces a new reinforcement learning framework that trains "search capability" without interacting with a real search engine.

Its approach: first, lightweight supervised fine-tuning turns the large model into a retrieval module that can generate "relevant" and "distractor" documents for a query; then a curriculum-style training strategy that progressively lowers document quality challenges the model's reasoning and retrieval abilities, yielding a more robust path to learning search.

[Figure: demonstration of search-engine-free PPO and GRPO training]

The optimization objective is as follows:

$$\max_{\pi_\theta}\ \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x;\, \pi_\psi)}\left[ r_\phi(x, y) \right] \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\left[ \pi_\theta(y \mid x;\, \pi_\psi) \,\|\, \pi_{\mathrm{ref}}(y \mid x;\, \pi_\psi) \right]$$

where $\pi_\theta$ is the policy model being optimized, $\pi_{\mathrm{ref}}$ is the reference model, ...
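The optimization objective referenced above is a standard KL-regularized RL objective: task reward minus a penalty for drifting from the reference model. A toy evaluation for one sampled response is sketched below; real trainers work with per-token log-probabilities from both models, and the numbers here are made up:

```python
# Toy illustration of a KL-regularized reward for one sampled sequence:
# reward minus beta times a Monte-Carlo KL estimate over the sampled
# tokens, E[log pi_theta - log pi_ref]. Inputs are assumed to be per-token
# log-probs already gathered during rollout.
def kl_penalized_reward(reward: float, logp_policy: list[float],
                        logp_ref: list[float], beta: float = 0.1) -> float:
    kl = sum(lp - lr for lp, lr in zip(logp_policy, logp_ref)) / len(logp_policy)
    return reward - beta * kl
```

When the policy assigns the sampled tokens higher probability than the reference does, the KL term is positive and the effective reward shrinks, which is what keeps the policy anchored to the reference model.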
Goodbye, expensive Google Search API! Alibaba's open-source RL framework makes large models self-sufficient and cuts costs by 88%; netizens: the game has changed
AI前线· 2025-05-09 05:18
Core Viewpoint
- Alibaba's new technology "ZeroSearch" significantly reduces the cost and complexity of training AI systems for information retrieval, eliminating the need for expensive commercial search engine APIs [1][2][14].

Summary by Sections

Technology Overview
- ZeroSearch is a reinforcement learning framework that lets large language models (LLMs) develop advanced search capabilities through simulation, outperforming models trained against real search engines while incurring zero API costs [2][3].
- The technique is compatible with multiple model families, including Qwen-2.5 and LLaMA-3.2, and requires no separate supervised warm-up phase [2][3].

Performance Metrics
- In comprehensive experiments across seven question-answering datasets, ZeroSearch's performance matched or exceeded that of models trained with real search engines [3][5].
- A 3-billion-parameter LLM can achieve search capability comparable to Google's, while a 14-billion-parameter simulation module can surpass it [3][5].

Cost Efficiency
- Training with Google search via SerpAPI for approximately 64,000 queries costs around $586.70, while using a 14-billion-parameter simulation LLM on four A100 GPUs costs only $70.80, an 88% cost reduction [7][8].

Methodology
- ZeroSearch begins with a lightweight supervised fine-tuning process that turns an LLM into a retrieval module capable of generating both relevant and irrelevant documents in response to queries [9][11].
- The system employs a curriculum-based rollout mechanism, gradually increasing the difficulty of generated documents to simulate harder retrieval scenarios [11][12].

Implications for AI Development
- ZeroSearch represents a significant shift in AI training methods, enabling AI systems to improve without relying on external tools like search engines [14][15].
- This technology creates a more equitable competitive environment for small AI companies and startups by drastically lowering the entry barrier associated with high API costs [14][15].
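The 88% figure quoted across these articles follows directly from the two cost numbers given; a quick arithmetic check:

```python
# Verifying the cost comparison quoted in the article: ~$586.70 for ~64,000
# Google queries via SerpAPI vs ~$70.80 for a 14B simulation LLM on four
# A100 GPUs.
serpapi_cost = 586.70
sim_llm_cost = 70.80
reduction = (serpapi_cost - sim_llm_cost) / serpapi_cost
print(f"{reduction:.0%}")  # prints "88%"
```

So the headline "88% cost reduction" is consistent with the raw numbers reported.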