Autonomous Information Retrieval Agent

Alibaba releases an information retrieval agent that can autonomously browse the web for information and surpasses GPT-4o on the GAIA benchmark | Model & data open-sourced
量子位 (QbitAI) · 2025-06-27 04:40
Core Viewpoint
- Alibaba has introduced WebDancer, an autonomous information retrieval agent capable of understanding and navigating the web like a human, extending the capabilities of traditional models through multi-step reasoning and tool usage [1][3].

Group 1: WebDancer's Capabilities
- WebDancer can perform complex tasks such as web browsing, information searching, and question answering, demonstrating its ability to execute multi-step reasoning [9].
- The model achieved a Pass@3 score of 61.1% on GAIA and 54.6% on WebWalkerQA, outperforming baseline models and some open-source frameworks [4][34].
- WebDancer employs a four-stage training paradigm comprising data construction, trajectory sampling, supervised fine-tuning, and reinforcement learning, to strengthen its reasoning and decision-making capabilities [10][28].

Group 2: Training Methodology
- The first stage constructs browsing data to create complex QA pairs that require multiple interactions, simulating human browsing behavior [12][15].
- The second stage generates high-quality Thought-Action-Observation trajectories, using a dual-path sampling method to cover both short and long reasoning chains [20][22].
- The supervised fine-tuning stage integrates these trajectories to teach the model basic task decomposition and tool usage while preserving its original reasoning abilities [25][27].
- The reinforcement learning stage optimizes the agent's decision-making and generalization capabilities in real-world web environments [28][30].

Group 3: Performance Analysis
- WebDancer's performance was tested on challenging datasets, including BrowseComp in English and Chinese, where it demonstrated robust capabilities in handling difficult reasoning and information retrieval tasks [36].
- Analysis of the Pass@1 and Pass@3 metrics indicates that reinforcement learning significantly improves the rate at which sampled responses are correct, and that reasoning-oriented language models show notably improved consistency across repeated samples [38].
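The Pass@1 and Pass@3 figures cited above follow the standard pass@k convention: the probability that at least one of k sampled attempts solves a task. A minimal sketch of the widely used unbiased estimator is below; this is the generic formula, not WebDancer's specific evaluation code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n total samples per task,
    c of which are correct, return the probability that at least
    one of k drawn samples is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must contain at least one correct answer.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 samples per task, 2 of them correct.
print(round(pass_at_k(3, 2, 1), 3))  # 0.667 -- expected Pass@1
print(pass_at_k(3, 2, 3))            # 1.0   -- Pass@3
```

Because Pass@3 credits a task if any of three attempts succeeds, it is always at least as high as Pass@1; the gap between the two is what reinforcement learning narrows by concentrating probability mass on correct responses.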
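The Thought-Action-Observation trajectories described in Group 2 follow a ReAct-style loop: the policy emits a thought and an action, the environment returns an observation, and the cycle repeats until a final answer is produced. The sketch below illustrates that loop shape only; every name here (the `search` tool, `toy_policy`, the `action` string format) is a hypothetical illustration, not WebDancer's actual interface.

```python
# Minimal sketch of a Thought-Action-Observation agent loop.
# All identifiers and the toy "search" tool are illustrative
# assumptions, not WebDancer's real implementation.
from dataclasses import dataclass, field

@dataclass
class Step:
    thought: str
    action: str       # e.g. "search:<query>" or "answer:<text>"
    observation: str = ""

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)

def run_agent(question, policy, tools, max_steps=5):
    """Iterate Thought -> Action -> Observation until the policy
    emits a final answer or the step budget is exhausted."""
    traj = Trajectory()
    for _ in range(max_steps):
        thought, action = policy(question, traj.steps)
        step = Step(thought, action)
        name, _, arg = action.partition(":")
        if name == "answer":           # terminal action: no observation
            traj.steps.append(step)
            return arg, traj
        step.observation = tools[name](arg)  # execute tool, record result
        traj.steps.append(step)
    return None, traj

# Toy policy: search once, then answer with whatever was observed.
def toy_policy(question, steps):
    if not steps:
        return "I should look this up", "search:" + question
    return "The observation answers it", "answer:" + steps[-1].observation

tools = {"search": lambda query: "42"}
answer, traj = run_agent("meaning of life", toy_policy, tools)
print(answer)  # 42
```

Recording the full trajectory, rather than only the final answer, is what makes the supervised fine-tuning stage possible: each Thought-Action-Observation triple becomes a training target for task decomposition and tool use.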