Tongyi DeepResearch Open-Sourced in a Major Release
Core Insights
- Tongyi's first deep research agent model, DeepResearch, has been officially open-sourced. With only 30 billion parameters (3 billion activated), it achieves state-of-the-art (SOTA) results across multiple authoritative evaluation sets, surpassing many top agent models [1][5]

Model Training
- The Tongyi team has built a complete training pipeline driven by synthetic data, spanning both the pre-training and post-training phases. The model's capability rests on a multi-stage data strategy aimed at creating vast amounts of high-quality training data without relying on expensive manual annotation [3]
- The pipeline is optimized on top of the Qwen3-30B-A3B model and incorporates innovative reinforcement learning (RL) algorithms for both validation and full training, improving efficiency and robustness. Asynchronous reinforcement learning and automated data curation significantly boost the model's iteration speed and generalization ability (a toy asynchronous-RL sketch follows this summary) [3]

Model Performance
- With 3 billion activated parameters, DeepResearch performs comparably to flagship models such as OpenAI's o3, DeepSeek V3.1, and Claude-4-Sonnet on authoritative agent evaluation sets including Humanity's Last Exam (HLE), BrowseComp, and GAIA [5]

Model Applications
- The model has been deployed in real-world scenarios, such as "Xiao Gao Teacher," developed in collaboration with Amap, which acts as an AI co-pilot for complex travel-planning tasks. In addition, Tongyi's legal research agent, built on the DeepResearch architecture, can autonomously execute complex multi-step research tasks, simulating the workflow of a junior lawyer [7]

DeepResearch Agent Series
- Tongyi DeepResearch is backed by a rich family of DeepResearch agent models. Since earlier this year, the team has continuously expanded this line, with previously open-sourced models such as WebWalker, WebDancer, and WebSailor achieving industry-leading results in agentic synthetic data and reinforcement learning [9]
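The summary above mentions asynchronous reinforcement learning only in passing. As a purely illustrative aid, the following is a minimal Python sketch of the generic asynchronous-RL pattern: actor threads keep generating rollouts from a possibly stale copy of the policy while a learner updates the policy as trajectories arrive, instead of waiting for a synchronized batch. The environment, policy representation, and update rule are invented toy placeholders and are not Tongyi's training code.

```python
import queue
import random
import threading
import time

def run_episode(policy_weights):
    """Hypothetical toy environment: succeed with probability equal to the policy's
    'accuracy'. This stands in for far more expensive web-research rollouts."""
    succeeded = random.random() < policy_weights["accuracy"]
    return {"reward": 1.0 if succeeded else 0.0}

def actor(policy_lock, policy_weights, trajectory_queue, stop_event):
    """Continuously generates rollouts from a (possibly slightly stale) policy snapshot."""
    while not stop_event.is_set():
        with policy_lock:
            snapshot = dict(policy_weights)  # stale copies are tolerated in async RL
        trajectory_queue.put(run_episode(snapshot))
        time.sleep(0.01)  # simulate slow environment interaction (tool calls, browsing)

def learner(policy_lock, policy_weights, trajectory_queue, total_updates):
    """Consumes trajectories as they arrive and updates the policy immediately,
    without waiting for all actors to finish a batch."""
    for step in range(total_updates):
        traj = trajectory_queue.get()
        with policy_lock:
            # Toy "update": nudge the policy toward the observed reward.
            policy_weights["accuracy"] += 0.05 * (traj["reward"] - policy_weights["accuracy"])
        if step % 20 == 0:
            print(f"update {step}: accuracy={policy_weights['accuracy']:.3f}")

if __name__ == "__main__":
    policy_weights = {"accuracy": 0.1}
    policy_lock = threading.Lock()
    trajectory_queue = queue.Queue()  # unbounded so actors never block at shutdown
    stop_event = threading.Event()

    actors = [threading.Thread(target=actor,
                               args=(policy_lock, policy_weights, trajectory_queue, stop_event))
              for _ in range(4)]
    for t in actors:
        t.start()
    learner(policy_lock, policy_weights, trajectory_queue, total_updates=100)
    stop_event.set()
    for t in actors:
        t.join()
```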
The Top-Ranked Open-Source Agent Model Is Now Alibaba's Tongyi DeepResearch
量子位· 2025-09-18 04:20
Core Viewpoint
- Alibaba has open-sourced its first deep research agent model, Tongyi DeepResearch, which outperforms existing models such as OpenAI's Deep Research and DeepSeek-V3.1 on various authoritative evaluation sets [1][3]

Data Strategy
- The model's capability gains are attributed to a multi-stage data strategy designed to generate high-quality training data without relying on expensive manual annotation [4][5]
- The team introduced Agentic CPT (agentic continual pre-training), establishing a solid foundation for the agent [6]
- A systematic and scalable data synthesis scheme was developed to create a positive feedback loop for data generation [7]

Data Construction
- An open-world knowledge memory was constructed from a wide range of knowledge documents, web crawler data, knowledge graphs, and trajectory data from post-training [8]
- Three types of action data were created from diverse question styles and historical trajectory data, enabling extensive exploration of the reasoning-action space [9]

Post-training Data
- The team developed a fully automated synthetic data generation scheme to produce datasets that surpass the quality of manual annotations [11][12]
- A new process was designed to extract information from real website data, preserving the authenticity of data structures while increasing question complexity [14]

Reasoning Modes
- Tongyi DeepResearch offers both a native ReAct Mode and a Heavy Mode for handling complex multi-step research tasks (a minimal ReAct-style loop is sketched after this summary) [15][18]
- The IterResearch paradigm deconstructs a task into a series of research rounds, allowing the agent to maintain cognitive focus and high-quality reasoning [20]

Training Process
- The training process connects Agentic CPT, Agentic SFT, and Agentic RL, forming a new paradigm for agent model training [25][27]
- The team emphasized that data quality and training-environment stability matter more than algorithmic factors for the success of reinforcement learning projects [37][39]

Application Deployment
- Tongyi DeepResearch has powered multiple internal applications within Alibaba, including the Gaode travel agent, which integrates complex query capabilities into its app [42][43]
- A simulated training environment was created to address the high costs and inconsistencies of developing against real-time web APIs [44]

Legal AI Application
- Tongyi Farui (通义法睿), a legal AI agent, aims to provide professional legal services, leveraging an innovative agent architecture and iterative planning technology for complex reasoning tasks [46]
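As a companion to the Reasoning Modes items above, here is a minimal sketch of a ReAct-style thought-action-observation loop. Both the model call and the tool are stubbed placeholders (call_model and run_tool are hypothetical names), so the sketch only illustrates the control flow described in the summary, not the actual DeepResearch implementation; the 128K-context figure is carried over from the articles as a comment.

```python
import json

def call_model(messages):
    """Stub LLM call: returns a tool request on the first turn, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"thought": "I should search for evidence first.",
                "action": {"tool": "search", "args": {"query": "Tongyi DeepResearch GAIA score"}}}
    return {"thought": "The gathered observations are sufficient.",
            "final_answer": "Summary based on the collected observations."}

def run_tool(name, args):
    """Stub tool execution; a real agent would call a search/browse sandbox here."""
    return f"[stub observation for {name}({json.dumps(args)})]"

def react_loop(question, max_rounds=8):
    # The context grows with every thought/action/observation triple; the articles
    # note the real model supports up to a 128K context for these long loops.
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        step = call_model(messages)
        messages.append({"role": "assistant", "content": step["thought"]})
        if "final_answer" in step:
            return step["final_answer"], messages
        observation = run_tool(step["action"]["tool"], step["action"]["args"])
        messages.append({"role": "tool", "content": observation})
    return "No answer within the round budget.", messages

if __name__ == "__main__":
    answer, trace = react_loop("How does Tongyi DeepResearch perform on GAIA?")
    print(answer)
```

In Heavy Mode, by contrast, the IterResearch paradigm deconstructs the task into successive research rounds rather than one ever-growing loop, which is how the summaries describe the agent maintaining cognitive focus.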
Tongyi DeepResearch Makes a Stunning Debut! Performance Rivals OpenAI, with the Model, Framework, and Full Solution Open-Sourced
机器之心· 2025-09-18 01:01
Core Insights
- The article discusses the advancement of Tongyi DeepResearch from basic conversational capabilities to sophisticated research functionality, achieving state-of-the-art (SOTA) results across multiple benchmarks while being fully open-source [1][3]

Data Strategy
- The improvement in model capabilities is attributed to a multi-stage data strategy designed to generate high-quality training data without relying on expensive manual annotation [5]
- The team introduced Agentic Continual Pre-training (CPT) to establish a solid foundation for the model, using a systematic and scalable data synthesis approach [6]
- The data generation process restructures and constructs questions from a wide array of knowledge documents, web crawler data, and knowledge graphs, creating an open-world knowledge memory anchored by entities [6]

Reasoning Modes
- Tongyi DeepResearch features both a native ReAct Mode and a Heavy Mode for managing complex multi-step research tasks [11]
- In ReAct Mode, the model excels at the standard thinking-action-observation cycle, supporting many interaction rounds within a 128K context [12]
- Heavy Mode employs the new IterResearch paradigm to deconstruct tasks into research rounds, allowing the agent to maintain cognitive focus and high-quality reasoning [13][14]

Training Methodology
- The training process integrates Agentic CPT, Supervised Fine-Tuning (SFT), and Reinforcement Learning (RL), establishing a new paradigm for agent model training [17][20]
- The team customized RL algorithms based on GRPO, ensuring that learning signals align with the model's current capabilities, and implemented strategies to improve training stability (a small sketch of the group-relative advantage computation follows this summary) [21]
- Training dynamics show clear learning effects, with rewards consistently increasing, indicating effective exploration and adaptation [23]

Application Deployment
- Tongyi DeepResearch has powered various internal applications within Alibaba, including a simulated training environment built to reduce development costs and speed up iteration [27]
- The team developed a stable and efficient tool sandbox to ensure reliable tool calls during agent training and evaluation [27]
- A collaboration with the Gaode App focuses on improving complex query experiences in navigation and local services, showcasing the practical application of agent capabilities [28]

Legal Intelligence
- Tongyi Farui serves as a legal intelligence agent, providing professional legal services such as legal Q&A, case-law retrieval, and document drafting, leveraging an innovative agent architecture [30]
- Its performance metrics indicate superior quality in answer points, case citations, and legal references compared with other models [31]

Research Contributions
- The Tongyi DeepResearch team has consistently published technical reports, contributing to the open-source community and advancing the field of deep research agents [33]
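The Training Methodology items above mention that the team customized RL algorithms based on GRPO. As a small, hedged illustration of the group-relative idea at the core of GRPO (not of Tongyi's specific customizations), the sketch below computes advantages for a group of rollouts that all answer the same prompt by normalizing each reward against the group's mean and standard deviation, which avoids the need for a learned critic. The reward values are hypothetical.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: each sampled response in a group (all answering the
    same prompt) is scored relative to the group's mean reward and normalized by
    the group's (population) standard deviation."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

if __name__ == "__main__":
    # Hypothetical outcome rewards for four rollouts of one research question:
    # 1.0 if the final answer was judged correct, 0.0 otherwise.
    rewards = [1.0, 0.0, 0.0, 1.0]
    print(group_relative_advantages(rewards))  # roughly [+1, -1, -1, +1]
```

In a full training step these advantages would weight the policy-gradient loss for each rollout; that part, and the stability measures the article alludes to, are omitted here.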