Workflow
Multi-Agent Systems
2025 H1 Report on Core AI Achievements and Trends - 量子位智库 (QbitAI Think Tank)
Sou Hu Cai Jing· 2025-08-01 04:37
Application Trends
- General-purpose Agent products are deeply integrating tool usage, capable of automating tasks that would take humans hours and delivering richer content [1][13]
- Computer Use Agents (CUA) are being pushed to market, focusing on visual operations and merging with text-based deep research Agents [1][14]
- Vertical scenarios are accelerating Agentization, with natural language control becoming part of workflows, and AI programming gaining market validation with rapid revenue growth [1][15][17]

Model Trends
- Reasoning capabilities are continuously improving, with significant advances on mathematical and coding problems, and some models performing excellently in international competitions [1][20]
- Large model tool use is strengthening, integrating visual and text modalities and improving multi-modal reasoning abilities [1][22]
- Small models are gaining popularity quickly, lowering deployment barriers, and model evaluation is evolving toward dynamic, practical task-oriented assessments [1][30]

Technical Trends
- Resource investment is shifting toward post-training and reinforcement learning, whose importance is increasing; its future compute consumption may exceed that of pre-training [1][33]
- Multi-agent systems are becoming a frontier paradigm, online learning is expected to be the next generation of learning methods, and Transformer and hybrid architectures are iterating and optimizing rapidly [1][33]
- Code verification is emerging as a frontier for raising AI programming automation, and system prompts significantly affect user experience [1][33]

Industry Trends
- xAI's Grok 4 has entered the global top tier, demonstrating that large models lack a competitive moat [2]
- Computing power is becoming a key competitive factor, with leading players expanding their computing clusters to hundreds of thousands of GPUs [2]
- OpenAI's lead is diminishing as Google and xAI catch up; the gap between Chinese and American general-purpose large models is narrowing, and China shows strong performance in multi-modal fields [2]
因赛集团: Striving to Become a Strategic Partner of a Leading Domestic Tech Giant in Marketing Communications
Xin Lang Cai Jing· 2025-07-30 09:28
因赛集团 (300781.SZ) published an investor relations activity record announcing that the company is striving to become a strategic partner of a leading domestic tech giant in the field of marketing communications and to accompany its global expansion, providing full-chain marketing services through 因赛集团 and its strong subsidiaries across marketing segments. The company has drawn up a new R&D plan: in Q3 it intends to finish developing and launch a Multi-Agent System (MAS) foundation that integrates diverse AI agents for copywriting, images, video, voice, and digital humans, and to complete the interaction mechanisms and dynamic workflow middle platform that support efficient collaboration among those AI agents. ...
AI Agents (Part 8): Building Multi-Agent Systems
36Kr· 2025-07-27 23:12
Group 1
- The article discusses the value creation potential of AI agents in workflows that are difficult to automate with traditional methods [3]
- AI agents consist of three core components: models, tools, and instructions, which are essential for their functionality [6][8]
- Model selection should be based on task complexity, with a focus on meeting performance benchmarks while optimizing for cost and latency [3][6]

Group 2
- Function calling is the primary method for large language models (LLMs) to interact with tools, extending the capabilities of AI agents [6][7]
- High-quality instructions are crucial for LLM-based applications, as they reduce ambiguity and improve decision-making [8][11]
- The orchestration of AI agents can be modeled as a graph, where agents are nodes and tool calls are edges, facilitating effective workflow execution [11][15]

Group 3
- The article outlines a supervisor mode for managing multiple specialized agents, allowing task delegation and efficient workflow management (a hedged sketch follows below) [16][17]
- Custom handoff tools can be created to enhance interaction between agents, allowing tailored task assignments [33][34]
- A multi-layered supervisory structure is also possible, enabling the management of multiple teams of agents [31]
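The supervisor pattern summarized in Group 3 can be made concrete with a short, framework-free Python sketch. Everything here (the worker functions, the hard-coded plan, the `WORKERS` registry) is a hypothetical illustration of task delegation and handoffs, not code from the article, which presumably builds on an actual agent framework and live LLM calls.

```python
from typing import Callable, Dict

def research_agent(task: str) -> str:
    # Placeholder worker: a real one would call an LLM plus retrieval tools.
    return f"[research notes] {task}"

def writer_agent(task: str) -> str:
    # Placeholder worker: a real one would draft copy from the supervisor's brief.
    return f"[draft text] {task}"

# Registry of specialized agents the supervisor can hand tasks off to.
WORKERS: Dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "writer": writer_agent,
}

def supervisor(user_request: str) -> str:
    """Decompose a request, hand each sub-task to a worker, and merge the results."""
    # A real supervisor would ask an LLM to produce this plan; it is hard-coded
    # here so the sketch stays self-contained and runnable.
    plan = [
        ("research", f"gather background on: {user_request}"),
        ("writer", f"summarize the findings about: {user_request}"),
    ]
    outputs = [WORKERS[name](sub_task) for name, sub_task in plan]  # the "handoff"
    return "\n".join(outputs)

if __name__ == "__main__":
    print(supervisor("multi-agent systems for marketing content"))
```

In a real implementation the supervisor LLM would choose the handoff targets dynamically, and each worker would itself be a full agent with its own model, tools, and instructions.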
How to Build a Verifiable Agentic Workflow? MermaidFlow Opens a New Paradigm for Safe, Robust Agent Processes
机器之心· 2025-07-24 03:19
Core Viewpoint
- The article discusses advances in Multi-Agent Systems (MAS) and introduces the "Agentic Workflow" as a key concept for autonomous decision-making and collaboration among intelligent agents, highlighting the emergence of structured and verifiable workflow frameworks such as MermaidFlow [1][4][22]

Group 1: Introduction to Multi-Agent Systems
- The development of large language models is driving the evolution of AI agents from single capabilities toward complex system collaboration, making MAS a focal point in both academia and industry [1]
- Leading teams, including Google and Shanghai AI Lab, are launching innovative Agentic Workflow projects to enhance the autonomy and intelligence of agent systems [2]

Group 2: Challenges in Current Systems
- Existing systems face significant challenges such as a lack of rationality guarantees, insufficient verifiability, and difficulty of intuitive expression, which hinder the reliable implementation and large-scale deployment of MAS [3]

Group 3: Introduction of MermaidFlow
- The MermaidFlow framework, developed by researchers from Singapore's A*STAR and Nanyang Technological University, aims to advance agent systems toward structured evolution and safe verifiability [4]
- Traditional workflow representations often rely on imperative code such as Python scripts or JSON trees, leading to three core bottlenecks: opaque structure, difficult verification, and hard debugging [7][10]

Group 4: Advantages of MermaidFlow
- MermaidFlow introduces a structured graphical language that models agent behavior planning as a clear, verifiable flowchart, enhancing the interpretability and reliability of workflows [8][12]
- The structured representation makes agent definitions, dependencies, and data flows clearly visible, facilitating debugging and optimization (a small validation sketch follows below) [11][14]

Group 5: Performance and Evolution
- MermaidFlow achieves a success rate of over 90% in generating executable, structurally sound workflows, significantly improving the controllability and robustness of agent systems compared with traditional methods [18]
- The framework supports safe evolutionary optimization through its structured representation, allowing modular adjustments and ensuring compliance with semantic constraints [16][19]

Group 6: Conclusion
- As MAS and large-model AI continue to evolve, achieving structured, verifiable, and efficient workflows is crucial for agent research, and MermaidFlow provides foundational support for effective collaboration processes [22]
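To illustrate why a declarative, graph-shaped workflow is easier to verify than imperative scripts, here is a small Python sketch written under the assumption that a Mermaid-style flowchart has already been parsed into a dependency map. The data and checks are illustrative only and do not reproduce MermaidFlow's actual representation or validators.

```python
# Stand-in for a parsed Mermaid-style flowchart: each step maps to the steps
# it depends on.
WORKFLOW = {
    "collect_data": [],
    "analyze": ["collect_data"],
    "write_report": ["analyze"],
    "review": ["write_report"],
}

def validate(workflow: dict) -> list:
    """Return a list of structural problems; an empty list means the graph is sound."""
    problems = []
    # 1. Every dependency must refer to a defined step.
    for step, deps in workflow.items():
        for dep in deps:
            if dep not in workflow:
                problems.append(f"{step} depends on undefined step '{dep}'")
    # 2. The dependency graph must be acyclic, otherwise it can never execute.
    visiting, done = set(), set()
    def dfs(step):
        if step in done:
            return
        if step in visiting:
            problems.append(f"cycle detected at '{step}'")
            return
        visiting.add(step)
        for dep in workflow.get(step, []):
            dfs(dep)
        visiting.discard(step)
        done.add(step)
    for step in workflow:
        dfs(step)
    return problems

print(validate(WORKFLOW))  # [] -> structurally sound and executable
```

Because the whole workflow is data rather than code, these checks can run before any agent is invoked, which is the property the article attributes to structured, verifiable workflows.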
A Comprehensive Guide to Context Engineering, Compiled from 1,400 Research Papers | Jinqiu Select
锦秋集· 2025-07-21 14:03
Core Insights
- The article discusses the emerging field of Context Engineering, emphasizing the need for a systematic theoretical framework to complement the practical experience shared by Manus' team [1][2]
- A comprehensive survey titled "A Survey of Context Engineering for Large Language Models" has been published, analyzing over 1,400 research papers to establish a complete technical system for Context Engineering [1][2]

Context Engineering Components
- Context Engineering is built on three interrelated components: Information Retrieval and Generation, Information Processing, and Information Management, forming a complete framework for optimizing context in large models [2]
- The first component, Context Retrieval and Generation, focuses on engineering methods to effectively acquire and construct context for models, including practices such as prompt engineering, external knowledge retrieval, and dynamic context assembly [2]

Prompting Techniques
- Prompting is the starting point of model interaction; effective prompts can unlock deeper capabilities of the model [3]
- Zero-shot prompting provides direct instructions that rely on pre-trained knowledge, while few-shot prompting supplies a handful of examples to guide the model in understanding task requirements [4]

Advanced Reasoning Frameworks
- Complex tasks call for structured thinking: Chain-of-Thought (CoT) prompts models to reason step by step, significantly improving accuracy on complex tasks [5]
- Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT) further enhance reasoning by exploring multiple paths and dependencies, improving success rates on tasks requiring extensive exploration [5]

Self-Refinement Mechanisms
- Self-Refinement lets models iteratively improve their outputs through self-feedback, without additional supervised training data [8][9]
- Techniques such as N-CRITICS and Agent-R enable models to evaluate and correct their reasoning paths in real time, enhancing output quality [10][11]

External Knowledge Retrieval
- External knowledge retrieval, particularly Retrieval-Augmented Generation (RAG), addresses the static nature of model knowledge by integrating dynamic information from external databases (a minimal pipeline sketch follows below) [12][13]
- Advanced RAG architectures introduce adaptive retrieval mechanisms and hierarchical processing strategies to improve retrieval efficiency [14][15]

Context Processing Challenges
- Processing long contexts poses significant computational challenges due to the quadratic complexity of Transformer self-attention [28]
- Innovations such as State Space Models and linear attention aim to reduce computational complexity, allowing models to handle longer sequences more efficiently [29][30]

Context Management Strategies
- Effective context management is crucial for organizing, storing, and utilizing information, addressing issues such as context overflow and collapse [46][47]
- Memory architectures inspired by operating systems and cognitive models are being developed to enhance the memory capabilities of language models [48][50]

Tool-Integrated Reasoning
- Tool-Integrated Reasoning transforms language models from passive text generators into active agents that interact with the external world through function calling and integrated reasoning frameworks [91][92]
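As a concrete reference point for the retrieval-augmented generation idea mentioned under External Knowledge Retrieval, the sketch below shows the bare shape of a RAG pipeline. The `embed()` function and the in-memory corpus are toy stand-ins; a real system would use a proper embedding model and vector store, and would send the assembled prompt to a chat model.

```python
import math

CORPUS = [
    "MermaidFlow models agent workflows as verifiable flowcharts.",
    "Chain-of-Thought prompting asks the model to reason step by step.",
    "Workforce separates a planner, a coordinator, and worker nodes.",
]

def embed(text: str) -> list:
    # Toy "embedding": a 26-dim letter-frequency vector. A real pipeline would
    # call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list:
    # Rank corpus passages by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Dynamic context assembly: retrieved passages are injected into the prompt
    # that would then be sent to the chat model.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does Workforce split responsibilities?"))
```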
Behind "Replicating Manus in 0 Days": This Post-95s Engineer Believes "General Agents Must Exist, and Agents Have Their Own Scaling Law" | 万有引力
AI科技大本营· 2025-07-11 09:10
Core Viewpoint
- The emergence of AI Agents, particularly with the launch of Manus, has sparked a new wave of interest and debate in the AI community about the capabilities and future of these technologies [2][4]

Group 1: Development of AI Agents
- Manus has demonstrated the potential of AI Agents to automate complex tasks, evolving from mere language models into actionable digital assistants capable of self-repair and debugging [2][4]
- The CAMEL AI community has been working on Agent frameworks for two years, leading to the rapid development of the OWL project, which quickly gained traction in the open-source community [6][8]
- OWL earned over 10,000 stars on GitHub within ten days of its release, indicating strong community interest and engagement [9][10]

Group 2: Community Engagement and Feedback
- The OWL project received extensive feedback from the community, resulting in rapid iterations and improvements based on user input [9][10]
- The initial version of OWL was limited to local IDE usage, but subsequent updates added a Web App to improve the user experience, showcasing the power of community contributions [10][11]

Group 3: Technical Challenges and Innovations
- Developing OWL involved significant optimizations, including balancing performance against resource consumption, which proved critical for user satisfaction [12][13]
- Tools such as the Browser Tool and Terminal Tool Kit have expanded OWL's capabilities, allowing Agents to perform automated tasks and install dependencies on their own [12][13]

Group 4: Scaling and Future Directions
- The concept of an "Agent Scaling Law" is being explored, suggesting that the number of Agents may correlate with system capability, much as parameter count does for individual models [20][21]
- The CAMEL team is investigating whether multi-agent systems can outperform single-agent systems on various tasks, with evidence supporting this hypothesis [21][22]

Group 5: Perspectives on General Agents
- There is ongoing debate about the feasibility of "general Agents": some believe in their potential, while others view them as an over-hyped concept [2][4][33]
- The CAMEL framework is positioned as a versatile multi-agent system that lets developers tailor solutions to specific business needs, supporting the idea of general Agents [33][34]

Group 6: Industry Trends and Future Outlook
- The rise of protocols such as MCP and A2A is shaping the landscape of Agent development, with both seen as beneficial for streamlining integration and enhancing functionality [30][35]
- The industry anticipates a significant increase in Agent projects by 2025, with a focus on both general and specialized Agents, indicating a robust future for this technology [34][36]
Given a Group of Top-Tier AIs, How Should They Team Up for Maximum Effectiveness? UIUC Looks for the Answer with a New Multi-Agent Collaboration Benchmark
机器之心· 2025-07-09 04:23
Core Viewpoint
- The article discusses the emergence of AI teams that collaborate like human teams in software development and scientific research, highlighting the need for effective evaluation metrics for these multi-agent systems [2][3]

Group 1: Introduction of MultiAgentBench
- MultiAgentBench is introduced as a comprehensive benchmark for evaluating the collaboration and competition capabilities of LLM-based multi-agent systems [4][6]
- It aims to fill the gap left by existing evaluation metrics, which focus primarily on individual agent capability rather than collaboration efficiency and communication quality [3][6]

Group 2: Key Findings and Contributions
- The research finds that the gpt-4o-mini model exhibits the strongest overall task performance among the models tested [8]
- A decentralized, graph-structured collaboration model proves the most efficient, while cognitive self-evolution planning significantly improves task completion rates [8][12]
- MultiAgentBench identifies critical moments when agents begin to exhibit emergent social behaviors, offering insights into achieving AGI-level collaboration [9][12]

Group 3: Evaluation Framework
- The framework includes a collaboration engine, an agent graph that structures relationships, and a cognitive module for personalized information and adaptive strategies [12][15]
- It incorporates diverse interaction strategies and six varied evaluation scenarios that simulate real-world team dynamics [19][20]

Group 4: Performance Metrics
- The evaluation system uses milestone-based KPIs to assess task completion and collaboration quality, including task scores, communication scores, and planning scores (a toy scoring sketch follows below) [27][28]
- The findings indicate that high collaboration does not always correlate with superior task outcomes, underscoring the importance of individual agent capability [30][32]

Group 5: Organizational Structure and Team Dynamics
- The study finds that decentralized organizational structures outperform hierarchical ones, which incur communication costs and inefficiencies [38]
- The "Ringelmann Effect" is observed: adding more agents can yield diminishing returns in performance, underscoring the need for efficient collaboration mechanisms [40]

Group 6: Emergence of Social Intelligence
- Notable emergent behaviors, such as strategic silence and trust differentiation, appear in competitive scenarios, indicating a shift from pure logical reasoning toward initial social-behavior capabilities in AI agents [43][44]
- The findings suggest that under the right conditions AI can learn and exhibit advanced social behaviors, marking a significant step toward more sophisticated artificial intelligence [48]
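The milestone-based KPI idea in Group 4 can be pictured with a toy scoring function. The milestone names, the weights, and the way the three sub-scores are combined below are assumptions for illustration, not MultiAgentBench's published metric definitions.

```python
from dataclasses import dataclass

ALL_MILESTONES = {"spec_agreed", "code_written", "tests_pass", "report_done"}

@dataclass
class EpisodeLog:
    milestones_hit: set       # task milestones the agent team reached
    messages: int             # messages exchanged between agents
    useful_messages: int      # messages an LLM grader judged task-relevant
    plan_steps: int           # steps in the agreed plan
    plan_steps_executed: int  # steps actually carried out

def score(log: EpisodeLog) -> dict:
    # Task score: fraction of milestones hit; communication and planning scores
    # measure how much of the chatter and the plan was actually useful.
    task = len(log.milestones_hit & ALL_MILESTONES) / len(ALL_MILESTONES)
    communication = log.useful_messages / log.messages if log.messages else 0.0
    planning = log.plan_steps_executed / log.plan_steps if log.plan_steps else 0.0
    overall = 0.6 * task + 0.2 * communication + 0.2 * planning  # assumed weights
    return {"task": round(task, 3), "communication": round(communication, 3),
            "planning": round(planning, 3), "overall": round(overall, 3)}

print(score(EpisodeLog({"spec_agreed", "code_written", "tests_pass"}, 40, 28, 5, 4)))
```

Separating the sub-scores this way is also what makes the benchmark's observation possible that high collaboration scores do not always translate into high task scores.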
Exploring Applications Across Financial Domains: 中财融通 (CUFEL) Large Model and Listed-Company Research Report Agent Released
Sou Hu Cai Jing· 2025-07-06 14:55
Group 1
- The CUFEL model and the CUFEL-A research report generation agent were officially launched at the Global Finance Forum hosted by Central University of Finance and Economics on July 5 [1]
- CUFEL is described not as a single model but as a cluster of models, or an efficient model fine-tuning process, that improves performance on specific tasks while maintaining general capabilities [3]
- The CUFEL-A agent produces independent, in-depth research reports on A-share listed companies through a four-step process: data aggregation, planning, structuring and reflection, and writing (a schematic sketch follows below) [5]

Group 2
- The research report evaluation algorithm is built on three principles: generative, end-to-end, and multi-agent system reinforcement learning, improving the quality of report writing [5]
- The model was developed by a team of faculty and students from the Central University of Finance and Economics, which is actively collaborating with leading companies in the financial industry to explore applications in smart credit, compliance, and supply chain finance [5]
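The four-step generation flow attributed to CUFEL-A (data aggregation, planning, structuring and reflection, writing) can be pictured as a simple pipeline. The function bodies below are placeholders with an illustrative ticker and made-up facts; the agent's real data sources, prompts, and models are not described in the article.

```python
def aggregate_data(ticker: str) -> dict:
    # Step 1: pull filings, announcements, and market data for the company.
    return {"ticker": ticker, "facts": ["placeholder fact A", "placeholder fact B"]}

def plan_report(data: dict) -> list:
    # Step 2: decide which sections the report needs, given the gathered facts.
    return ["Business overview", "Financial analysis", "Risks", "Valuation"]

def structure_and_reflect(sections: list, data: dict) -> list:
    # Step 3: a self-review pass that may reorder or drop sections of the outline.
    return [s for s in sections if s != "Valuation" or data["facts"]]

def write_report(sections: list, data: dict) -> str:
    # Step 4: draft each section; a real agent would call an LLM per section.
    return "\n".join(f"## {s}\n(analysis of {data['ticker']} goes here)" for s in sections)

data = aggregate_data("600000.SH")           # illustrative ticker only
outline = structure_and_reflect(plan_report(data), data)
print(write_report(outline, data))
```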
When UAVs Meet AI Agents: A Survey of Multi-Domain Autonomous Aerial Intelligence and UAV Agents
具身智能之心· 2025-06-30 12:17
Core Insights
- The article discusses the evolution of Unmanned Aerial Vehicles (UAVs) into Agentic UAVs, characterized by autonomous reasoning, multimodal perception, and reflective control, marking a significant shift from traditional automation platforms [5][6][11]

Research Background
- The motivation for this research stems from the rapid development of UAVs from remote-controlled platforms into complex autonomous agents, driven by advances in artificial intelligence (AI) [6][7]
- There is increasing demand for autonomy, adaptability, and interpretability in UAV operations across sectors such as agriculture, logistics, environmental monitoring, and public safety [6][7]

Definition and Architecture of Agentic UAVs
- Agentic UAVs are defined as a new class of autonomous aerial systems with cognitive capabilities, situational adaptability, and goal-directed behavior, in contrast to traditional UAVs that operate on predefined instructions [11][12]
- The architecture of Agentic UAVs consists of four core layers: perception, cognition, control, and communication, enabling autonomous sensing, reasoning, action, and interaction (a schematic sketch follows below) [12][13]

Enabling Technologies
- **Perception Layer**: uses a suite of sensors (RGB cameras, LiDAR, thermal sensors) for real-time semantic understanding of the environment [13][14]
- **Cognition Layer**: acts as the decision-making core, employing techniques such as reinforcement learning and probabilistic modeling for adaptive control strategies [13][14]
- **Control Layer**: converts planned actions into concrete flight trajectories and commands [13][14]
- **Communication Layer**: facilitates data exchange and task coordination among UAVs and other systems [13][14]

Applications of Agentic UAVs
- **Precision Agriculture**: Agentic UAVs are transforming precision agriculture by autonomously identifying crop health issues and optimizing pesticide application through real-time data analysis [17][18]
- **Disaster Response and Search and Rescue**: these UAVs excel in dynamic environments, providing real-time adaptability and autonomous task reconfiguration during disaster scenarios [20][21]
- **Environmental Monitoring**: Agentic UAVs serve as intelligent, mobile environmental sentinels, capable of monitoring rapidly changing ecosystems at high spatial and temporal resolution [22][23]
- **Urban Infrastructure Inspection**: they offer a transformative approach to infrastructure inspection, enabling real-time damage detection and adaptive task planning [24]
- **Logistics and Smart Delivery**: Agentic UAVs are emerging as intelligent aerial couriers, capable of executing complex delivery tasks with minimal supervision [25][26]

Challenges and Limitations
- Despite their transformative potential, widespread deployment of Agentic UAVs faces challenges related to technical constraints, regulatory hurdles, and cognitive dimensions [43]
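The four-layer architecture (perception, cognition, control, communication) can be sketched as a minimal mission loop in Python. The class names, the toy observation, and the battery rule are illustrative assumptions rather than anything taken from the survey.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_ahead: bool
    battery_pct: float

class PerceptionLayer:
    def sense(self) -> Observation:
        # In practice: fuse RGB, LiDAR, and thermal sensor streams.
        return Observation(obstacle_ahead=False, battery_pct=72.0)

class CognitionLayer:
    def decide(self, obs: Observation) -> str:
        # In practice: a learned planner or reinforcement-learning policy.
        if obs.battery_pct < 20.0:
            return "return_to_base"
        return "avoid_and_continue" if obs.obstacle_ahead else "continue_mission"

class ControlLayer:
    def act(self, decision: str) -> str:
        # In practice: translate the decision into trajectories and motor commands.
        return f"executing flight command: {decision}"

class CommunicationLayer:
    def report(self, status: str) -> None:
        # In practice: share state with ground stations and other UAVs.
        print(f"telemetry -> {status}")

def mission_step() -> None:
    obs = PerceptionLayer().sense()
    decision = CognitionLayer().decide(obs)
    CommunicationLayer().report(ControlLayer().act(decision))

mission_step()
```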
Breaking the Boundaries of Multi-Agent Systems: Open-Source OWL Surpasses OpenAI Deep Research, Earning 17k GitHub Stars
机器之心· 2025-06-17 03:22
Core Insights
- The article introduces a new multi-agent framework called Workforce, along with the OWL (Optimized Workforce Learning) training method, which achieved 69.70% accuracy on the GAIA benchmark, surpassing both open-source and commercial systems, including OpenAI's offerings [1][18]

Background and Challenges
- The rapid development of large language models (LLMs) has exposed the limitations of single-agent systems on complex real-world tasks, leading to the emergence of multi-agent systems (MAS) [7]
- Current MAS face significant challenges in cross-domain transferability, as they are often deeply customized for specific domains, limiting flexibility and scalability [7][10]

Innovative Breakthroughs
- The Workforce framework uses a "decoupled design" to address cross-domain transfer, decomposing the system into three core components: a domain-agnostic planner, a coordinator agent, and specialized worker nodes (a toy sketch follows below) [8][12]
- This modular architecture allows adaptation to new domains by replacing or adding worker nodes without altering the core planner and coordinator, significantly reducing the complexity and cost of system migration [12]

Technical Innovations
- The OWL training method focuses on optimizing the planner's capabilities rather than training the entire system, using a two-phase strategy: supervised fine-tuning (SFT) followed by reinforcement learning optimization [15][19]
- This training design has been shown to improve model performance, raising the Qwen2.5-32B-Instruct model's GAIA score from 36.36% to 52.73% [20]

Experimental Validation
- The Workforce framework demonstrated significant advantages in multi-agent reasoning, achieving a pass@1 accuracy of 69.70% on the GAIA validation set, surpassing the previous best results from both open-source and proprietary frameworks [18][20]
- A performance comparison table highlights Workforce's superior accuracy across difficulty levels compared with other frameworks [20]

Practical Applications
- The research team identified several challenges in real-world task automation, including differences in information sources, information timeliness, language ambiguity, and network environment limitations [22][26]

Conclusion
- The success of OWL paves the way for building truly general artificial intelligence systems, with Workforce's modular design and cross-domain transfer capability offering significant advantages [24][25]
- The framework maintains stable performance across capability dimensions and features a self-correcting mechanism that improves performance through dynamic strategy adjustment at test time [25]
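Here is a toy sketch of the decoupled Workforce layout described above: the planner and coordinator stay domain-agnostic, while worker nodes are registered per domain. All names are hypothetical and the logic is deliberately trivial; the real OWL system trains an LLM planner with SFT and reinforcement learning rather than enumerating workers like this.

```python
from typing import Callable, Dict, List, Tuple

class Workforce:
    def __init__(self) -> None:
        self.workers: Dict[str, Callable[[str], str]] = {}

    def register_worker(self, skill: str, fn: Callable[[str], str]) -> None:
        # Adapting to a new domain = registering different workers; the planner
        # and coordinator logic below never changes.
        self.workers[skill] = fn

    def plan(self, task: str) -> List[Tuple[str, str]]:
        # Domain-agnostic planner; in OWL this role is played by a trained LLM.
        return [(skill, f"{skill} step for: {task}") for skill in self.workers]

    def run(self, task: str) -> List[str]:
        # Coordinator: dispatch each planned step to the matching worker node.
        return [self.workers[skill](step) for skill, step in self.plan(task)]

wf = Workforce()
wf.register_worker("web_search", lambda t: f"[search results] {t}")
wf.register_worker("code_exec", lambda t: f"[execution output] {t}")
print(wf.run("answer a GAIA-style benchmark question"))
```

Swapping the two registered workers for, say, a spreadsheet tool and a database tool is all it takes to move this toy to a different domain, which is the migration-cost argument the article makes for the decoupled design.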