Multi-Agent Systems
How to build verifiable Agentic Workflows? MermaidFlow opens a new paradigm for safe, robust agent workflows
机器之心· 2025-07-24 03:19
Core Viewpoint
- The article discusses the advancements in Multi-Agent Systems (MAS) and introduces "Agentic Workflow" as a key concept for autonomous decision-making and collaboration among intelligent agents, highlighting the emergence of structured and verifiable workflow frameworks like "MermaidFlow" [1][4][22].

Group 1: Introduction to Multi-Agent Systems
- The development of large language models is driving the evolution of AI agents from single capabilities to complex system collaborations, making MAS a focal point in both academia and industry [1].
- Leading teams, including Google and Shanghai AI Lab, are launching innovative Agentic Workflow projects to enhance the autonomy and intelligence of agent systems [2].

Group 2: Challenges in Current Systems
- Existing systems face significant challenges such as lack of rationality assurance, insufficient verifiability, and difficulty in intuitive expression, which hinder the reliable implementation and large-scale deployment of MAS [3].

Group 3: Introduction of MermaidFlow
- The "MermaidFlow" framework, developed by researchers from Singapore's A*STAR and Nanyang Technological University, aims to advance agent systems towards structured evolution and safe verifiability [4].
- Traditional workflow expressions often rely on imperative code like Python scripts or JSON trees, leading to three core bottlenecks: opaque structure, verification difficulties, and debugging challenges [7][10].

Group 4: Advantages of MermaidFlow
- MermaidFlow introduces a structured graphical language that models agent behavior planning as a clear and verifiable flowchart, enhancing the interpretability and reliability of workflows [8][12].
- The structured representation allows for clear visibility of agent definitions, dependencies, and data flows, facilitating easier debugging and optimization [11][14].
Group 5: Performance and Evolution
- MermaidFlow demonstrates a success rate of over 90% in generating executable and structurally sound workflows, significantly improving the controllability and robustness of agent systems compared to traditional methods [18].
- The framework supports safe evolutionary optimization through a structured approach, allowing for modular adjustments and ensuring compliance with semantic constraints [16][19].

Group 6: Conclusion
- As MAS and large model AI continue to evolve, achieving structured, verifiable, and efficient workflows is crucial for agent research, with MermaidFlow providing foundational support for effective collaboration processes [22].
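The core idea behind MermaidFlow (workflows declared as flowcharts that can be statically checked before any agent runs) can be sketched in a few lines of Python. The Mermaid-style edge syntax and the two checks below, acyclicity and reachability, are illustrative assumptions rather than MermaidFlow's actual parser or verifier:

```python
# Minimal sketch: represent an agent workflow as Mermaid-style edges and
# statically verify it before execution. Illustrative only.

def parse_edges(mermaid_body):
    """Parse lines like 'A --> B' into (src, dst) pairs."""
    edges = []
    for line in mermaid_body.strip().splitlines():
        line = line.strip()
        if "-->" in line:
            src, dst = (part.strip() for part in line.split("-->", 1))
            edges.append((src, dst))
    return edges

def verify_workflow(edges, entry):
    """Check the flowchart is acyclic and every node is reachable from entry."""
    graph, nodes = {}, set()
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        nodes.update((src, dst))

    # Cycle check via DFS with white/gray/black coloring.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def has_cycle(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color[m] == GRAY or (color[m] == WHITE and has_cycle(m)):
                return True
        color[n] = BLACK
        return False

    if any(color[n] == WHITE and has_cycle(n) for n in nodes):
        return False, "cycle detected"

    # Reachability check from the entry node.
    seen, stack = set(), [entry]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    unreachable = nodes - seen
    if unreachable:
        return False, f"unreachable nodes: {sorted(unreachable)}"
    return True, "ok"

workflow = """
Plan --> Search
Search --> Analyze
Analyze --> Write
"""
ok, reason = verify_workflow(parse_edges(workflow), "Plan")
print(ok, reason)  # True ok
```

A cyclic or partially unreachable flowchart is rejected before anything executes, which is the kind of pre-execution guarantee the framework's verifiability claim points at.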
A comprehensive guide to Context Engineering, distilled from 1,400 research papers | Jinqiu Select
锦秋集· 2025-07-21 14:03
Core Insights
- The article discusses the emerging field of Context Engineering, emphasizing the need for a systematic theoretical framework to complement the practical experience shared by the Manus team [1][2].
- A comprehensive survey titled "A Survey of Context Engineering for Large Language Models" has been published, analyzing over 1,400 research papers to establish a complete technical system for Context Engineering [1][2].

Context Engineering Components
- Context Engineering is built on three interrelated components: Information Retrieval and Generation, Information Processing, and Information Management, forming a complete framework for optimizing context in large models [2].
- The first component, Context Retrieval and Generation, focuses on engineering methods to effectively acquire and construct context information for models, including practices like Prompt Engineering, external knowledge retrieval, and dynamic context assembly [2].

Prompting Techniques
- Prompting serves as the starting point for model interaction, where effective prompts can unlock deeper capabilities of the model [3].
- Zero-shot prompting provides direct instructions relying on pre-trained knowledge, while few-shot prompting offers a few examples to guide the model in understanding task requirements [4].

Advanced Reasoning Frameworks
- For complex tasks, structured thinking is necessary, with Chain-of-Thought (CoT) prompting models to think step by step, significantly improving accuracy on complex tasks [5].
- Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT) further enhance reasoning by allowing exploration of multiple paths and dependencies, improving success rates in tasks requiring extensive exploration [5].

Self-Refinement Mechanisms
- Self-Refinement allows models to iteratively improve their outputs through self-feedback without requiring additional supervised training data [8][9].
- Techniques like N-CRITICS and Agent-R enable models to evaluate and correct their reasoning paths in real time, enhancing output quality [10][11].

External Knowledge Retrieval
- External knowledge retrieval, particularly through Retrieval-Augmented Generation (RAG), addresses the static nature of model knowledge by integrating dynamic information from external databases [12][13].
- Advanced RAG architectures introduce adaptive retrieval mechanisms and hierarchical processing strategies to enhance information retrieval efficiency [14][15].

Context Processing Challenges
- Processing long contexts presents significant computational challenges due to the quadratic complexity of Transformer self-attention mechanisms [28].
- Innovations like State Space Models and Linear Attention aim to reduce computational complexity, allowing models to handle longer sequences more efficiently [29][30].

Context Management Strategies
- Effective context management is crucial for organizing, storing, and utilizing information, addressing issues like context overflow and collapse [46][47].
- Memory architectures inspired by operating systems and cognitive models are being developed to enhance the memory capabilities of language models [48][50].

Tool-Integrated Reasoning
- Tool-Integrated Reasoning transforms language models from passive text generators into active agents capable of interacting with the external world through function calling and integrated reasoning frameworks [91][92].
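The retrieval-and-generation component described above (few-shot demonstrations, a chain-of-thought instruction, and RAG-style context assembly) can be illustrated with a minimal sketch. The template, the word-overlap retriever, and all strings below are toy assumptions, not any surveyed system's format:

```python
# Minimal sketch of dynamic context assembly: combine few-shot examples,
# a chain-of-thought instruction, and retrieved passages into one prompt.
# Illustrative assumptions only; real systems use learned retrievers.

def retrieve(query, corpus, k=2):
    """Toy retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, examples, corpus):
    parts = ["Answer step by step (chain-of-thought)."]
    for q, a in examples:  # few-shot demonstrations
        parts.append(f"Q: {q}\nA: {a}")
    for passage in retrieve(query, corpus):  # RAG context
        parts.append(f"Context: {passage}")
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "What is context engineering?",
    examples=[("What is RAG?", "Retrieval-Augmented Generation ...")],
    corpus=[
        "Context engineering optimizes what the model sees.",
        "UAVs fly autonomously.",
    ],
)
print(prompt)
```

Swapping the toy retriever for a vector store, or the flat example list for an adaptive selector, changes only one function each, which is the modularity the survey's three-component framing emphasizes.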
Behind "replicating Manus in 0 days", this post-95s technologist is convinced: "general Agents must exist, and Agents have a Scaling Law too" | 万有引力
AI科技大本营· 2025-07-11 09:10
Core Viewpoint
- The emergence of AI Agents, particularly with the launch of Manus, has sparked a new wave of interest and debate in the AI community regarding the capabilities and future of these technologies [2][4].

Group 1: Development of AI Agents
- Manus has demonstrated the potential of AI Agents to automate complex tasks, evolving from mere language models to actionable digital assistants capable of self-repair and debugging [2][4].
- The CAMEL AI community has been working on Agent frameworks for two years, leading to the rapid development of the OWL project, which quickly gained traction in the open-source community [6][8].
- OWL achieved over 10,000 stars on GitHub within ten days of its release, indicating strong community interest and engagement [9][10].

Group 2: Community Engagement and Feedback
- The OWL project received extensive feedback from the community, resulting in rapid iterations and improvements based on user input [9][10].
- The initial version of OWL was limited to local IDE usage, but subsequent updates included a Web App to enhance user experience, showcasing the power of community contributions [10][11].

Group 3: Technical Challenges and Innovations
- The development of OWL involved significant optimizations, including balancing performance and resource consumption, which were critical for user satisfaction [12][13].
- The introduction of tools like the Browser Tool and Terminal Tool Kit has expanded the capabilities of OWL, allowing Agents to perform automated tasks and install dependencies independently [12][13].

Group 4: Scaling and Future Directions
- The concept of "Agent Scaling Law" is being explored, suggesting that the number of Agents could correlate with system capabilities, similar to model parameters in traditional AI [20][21].
- The CAMEL team is investigating the potential for multi-agent systems to outperform single-agent systems in various tasks, with evidence supporting this hypothesis [21][22].
Group 5: Perspectives on General Agents
- There is ongoing debate about the feasibility of "general Agents," with some believing in their potential while others view them as an overhyped concept [2][4][33].
- The CAMEL framework is positioned as a versatile multi-agent system, allowing developers to tailor solutions to specific business needs, thus supporting the idea of general Agents [33][34].

Group 6: Industry Trends and Future Outlook
- The rise of protocols like MCP and A2A is shaping the landscape for Agent development, with both seen as beneficial for streamlining integration and enhancing functionality [30][35].
- The industry anticipates a significant increase in Agent projects by 2025, with a focus on both general and specialized Agents, indicating a robust future for this technology [34][36].
Given a group of top-tier AIs, how should they team up for maximum effectiveness? UIUC seeks answers with a new multi-agent collaboration benchmark
机器之心· 2025-07-09 04:23
Core Viewpoint
- The article discusses the emergence of AI teams that collaborate like human teams in software development and scientific research, highlighting the need for effective evaluation metrics for these multi-agent systems [2][3].

Group 1: Introduction of MultiAgentBench
- MultiAgentBench is introduced as a comprehensive benchmark for evaluating the collaboration and competition capabilities of LLM-based multi-agent systems [4][6].
- It aims to fill the gap in existing evaluation metrics that focus primarily on individual agent capabilities rather than the essential aspects of collaboration efficiency and communication quality [3][6].

Group 2: Key Findings and Contributions
- The research reveals that the gpt-4o-mini model exhibits the strongest overall task performance among various models [8].
- The decentralized collaboration model using a graph structure is found to be the most efficient, while cognitive self-evolution planning significantly enhances task completion rates [8][12].
- MultiAgentBench identifies critical moments where agents begin to exhibit emergent social behaviors, providing insights into achieving AGI-level collaboration [9][12].

Group 3: Evaluation Framework
- The framework includes a collaboration engine, an agent graph to structure relationships, and a cognitive module for personalized information and adaptive strategies [12][15].
- It incorporates diverse interaction strategies and six varied evaluation scenarios, simulating real-world team dynamics [19][20].

Group 4: Performance Metrics
- The evaluation system uses milestone-based KPIs to assess task completion and collaboration quality, including task scores, communication scores, and planning scores [27][28].
- The findings indicate that high collaboration does not always correlate with superior task outcomes, emphasizing the importance of individual agent capabilities [30][32].
Group 5: Organizational Structure and Team Dynamics
- The study highlights that decentralized organizational structures outperform hierarchical ones, which incur higher communication costs and inefficiencies [38].
- The "Ringelmann Effect" is observed, where increasing the number of agents leads to diminishing returns in performance, underscoring the need for efficient collaboration mechanisms [40].

Group 6: Emergence of Social Intelligence
- Notable emergent behaviors, such as strategic silence and trust differentiation, are observed in competitive scenarios, indicating a shift from pure logical reasoning to initial social behavior capabilities in AI agents [43][44].
- The findings suggest that under the right conditions, AI can learn and exhibit advanced social behaviors, marking a significant step towards more sophisticated artificial intelligence [48].
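The milestone-based KPI idea in Group 4 can be sketched as a weighted blend of task, communication, and planning scores. The weights and field names below are assumptions for illustration; the benchmark's exact formula may differ:

```python
# Illustrative sketch in the spirit of MultiAgentBench's metrics: blend
# milestone completion with communication and planning quality. The
# weights are made-up assumptions, not the benchmark's actual formula.

def evaluate_run(milestones_hit, milestones_total, communication, planning,
                 w_task=0.6, w_comm=0.2, w_plan=0.2):
    """All component scores are normalized to [0, 1]."""
    task_score = milestones_hit / milestones_total
    overall = w_task * task_score + w_comm * communication + w_plan * planning
    return {"task": task_score, "overall": round(overall, 3)}

result = evaluate_run(milestones_hit=3, milestones_total=4,
                      communication=0.8, planning=0.5)
print(result)  # {'task': 0.75, 'overall': 0.71}
```

Keeping the components separate is what lets the study observe that a high communication score does not always accompany a high task score.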
Exploring multi-domain financial applications: the CUFEL (中财融通) large model and listed-company research report agent released
Sou Hu Cai Jing· 2025-07-06 14:55
Group 1
- The CUFEL model and the CUFEL-A research report generation agent were officially launched at the Global Finance Forum hosted by Central University of Finance and Economics on July 5 [1]
- CUFEL is described as not just a single model but a cluster of models or an efficient model fine-tuning process, enhancing performance in specific tasks while maintaining general capabilities [3]
- The CUFEL-A agent produces independent and in-depth research reports on A-share listed companies through a four-step process: data aggregation, planning, structuring and reflection, and writing [5]

Group 2
- The research report evaluation algorithm is built on three principles: generative, end-to-end, and multi-agent system reinforcement learning, improving the quality of report writing [5]
- The model was developed by a team of faculty and students from the Central University of Finance and Economics, which is actively collaborating with leading companies in the financial industry to explore applications in smart credit, compliance, and supply chain finance [5]
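The four-step CUFEL-A process (data aggregation, planning, structuring and reflection, writing) can be sketched as a simple sequential pipeline. Every step below is a placeholder stub with made-up data; it shows only the composition of the stages, not the actual system:

```python
# Minimal sketch of a four-stage report pipeline in the spirit of the
# CUFEL-A description. All step bodies are hypothetical stubs.

def aggregate(ticker):
    # stage 1: pull facts about the company (stubbed data)
    return {"ticker": ticker, "facts": ["revenue up", "margin stable"]}

def plan(data):
    # stage 2: decide the report outline
    return ["Overview", "Financials", "Risks"]

def structure_and_reflect(outline, data):
    # stage 3: attach supporting facts to each section; a real reflection
    # pass might drop or rework unsupported sections
    return [(sec, data["facts"]) for sec in outline]

def write(sections, data):
    # stage 4: render the structured sections into text
    lines = [f"Report: {data['ticker']}"]
    for title, facts in sections:
        lines.append(f"## {title}: " + "; ".join(facts))
    return "\n".join(lines)

def generate_report(ticker):
    data = aggregate(ticker)
    outline = plan(data)
    sections = structure_and_reflect(outline, data)
    return write(sections, data)

print(generate_report("600000.SH").splitlines()[0])  # Report: 600000.SH
```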
When UAVs meet AI agents: a survey of multi-domain autonomous aerial intelligence and UAV agents
具身智能之心· 2025-06-30 12:17
Core Insights
- The article discusses the evolution of Unmanned Aerial Vehicles (UAVs) into Agentic UAVs, which are characterized by autonomous reasoning, multimodal perception, and reflective control, marking a significant shift from traditional automation platforms [5][6][11].

Research Background
- The motivation for this research stems from the rapid development of UAVs from remote-controlled platforms to complex autonomous agents, driven by advancements in artificial intelligence (AI) [6][7].
- The increasing demand for autonomy, adaptability, and interpretability in UAV operations across various sectors such as agriculture, logistics, environmental monitoring, and public safety is highlighted [6][7].

Definition and Architecture of Agentic UAVs
- Agentic UAVs are defined as a new class of autonomous aerial systems with cognitive capabilities, situational adaptability, and goal-directed behavior, contrasting with traditional UAVs that operate based on predefined instructions [11][12].
- The architecture of Agentic UAVs consists of four core layers: perception, cognition, control, and communication, enabling autonomous sensing, reasoning, action, and interaction [12][13].

Enabling Technologies
- Key technologies enabling the development of Agentic UAVs include:
  - **Perception Layer**: Utilizes a suite of sensors (RGB cameras, LiDAR, thermal sensors) for real-time semantic understanding of the environment [13][14].
  - **Cognition Layer**: Acts as the decision-making core, employing techniques like reinforcement learning and probabilistic modeling for adaptive control strategies [13][14].
  - **Control Layer**: Converts planned actions into specific flight trajectories and commands [13][14].
  - **Communication Layer**: Facilitates data exchange and task coordination among UAVs and other systems [13][14].
Applications of Agentic UAVs
- **Precision Agriculture**: Agentic UAVs are transforming precision agriculture by autonomously identifying crop health issues and optimizing pesticide application through real-time data analysis [17][18].
- **Disaster Response and Search and Rescue**: These UAVs excel in dynamic environments, providing real-time adaptability and autonomous task reconfiguration during disaster scenarios [20][21].
- **Environmental Monitoring**: Agentic UAVs serve as intelligent, mobile environmental sentinels, capable of monitoring rapidly changing ecosystems with high spatial and temporal resolution [22][23].
- **Urban Infrastructure Inspection**: They offer a transformative approach to infrastructure inspections, enabling real-time damage detection and adaptive task planning [24].
- **Logistics and Smart Delivery**: Agentic UAVs are emerging as intelligent aerial couriers, capable of executing complex delivery tasks with minimal supervision [25][26].

Challenges and Limitations
- Despite the transformative potential of Agentic UAVs, their widespread application faces challenges related to technical constraints, regulatory hurdles, and cognitive dimensions [43].
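The four-layer loop described in the survey (perception feeding cognition, cognition driving control, control status shared over communication) can be sketched as follows. The classes, observations, and commands are illustrative stand-ins, not any real autopilot stack:

```python
# Illustrative sketch of the Agentic UAV four-layer loop from the survey:
# perception -> cognition -> control -> communication. All behavior below
# is a hypothetical stand-in for real sensor fusion and flight control.

class PerceptionLayer:
    def sense(self, world):
        # e.g. fuse RGB, LiDAR, and thermal into a semantic observation
        return {"obstacle": world.get("obstacle", False)}

class CognitionLayer:
    def decide(self, observation):
        # decision-making core: pick an action given the observation
        return "hover" if observation["obstacle"] else "advance"

class ControlLayer:
    def act(self, action):
        # convert the planned action into a concrete flight command
        return {"command": action, "thrust": 0.4 if action == "hover" else 0.7}

class CommunicationLayer:
    def report(self, command):
        # share status with other UAVs or the ground station
        return f"executing {command['command']}"

def step(world):
    obs = PerceptionLayer().sense(world)
    action = CognitionLayer().decide(obs)
    cmd = ControlLayer().act(action)
    return CommunicationLayer().report(cmd)

print(step({"obstacle": True}))  # executing hover
print(step({}))                  # executing advance
```

Keeping the layers as separate components mirrors the survey's point that autonomy upgrades (say, a better cognition model) should not require rewriting perception or control.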
Pushing the boundaries of multi-agent systems: open-source OWL surpasses OpenAI Deep Research, earning 17k stars
机器之心· 2025-06-17 03:22
Core Insights
- The article discusses the introduction of a new multi-agent framework called Workforce, along with the OWL (Optimized Workforce Learning) training method, which achieved 69.70% accuracy on the GAIA benchmark, surpassing both open-source and commercial systems, including OpenAI's offerings [1][18].

Background and Challenges
- The rapid development of large language models (LLMs) has revealed limitations in single-agent systems for handling complex real-world tasks, leading to the emergence of multi-agent systems (MAS) [7].
- Current MAS face significant challenges in cross-domain transferability, as they are often deeply customized for specific domains, limiting flexibility and scalability [7][10].

Innovative Breakthroughs
- The Workforce framework employs a "decoupled design" to address cross-domain transfer issues by decomposing the system into three core components: a domain-agnostic planner, a coordinator agent, and specialized worker nodes [8][12].
- This modular architecture allows for easy adaptation to new domains by replacing or adding worker nodes without altering the core planner and coordinator, significantly reducing the complexity and costs associated with system migration [12].

Technical Innovations
- The OWL training method focuses on optimizing the planner's capabilities rather than training the entire system, utilizing a two-phase training strategy: supervised fine-tuning (SFT) and reinforcement learning optimization [15][19].
- The training design has been shown to enhance model performance, with the Qwen2.5-32B-Instruct model's performance on GAIA improving from 36.36% to 52.73% [20].

Experimental Validation
- The Workforce framework demonstrated significant advantages in multi-agent reasoning, achieving a pass@1 accuracy of 69.70% on the GAIA validation set, outperforming previous bests from both open-source and proprietary frameworks [18][20].
- The performance comparison table highlights Workforce's superior accuracy across various levels compared to other frameworks [20].

Practical Applications
- The research team identified several challenges in real-world task automation, including differences in information sources, information timeliness, language ambiguity, and network environment limitations [22][26].

Conclusion
- The success of OWL paves the way for building truly general artificial intelligence systems, with Workforce's modular design and cross-domain transfer capabilities offering significant advantages [24][25].
- The framework maintains stable performance across various capability dimensions and features a self-correcting mechanism that enhances performance through dynamic strategy adjustments during testing [25].
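Workforce's decoupled design can be sketched as a domain-agnostic planner plus a coordinator that routes typed subtasks to pluggable worker nodes. The worker names, routing rule, and planner output below are assumptions for illustration, not the framework's actual API:

```python
# Sketch of a decoupled planner/coordinator/worker design in the spirit
# of Workforce. Subtask types, workers, and routing are hypothetical.

def planner(task):
    """Domain-agnostic: split a task into (type, subtask) pairs."""
    return [("search", f"find sources for: {task}"),
            ("code", f"analyze data for: {task}")]

class Coordinator:
    def __init__(self, workers):
        # swap or add workers to adapt to a new domain; the planner
        # and coordinator stay untouched
        self.workers = workers

    def run(self, task):
        results = []
        for kind, subtask in planner(task):
            worker = self.workers.get(kind)
            if worker is None:
                results.append(f"[unassigned] {subtask}")
            else:
                results.append(worker(subtask))
        return results

workers = {
    "search": lambda st: f"search-worker did: {st}",
    "code": lambda st: f"code-worker did: {st}",
}
print(Coordinator(workers).run("GAIA question"))
```

Under this split, the OWL training idea of optimizing only the planner corresponds to improving `planner` while leaving the worker pool fixed.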
Anthropic details how to build a multi-agent research system: best suited to 3 types of scenarios
投资实习所· 2025-06-16 11:51
Core Insights
- The article discusses the implementation and advantages of a multi-agent system for research tasks, highlighting its efficiency in handling complex topics through collaborative architecture [1][3][20].

Multi-Agent System Advantages
- Multi-agent systems are particularly suited for research tasks due to their ability to adapt dynamically to new information and adjust research methods based on emerging clues [3][20].
- The system allows for parallel processing, where sub-agents work independently to explore different aspects of a problem, thus reducing path dependency and ensuring comprehensive investigation [3][4].
- Internal tests show that the multi-agent system significantly outperforms single-agent versions, with a performance improvement of 90.2% in specific research evaluations [4].

System Architecture
- The research system employs a coordinator-worker model, where the main agent coordinates the process and delegates tasks to specialized sub-agents [6][11].
- The architecture supports dynamic multi-step searches, allowing for continuous discovery and adaptation of relevant information [8][11].

Performance Metrics
- The performance of the multi-agent system is largely influenced by token usage, with findings indicating that token consumption accounts for 80% of performance variance in evaluations [4][5].
- The system's design allows for efficient allocation of computational resources, enhancing parallel reasoning capabilities [4][5].

Design Principles
- Effective design principles for multi-agent systems include clear task delegation, appropriate tool selection, and the establishment of heuristic rules to guide agent behavior [13][17].
- The system emphasizes the importance of flexible evaluation methods to assess the correctness of results and the reasonableness of processes, given the unpredictable nature of agent interactions [14][22].
Challenges and Solutions
- The article outlines challenges such as state persistence and error accumulation in agent systems, necessitating robust error handling and recovery mechanisms [16][19].
- Strategies for improving agent performance include real-time observation of agent processes, clear task definitions, and the use of parallel tool calls to enhance speed and efficiency [17][24].

Conclusion
- Despite the challenges, multi-agent systems have demonstrated significant value in open-ended research tasks, enabling users to uncover business opportunities and solve complex problems more efficiently [20][21].
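The coordinator-worker pattern with parallel sub-agents can be sketched with a thread pool standing in for concurrent LLM calls. The sub-agent itself is a stub; only the fan-out and merge structure reflects the described architecture:

```python
# Sketch of the coordinator-worker research pattern: a lead agent fans a
# query out to sub-agents exploring aspects in parallel, then merges the
# findings. The sub-agent is a hypothetical stand-in for an LLM call.

from concurrent.futures import ThreadPoolExecutor

def sub_agent(aspect):
    # stand-in for a sub-agent with its own context window
    return f"findings on {aspect}"

def lead_agent(query, aspects):
    # delegate independent aspects concurrently, then synthesize results
    with ThreadPoolExecutor(max_workers=len(aspects)) as pool:
        findings = list(pool.map(sub_agent, aspects))
    return f"{query}: " + " | ".join(findings)

report = lead_agent("market overview", ["pricing", "competitors", "trends"])
print(report)
```

Because each aspect runs in its own worker, adding aspects widens the search without lengthening any single agent's context, which is the parallel-reasoning benefit the article describes.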
Recent must-read: Devin vs. Anthropic on multi-agent construction methodology
歸藏的AI工具箱· 2025-06-15 08:02
Core Viewpoint
- The article discusses the advantages and challenges of multi-agent systems, comparing the perspectives of Anthropic and Cognition on the construction and effectiveness of such systems [2][7].

Group 1: Multi-Agent System Overview
- Multi-agent systems consist of multiple agents (large language models) working collaboratively, where a main agent coordinates the process and delegates tasks to specialized sub-agents [4][29].
- The typical workflow involves breaking down tasks, launching sub-agents to handle these tasks, and finally merging the results [6][30].

Group 2: Issues with Multi-Agent Systems
- Cognition highlights the fragility of multi-agent architectures, where sub-agents may misunderstand tasks, leading to inconsistent results that are difficult to integrate [10].
- Anthropic acknowledges these challenges but implements constraints and measures to mitigate them, such as applying multi-agent systems to suitable domains like research tasks rather than coding tasks [8][12].

Group 3: Solutions Proposed by Anthropic
- Anthropic employs a coordinator-worker model, utilizing detailed prompt engineering to clarify sub-agents' tasks and responsibilities, thereby minimizing misunderstandings [16].
- Advanced context management techniques are introduced, including memory mechanisms and file systems to address context window limitations and information loss [8][16].

Group 4: Performance and Efficiency
- Anthropic's multi-agent research system has shown a 90.2% performance improvement in breadth-first queries compared to single-agent systems [14].
- The system can significantly cut research time by parallelizing the launch of multiple sub-agents and their use of various tools, achieving up to a 90% reduction [17][34].
Group 5: Token Consumption and Economic Viability
- Multi-agent systems tend to consume tokens at a much higher rate, approximately 15 times more than chat interactions, necessitating that the task's value justifies the increased performance costs [28][17].
- The architecture's design allows for effective token usage by distributing work among agents with independent context windows, enhancing parallel reasoning capabilities [28].

Group 6: Challenges in Implementation
- The transition from prototype to reliable production systems faces significant engineering challenges due to the compounded nature of errors in agent systems [38].
- Current synchronous execution of sub-agents creates bottlenecks in information flow, with future plans for asynchronous execution to enhance parallelism while managing coordination and error propagation challenges [39][38].
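The 15x token-consumption claim in Group 5 translates into a simple cost comparison. The token count and price below are made-up assumptions used only to show the arithmetic:

```python
# Back-of-the-envelope sketch of the economics claim: a multi-agent run
# consuming ~15x the tokens of a chat exchange must be justified by the
# task's value. All numbers here are hypothetical assumptions.

CHAT_TOKENS = 4_000          # assumed tokens for one chat exchange
MULTI_AGENT_FACTOR = 15      # reported consumption multiplier
PRICE_PER_1K_TOKENS = 0.01   # hypothetical price in dollars

def run_cost(tokens):
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

chat_cost = run_cost(CHAT_TOKENS)
agent_cost = run_cost(CHAT_TOKENS * MULTI_AGENT_FACTOR)
print(f"chat ${chat_cost:.2f} vs multi-agent ${agent_cost:.2f}")  # chat $0.04 vs multi-agent $0.60
```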
Multi-agent systems are "burning" tokens! Anthropic makes public everything it discovered
机器之心· 2025-06-14 04:12
Core Insights
- Anthropic's new research on multi-agent systems highlights the advantages of using multiple AI agents for complex research tasks, emphasizing their ability to adapt and explore dynamically [2][3][6][7].

Multi-Agent System Advantages
- Multi-agent systems excel in research tasks that require flexibility and the ability to adjust methods based on ongoing discoveries, as they can operate independently and explore various aspects of a problem simultaneously [7][8].
- Anthropic's internal evaluations show that their multi-agent system outperforms single-agent systems by 90.2% in breadth-first query tasks [8].
- The architecture allows for efficient token consumption, with multi-agent systems demonstrating a significant performance boost compared to single-agent models [9][10].

System Architecture
- The multi-agent architecture follows a "coordinator-worker" model, where a lead agent coordinates tasks among several specialized sub-agents [14][18].
- The lead agent analyzes user queries, creates sub-agents, and oversees their independent exploration of different aspects of the query [19][21].

Performance Evaluation
- Traditional evaluation methods are inadequate for multi-agent systems due to their non-linear and varied paths to achieving results; flexible evaluation methods are necessary [44][45].
- Anthropic employs an "LLM-as-judge" approach for evaluating outputs, which enhances scalability and practicality in assessing the performance of multi-agent systems [49][53].

Engineering Challenges
- The complexity of maintaining state in intelligent agent systems poses significant engineering challenges, as minor changes can lead to substantial behavioral shifts [56][61].
- Anthropic has implemented robust debugging and tracking mechanisms to diagnose and address failures in real time [57].
Conclusion
- Despite the challenges, multi-agent systems have shown immense potential in open-ended research tasks, provided they are designed with careful engineering, thorough testing, and a deep understanding of current AI capabilities [61].
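The "LLM-as-judge" evaluation mentioned under Performance Evaluation can be sketched with a rubric-based scorer. A deterministic keyword stub stands in for the judge model, and the rubric criteria are assumptions; a real judge would be another LLM call scoring against the same rubric:

```python
# Sketch of LLM-as-judge evaluation: rather than matching one gold answer
# path, a judge scores each output against a rubric. The keyword-matching
# judge below is a hypothetical stub standing in for a real LLM call.

RUBRIC = ["accuracy", "sources", "completeness"]

def judge(output, rubric=RUBRIC):
    """Stub judge: award one point per rubric criterion the output mentions."""
    hits = [c for c in rubric if c in output.lower()]
    return {"score": len(hits) / len(rubric), "hit": hits}

verdict = judge("Checked accuracy and cited sources for every claim.")
print(verdict["hit"])  # ['accuracy', 'sources']
```

Scoring outcomes against a rubric instead of a fixed transcript is what makes this approach tolerant of the non-linear, varied paths multi-agent runs take.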