Workflow
Multi-Agent Systems
How to Build Agents with ADK, A2A, MCP, and Agent Engine?
Founder Park· 2025-08-27 11:41
Core Insights
- The article highlights a collaboration between Founder Park and Google to explore the potential of AI agents through an online sharing session featuring Google Cloud AI expert Shi Jie [2][3].

Group 1: Event Details
- The online sharing session is scheduled for next Thursday, September 4, from 20:00 to 21:00, with limited slots available for registration [4].
- Participants are encouraged to register via a QR code, and the event is free but requires approval for registration [4].

Group 2: Discussion Topics
- The session will cover how to build AI agents using ADK, A2A, MCP, and Agent Engine [3][8].
- It will also discuss leveraging Google's latest AI technologies to create collaborative, efficient, and scalable multi-agent systems [3][8].
- The future of agent development will be explored, focusing on how agents will transform human-technology interaction [3][8].

Group 3: Target Audience
- The event is aimed at AI startup leaders, overseas business heads, technical leaders, AI product managers, solution architects, developers, and AI engineers [8].
Chain-of-Agents: OPPO Releases a New Paradigm for General Agent Models, SOTA on Multiple Leaderboards, with Model, Code, and Data Fully Open-Sourced
机器之心· 2025-08-23 04:42
Core Insights
- The article introduces a novel agent reasoning paradigm called Chain-of-Agents (CoA), which enhances multi-agent collaboration and efficiency compared to traditional multi-agent systems (MAS) [2][6][36]
- CoA allows for dynamic activation of multiple roles and tools within a single model, facilitating end-to-end multi-agent collaboration without complex prompt and workflow designs [6][36]

Limitations of Traditional MAS
- High computational costs due to frequent redundant communication and complex workflow designs [3]
- Limited generalization ability requiring extensive prompt design and workflow configuration for new tasks [3]
- Lack of data-driven learning capabilities, making it difficult to improve performance through task data [3]

Advantages of CoA and AFM
- CoA reduces communication overhead and supports end-to-end training, significantly improving system efficiency and generalization capabilities [6][36]
- The Agent Foundation Model (AFM) demonstrates superior performance across nearly 20 complex tasks, achieving a 55.4% success rate on the GAIA benchmark with a 32B model [6][24]
- AFM reduces reasoning costs (token consumption) by up to 85.5% while maintaining leading performance [6]

CoA Architecture
- CoA features a hierarchical agent architecture with two core components: role-playing agents (Thinking, Planning, Reflection, Verification) and tool agents (Search, Crawl, Code) [10][13]
- The framework supports diverse agent reasoning and task execution types (see the sketch after this summary) [10]

Training Framework
- A specialized CoA fine-tuning framework is developed to build AFM, involving task data collection, multi-agent capability distillation, supervised fine-tuning, and reinforcement learning [11][14]
- Approximately 87,000 structured task-solving trajectories were generated for training [15]

Experimental Validation
- AFM models exhibit robust performance in multi-hop question answering (MHQA) tasks, achieving new benchmarks across various datasets [19][22]
- In mathematical reasoning tasks, AFM-RL-32B achieved an average accuracy of 78.0%, outperforming existing models [26]

Efficiency Analysis
- AFM shows significant advantages in tool calling efficiency and reasoning costs, requiring fewer tool calls and lower token consumption per successful task [31][33]
- The model's performance in test-time scaling is validated across multiple benchmarks, demonstrating robust generalization and reasoning capabilities [31]

Future Directions
- Potential exploration of dynamic role generation capabilities to enhance adaptability to unknown tasks [39]
- Integration of cross-modal tool fusion to expand application scenarios beyond text-based tools [39]
- Development of efficient memory mechanisms for long-term tasks to reduce repetitive reasoning costs [39]
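As a rough illustration of the chain-of-agents pattern summarized above — one model dynamically activating role-playing and tool agents within a single trajectory — the following minimal Python sketch shows the control flow. The role and tool names mirror the article (Thinking, Planning, Reflection, Verification; Search, Crawl, Code), but `call_model` and `run_tool` are hypothetical placeholders, not OPPO's AFM implementation.

```python
# Minimal sketch of a chain-of-agents loop: a single foundation model switches
# between role-playing steps and tool-agent steps inside one end-to-end
# trajectory, instead of exchanging messages between separately prompted agents.
ROLE_AGENTS = {"thinking", "planning", "reflection", "verification"}
TOOL_AGENTS = {"search", "crawl", "code"}

def call_model(role: str, context: str) -> str:
    # Hypothetical placeholder for the shared model acting as `role`.
    return "finish" if role == "planning" else f"<{role} output>"

def run_tool(tool: str, query: str) -> str:
    # Hypothetical placeholder for a tool agent (web search, crawling, code execution).
    return f"<{tool} result>"

def chain_of_agents(task: str, max_steps: int = 8) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        next_agent = call_model("planning", context).strip().lower()
        if next_agent in TOOL_AGENTS:
            context += f"\n[{next_agent}] {run_tool(next_agent, context)}"
        elif next_agent in ROLE_AGENTS:
            context += f"\n[{next_agent}] {call_model(next_agent, context)}"
        else:  # the model signals it has gathered enough information
            break
    return call_model("verification", context)

if __name__ == "__main__":
    print(chain_of_agents("Who directed the film referenced in the question?"))
```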
Inside Story: OpenAI Model Admits It Couldn't Solve Problem 6, While a Three-Person Team Took IMO Gold in Two Months
36Ke· 2025-08-12 00:57
Core Insights
- OpenAI achieved a significant milestone by enabling AI to reach gold medal level in the International Mathematical Olympiad (IMO) within just two months, showcasing a breakthrough in general AI technology [1][4][6]

Group 1: Team and Methodology
- The core team at OpenAI consisted of only three researchers who managed to accomplish what has been a long-standing goal in the AI field [4][10]
- They utilized a technique called "multi-agent systems," allowing multiple AI "assistants" to work simultaneously, which facilitated the rapid resolution of complex problems [10][25]
- The team employed external IMO medalists to evaluate the AI's proofs, ensuring a reliable assessment of its capabilities [1][6]

Group 2: AI Capabilities and Performance
- The AI demonstrated remarkable self-awareness by acknowledging its limitations, such as admitting when it could not solve the most challenging problems [18][19]
- The breakthrough involved extending reasoning time from mere seconds to hours, enabling deeper thought processes for complex issues [6][23]
- The AI's performance in the IMO was a significant leap from previous benchmarks, where it struggled with elementary math problems just a few years ago [12][15]

Group 3: Implications and Future Directions
- This achievement is seen as a stepping stone towards developing more advanced reasoning technologies that could eventually tackle unsolved problems in mathematics and science [6][25]
- The team aims to integrate their methods into more OpenAI models, enhancing reasoning capabilities across various applications [27][29]
- Future challenges include enabling AI to generate new mathematical problems, which would represent a significant advancement beyond mere problem-solving [28][29]
Behind the Disappointment of GPT-5: How OpenAI Is Adjusting Its Business Strategy | Jinqiu Select
锦秋集· 2025-08-08 15:38
Core Insights
- OpenAI claims that GPT-5 integrates "rapid response" and "deep reasoning" into a unified experience, enhancing capabilities in code generation, creative writing, multimodal abilities, and tool usage [1]
- Despite these claims, there is no significant breakthrough in leading indicators for GPT-5, with user feedback indicating dissatisfaction due to the removal of older models without convincing alternatives [2]
- Speculation arises that OpenAI's strategy may be shifting towards a more closed model system to drive stronger commercial monetization [3]

Group 1: GPT-5 Core Upgrades
- The most notable upgrade in GPT-5 is the enhancement of "reasoning integration," allowing for a one-stop solution that combines rapid response and deep reasoning [8]
- OpenAI has invested heavily in post-training work, focusing on fine-tuning for both consumer and enterprise use, significantly improving the model's utility [9]
- GPT-5 has made substantial advancements in code capabilities, setting new standards for reliability and practicality in software development [10][11]

Group 2: Business and Infrastructure Perspective
- OpenAI's ChatGPT currently boasts 700 million weekly active users, demonstrating the massive appeal of large model products [12]
- 85% of ChatGPT's user base is located outside the United States, indicating its global reach and impact [12]
- OpenAI has approximately 5 million paid enterprise users, showcasing rapid adoption across various industries [13]
- The company has established a three-pronged business model consisting of personal subscriptions, enterprise services, and an API platform, all experiencing explosive growth [13]
- OpenAI's CFO emphasizes the importance of input metrics like active user counts over traditional financial metrics, reflecting the company's mission to benefit humanity through AGI [14]

Group 3: Product Experience Design Evolution
- The discussion around benchmarks and rankings, particularly the ARC-AGI test, highlights the criticism of "score chasing" in AI development [21]
- OpenAI's strategy focuses on delivering economic value through targeted optimization rather than blindly pursuing high scores on arbitrary benchmarks [23]

Group 4: Multi-Agent System Implementation
- The concept of multi-agent systems is gaining traction, with OpenAI exploring how multiple AI agents can collaborate to solve complex tasks more efficiently [24]
- Real-world applications of multi-agent systems are being developed, such as using AI agents in software development to automate and streamline processes [25][26]
- Challenges remain in fully realizing the potential of multi-agent systems, including the need for cultural and process changes within organizations [28]

Group 5: OpenAI Technology Evolution
- OpenAI's journey from GPT-1 to GPT-5 reflects a clear strategic progression, focusing on expanding model scale, enhancing alignment techniques, and building a comprehensive intelligent system [30][31]
- Each generation of GPT has marked significant advancements in language capabilities, reliability, and practical applications, culminating in the widespread adoption of ChatGPT [33]
2025 H1 AI Core Achievements and Trends Report - 量子位智库
Sou Hu Cai Jing· 2025-08-01 04:37
Application Trends
- General-purpose Agent products are deeply integrating tool usage, capable of automating tasks that would take hours for humans, delivering richer content [1][13]
- Computer Use Agents (CUA) are being pushed to market, focusing on visual operations and merging with text-based deep research Agents [1][14]
- Vertical scenarios are accelerating Agentization, with natural language control becoming part of workflows, and AI programming gaining market validation with rapid revenue growth [1][15][17]

Model Trends
- Reasoning capabilities are continuously improving, with significant advancements in mathematical and coding problems, and some models performing excellently in international competitions [1][20]
- Large model tools are enhancing their capabilities, integrating visual and text modalities, and improving multi-modal reasoning abilities [1][22]
- Small models are accelerating in popularity, lowering deployment barriers, and model evaluation is evolving towards dynamic and practical task-oriented assessments [1][30]

Technical Trends
- Resource investment is shifting towards post-training and reinforcement learning, with the importance of reinforcement learning increasing, and future computing power consumption potentially exceeding pre-training [1][33]
- Multi-agent systems are becoming a frontier paradigm, with online learning expected to be the next generation of learning methods, and rapid iteration and optimization of Transformer and hybrid architectures [1][33]
- Code verification is emerging as a frontier for enhancing AI programming automation, with system prompts significantly impacting user experience [1][33]

Industry Trends
- xAI's Grok 4 has entered the global top tier, demonstrating that large models lack a competitive moat [2]
- Computing power is becoming a key competitive factor, with leading players expanding their computing clusters to hundreds of thousands of cores [2]
- OpenAI's leading advantage is diminishing as Google and xAI catch up, with the gap between Chinese and American general-purpose large models narrowing, and China showing strong performance in multi-modal fields [2]
因赛集团: Striving to Become a Strategic Marketing-Communications Partner of a Leading Domestic Tech Giant
Xin Lang Cai Jing· 2025-07-30 09:28
因赛集团 (300781.SZ) disclosed an investor relations activity record stating that the company is striving to become a strategic partner in marketing communications for a leading domestic tech giant and to accompany its globalization efforts, providing full-funnel marketing services through 因赛集团 and its specialized subsidiaries across marketing segments. The company has formulated a new R&D plan: in Q3 it intends to complete and launch a multi-agent system (MAS) foundation platform, integrating diverse AI agents for copywriting, images, video, voice, and digital humans, and to complete the interaction mechanisms and dynamic-workflow middle platform that support efficient collaboration among these AI agents. ...
AI Agents (Part 8): Building Multi-Agent Systems
36Ke· 2025-07-27 23:12
Group 1
- The article discusses the value creation potential of AI agents in workflows that are difficult to automate using traditional methods [3].
- AI agents consist of three core components: models, tools, and instructions, which are essential for their functionality [6][8].
- The selection of models should be based on the complexity of tasks, with a focus on achieving performance benchmarks while optimizing for cost and latency [3][6].

Group 2
- Function calling is the primary method for large language models (LLMs) to interact with tools, enhancing the capabilities of AI agents [6][7].
- High-quality instructions are crucial for LLM-based applications, as they reduce ambiguity and improve decision-making [8][11].
- The orchestration of AI agents can be modeled as a graph, where agents represent nodes and tool calls represent edges, facilitating effective workflow execution [11][15].

Group 3
- The article outlines a supervisor mode for managing multiple specialized agents, allowing for task delegation and efficient workflow management (see the sketch after this summary) [16][17].
- Custom handoff tools can be created to enhance the interaction between agents, allowing for tailored task assignments [33][34].
- The implementation of a multi-layered supervisory structure is possible, enabling the management of multiple teams of agents [31].
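To make the supervisor and handoff ideas above concrete, here is a minimal, library-free Python sketch: a supervisor delegates work to specialized worker agents through handoff tools. The agent names, stub logic, and hard-coded routing are illustrative assumptions rather than the article's code; a real implementation would let an LLM choose which handoff to call.

```python
# Minimal, library-free sketch of the supervisor pattern: a supervisor routes
# tasks to specialized worker agents via "handoff" tools.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    instructions: str
    run: Callable[[str], str]  # takes a task, returns a result

def make_handoff_tool(agent: Agent) -> Callable[[str], str]:
    """Wrap a worker agent as a tool the supervisor can call."""
    def handoff(task: str) -> str:
        return f"[{agent.name}] {agent.run(task)}"
    return handoff

# Two illustrative specialists; the lambdas stand in for LLM calls.
research_agent = Agent("research", "Search and summarize sources.", lambda t: f"notes on {t}")
writer_agent = Agent("writer", "Draft prose from notes.", lambda t: f"draft based on {t}")

handoffs: Dict[str, Callable[[str], str]] = {
    a.name: make_handoff_tool(a) for a in (research_agent, writer_agent)
}

def supervisor(task: str) -> str:
    # A real supervisor would let an LLM pick the next handoff; the routing is
    # hard-coded here only to show the control flow: delegate, collect, delegate.
    notes = handoffs["research"](task)
    return handoffs["writer"](notes)

if __name__ == "__main__":
    print(supervisor("multi-agent systems"))
```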
How to Build Verifiable Agentic Workflows? MermaidFlow Opens a New Paradigm for Safe, Robust Agent Workflows
机器之心· 2025-07-24 03:19
Core Viewpoint
- The article discusses the advancements in Multi-Agent Systems (MAS) and introduces "Agentic Workflow" as a key concept for autonomous decision-making and collaboration among intelligent agents, highlighting the emergence of structured and verifiable workflow frameworks like "MermaidFlow" [1][4][22].

Group 1: Introduction to Multi-Agent Systems
- The development of large language models is driving the evolution of AI agents from single capabilities to complex system collaborations, making MAS a focal point in both academia and industry [1].
- Leading teams, including Google and Shanghai AI Lab, are launching innovative Agentic Workflow projects to enhance the autonomy and intelligence of agent systems [2].

Group 2: Challenges in Current Systems
- Existing systems face significant challenges such as lack of rationality assurance, insufficient verifiability, and difficulty in intuitive expression, which hinder the reliable implementation and large-scale deployment of MAS [3].

Group 3: Introduction of MermaidFlow
- The "MermaidFlow" framework, developed by researchers from Singapore's A*STAR and Nanyang Technological University, aims to advance agent systems towards structured evolution and safe verifiability [4].
- Traditional workflow expressions often rely on imperative code like Python scripts or JSON trees, leading to three core bottlenecks: opaque structure, verification difficulties, and debugging challenges [7][10].

Group 4: Advantages of MermaidFlow
- MermaidFlow introduces a structured graphical language that models agent behavior planning as a clear and verifiable flowchart, enhancing the interpretability and reliability of workflows (see the sketch after this summary) [8][12].
- The structured representation allows for clear visibility of agent definitions, dependencies, and data flows, facilitating easier debugging and optimization [11][14].

Group 5: Performance and Evolution
- MermaidFlow demonstrates a high success rate of over 90% in generating executable and structurally sound workflows, significantly improving the controllability and robustness of agent systems compared to traditional methods [18].
- The framework supports safe evolutionary optimization through a structured approach, allowing for modular adjustments and ensuring compliance with semantic constraints [16][19].

Group 6: Conclusion
- As MAS and large model AI continue to evolve, achieving structured, verifiable, and efficient workflows is crucial for agent research, with MermaidFlow providing foundational support for effective collaboration processes [22].
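The toy Python sketch below illustrates the core idea attributed to MermaidFlow — declaring an agent workflow as a Mermaid-style flowchart and statically checking its structure before anything is executed. The flowchart text, node names, and checks are illustrative assumptions, not the framework's actual implementation.

```python
# Toy illustration: declare an agent workflow as a Mermaid-style flowchart,
# then verify its structure (non-empty, acyclic) before execution.
import re
from collections import defaultdict

workflow = """
flowchart TD
    plan[Planner] --> search[Search Agent]
    search --> code[Code Agent]
    code --> verify[Verifier]
    verify --> answer[Final Answer]
"""

def parse_edges(mermaid: str):
    """Extract (source, target) node ids from 'a --> b' lines."""
    return re.findall(r"(\w+)(?:\[[^\]]*\])?\s*-->\s*(\w+)", mermaid)

def has_cycle(edges) -> bool:
    """Depth-first search for a back edge in the workflow graph."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    visiting, done = set(), set()
    def dfs(node):
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        found = any(dfs(nxt) for nxt in graph[node])
        visiting.discard(node)
        done.add(node)
        return found
    return any(dfs(n) for n in list(graph))

edges = parse_edges(workflow)
assert edges, "workflow declares no steps"
assert not has_cycle(edges), "workflow must be a DAG before execution"
print(f"verified {len(edges)} edges; workflow is executable in topological order")
```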
A Comprehensive Context Engineering Guide Distilled from 1,400 Research Papers | Jinqiu Select
锦秋集· 2025-07-21 14:03
Core Insights
- The article discusses the emerging field of Context Engineering, emphasizing the need for a systematic theoretical framework to complement practical experiences shared by Manus' team [1][2]
- A comprehensive survey titled "A Survey of Context Engineering for Large Language Models" has been published, analyzing over 1400 research papers to establish a complete technical system for Context Engineering [1][2]

Context Engineering Components
- Context Engineering is built on three interrelated components: Information Retrieval and Generation, Information Processing, and Information Management, forming a complete framework for optimizing context in large models [2]
- The first component, Context Retrieval and Generation, focuses on engineering methods to effectively acquire and construct context information for models, including practices like Prompt Engineering, external knowledge retrieval, and dynamic context assembly [2]

Prompting Techniques
- Prompting serves as the starting point for model interaction, where effective prompts can unlock deeper capabilities of the model [3]
- Zero-shot prompting provides direct instructions relying on pre-trained knowledge, while few-shot prompting offers a few examples to guide the model in understanding task requirements [4]

Advanced Reasoning Frameworks
- For complex tasks, structured thinking is necessary, with Chain-of-Thought (CoT) prompting models to think step-by-step, significantly improving accuracy in complex tasks [5]
- Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT) further enhance reasoning by allowing exploration of multiple paths and dependencies, improving success rates in tasks requiring extensive exploration [5]

Self-Refinement Mechanisms
- Self-Refinement allows models to iteratively improve their outputs through self-feedback without requiring additional supervised training data (see the sketch after this summary) [8][9]
- Techniques like N-CRITICS and Agent-R enable models to evaluate and correct their reasoning paths in real-time, enhancing output quality [10][11]

External Knowledge Retrieval
- External knowledge retrieval, particularly through Retrieval-Augmented Generation (RAG), addresses the static nature of model knowledge by integrating dynamic information from external databases [12][13]
- Advanced RAG architectures introduce adaptive retrieval mechanisms and hierarchical processing strategies to enhance information retrieval efficiency [14][15]

Context Processing Challenges
- Processing long contexts presents significant computational challenges due to the quadratic complexity of Transformer self-attention mechanisms [28]
- Innovations like State Space Models and Linear Attention aim to reduce computational complexity, allowing models to handle longer sequences more efficiently [29][30]

Context Management Strategies
- Effective context management is crucial for organizing, storing, and utilizing information, addressing issues like context overflow and collapse [46][47]
- Memory architectures inspired by operating systems and cognitive models are being developed to enhance the memory capabilities of language models [48][50]

Tool-Integrated Reasoning
- Tool-Integrated Reasoning transforms language models from passive text generators into active agents capable of interacting with the external world through function calling and integrated reasoning frameworks [91][92]
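As a minimal sketch of two techniques surveyed above — few-shot chain-of-thought prompting and a single self-refinement pass — the Python below assembles a CoT prompt, then has the model critique and revise its own draft. The `llm` callable, example data, and prompt wording are illustrative assumptions, not taken from the survey.

```python
# Few-shot CoT prompt assembly plus one generate -> critique -> revise pass,
# a minimal form of the self-refinement loop described in the survey summary.
from typing import Callable, List, Tuple

def build_cot_prompt(examples: List[Tuple[str, str]], question: str) -> str:
    """Few-shot CoT: each example pairs a question with a worked-out reasoning chain."""
    parts = [f"Q: {q}\nA: Let's think step by step. {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

def self_refine(llm: Callable[[str], str], question: str,
                examples: List[Tuple[str, str]]) -> str:
    draft = llm(build_cot_prompt(examples, question))
    critique = llm(f"Question: {question}\nDraft answer: {draft}\n"
                   "List any errors or missing steps in the draft.")
    return llm(f"Question: {question}\nDraft answer: {draft}\nCritique: {critique}\n"
               "Rewrite the answer, fixing the issues above.")

if __name__ == "__main__":
    # Placeholder "model" so the sketch runs without any API; swap in a real LLM call.
    fake_llm = lambda prompt: f"<model output for a {len(prompt)}-char prompt>"
    examples = [("What is 2 + 2 * 3?", "Multiplication first: 2 * 3 = 6, then 2 + 6 = 8.")]
    print(self_refine(fake_llm, "What is (5 - 2) * 4?", examples))
```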
Behind "Replicating Manus in Zero Days": This Post-95 Engineer Is Convinced That "General Agents Definitely Exist, and Agents Also Have a Scaling Law" | 万有引力
AI科技大本营· 2025-07-11 09:10
Core Viewpoint
- The emergence of AI Agents, particularly with the launch of Manus, has sparked a new wave of interest and debate in the AI community regarding the capabilities and future of these technologies [2][4].

Group 1: Development of AI Agents
- Manus has demonstrated the potential of AI Agents to automate complex tasks, evolving from mere language models to actionable digital assistants capable of self-repair and debugging [2][4].
- The CAMEL AI community has been working on Agent frameworks for two years, leading to the rapid development of the OWL project, which quickly gained traction in the open-source community [6][8].
- OWL achieved over 10,000 stars on GitHub within ten days of its release, indicating strong community interest and engagement [9][10].

Group 2: Community Engagement and Feedback
- The OWL project received extensive feedback from the community, resulting in rapid iterations and improvements based on user input [9][10].
- The initial version of OWL was limited to local IDE usage, but subsequent updates included a Web App to enhance user experience, showcasing the power of community contributions [10][11].

Group 3: Technical Challenges and Innovations
- The development of OWL involved significant optimizations, including balancing performance and resource consumption, which were critical for user satisfaction [12][13].
- The introduction of tools like the Browser Tool and Terminal Tool Kit has expanded the capabilities of OWL, allowing Agents to perform automated tasks and install dependencies independently [12][13].

Group 4: Scaling and Future Directions
- The concept of "Agent Scaling Law" is being explored, suggesting that the number of Agents could correlate with system capabilities, similar to model parameters in traditional AI (see the sketch after this summary) [20][21].
- The CAMEL team is investigating the potential for multi-agent systems to outperform single-agent systems in various tasks, with evidence supporting this hypothesis [21][22].

Group 5: Perspectives on General Agents
- There is ongoing debate about the feasibility of "general Agents," with some believing in their potential while others view them as an overhyped concept [2][4][33].
- The CAMEL framework is positioned as a versatile multi-agent system, allowing developers to tailor solutions to specific business needs, thus supporting the idea of general Agents [33][34].

Group 6: Industry Trends and Future Outlook
- The rise of protocols like MCP and A2A is shaping the landscape for Agent development, with both seen as beneficial for streamlining integration and enhancing functionality [30][35].
- The industry anticipates a significant increase in Agent projects by 2025, with a focus on both general and specialized Agents, indicating a robust future for this technology [34][36].
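A toy sketch of the "Agent Scaling Law" intuition discussed above: run N independent agent attempts on the same task in parallel and aggregate by majority vote, so system-level accuracy can grow with the number of agents. The stub agent and its assumed per-attempt accuracy are invented for illustration; this is not CAMEL or OWL code.

```python
# Toy experiment: more parallel agent attempts + majority voting can raise the
# chance the system lands on the correct answer (an illustrative scaling effect).
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

SINGLE_AGENT_ACCURACY = 0.6  # assumed per-attempt success rate (placeholder)

def run_agent(task: str, attempt: int) -> str:
    """Stub agent: each independent attempt succeeds with a fixed probability."""
    if random.random() < SINGLE_AGENT_ACCURACY:
        return "correct answer"
    return f"wrong answer {attempt % 3}"  # wrong attempts tend to disagree with each other

def solve_with_n_agents(task: str, n: int) -> str:
    with ThreadPoolExecutor(max_workers=n) as pool:
        answers = list(pool.map(lambda i: run_agent(task, i), range(n)))
    return Counter(answers).most_common(1)[0][0]  # majority vote

if __name__ == "__main__":
    trials = 200
    for n in (1, 5, 25):
        wins = sum(solve_with_n_agents("demo task", n) == "correct answer" for _ in range(trials))
        print(f"{n:>2} agents -> majority answer correct in {wins / trials:.0%} of trials")
```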