Core Insights
- The report traces the evolution of AI applications from passive command-processing tools to "intelligent partners" built on a dual-engine model of AI Agent plus LLM. The LLM acts as the "brain" that understands intentions and plans tasks, while the AI Agent executes actions, forming a closed-loop system [1][2].

AI Application Overview
- AI applications are shifting to a new paradigm in which AI Agents and LLMs work together: the LLM serves as the cognitive core, responsible for understanding user intentions and planning tasks [15][21].
- MCP services are foundational for enterprise AI applications, enabling rapid integration of AI Agents with backend services and standardizing capabilities drawn from disparate IT assets [17].

Development Paths for AI Applications
- There are two main paths for building AI applications:
  1. Greenfield development: suited to disruptive innovation, designing and building AI applications from scratch without the constraints of legacy systems [20].
  2. Legacy transformation: the more common choice for most enterprises, embedding AI Agent capabilities into existing core business systems [21].

AI Agent System Components
- The AI Agent system comprises several core components: the LLM as the "brain", storage services as "memory", various tools as "hands", and system prompts that define goals and behaviors, with reasoning organized around the ReAct model [1][26].

Functionality of the AI Gateway
- The AI Gateway acts as a central hub with functions including LLM caching, content review, and token rate limiting, and plays a crucial role in unified access, security management, and high availability [2].

SAE Positioning in AI Applications
- The report outlines SAE's positioning and solutions for the AI application era, emphasizing advantages such as ease of use, low cost, and security assurance [2].
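The MCP role described above — wrapping disparate backend capabilities behind one uniform "discover tools / call tool" interface an agent can consume — can be sketched schematically. This is a minimal illustration of the idea only, not the real MCP wire protocol or SDK; all names (`ToolServer`, `register`, `list_tools`, `call_tool`) are hypothetical.

```python
# Schematic of an MCP-style tool service: legacy IT assets (functions, API
# wrappers) are registered once, then exposed to agents through a uniform
# list/call interface. Illustrative only; not the actual MCP protocol.

from typing import Any, Callable


class ToolServer:
    def __init__(self) -> None:
        # tool name -> (human-readable description, backing callable)
        self._tools: dict[str, tuple[str, Callable[..., Any]]] = {}

    def register(self, name: str, description: str, func: Callable[..., Any]) -> None:
        """Turn an existing backend capability into a named, described tool."""
        self._tools[name] = (description, func)

    def list_tools(self) -> list[dict[str, str]]:
        """What an agent queries first to discover available capabilities."""
        return [{"name": n, "description": d} for n, (d, _) in self._tools.items()]

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        """Uniform invocation path, regardless of the backend behind the tool."""
        _, func = self._tools[name]
        return func(**kwargs)


# Example: wrap a hypothetical inventory lookup as a tool.
server = ToolServer()
server.register("get_inventory", "Query stock for a SKU",
                lambda sku: {"sku": sku, "qty": 7})
```

An agent would first call `list_tools()` to discover capabilities, then invoke `call_tool("get_inventory", sku="A1")`; the point is that every backend, however different internally, is reached the same way.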
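The component breakdown above (LLM "brain", storage "memory", tool "hands", a goal-setting system prompt, ReAct-style reasoning) can be sketched as a minimal agent loop. The LLM is stubbed with a plain function, and all names (`Agent`, `Tool`, `stub_brain`) are illustrative assumptions, not the report's implementation.

```python
# Minimal ReAct-style agent loop: Thought -> Action -> Observation, repeated
# until the "brain" decides to finish. The brain is a stub standing in for a
# real LLM call; names here are illustrative, not from the report.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Tool:
    """A 'hand' the agent can use to act on the outside world."""
    name: str
    func: Callable[[str], str]


@dataclass
class Agent:
    system_prompt: str                               # defines goals and behavior
    tools: dict[str, Tool]                           # the agent's "hands"
    memory: list[str] = field(default_factory=list)  # storage as "memory"

    def run(self, task: str,
            brain: Callable[[str, list[str]], tuple[str, str, str]]) -> str:
        """ReAct loop: ask the brain for (thought, action, argument) each step."""
        self.memory.append(f"Task: {task}")
        for _ in range(5):  # bound the number of reasoning steps
            thought, action, arg = brain(self.system_prompt, self.memory)
            self.memory.append(f"Thought: {thought}")
            if action == "finish":
                return arg
            observation = self.tools[action].func(arg)
            self.memory.append(f"Observation: {observation}")
        return "max steps reached"


def stub_brain(prompt: str, memory: list[str]) -> tuple[str, str, str]:
    # A real system would call an LLM here; this stub plans one tool lookup,
    # then finishes with the observed result.
    if not any(m.startswith("Observation") for m in memory):
        return ("need data", "lookup", "order-42")
    return ("done", "finish", memory[-1].removeprefix("Observation: "))


agent = Agent(
    system_prompt="You resolve order-status questions.",
    tools={"lookup": Tool("lookup", lambda oid: f"{oid} shipped")},
)
print(agent.run("Where is order-42?", stub_brain))  # -> order-42 shipped
```

The closed loop the report describes is visible in the trace left in `memory`: the brain plans, the tool acts, and the observation feeds the next planning step.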
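Two of the gateway functions listed above, LLM response caching and per-client token rate limiting, can be combined in a small front-end sketch. This is an assumption-laden toy (the class name, the word-count "tokenizer", and the fixed 60-second window are all invented for illustration), not any real gateway product's API.

```python
# Hedged sketch of an AI Gateway front-end: identical prompts are served from
# cache without spending tokens, and each client has a per-window token budget.
# All names and policies here are illustrative assumptions.

import time


class AIGateway:
    def __init__(self, backend, tokens_per_minute: int) -> None:
        self.backend = backend            # callable: prompt -> reply (the LLM)
        self.cache: dict[str, str] = {}   # LLM caching: prompt -> cached reply
        self.budget = tokens_per_minute   # per-client token budget per window
        self.used: dict[str, int] = {}    # tokens consumed per client this window
        self.window_start = time.monotonic()

    def _tokens(self, text: str) -> int:
        return len(text.split())          # crude stand-in for a real tokenizer

    def chat(self, client: str, prompt: str) -> str:
        # Reset the rate-limit window every 60 seconds.
        if time.monotonic() - self.window_start >= 60:
            self.used.clear()
            self.window_start = time.monotonic()
        # Cache hit: no backend call, no token spend.
        if prompt in self.cache:
            return self.cache[prompt]
        cost = self._tokens(prompt)
        if self.used.get(client, 0) + cost > self.budget:
            return "429: token budget exhausted"
        self.used[client] = self.used.get(client, 0) + cost
        reply = self.backend(prompt)
        self.cache[prompt] = reply
        return reply
```

A production gateway would add the report's other concerns (content review, unified authentication, failover across model backends) in the same interception point, which is what makes the gateway the natural seat for security management and high availability.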
Alibaba Cloud: 2025 Report on the New Paradigm of AI Application and AI Agent Architecture
Sou Hu Cai Jing·2025-08-16 03:11