Context Engineering

Trending! Prompts Are No Longer the Focus of AI; the New Hot Topic Is Context Engineering
机器之心· 2025-07-03 08:01
Core Viewpoint
- The article emphasizes "Context Engineering" as a systematic approach to optimizing the input provided to large language models (LLMs) for better output generation [3][11].

Summary by Sections

Introduction to Context Engineering
- The article highlights the recent surge in popularity of "Context Engineering," with notable endorsements from figures like Andrej Karpathy and trending status on platforms such as Hacker News and Zhihu [1][2].

Understanding LLMs
- LLMs should not be anthropomorphized; they are intelligent text generators without beliefs or intentions [4].
- LLMs function as general, non-deterministic functions that generate new text based on the provided context [5][6][7].
- They are stateless, so all relevant background information must be supplied with each input to maintain context [8].

Focus of Context Engineering
- The focus is on optimizing the input rather than altering the model itself, aiming to construct the most effective input text to guide the model's output [9].

Context Engineering vs. Prompt Engineering
- Context Engineering is a more systematic approach than the previously popular "Prompt Engineering," which relied on finding a single perfect command [10][11].
- The goal is an automated system that prepares comprehensive input for the model, rather than issuing isolated commands [13][17].

Core Elements of Context Engineering
- Context Engineering involves building a "super input" toolbox, drawing on techniques such as Retrieval-Augmented Generation (RAG) and intelligent agents [15][19].
- The primary objective is to deliver the most effective information, in the appropriate format, at the right time [16].

Practical Methodology
- Using LLMs is likened to scientific experimentation, requiring systematic testing rather than guesswork [23].
- The methodology has two main steps: planning backward from the end goal, then constructing forward from the beginning [24][25].
- The final output should be clearly defined, and the necessary input information identified, to create a "raw material package" for the system [26].

Implementation Steps
- The article outlines a rigorous process for building and testing the system, verifying that each component functions correctly before final assembly [30].
- Specific testing phases include verifying data interfaces, search functionality, and the assembly of the final input [30].

Additional Resources
- For more detailed practices, the article references Langchain's latest blog and video, which cover the mainstream methods of Context Engineering [29].
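The "super input" idea above can be sketched in a few lines: instead of a single clever prompt, a small pipeline assembles instructions, retrieved background, and the user question into one structured input for a stateless model. This is a minimal illustration, not the article's system; the names `retrieve` and `build_context` and the keyword-overlap scoring are assumptions standing in for a real RAG backend.

```python
# Minimal sketch of assembling a "super input" for a stateless LLM.
# retrieve() and build_context() are illustrative names, not from the article.

def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retrieval step standing in for a real RAG backend:
    rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_context(instructions: str, query: str, corpus: dict[str, str]) -> str:
    """Assemble the full input text: because the model is stateless, every
    piece of background it needs must be packed into this one string."""
    docs = retrieve(query, corpus)
    background = "\n".join(f"- {d}" for d in docs)
    return (
        f"# Instructions\n{instructions}\n\n"
        f"# Background\n{background}\n\n"
        f"# Question\n{query}"
    )

corpus = {
    "a": "Context engineering optimizes the input given to a model",
    "b": "Bananas are rich in potassium",
}
prompt = build_context(
    "Answer using only the background.",
    "What does context engineering optimize?",
    corpus,
)
print(prompt)
```

Each stage here can be tested in isolation (data interface, search, final assembly), which mirrors the component-by-component verification the article recommends.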
Context Is Everything! Hot Industry Debate: Should Prompt Engineering Be Renamed?
歸藏的AI工具箱· 2025-06-26 11:40
Core Viewpoint
- The article discusses the emerging concept of "context engineering" in AI, suggesting it is a more accurate term than "prompt engineering" to describe the skills needed for effectively utilizing large language models (LLMs) [1][2].

Group 1: Importance of Context Engineering
- Context engineering is essential for optimizing the performance of AI agents; insufficient context can lead to inconsistent actions among sub-agents and hinder the ability to follow instructions accurately [4][5].
- LLM performance can decline if the context is too long or contains irrelevant information, which also increases cost and latency [4][5].
- Instruction adherence is crucial for agents: even top models show a significant drop in accuracy during multi-turn conversations, highlighting the need to optimize context length and accuracy [4][5].

Group 2: Strategies for Optimizing Context Engineering
- Context engineering encompasses three common strategies: compression, persistence, and isolation [5][6].
- Compression aims to retain only the most valuable tokens in each interaction, with methods like context summarization being critical [6][7].
- Persistence involves creating systems for storing, saving, and retrieving context over time, considering storage methods, saving strategies, and retrieval processes [9][10].
- Isolation focuses on managing context across different agents or environments, utilizing structured runtime state to control what the LLM sees in each interaction [16][18].

Group 3: Practical Experiences and Recommendations
- The article emphasizes building robust context management systems for AI agents that balance performance, cost, and accuracy [24].
- Memory systems should be simple and track specific agent preferences over time, and parallelizable tasks should be considered for multi-agent architectures [26].
- A token tracking mechanism is highlighted as foundational for any context engineering work [23].
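Two of the ideas above, token tracking and compression via summarization, can be sketched together. This is a hedged illustration under stated assumptions: the token budget, the roughly-4-characters-per-token estimate, and the `summarize()` stub (which keeps only each turn's first sentence) are placeholders; a real system would count tokens with the model's tokenizer and ask an LLM for the summary.

```python
# Sketch of a token budget check plus a summarization-based compression step.
# estimate_tokens() and summarize() are crude stand-ins, not the article's methods.

def estimate_tokens(text: str) -> int:
    """Rough token estimate (about 4 characters per token for English)."""
    return max(1, len(text) // 4)

def summarize(turns: list[str]) -> str:
    """Stand-in for an LLM summarization call: keep each turn's first
    sentence. A real system would ask the model for a summary."""
    return " ".join(t.split(".")[0] + "." for t in turns)

def compress(history: list[str], budget: int) -> list[str]:
    """Compression strategy: when the history exceeds the token budget,
    replace all but the most recent turn with a summary."""
    total = sum(estimate_tokens(t) for t in history)
    if total <= budget or len(history) < 2:
        return history
    summary = summarize(history[:-1])
    return ["[summary] " + summary, history[-1]]

history = [
    "User asked about persistence. We discussed storage backends in depth.",
    "User asked about isolation. We discussed runtime state at length.",
    "User now asks: how do these strategies interact?",
]
compressed = compress(history, budget=20)
print(compressed)
```

The tracking half (`estimate_tokens` summed over the history) is deliberately the first thing computed: without knowing how many tokens each interaction costs, there is no signal for when compression, persistence, or isolation should kick in.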
After Prompt Engineering and RAG, LangChain Says Context Engineering Is Catching On!
机器之心· 2025-06-25 04:06
Core Viewpoint
- Context engineering is emerging as a crucial skill for AI engineers, shifting the focus from traditional prompt engineering to providing structured and dynamic context for large language models (LLMs) to perform tasks effectively [3][7][15].

Group 1: Definition and Importance of Context Engineering
- Context engineering involves constructing dynamic systems that provide accurate information and tools in the right format, enabling LLMs to complete tasks effectively [9][10].
- Its significance lies in addressing common failures in AI systems, which often stem from inadequate context or incorrect information being provided to the model [12][15].
- Unlike prompt engineering, which focuses on crafting clever prompts, context engineering emphasizes delivering complete and structured context to enhance model performance [17][19].

Group 2: Components of Effective Context Engineering
- Effective context engineering requires accurate information, as models cannot infer context that is not explicitly provided [12][19].
- The format of the context is critical; how information is communicated to the LLM can significantly impact its responses [13][19].
- Tools must be appropriately utilized to access external information, and the returned data should be formatted so the LLM can easily understand it [20].

Group 3: Transition from Prompt Engineering to Context Engineering
- The transition from prompt engineering to context engineering is driven by the increasing complexity of applications, highlighting the need for a more comprehensive approach to context provision [16][17].
- Prompt engineering can be viewed as a subset of context engineering, where the focus shifts from single input prompts to managing and formatting dynamic data sets [17][18].
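The "dynamic system" framing above can be made concrete: context is not a static prompt string but the output of a function that gathers fresh tool results and formats them legibly for the model. The sketch below is an illustration under assumptions; the tool stub, function names, and the XML-ish rendering are invented for the example and are not LangChain APIs.

```python
# Sketch of dynamic context assembly: gather tool output, format it for the
# model, rebuild on every call. All names here are illustrative placeholders.
from datetime import date

def weather_tool(city: str) -> dict:
    """Stub for an external data source a real agent would call."""
    return {"city": city, "forecast": "sunny", "high_c": 27}

def format_for_llm(name: str, payload: dict) -> str:
    """Format matters: render tool output as labeled, readable text
    rather than dumping a raw object into the prompt."""
    fields = "\n".join(f"  {k}: {v}" for k, v in payload.items())
    return f'<tool name="{name}">\n{fields}\n</tool>'

def build_dynamic_context(user_query: str, city: str) -> str:
    """Each call gathers data afresh, so the same code yields different
    context as the underlying state changes."""
    tool_block = format_for_llm("weather", weather_tool(city))
    return (
        f"Today is {date.today().isoformat()}.\n"
        f"{tool_block}\n"
        f"User question: {user_query}"
    )

ctx = build_dynamic_context("Do I need an umbrella?", "Lisbon")
print(ctx)
```

Note how the two failure modes the article names are both addressed: the tool supplies information the model could not infer on its own, and `format_for_llm` controls how that information is communicated.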
Recent Must-Read! Devin vs. Anthropic: Methodologies for Building Multi-Agent Systems
歸藏的AI工具箱· 2025-06-15 08:02
Core Viewpoint
- The article discusses the advantages and challenges of multi-agent systems, comparing the perspectives of Anthropic and Cognition on the construction and effectiveness of such systems [2][7].

Group 1: Multi-Agent System Overview
- Multi-agent systems consist of multiple agents (large language models) working collaboratively: a main agent coordinates the process and delegates tasks to specialized sub-agents [4][29].
- The typical workflow involves breaking down a task, launching sub-agents to handle the pieces, and finally merging the results [6][30].

Group 2: Issues with Multi-Agent Systems
- Cognition highlights the fragility of multi-agent architectures: sub-agents may misunderstand tasks, producing inconsistent results that are difficult to integrate [10].
- Anthropic acknowledges these challenges but applies constraints and mitigations, such as restricting multi-agent systems to suitable domains like research tasks rather than coding tasks [8][12].

Group 3: Solutions Proposed by Anthropic
- Anthropic employs a coordinator-worker model, using detailed prompt engineering to clarify each sub-agent's tasks and responsibilities and thereby minimize misunderstandings [16].
- Advanced context management techniques are introduced, including memory mechanisms and file systems, to address context window limitations and information loss [8][16].

Group 4: Performance and Efficiency
- Anthropic's multi-agent research system has shown a 90.2% performance improvement on breadth-first queries compared to single-agent systems [14].
- By launching multiple sub-agents in parallel and having them use various tools concurrently, the system can reduce research time by up to 90% [17][34].

Group 5: Token Consumption and Economic Viability
- Multi-agent systems consume tokens at a much higher rate, approximately 15 times more than chat interactions, so the task's value must justify the increased performance costs [28][17].
- The architecture distributes work among agents with independent context windows, enabling effective token usage and parallel reasoning [28].

Group 6: Challenges in Implementation
- The transition from prototype to reliable production systems faces significant engineering challenges due to the compounded nature of errors in agent systems [38].
- Current synchronous execution of sub-agents creates bottlenecks in information flow; planned asynchronous execution would enhance parallelism while introducing coordination and error-propagation challenges [39][38].
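The coordinator-worker pattern described above can be sketched with `asyncio` to show the fan-out/merge shape: a lead agent decomposes the question, runs sub-agents in parallel (each with its own isolated context), and merges their results. This is a toy illustration, not Anthropic's system; the three-way task split and the simulated sub-agent body are assumptions, and a real sub-agent call would be an LLM request with its own context window.

```python
# Hedged sketch of coordinator-worker fan-out with asyncio.
# The decomposition and sub_agent body are placeholders for real LLM calls.
import asyncio

async def sub_agent(task: str) -> str:
    """Worker: a real system would issue an LLM call here, with the
    sub-agent's own context window; we simulate I/O-bound work."""
    await asyncio.sleep(0.01)
    return f"findings for: {task}"

async def coordinator(question: str) -> str:
    """Lead agent: decompose the question, fan out in parallel, merge."""
    sub_tasks = [f"{question} (aspect {i})" for i in range(1, 4)]
    results = await asyncio.gather(*(sub_agent(t) for t in sub_tasks))
    return "\n".join(results)

report = asyncio.run(coordinator("compare multi-agent frameworks"))
print(report)
```

Because every sub-agent carries its own full context, this fan-out is exactly where the roughly 15x token multiplier comes from: parallelism buys wall-clock time at the price of duplicated context tokens.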