Core Viewpoint
- The article discusses the choice between end-to-end training and context engineering in developing general AI agents, highlighting the latter as the more adaptable approach in a rapidly evolving landscape of large models [1][3].

Group 1: Context Engineering Insights
- Manus AI's decision to adopt context engineering was shaped by past experience: self-trained models became obsolete almost overnight after the release of GPT-3, underscoring the need for flexibility in model development [4][5].
- The article outlines six core practices drawn from Manus's experience, which cut product iteration cycles from weeks to hours, showcasing an effective technical path for startups [2][3].

Group 2: Key Practices for KV-Cache Optimization
- The KV-cache hit rate is identified as the single most important metric for AI agents in production, directly affecting latency and cost; one cited example shows a 10x price difference between cached and uncached tokens [7][8].
- Strategies to raise KV-cache hit rates include keeping prompt prefixes stable, making the context append-only, and using the file system as external memory to work around context limits [8][19].

Group 3: Managing Tool Complexity
- The article advises against dynamically adding or removing tools in the agent's action space; instead, tool availability should be managed through context-aware masking of token logits, which keeps the context stable [12][13].
- This approach prevents the model from becoming confused when earlier actions reference tools that are no longer defined, reducing the risk of erroneous actions [12][17].

Group 4: Utilizing External Memory
- Manus uses the file system as externalized memory to address the limitations of context windows, providing persistent, effectively unlimited storage that the agent can manipulate directly [18][22].
- This method mitigates the risks associated with irreversible context compression, ensuring that critical information is not lost [22].

Group 5: Attention Manipulation Techniques
- Continuously updating a todo.md file with the current task goals keeps the model focused on its objectives and prevents it from losing track during complex tasks [23][26].
- This technique helps hold the model's attention on the task at hand, especially in lengthy interactions requiring many tool calls [26].

Group 6: Learning from Errors
- Retaining failed attempts in the context is emphasized as a crucial learning mechanism, allowing the model to adapt and lowering the likelihood of repeated mistakes [30][31].
- The article argues that error recovery is a significant indicator of an agent's performance, yet it is often underrepresented in academic benchmarks [30].

Group 7: Avoiding Few-Shot Traps
- The article warns against the pitfalls of few-shot patterns in agent systems, where repetitive examples in context can lead to suboptimal, imitative decision-making [32][34].
- Introducing structured variability into actions and observations helps break these patterns and enhances the model's adaptability [34].

Conclusion
- Context engineering is presented as an essential, still-emerging science for agent systems: the design of context plays a pivotal role in determining agent behavior, speed, recovery, and scalability [35].
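The append-only, stable-prefix discipline described in Group 2 can be sketched as follows. This is a minimal illustration, not Manus's actual format: `SYSTEM_PROMPT`, the dict-shaped observation, and the list-of-strings context are all hypothetical stand-ins. The key properties are that the prefix is never edited in place and that serialization is deterministic (sorted keys, no timestamps), so every prior token stays byte-identical and cache-hot.

```python
import json

# Hypothetical stable prefix: written once, never rewritten, no timestamps.
SYSTEM_PROMPT = "You are an agent. Your tools are listed below."

def append_observation(context, observation):
    """Serialize deterministically (sorted keys) and append; earlier turns
    are never modified, so the shared prefix remains byte-identical."""
    blob = json.dumps(observation, sort_keys=True, ensure_ascii=False)
    return context + [blob]  # append-only

ctx = [SYSTEM_PROMPT]
ctx = append_observation(ctx, {"status": "ok", "tool": "browser_open"})
```

Note that even a single drifting byte early in the prompt (a clock reading, an unordered dict) invalidates the cache for everything after it, which is why determinism matters as much as append-only writes.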
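Group 3's logit masking can be illustrated with a toy decoder state. Everything here is invented for illustration: the token ids, the tool names, and the dict representation of logits. A real implementation would operate on the model's logit tensor through constrained or guided decoding, but the principle is the same: tool definitions stay in the context, and unavailable tools are simply made undecodable.

```python
import math

def mask_tool_logits(logits, tool_first_token, allowed_tools):
    """Return a copy of the logits with the opening token of every
    disallowed tool set to -inf, so decoding can never start that call.
    The tool definitions themselves remain in context untouched."""
    masked = dict(logits)
    for tool, token_id in tool_first_token.items():
        if tool not in allowed_tools:
            masked[token_id] = -math.inf  # zero probability after softmax
    return masked

# Hypothetical first-token ids for three tools sharing a naming prefix.
tool_first_token = {"browser_open": 101, "shell_exec": 202, "browser_click": 303}
logits = {101: 2.0, 202: 1.5, 303: 0.5}

# In this state, only browser_* tools are permitted.
allowed = {name for name in tool_first_token if name.startswith("browser_")}
masked = mask_tool_logits(logits, tool_first_token, allowed)
```

A consistent naming prefix (here `browser_`) makes whole tool families maskable as a group, which is cheaper than enumerating tools one by one.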
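The file-system-as-memory pattern from Group 4 reduces to: write the bulky payload to disk, keep only a short restorable reference in context. The `externalize` helper below is a hypothetical sketch of that idea; Manus's actual sandbox interface is not described at this level of detail in the summary.

```python
import tempfile
from pathlib import Path

def externalize(context, name, content, workdir):
    """Store bulky content on disk and keep only a short, restorable
    reference in context; the agent can re-read the file on demand, so
    the truncation is reversible rather than lossy compression."""
    workdir.mkdir(parents=True, exist_ok=True)
    path = workdir / name
    path.write_text(content, encoding="utf-8")
    return context + [f"[saved {len(content)} chars to {path}]"]

workdir = Path(tempfile.mkdtemp())
ctx = externalize([], "page.html", "<html>very long page...</html>", workdir)
restored = (workdir / "page.html").read_text(encoding="utf-8")
```

The reference line costs a handful of tokens regardless of payload size, and because nothing is discarded, a later step that turns out to need the full page can always recover it.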
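The todo.md recitation technique in Group 5 works by re-rendering the plan at the tail of the context, where attention to recent tokens is strongest, instead of relying on a goal stated once near the top. A minimal sketch, assuming a chat-style message list; the message shape and `recite_todo` name are illustrative only:

```python
def recite_todo(messages, todo_items):
    """Re-render the current plan and append it at the end of the context,
    pulling the global objective back into the model's recent attention."""
    body = "## todo.md\n" + "\n".join(
        f"- [{'x' if done else ' '}] {task}" for task, done in todo_items
    )
    return messages + [{"role": "user", "content": body}]

todos = [("Collect sources", True), ("Draft summary", False)]
ctx = recite_todo([], todos)
```

Repeating this after every few tool calls means a 50-step task never goes more than a few turns without restating where it is and what remains.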
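Group 6's error-retention practice is mostly a matter of what you refuse to delete. The sketch below (hypothetical names and result shape) keeps the failure record in context rather than scrubbing it before a retry, so the model sees the evidence and can shift away from the failing action:

```python
def record_result(context, action, result):
    """Append the outcome of an action to the context. Failures are
    retained verbatim rather than erased, so the model implicitly
    updates away from repeating the same mistake."""
    if result["ok"]:
        entry = f"{action} succeeded: {result['output']}"
    else:
        entry = f"{action} FAILED: {result['error']}"  # kept, not scrubbed
    return context + [entry]

ctx = record_result([], "shell_exec", {"ok": False, "error": "timeout after 30s"})
ctx = record_result(ctx, "shell_exec", {"ok": True, "output": "done"})
```

The tempting alternative, resetting state and silently retrying, hides exactly the signal the model needs in order to recover rather than loop.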
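One way to read Group 7's "structured variability" is to vary the surface form of serialized entries while preserving their content, so consecutive context items are not structurally identical templates for the model to imitate. The two templates and the seeded generator below are invented for illustration:

```python
import random

# Two hypothetical phrasings of the same observation; alternating between
# them breaks the rigid repetition that invites blind pattern-following.
TEMPLATES = ["{tool} -> {status}", "result of {tool}: {status}"]

def render_observation(obs, rng):
    """Render an observation with a randomly chosen template so a long run
    of similar steps does not produce byte-for-byte identical structure."""
    return rng.choice(TEMPLATES).format(**obs)

rng = random.Random(0)  # seeded for reproducibility in this sketch
lines = [render_observation({"tool": "browser_open", "status": "ok"}, rng)
         for _ in range(3)]
```

The variation is controlled and content-preserving: every rendering carries the same fields, only the phrasing shifts, which is enough to disrupt a monotonous rhythm without confusing the model.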
Manus's Ji Yichao: Lessons from Building Manus | Jinqiu Select
Jinqiu Select · 2025-07-19 05:00