Approaches for Managing Agent Memory
LangChain · 2025-12-18 17:53
Memory Updating Mechanisms for Agents
- Explicit memory updating involves directly instructing the agent to remember specific information, similar to how Claude Code works [2][5][6][29]
- Implicit memory updating occurs as the agent learns from natural interactions with users, whose preferences surface without explicit instructions [7][19][29]

Deep Agent CLI and Memory Management
- Deep agents have a configuration home directory containing an `agents.md` file that stores global memory, analogous to Claude Code's `CLAUDE.md` [3][4][6]
- The `agents.md` file is automatically loaded into the deep agent's system prompt, ensuring consistent access to memory [6]
- The deep agent CLI lets users add information to global memory with natural language commands, which update the `agents.md` file [5]

Implicit Memory Updating and Reflection
- Agents can reflect on past interactions (sessions or trajectories) to generate higher-level insights and update their memory [8][9][10][28]
- Reflection involves summarizing session logs (diaries) and using those summaries to refine and update the agent's memory [11][12]
- Access to session logs is essential for implicit memory updating; LangSmith can store and manage deep agent traces [13][14][15]

Practical Implementation and Workflow
- A utility can programmatically access threads and traces from LangSmith projects [21]
- The deep agent can be instructed to read interaction threads, identify user preferences, and update its global memory accordingly [24][25]
- Reflecting on historical threads lets the agent distill implicit preferences and add them to its global memory, improving future interactions (a hedged sketch of this loop follows below) [26][27][28]
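The reflection loop described above can be approximated outside the CLI. The sketch below assumes a hypothetical LangSmith project name, memory-file path, and prompt wording (none of which come from the source): it pulls recent traces with the LangSmith client, asks a chat model to distill durable preferences, and appends the result to the global memory file.

```python
"""Hedged sketch of reflection-based (implicit) memory updating."""
from pathlib import Path

from langchain.chat_models import init_chat_model
from langsmith import Client

# Hypothetical locations and names; adjust to your own setup.
MEMORY_FILE = Path.home() / ".deepagents" / "agents.md"
PROJECT = "deep-agent-cli"

# 1. Pull recent root traces (the session "diary") from LangSmith.
client = Client()
runs = client.list_runs(project_name=PROJECT, is_root=True, limit=20)
diary = "\n\n".join(f"INPUT: {r.inputs}\nOUTPUT: {r.outputs}" for r in runs)

# 2. Ask a chat model to distill durable user preferences from the diary.
model = init_chat_model("anthropic:claude-sonnet-4-5")  # model choice is illustrative
reflection = model.invoke(
    "Review these agent sessions and list durable user preferences worth "
    "remembering in future sessions:\n\n" + diary
)

# 3. Append the distilled insights to global memory; the deep agent loads
#    this file into its system prompt on the next run.
MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
with MEMORY_FILE.open("a") as f:
    f.write("\n## Reflected preferences\n" + str(reflection.content) + "\n")
```

Appending rather than rewriting keeps earlier memories intact; in the workflow described above, the deep agent itself performs this read-reflect-update step when instructed to review its threads.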
How Agents Use Context Engineering
LangChain · 2025-11-12 16:36
Context Engineering Principles for AI Agents
- The industry is seeing the length of tasks AI agents can complete double roughly every seven months [2]
- A key challenge is context rot, where model performance degrades as context grows, while cost and latency rise [3][4]
- Context engineering, which offloads, reduces, and isolates context, is crucial for managing context rot in AI agents [8][9][10]

Context Offloading
- Giving agents access to a file system lets them save and recall information during long-running tasks and across separate agent invocations [11][15][18]
- Offloading actions from tools to scripts on the file system expands the agent's action space while keeping the number of tools and instructions small [19][22]
- Progressive disclosure of actions, as with Claude skills, saves tokens by loading skill details only when they are needed [26][30]

Context Reduction
- Compaction, summarization, and filtering shrink context and keep oversized tool results from being passed to the language model [32][33][39]
- Manus compacts old tool results by saving them to a file and leaving a reference to that file in the message history (a sketch of this pattern follows below) [34]
- The deepagents package applies summarization once the context crosses a threshold of 170,000 tokens [38]

Context Isolation
- Context isolation, using separate context windows or sub-agents for individual tasks, helps manage context and improve performance [10][39][40]
- Sub-agents can share context with the parent agent, for example access to the same file system [42]

Tool Usage
- Agent harnesses often rely on a small set of general, atomic tools to save tokens and reduce decision-making complexity [44]
- Claude Code uses around a dozen tools, Manus uses fewer than 20, and the deep agent CLI uses 11 [24][25][44]
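To make the compaction idea concrete, here is a minimal sketch of offloading oversized tool results to the file system and keeping only a pointer plus a short preview in the message history. The threshold, directory, and message shape are illustrative assumptions, not the actual Manus or deepagents implementation.

```python
"""Hedged sketch of tool-result compaction (context reduction)."""
import json
import uuid
from pathlib import Path

OFFLOAD_DIR = Path("./tool_results")   # hypothetical offload location
MAX_TOOL_RESULT_CHARS = 4_000          # illustrative threshold


def append_tool_result(messages: list[dict], result: str, tool_name: str) -> None:
    """Append a tool result, offloading oversized payloads to the file system."""
    if len(result) <= MAX_TOOL_RESULT_CHARS:
        messages.append({"role": "tool", "name": tool_name, "content": result})
        return
    # Oversized: write the full payload to disk and keep only a file pointer
    # plus a short preview in the message history.
    OFFLOAD_DIR.mkdir(exist_ok=True)
    path = OFFLOAD_DIR / f"{tool_name}_{uuid.uuid4().hex[:8]}.txt"
    path.write_text(result)
    messages.append({
        "role": "tool",
        "name": tool_name,
        "content": json.dumps({
            "note": "result offloaded; read the file for the full output",
            "file": str(path),
            "preview": result[:500],
        }),
    })
```

Because the full payload stays on disk, the agent can re-read it later through its file tools, which relies on the same file-system access that context offloading provides.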