Stop Endless Back-and-Forth — Add Model Call Limits in LangChainJS
LangChain· 2025-11-18 16:30
Agent Capabilities & Problem
- LangChain aims to provide customer support agents capable of handling routine questions and escalating complex issues to human support [1][2]
- The industry faces challenges in preventing unproductive, lengthy conversations with AI agents, necessitating graceful escalation strategies [2][15]

Solution: Model Call Limit Middleware
- LangChain introduces a model call limit middleware to control the number of model calls an agent can make, triggering escalation when a threshold is reached [3][4]
- This middleware avoids complex conditional logic by setting limits on both the thread model count (the total conversation) and the run model count (a single invocation), which also effectively limits tool calls [3][5][6]
- The middleware uses "after model" and "after agent" hooks to track model call counts, resetting the run model count after each agent interaction [7]
- When the model call limit is reached, the middleware can either throw an error or end the conversation with a predefined AI message, providing a customizable escalation path [8][11]

Implementation & Example
- LangChain's example application demonstrates a customer support agent that answers questions about customer accounts and escalates when the model call limit is hit [8]
- The agent uses predefined customer data, tools for data interaction, and the model call limit middleware configured with a thread limit and a run limit, exemplified by a hard-coded limit of eight model calls [9][10]
- The demo shows how the agent initially answers customer queries but escalates to human support when the conversation becomes unproductive or exceeds the model call limit [11][12][13]

Benefits & Conclusion
- The model call limit middleware offers a reliable guardrail, preventing agents from overthinking and ensuring responsible escalation in real-world applications [14][15]
- LangChain encourages users to explore and combine various middlewares to enhance agent capabilities, providing a path to build more robust and stable AI agents [16]
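The counter mechanics described above can be sketched in plain TypeScript. This is a minimal illustration with hypothetical names and shapes; the actual LangChainJS middleware API differs. Two counters guard the agent: a thread-level count across the whole conversation and a run-level count per invocation, with the run count reset by the "after agent" hook.

```typescript
type LimitBehavior = "error" | "end";

interface ModelCallLimits {
  threadLimit: number; // max model calls across the whole conversation
  runLimit: number;    // max model calls within a single invocation
  onLimit: LimitBehavior;
  escalationMessage: string;
}

class ModelCallLimitMiddleware {
  private threadCalls = 0;
  private runCalls = 0;

  constructor(private limits: ModelCallLimits) {}

  // "after model" hook: called once per model call.
  afterModel(): { done: boolean; message?: string } {
    this.threadCalls++;
    this.runCalls++;
    const hit =
      this.threadCalls >= this.limits.threadLimit ||
      this.runCalls >= this.limits.runLimit;
    if (!hit) return { done: false };
    if (this.limits.onLimit === "error") {
      throw new Error("Model call limit reached");
    }
    // End gracefully with a predefined AI message (the escalation path).
    return { done: true, message: this.limits.escalationMessage };
  }

  // "after agent" hook: the run-level counter resets between invocations,
  // while the thread-level counter keeps accumulating.
  afterAgent(): void {
    this.runCalls = 0;
  }
}
```

A limit of, say, eight thread calls and three run calls means a single unproductive invocation escalates quickly, while the thread limit caps the conversation as a whole.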
LangChain Academy New Course: LangSmith Essentials
LangChain· 2025-11-13 17:24
I'm excited to announce the release of our latest LangChain Academy course, LangSmith Essentials. In this quickstart course, you'll learn to observe, evaluate, and deploy an AI agent in less than 30 minutes. Testing applications is an essential part of the development lifecycle, but LLM systems are non-deterministic, meaning we can't predict exactly what output a given input will produce. When you add multi-turn interactions and tool-calling agents into the mix, the process becomes even more complex and less ...
To-Do List Middleware (Python)
LangChain· 2025-11-13 17:01
Hey folks, it's Sydney from LangChain and I'm super excited to share our next middleware demo: the to-do list middleware. Did you know you're 42% more likely to achieve a goal if you write it down? Turns out agents benefit from the same thing: agents equipped with a to-do list often perform better when given complex tasks. In fact, you might have already seen this in action with coding agents like Claude Code that draft a to-do list and continuously update it throughout a conversation. First, l ...
Why Most AI Agents Fail — and How a Simple Todo List Fixes It
LangChain· 2025-11-13 17:01
Hi, this is Christian from LangChain. Most AI agents today don't think ahead. They just react one step at a time. And that's exactly why they sometimes get stuck, loop, hallucinate, or just burn money. But here's the twist: with just one piece of state, a simple to-do list, an agent can suddenly plan, execute reliably, and finish tasks like a professional. The to-do list middleware for LangChain agents will help you with exactly that. Today I will show you why planning can change everything and when it actually mak ...
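The "one piece of state" idea can be sketched as a small TypeScript structure (hypothetical shape, not the actual middleware's API): the agent drafts a list of steps up front, then works and checks off one step at a time, so the plan survives across turns.

```typescript
type TodoStatus = "pending" | "in_progress" | "done";

interface TodoItem {
  task: string;
  status: TodoStatus;
}

class TodoList {
  private items: TodoItem[] = [];

  // Draft the plan up front when a complex task arrives.
  plan(tasks: string[]): void {
    this.items = tasks.map((task) => ({ task, status: "pending" }));
  }

  // The agent works the first unfinished step, keeping the rest visible
  // so it does not lose the plan mid-conversation.
  next(): TodoItem | undefined {
    return this.items.find((i) => i.status !== "done");
  }

  complete(task: string): void {
    const item = this.items.find((i) => i.task === task);
    if (item) item.status = "done";
  }

  remaining(): number {
    return this.items.filter((i) => i.status !== "done").length;
  }
}
```

Keeping this list in the agent's state, and re-rendering it into the prompt each turn, is what lets the agent "plan, execute reliably, and finish" instead of reacting one step at a time.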
Execute code with sandboxes for Deep Agents
LangChain· 2025-11-13 16:21
Hey, I'm VC and in this video I'm excited to introduce sandboxes for deep agents. We're going to talk about what these are and why you might want to use them in developing your deep agents. So, a common setup is that your local machine is running your deep agent. And a common ask we hear is that you want to safely run the code your agent is generating, but you don't want to mess up the machine you're working on, because the agent could be generating arbitrary code ...
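The isolation idea can be illustrated in plain TypeScript: instead of evaluating agent-generated code inside the host process, hand it to a separate process with a timeout. This only shows the pattern; real sandboxes for deep agents add a much stronger boundary, such as a container or remote VM.

```typescript
import { spawnSync } from "node:child_process";

// Run agent-generated code in a child process rather than the host.
// The timeout kills runaway code; a nonzero exit is reported, not thrown.
function runInSubprocess(code: string, timeoutMs = 5000): { ok: boolean; output: string } {
  const result = spawnSync("node", ["-e", code], {
    encoding: "utf8",
    timeout: timeoutMs,
  });
  return {
    ok: result.status === 0,
    output: (result.stdout ?? "").trim(),
  };
}
```

Process isolation alone does not stop file or network access, which is why a real sandbox puts the child process on a machine (or container) you are willing to throw away.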
Add a Human-in-the-Loop to Your LangChain Agent (Next.js + TypeScript Tutorial)
LangChain· 2025-11-12 17:01
Core Concept
- Introduces the concept of "human-in-the-loop" middleware for LangChain agents, allowing human review and intervention in agent workflows [5][18]
- Explains the agent's reasoning loop (reason, act, observe) and how human intervention fits into it [3][5]
- Highlights the three decision types for human reviewers: approve, edit, and reject, and how these decisions guide the agent's subsequent actions [7]

Technical Implementation
- Demonstrates the integration of a LangChain agent with human-in-the-loop middleware in a Next.js application for sending emails [2][17]
- Emphasizes the importance of a checkpointer (using a Redis database) to store the agent's state and enable resuming the workflow after human intervention [13][14]
- Describes how the middleware intercepts tool calls (e.g., sending emails) and pauses the agent's execution, awaiting human input [5][6]

Benefits and Use Cases
- Positions human-in-the-loop as a way to combine agent autonomy with human oversight, especially for actions that carry risk or require judgment [18][19]
- Suggests use cases such as sending emails, updating records, or writing to external systems, where human review is valuable [19]
- Underscores the flexibility of the middleware, allowing customization of interruption logic based on tool name, arguments, or runtime context [19][20]

Practical Example
- Provides a practical example of using the middleware to let a human revise an email drafted by the agent before it is sent [2][16]
- Shows how to reject a proposed action and provide feedback to the agent, influencing its subsequent behavior [16]
- Mentions a publicly available repository (github.com/christian broman/lunghat) for users to experiment with the human-in-the-loop concept [20]
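The approve/edit/reject decision flow described above can be sketched in TypeScript (hypothetical types; the real middleware pairs this with a checkpointer so the agent can pause and resume). A pending tool call is held until a human decides, and the decision determines what flows back into the loop.

```typescript
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

type HumanDecision =
  | { type: "approve" }
  | { type: "edit"; args: Record<string, unknown> }
  | { type: "reject"; feedback: string };

type Resolution =
  | { action: "execute"; call: ToolCall }   // tool runs (as-is or revised)
  | { action: "feedback"; message: string }; // agent reasons again with feedback

function resolveToolCall(call: ToolCall, decision: HumanDecision): Resolution {
  switch (decision.type) {
    case "approve":
      return { action: "execute", call };
    case "edit":
      // The human revises the arguments, e.g. rewrites a drafted email.
      return { action: "execute", call: { ...call, args: decision.args } };
    case "reject":
      // Feedback goes back to the agent to influence its next step.
      return { action: "feedback", message: decision.feedback };
  }
}
```

In the real middleware the pause between intercepting the call and receiving the decision is where the checkpointer earns its keep: state is persisted so the workflow can resume minutes or days later.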
How Agents Use Context Engineering
LangChain· 2025-11-12 16:36
Context Engineering Principles for AI Agents
- The industry recognizes the increasing task length AI agents can perform, with task length doubling approximately every seven months [2]
- The industry faces challenges related to context rot, where performance degrades with longer context lengths, impacting cost and latency [3][4]
- Context engineering, involving offloading, reducing, and isolating context, is crucial for managing context rot in AI agents [8][9][10]

Context Offloading
- Giving agents access to a file system is beneficial for saving and recalling information during long-running tasks and across different agent invocations [11][15][18]
- Offloading actions from tools to scripts in a file system expands the agent's action space while minimizing the number of tools and instructions [19][22]
- Progressive disclosure of actions, such as with Claude skills, saves tokens by selectively loading skill information only when needed [26][30]

Context Reduction
- Compaction, summarization, and filtering are techniques used to reduce context size and prevent excessively large tool results from being passed to the language model [32][33][39]
- Manus compacts old tool results by saving them to a file and referencing the file in the message history [34]
- The deep agents package applies summarization after a threshold of 170,000 tokens [38]

Context Isolation
- Context isolation, using separate context windows or sub-agents for individual tasks, helps manage context and improve performance [10][39][40]
- Sub-agents can share context with the parent agent, such as access to the same file system [42]

Tool Usage
- Agent harnesses often employ a minimal number of general, atomic tools to save tokens and minimize decision-making complexity [44]
- Claude Code uses around a dozen tools, Manus uses fewer than 20, and the deep agent CLI uses 11 [24][25][44]
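The compaction technique in the Context Reduction list above can be sketched in a few lines of TypeScript (an illustration of the idea, not any package's API): once a tool result exceeds a size threshold, write it to a file and keep only a short reference in the message history. The agent can re-read the file later if it needs the details.

```typescript
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Replace an oversized tool result with a file reference, keeping the
// message history small while preserving access to the full content.
function compactToolResult(
  result: string,
  maxChars: number
): { text: string; path?: string } {
  if (result.length <= maxChars) return { text: result };
  const path = join(tmpdir(), `tool-result-${process.pid}-${Date.now()}.txt`);
  writeFileSync(path, result, "utf8");
  return { text: `[large result offloaded to ${path}]`, path };
}
```

The same threshold idea generalizes: summarization kicks in at a token budget (the deep agents package uses 170,000 tokens per the summary above), while compaction trades context size for a cheap file read when the detail is actually needed.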
Building a Typescript deep research agent
LangChain· 2025-11-06 18:30
Check this out. I just asked an agent to answer one of the world's greatest debates: is Messi or Ronaldo the greatest soccer player of all time? This isn't an easy question to answer, and it definitely requires a good amount of research. The agent automatically spawned two parallel sub-agents to look into each of their achievements. This meant searching the web over a dozen times and compiling a comprehensive report with cited sources. To be extra thorough, the agent then critiqued its own report and plugged any ...
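The parallel sub-agent fan-out described above reduces to a familiar pattern, sketched here with a stand-in `researchTopic` function (hypothetical; a real sub-agent would search the web and summarize with its own context window):

```typescript
// Stand-in for a sub-agent researching one subtopic in isolation.
async function researchTopic(topic: string): Promise<string> {
  return `findings on ${topic}`;
}

// The parent agent spawns one sub-agent per subtopic; they run
// concurrently, and the parent merges their findings into one report.
async function parallelResearch(topics: string[]): Promise<string[]> {
  return Promise.all(topics.map((t) => researchTopic(t)));
}
```

Because each sub-agent owns a separate context window, a dozen web searches per subtopic never bloat the parent's context; the parent sees only the finished findings.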
Build a Streaming LangChain Agent in Next.js with useStream
LangChain· 2025-11-06 17:45
Hi there, this is Christian from LangChain. Just a couple of weeks ago, we released version one of LangChain and LangGraph. And one of the cool features is that it makes it really easy to stream events and results from the agent down to any type of front end you're using, whether it's React, Vue, or Svelte. So, in this video, I want to build a little ChatGPT clone that shows you how you can build an agent right in your Next.js application. Every LangChain agent maintains a state throughout it ...
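The streaming idea can be sketched with an async generator (illustrative only; the real integration uses LangChain's streaming APIs on the server and the `useStream` hook on the React side): the agent yields events as it goes, and the front end consumes them incrementally instead of waiting for the final answer.

```typescript
type AgentEvent =
  | { type: "token"; text: string }  // partial output, rendered as it arrives
  | { type: "done"; answer: string }; // final assembled answer

// A toy "agent" that streams its answer token by token.
async function* streamAgent(tokens: string[]): AsyncGenerator<AgentEvent> {
  let answer = "";
  for (const text of tokens) {
    answer += text;
    yield { type: "token", text };
  }
  yield { type: "done", answer };
}
```

On the front end, a consumer loop (`for await ... of`) appends each token event to the visible message, which is essentially what a streaming chat UI does under the hood.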
Human in the Loop Middleware (Python)
LangChain· 2025-11-04 17:45
LangChain Middleware
- LangChain provides human-in-the-loop middleware for approving, editing, and rejecting tool calls before they execute [1]
- The middleware suits scenarios that require human feedback, such as an email assistant before it sends a sensitive email [1]

Use Case
- The example shows how to use the middleware to build an email assistant agent that requires human feedback before sending sensitive emails [1]

Resources
- More middleware documentation is available in the official LangChain docs [1]
- The example code is available as a Gist [1]