Production Monitoring for Agents
LangChain· 2026-03-26 19:33
Welcome to our webinar. We'll let people trickle in, but today we'll be talking about production monitoring for agents. This is the second webinar we've done recently, or at least that I've done recently. The first one was more focused on testing, evals, and debugging of agents. In this one, we're going to talk pretty heavily about monitoring agents in production, which we think is a bit of a different story and a different part of t ...
Observability and Evals for AI Agents: A Simple Breakdown
LangChain· 2026-02-17 16:30
Two of the most crucial things when building production agents are setting up proper observability and setting up proper evaluation. These are actually tied and coupled, and this is different from software engineering; the role that observability and evaluation play when building agents is different from their role in software engineering as well. So I want to talk a little bit about how we view observability and how it powers a lot of agent evaluation. Maybe starting by briefly highlighting some of the things that we t ...
The Protocol Paradox | Olivier Ribaux | TEDxEcublens
TEDx Talks· 2026-02-03 16:34
Here is the scene of a particularly striking event, one that calls for specialists to explain what happened. Imagine you had to intervene. You would go looking for the traces of this event. For that, you have technologies, procedures, more or less specialized knowledge, and your experience of situations encountered before. Research on this kind of intervention converges on a particularly surprising result. The people who intervene in ...
Context Engineering Our Way to Long-Horizon Agents: LangChain’s Harrison Chase
Sequoia Capital· 2026-01-21 13:01
People use traces from the start just to tell what's going on under the hood. And it's way more impactful in agents than in single-LLM applications, because in a single-LLM application, when you get a bad response from the LLM, you know exactly what your prompt is and exactly what context goes in, because that's determined by code, and then you get something out. In agents, they're running and repeating, so you don't actually know what the context at step 14 will be, because there's 13 ...
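The point above is that an agent's context at step N depends on the outputs of all prior steps, so it can't be known from the code alone. A minimal sketch of per-step tracing, using a toy agent loop and a hypothetical `AgentTrace` recorder (not LangChain's actual API):

```python
# Hypothetical illustration: record the exact context each agent step saw,
# so "what was the context at step 14?" can be answered after the fact.
from dataclasses import dataclass, field

@dataclass
class StepTrace:
    step: int
    context: str   # the full context this step actually received
    output: str    # what the model produced from that context

@dataclass
class AgentTrace:
    steps: list = field(default_factory=list)

    def record(self, step, context, output):
        self.steps.append(StepTrace(step, context, output))

def run_agent(task, model, max_steps=3):
    """Toy agent loop: each step's context includes all prior outputs,
    which is why the context at step N can't be predicted from code alone."""
    trace = AgentTrace()
    context = task
    for i in range(1, max_steps + 1):
        output = model(context)            # stand-in for an LLM call
        trace.record(i, context, output)   # capture context *before* it grows
        context = context + "\n" + output  # context accumulates step by step
    return trace

# Usage: a fake "model" that reports how much context it was given.
trace = run_agent("summarize the logs",
                  lambda ctx: f"step output ({len(ctx)} chars in)")
print(trace.steps[2].context)  # inspect exactly what step 3 saw
```

Real tracing systems capture much more (tool calls, latencies, token counts), but the core idea is the same: snapshot the inputs to every step, not just the final answer.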