Tracing

Tracing Claude Code to LangSmith
LangChain· 2025-08-06 14:32
Setup and Configuration
- Setting up tracing from Claude Code to LangSmith requires creating a LangSmith account and generating an API key [1]
- Enabling telemetry for Claude Code involves setting the `CLAUDE_CODE_ENABLE_TELEMETRY` environment variable to 1 [3]
- Configuring the OTLP (OpenTelemetry Protocol) exporter with HTTP transport and JSON encoding is necessary for LangSmith ingestion [4]
- The LangSmith Cloud endpoint needs to be specified for logs from Claude Code, or a self-hosted instance URL if applicable [5]
- Setting the API key in the headers handles authentication with LangSmith, along with specifying a tracing project [5]
- Logging of user prompts and inputs is enabled by setting the appropriate environment variable to true [6]

Monitoring and Observability
- LangSmith collects and displays events from Claude Code, providing detailed logs of Claude Code sessions [3]
- Traces in LangSmith show individual actions performed by Claude Code, including model names, token usage, and latency [8]
- Claude Code sends the cost associated with each request to LangSmith [8]
- LangSmith's waterfall view groups runs by timestamp, showing the sequence of user prompts and Claude Code actions [13]
- LangSmith provides pre-built dashboards for monitoring overall usage, including the total number of traces, token usage, and costs over time [14]
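The setup steps above can be sketched as a shell configuration. This is a minimal sketch based on Claude Code's documented OpenTelemetry variables; the endpoint path, header names, and the `claude-code` project name are assumptions to substitute with your own values.

```shell
# Assumed values for illustration -- replace the API key and project
# name with your own before launching Claude Code.
export CLAUDE_CODE_ENABLE_TELEMETRY=1          # turn on telemetry export [3]
export OTEL_LOGS_EXPORTER=otlp                 # ship events via OTLP
export OTEL_EXPORTER_OTLP_PROTOCOL=http/json   # HTTP transport, JSON encoding [4]
# LangSmith Cloud ingestion endpoint; use your own URL if self-hosted [5]
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.smith.langchain.com/otel"
# API key for authentication plus the tracing project to log into [5]
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-langsmith-api-key>,Langsmith-Project=claude-code"
export OTEL_LOG_USER_PROMPTS=1                 # include user prompt content in logs [6]
```

With these exported in the shell that launches Claude Code, its events are sent to the configured LangSmith project.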
n8n Tracing to LangSmith
LangChain· 2025-08-05 14:30
Hey there. Today you're going to learn how you can set up tracing from n8n to LangSmith in just a few minutes. n8n is an AI workflow builder. It has this really nice interface where I can quickly string together a few nodes into a simple AI agent. You can start by testing out these workflows directly in this canvas. And then once you're happy with it, you can also set up external triggers to execute these workflows automatically. Now, whether you're just getting started building or if you already have an age ...
X @Avi Chawla
Avi Chawla· 2025-06-30 19:06
LLM Application Evaluation
- DeepEval enables component-level evaluation and tracing of LLM applications, addressing the need to identify issues within retrievers, tool calls, or the LLM itself [1]
- The `@observe` decorator allows tracing of individual LLM components like tools, retrievers, and generators [2]
- Metrics can be attached to each component for detailed analysis [2]
- DeepEval provides a visual breakdown of component performance [2]

Open Source and Data Control
- DeepEval is 100% open-source with over 8,500 stars [2]
- Users can self-host DeepEval to maintain control over their data [2]

Ease of Use
- Implementing DeepEval requires only 3 lines of code [1]
- No refactoring of existing code is needed [1]
X @Avi Chawla
Avi Chawla· 2025-06-30 06:33
Core Functionality
- DeepEval provides open-source tracing for LLM applications using a Python decorator, `@observe` [1]
- The solution enables component-level evaluations of LLM apps, addressing issues within retrievers, tool calls, or the LLM itself [1]
- It allows attaching different metrics to each component of the LLM application [1]
- DeepEval offers a visual breakdown of the performance of each component [1]

Open Source and Hosting
- DeepEval is 100% open-source with over 8,500 stars [2]
- The solution can be self-hosted, ensuring data privacy [2]
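To make the component-level tracing idea concrete, here is a self-contained stand-in for this style of `@observe` decorator; it is a simplified illustration of the pattern, not DeepEval's actual implementation, and the names (`observe`, `TRACE`) are assumptions.

```python
import functools
import time

TRACE = []  # collected spans, one dict per traced component call


def observe(component: str):
    """Simplified stand-in for a component-level tracing decorator."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            # Record inputs, output, and latency for this component,
            # which is what lets you pinpoint a failing retriever vs. LLM.
            TRACE.append({
                "component": component,
                "name": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "latency_s": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator


@observe("retriever")
def retrieve(query):
    return [f"doc about {query}"]


@observe("llm")
def generate(query, docs):
    return f"answer to {query!r} using {len(docs)} docs"


answer = generate("tracing", retrieve("tracing"))
```

Each call appends a span to `TRACE`, so after running the pipeline you can inspect per-component inputs, outputs, and latency; metrics can then be attached to individual spans.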
Getting Started with LangSmith (1/7): Tracing
LangChain· 2025-06-25 00:47
LangSmith Platform Overview
- LangSmith is an observability and evaluation platform for AI applications, focused on tracing application behavior [1]
- The platform uses tracing projects to collect logs from applications, with each project corresponding to an application [2]
- LangSmith is framework agnostic, designed to monitor AI applications regardless of the underlying stack [5]

Tracing and Monitoring AI Applications
- Tracing is enabled by setting environment variables for LangSmith tracing, the LangSmith endpoint, and the API key [6]
- The `traceable` decorator is added to functions to enable tracing within the application [8]
- LangSmith provides a detailed breakdown of each step within the application, known as the run tree, showing inputs, outputs, and telemetry [12][14]
- Telemetry includes token cost and latency of each step, visualized through a waterfall view to identify latency sources [14][15]

Integration with LangChain and LangGraph
- LangChain and LangGraph, LangChain's open-source libraries, work with LangSmith out of the box, simplifying tracing setup [17]
- When using LangGraph or LangChain, the `traceable` decorator is not required, streamlining tracing [17]
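The environment variables described above can be sketched as follows; this assumes the `LANGSMITH_*` variable names from LangSmith's documentation, and the project name `my-first-project` is a placeholder.

```shell
# Assumed variable names; replace the key with your own LangSmith API key.
export LANGSMITH_TRACING=true                            # turn tracing on
export LANGSMITH_ENDPOINT="https://api.smith.langchain.com"  # LangSmith Cloud
export LANGSMITH_API_KEY="<your-langsmith-api-key>"      # authentication
export LANGSMITH_PROJECT="my-first-project"              # tracing project to log runs into
```

With these set, functions decorated with `@traceable` (from the `langsmith` SDK) log their runs to the named project; LangChain and LangGraph applications are traced automatically without the decorator.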