Tracing Claude Code to LangSmith
LangChain· 2025-08-06 14:32
Setup and Configuration
- Setting up tracing from Claude Code to LangSmith requires creating a LangSmith account and generating an API key [1]
- Enabling telemetry for Claude Code involves setting the `CLAUDE_CODE_ENABLE_TELEMETRY` environment variable to 1 [3]
- Configuring the OTLP (OpenTelemetry Protocol) exporter with HTTP transport and JSON encoding is necessary for LangSmith ingestion [4]
- The LangSmith Cloud endpoint needs to be specified for logs from Claude Code, or a self-hosted instance URL if applicable [5]
- Setting the API key in the headers allows authentication and connection to LangSmith, along with specifying a tracing project [5]
- Logging of user prompts and inputs is enabled by setting the appropriate environment variable to true [6]

Monitoring and Observability
- LangSmith collects and displays events from Claude Code, providing detailed logs of Claude Code sessions [3]
- Traces in LangSmith show individual actions performed by Claude Code, including model names, token usage, and latency [8]
- Claude Code sends the cost information associated with each request to LangSmith [8]
- LangSmith's waterfall view groups runs by timestamp, showing the sequence of user prompts and Claude Code actions [13]
- LangSmith provides pre-built dashboards for monitoring general usage, including the total number of traces, token usage, and costs over time [14]
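The setup steps above boil down to a handful of environment variables. A minimal sketch, assuming the variable names documented for Claude Code telemetry and LangSmith's OTLP ingestion endpoint; the exact endpoint path, header names, and project name here should be verified against the current LangSmith docs:

```shell
# Enable telemetry in Claude Code (note the CLAUDE_CODE_ prefix)
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Export logs over OTLP using HTTP transport with JSON encoding
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/json

# LangSmith Cloud endpoint for logs (swap in your self-hosted URL if applicable)
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.smith.langchain.com/otel"

# Authenticate with your API key and choose a tracing project via headers
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-api-key>,Langsmith-Project=claude-code"

# Opt in to logging user prompts and inputs
export OTEL_LOG_USER_PROMPTS=true
```

With these set, each Claude Code session emits OTLP log records that LangSmith ingests into the configured tracing project.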
LangChain: another AI unicorn in the making, with Sequoia as a shareholder
Hua Er Jie Jian Wen· 2025-07-09 02:36
Core Insights
- The valuation of AI infrastructure startup LangChain has reached approximately $1 billion following a new funding round led by IVP, marking its entry into the unicorn club [1]
- LangChain's valuation has significantly increased from $200 million during its Series A funding led by Sequoia Capital in 2023, driven by the commercial success of its product LangSmith [1][2]
- LangSmith has generated annual recurring revenue between $12 million and $16 million since its launch last year, with notable clients including Klarna, Rippling, and Replit [1]

Company Development
- LangChain originated as an open-source project created by Harrison Chase in late 2022; Chase was previously an engineer at Robust Intelligence [2]
- The project gained significant developer interest, leading to its transformation into a commercial entity, securing $10 million in seed funding in April 2023, followed by $25 million in Series A funding [2]
- The open-source code addressed the lack of real-time information access in early large language models, providing a framework for building applications on LLMs [2]

Competitive Landscape
- The rapid evolution of the large language model ecosystem has intensified competition for LangChain, with rivals like LlamaIndex, Haystack, and AutoGPT offering similar functionalities [3]
- Major model providers such as OpenAI, Anthropic, and Google have begun to offer comparable features, which were previously LangChain's core differentiators [3]
- In response, LangChain launched the closed-source product LangSmith, focusing on observability, evaluation, and monitoring of large language model applications [3]

Product Strategy
- LangSmith has emerged as a key driver of revenue growth for LangChain, adopting a freemium model where developers can use basic features for free, with a subscription fee of $39 per month for small team collaboration [3]
- The company also offers customized solutions for larger organizations, further expanding its market reach [3]
Trending: Prompt is no longer the focus of AI, the new hot topic is Context Engineering
Jiqizhixin (Machine Heart)· 2025-07-03 08:01
Core Viewpoint
- The article emphasizes the importance of "Context Engineering" as a systematic approach to optimizing the input provided to Large Language Models (LLMs) for better output generation [3][11]

Summary by Sections

Introduction to Context Engineering
- The article highlights the recent popularity of "Context Engineering," with notable endorsements from figures like Andrej Karpathy and its trending status on platforms like Hacker News and Zhihu [1][2]

Understanding LLMs
- LLMs should not be anthropomorphized; they are intelligent text generators without beliefs or intentions [4]
- LLMs function as general, uncertain functions that generate new text based on provided context [5][6][7]
- They are stateless, requiring all relevant background information with each input to maintain context [8]

Focus of Context Engineering
- The focus is on optimizing input rather than altering the model itself, aiming to construct the most effective input text to guide the model's output [9]

Context Engineering vs. Prompt Engineering
- Context Engineering is a more systematic approach than the previously popular "Prompt Engineering," which relied on finding a single perfect command [10][11]
- The goal is to create an automated system that prepares comprehensive input for the model, rather than issuing isolated commands [13][17]

Core Elements of Context Engineering
- Context Engineering involves building a "super input" toolbox, using techniques such as Retrieval-Augmented Generation (RAG) and intelligent agents [15][19]
- The primary objective is to deliver the most effective information in the appropriate format at the right time to the model [16]

Practical Methodology
- The process of using LLMs is likened to scientific experimentation, requiring systematic testing rather than guesswork [23]
- The methodology consists of two main steps: planning from the end goal backward and constructing from the beginning forward [24][25]
- The final output should be clearly defined, and the necessary input information must be identified to create a "raw material package" for the system [26]

Implementation Steps
- The article outlines a rigorous process for building and testing the system, ensuring each component functions correctly before final assembly [30]
- Specific testing phases include verifying data interfaces, search functionality, and the assembly of final inputs [30]

Additional Resources
- For more detailed practices, the article references LangChain's latest blog and video, which cover the mainstream methods of Context Engineering [29]
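The "construct from the beginning forward" assembly described above can be sketched as a small function that packs retrieved documents, conversation history, and the task instruction into a single input for a stateless model. This is an illustrative stdlib-only sketch; the function and section names are invented for the example, not taken from the article:

```python
def build_context(instruction: str, documents: list[str], history: list[str]) -> str:
    """Assemble a 'super input' for a stateless LLM: because the model keeps
    no state between calls, every relevant piece must be packed in each time."""
    parts = []
    if documents:
        # Retrieved material (e.g. from a RAG pipeline) goes in first
        parts.append("## Reference documents\n" + "\n---\n".join(documents))
    if history:
        # Prior turns must be replayed explicitly on every call
        parts.append("## Conversation so far\n" + "\n".join(history))
    # The actual task comes last, closest to where generation begins
    parts.append("## Task\n" + instruction)
    return "\n\n".join(parts)

context = build_context(
    instruction="Summarize the pricing change.",
    documents=["LangSmith team plan: $39/month."],
    history=["User: What does LangSmith cost?"],
)
```

In a real system, each component feeding this function (retrieval, history storage, formatting) would be tested separately before final assembly, as the article's implementation steps suggest.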
LangChain Academy New Course: Building Ambient Agents with LangGraph
LangChain· 2025-06-26 15:38
Our latest LangChain Academy course – Building Ambient Agents with LangGraph – is now available! Most agents today handle one request at a time through chat interfaces. But as models have improved, agents can now run in the background – and take on long-running, complex tasks. LangGraph is built for these “ambient agents,” with support for human-in-the-loop workflows and memory. LangGraph Platform provides the infrastructure to run these agents at scale, and LangSmith helps you observe, evaluate, and improve ...
Getting Started with LangSmith (5/6): Automations & Online Evaluation
LangChain· 2025-06-25 01:12
Automations & Online Evaluations Overview
- Automations are configurable rules applied to every trace in production applications [1]
- Online evaluations, a type of automation, measure application output metrics on live user interactions [1][5]

Automation Configuration
- Automations can be configured with a name, filters that define which runs to execute on, and a sampling rate [3]
- The sampling rate lets automations run on a subset of traces, which is especially useful for expensive evaluations [3][4]
- Actions include adding traces to annotation queues or datasets, applying evaluators, and adding feedback [4]

Online Evaluations
- Online evaluations use an LLM as a judge or custom code evaluators on traces without reference outputs [5]
- Feedback added by online evaluators is visible in the feedback column and in individual trace views [11][12]

Additional Automation Features
- Automations can trigger webhooks for workflows such as creating Jira tickets for trace errors [6]
- PagerDuty can be configured for alerting flows [6]
- Automations can extend the default 14-day trace retention period by adding feedback or adding traces to a dataset [7]

Example Use Case: Simplicity Evaluation
- An online evaluator assesses whether a chatbot's answer is simple enough for children, scoring from 1 to 10 [7][8]
- A second automation samples traces with high simplicity scores and adds them to an annotation queue for review [9]
- Rules that add feedback to a trace will send the trace back through other automations [10]
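A custom code evaluator for the simplicity use case might look something like the following. The heuristic (average word and sentence length) and the 1–10 scale mapping are illustrative assumptions for a stdlib-only sketch, not LangSmith's implementation; the video's example actually uses an LLM as a judge:

```python
def simplicity_score(answer: str) -> int:
    """Score 1-10: shorter words and shorter sentences read as simpler,
    a crude proxy for 'simple enough for children'."""
    words = answer.split()
    if not words:
        return 1
    avg_word_len = sum(len(w) for w in words) / len(words)
    sentences = [s for s in answer.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    # Penalize long words and long sentences, then clamp the result to 1..10
    raw = 10 - (avg_word_len - 4) - (avg_sentence_len - 8) / 4
    return max(1, min(10, round(raw)))

simple = simplicity_score("Dogs are fun. They like to play.")
complex_ = simplicity_score(
    "Anthropomorphization notwithstanding, canines demonstrably exhibit ludic behavioral repertoires."
)
```

An automation would then apply an evaluator like this to sampled traces and attach the returned score as feedback, which downstream rules (such as the annotation-queue sampler above) can filter on.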
Getting Started with LangSmith (4/6): Annotation Queues
LangChain· 2025-06-25 01:09
Resources & Tools
- The Eli5 codebase is hosted on GitHub, making it easy for developers to access and contribute [1]
- LangSmith offers a free trial to help users get started quickly [1]
- LangSmith provides thorough documentation for reference and learning [1]

LangChain Ecosystem
- LangChain encourages users to learn about LangSmith at langchain.com [1]
- LangChain promotes LangSmith through YouTube and other social media channels [1]
- LangSmith itself is available at smith.langchain.com [1]
Getting Started with LangSmith (3/6): Datasets & Evaluations
LangChain· 2025-06-25 01:05
Resources & Tools
- The Eli5 codebase is on GitHub: https://github.com/xuro-langchain/eli5 [1]
- LangSmith free trial: https://smith.langchain.com/ [1]
- LangSmith documentation: https://docs.smith.langchain.com/ [1]

LangChain Platform
- LangSmith platform details: https://www.langchain.com/langsmith/?utm_medium=social&utm_source=youtube&utm_campaign=q2-2025_onboarding-videos_co [1]
Getting Started with LangSmith (2/6): Playground & Prompts
LangChain· 2025-06-25 00:55
Core Features of LangSmith for Prompt Engineering
- LangSmith offers a prompt playground for modifying and testing LLM prompts, accessible via the left-hand navigation or from individual traces containing LLM calls [2][3][4]
- The platform includes a prompt hub for saving and versioning LLM prompts, facilitating collaboration and managing frequently changing prompts [6][7]
- LangSmith provides a prompt canvas, which uses an LLM agent to help optimize prompts, useful for refining wording and targeting specific sections of a prompt [15][16]

Workflow and Application
- Users can import existing prompts and outputs from traces into the playground to iterate on and refine the prompt based on actual application behavior [4]
- The prompt hub allows users to save prompts with input variables, making them more flexible and reusable across different contexts [7][8]
- Saved prompts can be accessed via code snippets, enabling dynamic pulling of prompts from the prompt hub into applications and avoiding hardcoding [10][11]
- Specific versions or commits of prompts can be used in applications by specifying the commit hash when pulling from the prompt hub [18]

Optimization and Version Control
- The prompt canvas can rewrite prompts to achieve specific goals, such as returning responses in a different language, and can be constrained to modify only selected sections [16][17]
- The platform supports version control, allowing users to track changes and revert to previous versions of prompts as needed [9][13]
Getting Started with LangSmith (1/7): Tracing
LangChain· 2025-06-25 00:47
LangSmith Platform Overview
- LangSmith is an observability and evaluation platform for AI applications, focusing on tracing application behavior [1]
- The platform uses tracing projects to collect logs associated with applications, with each project corresponding to an application [2]
- LangSmith is framework agnostic, designed to monitor AI applications regardless of how they are built [5]

Tracing and Monitoring AI Applications
- Tracing is enabled by setting environment variables, including the LangSmith tracing flag, the LangSmith endpoint, and the API key [6]
- The traceable decorator is added to functions to enable tracing within the application [8]
- LangSmith provides a detailed breakdown of each step within the application, known as the run tree, showing inputs, outputs, and telemetry [12][14]
- Telemetry includes token cost and latency of each step, visualized through a waterfall view to identify latency sources [14][15]

Integration with LangChain and LangGraph
- LangChain and LangGraph, LangChain's open-source libraries, work out of the box with LangSmith, simplifying tracing setup [17]
- When using LangGraph or LangChain, the traceable decorator is not required, streamlining the tracing process [17]
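In current SDK naming, the environment variables referred to above look roughly like this. This is a sketch; the variable names should be checked against the LangSmith docs, since older SDK versions used a LANGCHAIN_ prefix (e.g. LANGCHAIN_TRACING_V2):

```shell
# Turn on tracing for an app instrumented with the LangSmith SDK
export LANGSMITH_TRACING=true

# Where traces are sent (LangSmith Cloud by default; change if self-hosted)
export LANGSMITH_ENDPOINT="https://api.smith.langchain.com"

# Authentication, plus the tracing project that runs are grouped under
export LANGSMITH_API_KEY="<your-api-key>"
export LANGSMITH_PROJECT="my-app"
```

With these exported, wrapping a function in the traceable decorator is enough for its calls to show up as runs in the configured project.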
Cisco TAC’s GenAI Transformation: Building Enterprise Support Agents with LangSmith and LangGraph
LangChain· 2025-06-23 15:30
My name is John Gutsinger. I work for Cisco. I'm a principal engineer and I work in the Technical Assistance Center, or TAC for short. Really, I'm focused on AI engineering, agentic engineering in the space of customer support. We've been doing AI/ML for, you know, a couple of years now, maybe five or six years. Really, it started with trying to figure out how we handle these mass-scale issue-type problems, right, where some trending issue is going to pop up and we know we're going to have tens of thousands ...