Getting Started with LangSmith (5/6): Automations & Online Evaluation
LangChain · 2025-06-25 01:12
Automations & Online Evaluations Overview
- Automations are configurable rules applied to every trace in production applications [1]
- Online evaluations, a type of automation, measure application output metrics on live user interactions [1][5]

Automation Configuration
- Automations are configured with a name, filters that define which runs they execute on, and a sampling rate [3]
- The sampling rate lets an automation run on only a subset of matching traces, which is especially useful for expensive evaluations [3][4]
- Actions include adding traces to annotation queues or datasets, applying evaluators, and adding feedback [4]

Online Evaluations
- Online evaluations apply an LLM-as-judge or custom code evaluator to traces that have no reference outputs [5]
- Feedback added by online evaluators appears in the feedback column and in individual trace views [11][12]

Additional Automation Features
- Automations can trigger webhooks for workflows such as creating Jira tickets for trace errors [6]
- PagerDuty can be configured for alerting flows [6]
- Automations can extend the default 14-day trace retention period by adding feedback to a trace or adding it to a dataset [7]

Example Use Case: Simplicity Evaluation
- An online evaluator scores whether a chatbot's answer is simple enough for children, on a scale of 1 to 10 [7][8]
- A second automation samples traces with high simplicity scores and adds them to an annotation queue for human review [9]
- Rules that add feedback to a trace send the trace back through the other automations [10]
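The sampling-rate idea described above can be sketched in a few lines. This is an illustrative model of the behavior, not LangSmith's internal implementation: each matching trace is independently included with probability equal to the sampling rate, so a rate of 0.1 sends roughly 10% of traces to an expensive evaluator.

```python
import random


def should_run_automation(sampling_rate: float) -> bool:
    """Decide whether an automation fires for a given matching trace.

    A sampling rate of 0.1 means roughly 10% of matching traces are
    processed -- useful for keeping LLM-as-judge evaluation costs down.
    """
    return random.random() < sampling_rate


# Of 1,000 matching traces, roughly 100 would be sampled at rate 0.1.
sampled = [t for t in range(1000) if should_run_automation(0.1)]
```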
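The simplicity-evaluation example can also be sketched as a minimal LLM-as-judge evaluator. Everything here is hypothetical: `call_llm` is a stand-in for a real model call, `JUDGE_PROMPT` is an illustrative prompt, and the key/score return shape merely mirrors how online evaluators attach feedback to a trace; it is not LangSmith's actual evaluator API.

```python
# Hypothetical prompt asking a judge model to rate simplicity 1-10.
JUDGE_PROMPT = (
    "On a scale of 1 to 10, how simple is the following answer "
    "for a child to understand? Reply with only the number.\n\n"
    "Answer: {answer}"
)


def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a chat model here.
    return "8"


def simplicity_evaluator(run_output: str) -> dict:
    """Score a trace's output and return feedback in key/score form,
    mirroring how online evaluators attach feedback to a trace."""
    raw = call_llm(JUDGE_PROMPT.format(answer=run_output))
    return {"key": "simplicity", "score": int(raw)}
```

Because the evaluator writes feedback back onto the trace, a second automation can filter on a high `simplicity` score and route those traces into an annotation queue, as described above.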
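The webhook action mentioned above could feed a handler like the following. This is a sketch under assumptions: the payload shape (`runs` with `id` and `error` fields) is invented for illustration and is not LangSmith's actual webhook schema, and a real handler would forward the summaries to Jira's API rather than return them.

```python
import json


def handle_webhook(payload_json: str) -> str:
    """Turn error traces from a (hypothetical) webhook payload into
    ticket summaries, one per failed run."""
    payload = json.loads(payload_json)
    summaries = []
    for run in payload.get("runs", []):
        if run.get("error"):
            summaries.append(f"Trace {run['id']} failed: {run['error']}")
    return "\n".join(summaries)
```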