From Quora to Poe: Adam D'Angelo on Building Platforms for LLMs and Agents | LangChain Interrupt
LangChain· 2025-06-27 16:44
AI Platform & Business Model
- Poe gives users subscription-based access to a range of language models and agents [1]
- Poe's bot creators earn millions of dollars per year [1]
- Reasoning models are driving growth [1]

Consumer AI Usage
- The conversation reveals surprising patterns in how consumers use AI [1]

AI Development Challenges
- Building products in a rapidly changing AI landscape poses unique challenges [1]
- Planning cycles have shrunk from several years to just two months [1]
LangChain Academy New Course: Building Ambient Agents with LangGraph
LangChain· 2025-06-26 15:38
Our latest LangChain Academy course – Building Ambient Agents with LangGraph – is now available! Most agents today handle one request at a time through chat interfaces. But as models have improved, agents can now run in the background – and take on long-running, complex tasks. LangGraph is built for these “ambient agents,” with support for human-in-the-loop workflows and memory. LangGraph Platform provides the infrastructure to run these agents at scale, and LangSmith helps you observe, evaluate, and improve ...
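For readers new to the pattern, here is a minimal, hedged sketch of the human-in-the-loop idea the course covers: pause a run for a person and resume it later. It assumes current `langgraph` APIs (`interrupt`, `Command`, `MemorySaver`); the node names and state fields are illustrative, not taken from the course.

```python
# Sketch only: a graph that drafts a reply, pauses for human approval, then resumes.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

class State(TypedDict):
    draft: str
    approved: bool

def write_draft(state: State) -> dict:
    # Stand-in for an LLM step; illustrative content.
    return {"draft": "Proposed reply to the customer..."}

def human_review(state: State) -> dict:
    # Pause the run; whatever is passed to Command(resume=...) is returned here.
    decision = interrupt({"draft": state["draft"], "question": "Approve this reply?"})
    return {"approved": bool(decision)}

builder = StateGraph(State)
builder.add_node("write_draft", write_draft)
builder.add_node("human_review", human_review)
builder.add_edge(START, "write_draft")
builder.add_edge("write_draft", "human_review")
builder.add_edge("human_review", END)

# A checkpointer is required so the paused run can be resumed later.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "ticket-42"}}  # hypothetical thread id

graph.invoke({"draft": "", "approved": False}, config)  # runs until the interrupt
graph.invoke(Command(resume=True), config)              # a human approves; run continues
```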
Getting Started with LangSmith (5/6): Automations & Online Evaluation
LangChain· 2025-06-25 01:12
Automations & Online Evaluations Overview
- Automations are configurable rules applied to every trace in production applications [1]
- Online evaluations, a type of automation, measure application output metrics on live user interactions [1][5]

Automation Configuration
- Automations can be configured with a name, filters to define which runs to execute on, and a sampling rate [3]
- The sampling rate lets you run an automation on only a subset of traces, which is especially useful for expensive evaluations [3][4]
- Actions include adding traces to annotation queues or datasets, applying evaluators, and adding feedback [4]

Online Evaluations
- Online evaluations use an LLM as a judge or custom code evaluators on traces without reference outputs [5]
- Feedback added by online evaluators is visible in the feedback column and in individual trace views [11][12]

Additional Automation Features
- Automations can trigger webhooks for workflows such as creating Jira tickets for trace errors [6]
- PagerDuty can be configured for alerting flows [6]
- Automations can extend the default 14-day trace retention period by adding feedback or adding traces to a dataset [7]

Example Use Case: Simplicity Evaluation
- An online evaluator assesses whether a chatbot's answer is simple enough for children, scoring from 1 to 10 (see the sketch after this list) [7][8]
- A second automation samples traces with high simplicity scores and adds them to an annotation queue for review [9]
- Rules that add feedback to a trace send the trace back through the other automations [10]
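To make the simplicity example concrete, here is a hedged sketch of the kind of LLM-as-judge logic such an evaluator performs: rate an answer from 1 to 10 and attach the score to the trace as feedback. In LangSmith itself this is configured as an automation rule rather than hand-rolled code; the judge model, prompt, and feedback key below are illustrative assumptions.

```python
# Sketch only: score a traced answer's simplicity and record it as feedback on the run.
from langsmith import Client
from openai import OpenAI

ls_client = Client()   # reads LANGSMITH_API_KEY from the environment
judge = OpenAI()       # reads OPENAI_API_KEY from the environment

def score_simplicity(run_id: str, answer: str) -> int:
    """Ask a judge model to rate the answer's simplicity from 1 to 10."""
    resp = judge.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of judge model
        messages=[{
            "role": "user",
            "content": (
                "Rate from 1 to 10 how simple this answer is for a child. "
                "Reply with only the number.\n\n" + answer
            ),
        }],
    )
    score = int(resp.choices[0].message.content.strip())
    # The feedback appears in the trace's feedback column, like an online evaluator's output.
    ls_client.create_feedback(run_id=run_id, key="simplicity", score=score)
    return score
```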
Getting Started with LangSmith (4/6): Annotation Queues
LangChain· 2025-06-25 01:09
Resources & Tools
- The Eli5 code repository is hosted on GitHub, making it easy for developers to access and contribute [1]
- LangSmith offers a free trial to help users get started quickly [1]
- LangSmith provides thorough documentation for reference and learning [1]

LangChain Ecosystem
- LangChain encourages users to learn about LangSmith at langchain.com [1]
- LangChain promotes LangSmith through YouTube and other social media channels [1]
- LangSmith is available at smith.langchain.com [1]
Getting Started with LangSmith (3/6): Datasets & Evaluations
LangChain· 2025-06-25 01:05
Resources & Tools
- Eli5 code repository on GitHub: https://github.com/xuro-langchain/eli5 [1]
- LangSmith free trial: https://smith.langchain.com/ [1]
- LangSmith documentation: https://docs.smith.langchain.com/ [1]

LangChain Platform
- LangSmith platform details: https://www.langchain.com/langsmith/?utm_medium=social&utm_source=youtube&utm_campaign=q2-2025_onboarding-videos_co [1]
Getting Started with LangSmith (2/6): Playground & Prompts
LangChain· 2025-06-25 00:55
Core Features of LangSmith for Prompt Engineering
- LangSmith offers a prompt playground for modifying and testing LLM prompts, accessible via the left-hand navigation or from individual traces containing LLM calls [2][3][4]
- The platform includes a prompt hub for saving and versioning LLM prompts, facilitating collaboration and managing frequently changing prompts [6][7]
- LangSmith provides a prompt canvas, which uses an LLM agent to help optimize prompts, useful for refining wording and targeting specific sections of a prompt [15][16]

Workflow and Application
- Users can import existing prompts and outputs from traces into the playground to iterate on a prompt based on actual application behavior [4]
- The prompt hub allows users to save prompts with input variables, making them more flexible and reusable across different contexts [7][8]
- Saved prompts can be accessed via code snippets, enabling applications to pull prompts dynamically from the prompt hub instead of hardcoding them (see the sketch below) [10][11]
- Specific versions or commits of prompts can be used in applications by specifying the commit hash when pulling from the prompt hub [18]

Optimization and Version Control
- The prompt canvas can rewrite prompts to achieve specific goals, such as returning responses in a different language, and can be constrained to modify only selected sections [16][17]
- The platform supports version control, allowing users to track changes and revert to previous versions of prompts as needed [9][13]
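As an illustration of pulling a saved prompt instead of hardcoding it, here is a small sketch using the LangChain hub client; the prompt name and commit hash below are placeholders, not real artifacts.

```python
# Sketch only: pull a prompt from the prompt hub, optionally pinned to a commit.
from langchain import hub

# Latest version of the prompt (hypothetical name)
prompt = hub.pull("my-team/eli5-answer")

# Pin a specific commit so later edits in the hub do not change application behavior
pinned = hub.pull("my-team/eli5-answer:0abc1234")  # hypothetical commit hash

# Saved prompts keep their input variables, so they can be formatted directly;
# "question" is assumed to be an input variable of this hypothetical prompt.
messages = pinned.invoke({"question": "What is tracing?"})
```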
Getting Started with LangSmith (1/7): Tracing
LangChain· 2025-06-25 00:47
LangSmith Platform Overview
- LangSmith is an observability and evaluation platform for AI applications, focused on tracing application behavior [1]
- The platform uses tracing projects to collect logs associated with applications, with each project corresponding to an application [2]
- LangSmith is framework agnostic, designed to monitor AI applications regardless of how they are built [5]

Tracing and Monitoring AI Applications
- Tracing is enabled by setting environment variables, including LangSmith tracing, the LangSmith endpoint, and an API key (see the sketch below) [6]
- The traceable decorator is added to functions to enable tracing within the application [8]
- LangSmith provides a detailed breakdown of each step within the application, known as the run tree, showing inputs, outputs, and telemetry [12][14]
- Telemetry includes token cost and latency of each step, visualized through a waterfall view to identify latency sources [14][15]

Integration with LangChain and LangGraph
- LangChain and LangGraph, LangChain's open-source libraries, work with LangSmith out of the box, simplifying tracing setup [17]
- When using LangGraph or LangChain, the traceable decorator is not required, streamlining the tracing process [17]
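A minimal sketch of the setup described above, assuming the `langsmith` Python SDK and the `LANGSMITH_*` environment variable names; the project name and API key are placeholders.

```python
# Sketch only: enable tracing via environment variables and the traceable decorator.
import os

os.environ["LANGSMITH_TRACING"] = "true"                       # turn tracing on
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"             # placeholder
os.environ["LANGSMITH_PROJECT"] = "my-tracing-project"         # hypothetical project name

from langsmith import traceable

@traceable  # each call to this function is logged as a run in the tracing project
def answer(question: str) -> str:
    # Stand-in for an LLM call; a real app would invoke a model here.
    return f"You asked: {question}"

answer("What does the run tree show?")
```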
How Rakuten AI for Business Builds Production-Ready Agents with LangGraph
LangChain· 2025-06-24 16:30
AI Platform & Solutions
- Rakuten is building AI products to empower employees and customers, including Rakuten AI for Business to support business clients in essential operations [1]
- Rakuten has built an internal generative AI platform designed for more than 70 businesses across Japan and beyond [1]
- Rakuten's agentic workflows are powered by LangGraph, enabling employees to create and share AI agents with minimal coding, aiming to democratize AI [2]

Challenges & Solutions
- Before using LangGraph and LangSmith, Rakuten struggled with evaluating new models and tools, as well as implementing and testing new agent architectures [2]
- LangSmith provided a structured way to test new approaches, improving decision-making beyond intuition, including pairwise A/B testing and accuracy analysis (e.g., from 70% to 80%) [3]

Benefits of LangGraph & LangSmith
- LangGraph provides an intuitive debugging experience, saving engineering time and becoming the go-to tool for building production-ready agents [4]
- LangGraph helps avoid vendor lock-in by making it easy to swap models and keep everything in one ecosystem across teams, improving coordination [4]
- Using LangChain's tools, Rakuten has achieved faster time to market, iterating and releasing new AI features faster than competitors [4][5]
- Reusable evaluation templates, faster debugging, and easier deployment flows have enabled the engineering team to test and deliver more features [5]

Strategic Vision
- Rakuten views LangGraph, LangSmith, and the LangChain ecosystem as a foundation for innovation, allowing it to move faster and stay flexible in a fast-changing landscape [6]
Cisco TAC’s GenAI Transformation: Building Enterprise Support Agents with LangSmith and LangGraph
LangChain· 2025-06-23 15:30
My name is John Gutsinger. I work for Cisco. I'm a principal engineer, and I work in the Technical Assistance Center, or TAC for short. Really, I'm focused on AI engineering, agentic engineering, in the space of customer support. We've been doing AI/ML for a couple of years now, maybe five or six years. Really, it started with trying to figure out how we handle these mass-scale issue-type problems, right, where some trending issue is going to pop up and we know we're going to have tens of thousands ...
How Pigment Built an AI-Powered Business Planning Platform with LangGraph
LangChain· 2025-06-20 15:30
Pigment's Business and Technology
- Pigment is an enterprise planning and performance management platform that helps companies build strategic plans and adapt to changing market conditions [1]
- Pigment AI consists of conversational AI and autonomous agents that accelerate insight generation and scenario creation across the organization [2]
- Pigment's autonomous agents framework allows users to schedule and automate reports and scenario creation, saving hundreds of hours of manual work [3]

Challenges with the Previous AI Architecture
- Linear chain pipelines limited flexibility and made experimentation with agent-based workflows complex and cumbersome [4]
- Managing graphs, memory, state transitions, and interruptions for custom agents was too complex [5]
- Strong control over tools and agents, simple state management, and asynchronous processing were critical needs for financial use cases [5]

Benefits of LangGraph
- LangGraph offers graph-based orchestration, long-term memory, streaming, and interrupt capabilities (see the sketch after this list) [6]
- Graph orchestration is easy to set up, allowing agent iteration and collaboration to be defined and tweaked easily [6]
- Full visibility and control over message flow between agents enables building reliable and testable logic [7]
- Agent topologies can be abstracted into configuration files, enabling rapid prototyping and deployment of new workflows [7]

Impact of LangGraph
- Reduced time to insight from hours to seconds using natural language search and agent analysis [8]
- Faster decision-making by surfacing anomalies and key performance gaps in real time [8]
- Users can focus on higher-value work by automating routine analysis and planning tasks [9]
- The engineering team has more time to experiment and innovate, focusing on higher-impact features [9]
- Significantly less time is spent implementing key capabilities like persistent, long-term memory [9]
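To ground the orchestration points above, here is a hedged sketch of a small two-node LangGraph graph with streaming; the node names and state fields are illustrative, not Pigment's actual implementation.

```python
# Sketch only: graph-based orchestration with two nodes and streamed updates.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class PlanningState(TypedDict):
    question: str
    data_summary: str
    insight: str

def gather_data(state: PlanningState) -> dict:
    # Stand-in for querying planning data; a real node would call internal tools.
    return {"data_summary": f"Metrics relevant to: {state['question']}"}

def analyze(state: PlanningState) -> dict:
    # Stand-in for an LLM analysis step over the gathered data.
    return {"insight": f"Key gap found in {state['data_summary']}"}

builder = StateGraph(PlanningState)
builder.add_node("gather_data", gather_data)
builder.add_node("analyze", analyze)
builder.add_edge(START, "gather_data")
builder.add_edge("gather_data", "analyze")
builder.add_edge("analyze", END)
graph = builder.compile()

# Streaming surfaces each node's update as it completes, rather than waiting
# for the whole run to finish.
for update in graph.stream({"question": "Where are we off plan this quarter?"}):
    print(update)
```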