Getting Started with LangSmith (2/6): Playground & Prompts
LangChain· 2025-06-25 00:55
Core Features of LangSmith for Prompt Engineering
- LangSmith offers a prompt playground for modifying and testing LLM prompts, accessible via the left-hand navigation or from individual traces containing LLM calls [2][3][4]
- The platform includes a prompt hub for saving and versioning LLM prompts, facilitating collaboration and managing frequently changing prompts [6][7]
- LangSmith provides a prompt canvas, which uses an LLM agent to help optimize prompts, useful for refining wording and targeting specific sections of a prompt [15][16]

Workflow and Application
- Users can import existing prompts and outputs from traces into the playground to iterate on and refine prompts based on actual application behavior [4]
- The prompt hub allows users to save prompts with input variables, making them more flexible and reusable across different contexts [7][8]
- Saved prompts can be accessed via code snippets, letting applications pull prompts dynamically from the prompt hub instead of hardcoding them (see the sketch after this summary) [10][11]
- Specific versions or commits of prompts can be used in applications by specifying the commit hash when pulling from the prompt hub [18]

Optimization and Version Control
- The prompt canvas can rewrite prompts to achieve specific goals, such as returning responses in a different language, and can be constrained to modify only selected sections [16][17]
- The platform supports version control, allowing users to track changes and revert to previous versions of prompts as needed [9][13]
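A minimal sketch of pulling a saved prompt from the prompt hub with the LangSmith Python SDK, assuming the API key is set in the environment; the prompt name "my-email-prompt", its input variable, and the commit hash are hypothetical placeholders.

```python
from langsmith import Client  # pip install langsmith langchain-core

client = Client()  # reads the LangSmith API key from the environment

# Pull the latest version of a saved prompt from the prompt hub.
prompt = client.pull_prompt("my-email-prompt")

# Pin a specific version by appending its commit hash after a colon.
pinned = client.pull_prompt("my-email-prompt:abc1234")

# The pulled object is a prompt template; fill its input variables at runtime.
messages = pinned.invoke({"topic": "quarterly results"})
```

Pulling by name keeps the prompt out of application code, so edits made in the hub take effect without a redeploy, while pinning a commit hash keeps production on a known version.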
Getting Started with LangSmith (1/7): Tracing
LangChain· 2025-06-25 00:47
LangSmith Platform Overview
- LangSmith is an observability and evaluation platform for AI applications, focusing on tracing application behavior [1]
- The platform uses tracing projects to collect logs associated with applications, with each project corresponding to an application [2]
- LangSmith is framework agnostic, designed to monitor AI applications regardless of how they are built [5]

Tracing and Monitoring AI Applications
- Tracing is enabled by setting environment variables, including the LangSmith tracing flag, the LangSmith endpoint, and the API key [6]
- The traceable decorator is added to functions to enable tracing within the application (see the sketch after this summary) [8]
- LangSmith provides a detailed breakdown of each step within the application, known as the run tree, showing inputs, outputs, and telemetry [12][14]
- Telemetry includes token cost and latency for each step, visualized through a waterfall view to identify latency sources [14][15]

Integration with LangChain and LangGraph
- LangChain and LangGraph, LangChain's open-source libraries, work with LangSmith out of the box, simplifying tracing setup [17]
- When using LangGraph or LangChain, the traceable decorator is not required, streamlining the tracing process [17]
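A minimal sketch of the tracing setup described above, assuming an OpenAI client for the LLM call; the project name and environment variable values are placeholders, and only the `traceable` decorator comes from the LangSmith SDK.

```python
# Environment variables mentioned in the video (values are placeholders):
#   LANGSMITH_TRACING=true
#   LANGSMITH_ENDPOINT=https://api.smith.langchain.com
#   LANGSMITH_API_KEY=<your-api-key>
#   LANGSMITH_PROJECT=my-tracing-project   # hypothetical tracing project name

from langsmith import traceable
from openai import OpenAI  # any LLM client works; LangSmith is framework agnostic

llm = OpenAI()

@traceable  # each call to this function becomes a run in the project's run tree
def answer(question: str) -> str:
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

answer("What does LangSmith trace?")
```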
How Rakuten AI for Business AI Builds Production-Ready Agents with LangGraph
LangChain· 2025-06-24 16:30
So my name is Yuk Kaji and I'm leading product and engineering at Rakuten as a general manager for AI for Business. At Rakuten we are building a suite of AI products that empower both our employees and our customers. We have built Rakuten AI for Business to support our business clients in essential business operations, from market analysis to customer support. In addition, we have built our internal generative AI platform, designed for over 70 businesses across Japan and beyond ...
Cisco TAC’s GenAI Transformation: Building Enterprise Support Agents with LangSmith and LangGraph
LangChain· 2025-06-23 15:30
My name is John Gutsinger. I work for Cisco. I'm a principal engineer and I work in the Technical Assistance Center, or TAC for short. Really I'm focused on AI engineering, agentic engineering in the space of customer support. We've been doing AI/ML for a couple of years now, maybe five or six years. Really it started with trying to figure out how we handle these mass-scale issue-type problems, where some trending issue is going to pop up and we know we're going to have tens of thousands ...
How Pigment Built an AI-Powered Business Planning Platform with LangGraph
LangChain· 2025-06-20 15:30
Pigment's Business and Technology
- Pigment is an enterprise planning and performance management platform that helps companies build strategic plans and adapt to changing market conditions [1]
- Pigment AI consists of conversational AI and autonomous agents that accelerate insight generation and scenario creation across the organization [2]
- Pigment's autonomous agents framework allows users to schedule and automate reports and scenario creation, saving hundreds of hours of manual work [3]

Challenges with Previous AI Architecture
- Linear chain pipelines limited flexibility and made experimentation with agent-based workflows complex and cumbersome [4]
- Managing graphs, memory, state transitions, and interruptions for custom agents was too complex [5]
- Strong control over tools and agents, simple state management, and asynchronous processing were critical needs for financial use cases [5]

Benefits of LangGraph
- LangGraph offers graph-based orchestration, long-term memory, streaming, and interrupt capabilities (see the sketch after this summary) [6]
- Graph orchestration is easy to set up, allowing straightforward definition and tweaking of how agents iterate and collaborate [6]
- Full visibility and control over message flow between agents enables building reliable and testable logic [7]
- Agent topologies can be abstracted into configuration files, enabling rapid prototyping and deployment of new workflows [7]

Impact of LangGraph
- Reduced time to insight from hours to seconds using natural language search and agent analysis [8]
- Faster decision-making by surfacing anomalies and key performance gaps in real time [8]
- Users can focus on higher-value work by automating routine analysis and planning tasks [9]
- The engineering team has more time to experiment and innovate, focusing on higher-impact features [9]
- Significantly less time is spent implementing capabilities like persistent, long-term memory [9]
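A minimal sketch of the graph-based orchestration LangGraph provides, using a two-node report workflow; the state schema, node names, and placeholder logic are illustrative assumptions, not Pigment's actual agents.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ReportState(TypedDict):
    request: str
    analysis: str
    report: str

def analyze(state: ReportState) -> dict:
    # Placeholder for an agent that inspects planning data for anomalies.
    return {"analysis": f"anomalies found for: {state['request']}"}

def write_report(state: ReportState) -> dict:
    # Placeholder for an agent that turns the analysis into a report.
    return {"report": f"Report\n------\n{state['analysis']}"}

builder = StateGraph(ReportState)
builder.add_node("analyze", analyze)
builder.add_node("write_report", write_report)
builder.add_edge(START, "analyze")
builder.add_edge("analyze", "write_report")
builder.add_edge("write_report", END)

graph = builder.compile()  # a checkpointer can be passed here to persist state
print(graph.invoke({"request": "Q3 revenue vs plan"}))
```

Passing a checkpointer to `compile()` is how LangGraph persists state between runs for long-term memory, and interrupts can pause the graph mid-flow for review.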
Factory Co-Founder & CTO on Building Reliable AI Agents | LangChain Interrupt
LangChain· 2025-06-18 18:40
Core Idea
- Factory believes software development is transitioning from human-driven to agent-driven [1]
- To achieve significant productivity gains (5-20x), a shift from collaborating with AI to delegating tasks entirely to AI is needed [3]
- Factory is building a platform for managing and scaling AI agents, integrating various engineering systems [3][4][5]

Agentic System Characteristics
- Agentic systems require planning to decide future actions [11]
- Decision-making is crucial for agents to make calls based on the existing state [13][14]
- Environmental grounding is necessary for agents to interact with and adapt to the external environment [14]

Human-AI Collaboration
- Humans will remain in software development, focusing on the outer loop (reasoning, requirements) [15][16]
- Agents will handle the inner loop (coding, testing, code review) [17]
- AI UX should blend delegation with control for situations where agents cannot complete tasks [17]

Agent Reliability
- Clear planning and boundaries are essential for reliable agents [32]
- Subtask decomposition, model predictive control, and explicit plan templating can improve planning (see the sketch after this summary) [19][20]
- Control over the tools agents use is the most important differentiator in agent reliability [28]

Environmental Interaction
- New AI-computer interfaces are needed for agents to interact with the world [28]
- Processing information from the environment is crucial for complex systems [29][30]
- Agents need to ground themselves in the environment to perform full software development work [32]

Call to Action
- Factory encourages teams not yet delegating at least 50% of engineering tasks to AI agents to engage with them [34]
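A minimal sketch of what explicit plan templating with subtask decomposition can look like; the `PlanTemplate` structure and the `execute_with_agent` hook are illustrative assumptions, not Factory's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    description: str
    done: bool = False

@dataclass
class PlanTemplate:
    """Explicit plan the agent fills in before acting and checks off as it executes."""
    goal: str
    subtasks: list[Subtask] = field(default_factory=list)

    def next_subtask(self) -> Subtask | None:
        # Return the first unfinished subtask, or None when the plan is complete.
        return next((t for t in self.subtasks if not t.done), None)

# A planning step decomposes the goal into ordered subtasks up front,
# which bounds what the agent is allowed to do on each iteration.
plan = PlanTemplate(
    goal="Add retry logic to the payment client",
    subtasks=[
        Subtask("Locate the payment client module"),
        Subtask("Write a failing test for transient errors"),
        Subtask("Implement exponential backoff"),
        Subtask("Run the test suite and open a PR"),
    ],
)

while (task := plan.next_subtask()) is not None:
    # execute_with_agent(task.description)  # hypothetical call into the agent runtime
    task.done = True
```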
No Code LangSmith Evaluations
LangChain· 2025-06-18 15:10
LangChain Agent Evaluation
- LangChain lowers the barrier to agent evaluation, making it easy even for non-developers [1]
- LangGraph Studio adds the ability to quickly evaluate LangGraph agents [3]
- Users can select a dataset in LangGraph Studio and launch an evaluation experiment [3][4]
- Evaluation results can be viewed in LangSmith, including model outputs and evaluation scores [5]

Evaluation Importance and Accessibility
- Evaluation is essential for building effective agents [7]
- Traditional evaluation places high demands on developers, requiring familiarity with the SDK, Pytest, and the Evaluate API [7]
- LangChain aims to provide a no-code way for anyone to evaluate LangGraph agents [8]
- Non-technical users can evaluate model choices, prompts, and similar settings based on intuition [9]

Configuration and Customization
- Users can easily switch graph configurations in the Studio interface and launch evaluations from them [9]
- Developers can set up datasets containing input topics and reference outputs in advance (see the sketch after this summary) [10]
- Evaluators can be bound to datasets, with customizable evaluation criteria and scoring rules [11][12][13]
- Users can modify the graph configuration (such as the model or prompt) in Studio and launch new evaluation experiments [15][16][17]
- Studio provides a no-code configuration approach for fast iteration [18]
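A minimal sketch of pre-building such a dataset with the LangSmith Python SDK; the dataset name, input topics, and reference outputs are hypothetical examples.

```python
from langsmith import Client

client = Client()  # reads the LangSmith API key from the environment

# "agent-eval-topics" is a hypothetical dataset name.
dataset = client.create_dataset(
    dataset_name="agent-eval-topics",
    description="Input topics and reference outputs for no-code agent evaluation",
)

# Each example pairs an input topic with the reference output an evaluator compares against.
client.create_examples(
    dataset_id=dataset.id,
    inputs=[{"topic": "LangGraph memory"}, {"topic": "prompt versioning"}],
    outputs=[
        {"reference": "Explains checkpointers and long-term memory."},
        {"reference": "Explains the prompt hub and commit hashes."},
    ],
)
```

Once the dataset exists, non-technical users can pick it in LangGraph Studio and launch experiments without touching code.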
Vizient’s Healthcare AI Platform: Scaling LLM Queries with LangSmith and LangGraph
LangChain· 2025-06-18 15:01
Company Overview
- Vizient serves 97% of academic medical centers in the US, over 69% of acute care hospitals, and more than 35% of the ambulatory market [1]
- Vizient is developing a generative AI platform to improve healthcare providers' data access and analysis [2]

Challenges Before LangGraph and LangSmith
- Scaling LLM queries using Azure OpenAI ran into token limit issues, impacting performance [3]
- Limited visibility into system performance made it difficult to track token usage, prompt efficiency, and reliability [3]
- Continuous testing was not feasible, leading to reactive problem-solving [4]
- Multi-agent architecture introduced complexity, requiring better orchestration [4]
- Lack of observability tools early on resulted in technical debt [4]

Impact of Integrating LangGraph and LangSmith
- Gained the ability to accurately estimate token usage, enabling proper capacity provisioning in Azure OpenAI [5]
- Real-time insights into system performance enabled faster issue diagnosis and resolution [6]
- LangGraph provided structure and orchestration for multi-agent workflows [6]
- Resolved LLM rate limiting issues by optimizing token usage and throughput allocation [7]
- Development and debugging processes became significantly faster [8]
- A shift to automated continuous testing dramatically improved system quality and reliability [8]
- Beta user feedback could be rapidly turned into actionable improvements [8]

Recommendations
- Start with a slim proof of concept and model one high-impact user flow in LangGraph [9]
- Integrate with LangSmith from day one and treat every run as a data point [9]
- Define a handful of golden query-response pairs upfront and use them for acceptance testing (see the sketch after this summary) [9]
- Budget a short weekly review of LangSmith's run history [9]
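A minimal sketch of acceptance testing against golden query-response pairs with the LangSmith evaluation API; the dataset name, pipeline stub, and matching rule are hypothetical assumptions rather than Vizient's setup.

```python
from langsmith.evaluation import evaluate

# Assumes a LangSmith dataset named "golden-queries" already holds the golden
# query/response pairs (inputs: {"query": ...}, outputs: {"answer": ...}).

def run_pipeline(inputs: dict) -> dict:
    # Placeholder for invoking the real LangGraph workflow.
    return {"answer": f"stub answer for: {inputs['query']}"}

def matches_golden(run, example) -> dict:
    # Crude acceptance check: the golden answer text should appear in the output.
    expected = example.outputs["answer"].lower()
    got = run.outputs["answer"].lower()
    return {"key": "matches_golden", "score": int(expected in got)}

results = evaluate(
    run_pipeline,
    data="golden-queries",
    evaluators=[matches_golden],
    experiment_prefix="acceptance",
)
```

Running this on every change turns the golden pairs into a regression gate, and each experiment appears in LangSmith alongside the run history recommended for weekly review.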
Morningstar’s AI Assistant "Mo": Saving 30% of Analysts' Time Spent on Research with LangGraph
LangChain· 2025-06-17 15:00
I'm Isis Julian and I'm a senior software engineer at Morningstar, where I work on the intelligence engine. Morningstar is a global leader in providing investment research, data, and analysis, and we pride ourselves on empowering investor success by serving transparent, accessible, and reliable investment information. So, with AI gaining increasing popularity toward the end of 2022 and early 2023, with a scrappy team of just five engineers, we were able to launch our first ever AI research assistant na ...
Why LLM Data Processing Pipelines Fail: UC Berkeley Research Insights | LangChain Interrupt
LangChain· 2025-06-16 17:36
Hey everyone, my name is Shreya. I am finishing up my PhD at UC Berkeley, so that's quite exciting for me. And I'm here to give you a different kind of talk. This is about research, what we're learning through research, and how to help people build reliable LLM pipelines. Just to give a picture of the kind of research that we do at Berkeley: this is around data processing agents. What do I mean by data processing? Organizations have lots of unstructured data documents that they want to extract and analyze, ...