Open Deep Research
LangChain · 2025-07-16 16:01
Hi there. Today you're going to learn all about the LangChain deep research agent and how you can use it as a starting point for your projects. It's highly configurable, allows you to add your own MCP servers, and is open source, so you can tailor it to your own specific use cases. Let's see how it works. So later this year, my roommates and I want to take a trip to Amsterdam and Norway. We want to leave New York on September 12th and get back on the following Sunday. I want to ask Deep Research if it can he ...
LangGraph Assistants: Building Configurable AI Agents
LangChain · 2025-07-02 14:45
Imagine you've built a perfect agent for your blog-writing team. Now your social media team wants to use it, but they need different prompts, different models, and different tools. Modifying your underlying code for each use case is not only time-consuming but also prone to errors. This creates two distinct problems: developers get stuck in constant code-changing cycles that slow down iteration, while business teams can't experiment without engineering support. That's where LangGraph Assistants come in. Tod ...
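The "one agent, many configurations" idea from this video can be sketched in plain Python: the agent logic is written once, and each team gets its own assistant by supplying configuration overrides rather than editing code. This is a hedged illustration of the pattern only; the function and config names here are hypothetical, not the LangGraph Assistants API.

```python
# Illustrative sketch: shared agent code, per-team configuration.
# All names (DEFAULT_CONFIG, make_assistant) are made up for this example.

DEFAULT_CONFIG = {
    "system_prompt": "You are a helpful blog-writing assistant.",
    "model": "gpt-4o-mini",
    "tools": ["web_search"],
}

def make_assistant(overrides=None):
    """Build an assistant from the shared defaults plus team-specific overrides."""
    config = {**DEFAULT_CONFIG, **(overrides or {})}

    def run(user_input):
        # A real agent would call an LLM here; this stub just shows that the
        # same code path picks up different prompts/models per assistant.
        return f"[{config['model']}] {config['system_prompt']} :: {user_input}"

    return run

# The blog team uses the defaults; the social team only changes config.
blog_assistant = make_assistant()
social_assistant = make_assistant({
    "system_prompt": "You write punchy social media posts.",
    "model": "gpt-4o",
})
```

The point of the design is that iteration happens in configuration, so non-engineering teams can experiment without touching the underlying agent code.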
Vizient’s Healthcare AI Platform: Scaling LLM Queries with LangSmith and LangGraph
LangChain · 2025-06-18 15:01
Company Overview
- Vizient serves 97% of academic medical centers in the US, over 69% of acute care hospitals, and more than 35% of the ambulatory market [1]
- Vizient is developing a generative AI platform to improve healthcare providers' data access and analysis [2]

Challenges Before LangGraph and LangSmith
- Scaling LLM queries using Azure OpenAI faced token limit issues, impacting performance [3]
- Limited visibility into system performance made it difficult to track token usage, prompt efficiency, and reliability [3]
- Continuous testing was not feasible, leading to reactive problem-solving [4]
- Multi-agent architecture introduced complexity, requiring better orchestration [4]
- Lack of observability tools early on resulted in technical debt [4]

Impact of Integrating LangGraph and LangSmith
- Gained the ability to accurately estimate token usage, enabling proper capacity provisioning in Azure OpenAI [5]
- Real-time insights into system performance facilitated faster issue diagnosis and resolution [6]
- LangGraph provided structure and orchestration for multi-agent workflows [6]
- Resolved LLM rate limiting issues by optimizing token usage and throughput allocation [7]
- Development and debugging processes became significantly faster [8]
- The shift to automated continuous testing dramatically improved system quality and reliability [8]
- Beta user feedback could rapidly be turned into actionable improvements [8]

Recommendations
- Start with a slim proof of concept and model one high-impact user flow in LangGraph [9]
- Integrate with LangSmith from day one and treat every run as a data point [9]
- Define a handful of golden query/response pairs upfront and use them for acceptance testing [9]
- Budget a short weekly review of LangSmith's run history [9]
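The recommendation to define golden query/response pairs for acceptance testing can be sketched as a tiny harness like the one below. This is an assumption-laden illustration: the agent here is a stub standing in for the real LangGraph workflow, and the helper names and sample pairs are invented for the example.

```python
# Hedged sketch of golden-pair acceptance testing.
# GOLDEN_PAIRS, run_acceptance, and stub_agent are all illustrative names.

GOLDEN_PAIRS = [
    ("who does Vizient serve?", "97% of academic medical centers"),
    ("how many acute care hospitals?", "over 69% of acute care hospitals"),
]

def run_acceptance(agent, pairs):
    """Check each golden query; count answers containing the reference text."""
    passed = failed = 0
    for query, expected in pairs:
        answer = agent(query)
        if expected.lower() in answer.lower():
            passed += 1
        else:
            failed += 1
    return passed, failed

def stub_agent(query):
    # Placeholder for the real agent: looks the answer up in the golden set.
    knowledge = dict(GOLDEN_PAIRS)
    return knowledge.get(query, "unknown")

# run_acceptance(stub_agent, GOLDEN_PAIRS) -> (2, 0)
```

Treating every run as a data point, as the recommendations suggest, means wiring a harness like this into continuous testing so regressions surface before users see them.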