LangGraph

Trending: Prompts Are No Longer the Focus in AI; the New Hot Topic Is Context Engineering
机器之心· 2025-07-03 08:01
Core Viewpoint
- The article emphasizes "Context Engineering" as a systematic approach to optimizing the input provided to Large Language Models (LLMs) for better output generation [3][11]

Summary by Sections

Introduction to Context Engineering
- The article highlights the recent popularity of "Context Engineering", with notable endorsements from figures like Andrej Karpathy and its trending status on platforms like Hacker News and Zhihu [1][2]

Understanding LLMs
- LLMs should not be anthropomorphized; they are capable text generators without beliefs or intentions [4]
- LLMs function as general, non-deterministic functions that generate new text based on the provided context [5][6][7]
- They are stateless, so all relevant background information must be supplied with each input to maintain context [8]

Focus of Context Engineering
- The focus is on optimizing the input rather than altering the model itself, aiming to construct the most effective input text to guide the model's output [9]

Context Engineering vs. Prompt Engineering
- Context Engineering is a more systematic approach than the previously popular "Prompt Engineering", which relied on finding a single perfect command [10][11]
- The goal is to create an automated system that prepares comprehensive input for the model, rather than issuing isolated commands [13][17]

Core Elements of Context Engineering
- Context Engineering involves building a "super input" toolbox, drawing on techniques such as Retrieval-Augmented Generation (RAG) and intelligent agents (see the sketch after this summary) [15][19]
- The primary objective is to deliver the most effective information, in the appropriate format, at the right time to the model [16]

Practical Methodology
- Using LLMs is likened to scientific experimentation, requiring systematic testing rather than guesswork [23]
- The methodology consists of two main steps: planning backward from the end goal and constructing forward from the beginning [24][25]
- The final output should be clearly defined, and the necessary input information identified to create a "raw material package" for the system [26]

Implementation Steps
- The article outlines a rigorous process for building and testing the system, ensuring each component functions correctly before final assembly [30]
- Specific testing phases include verifying data interfaces, search functionality, and the assembly of final inputs [30]

Additional Resources
- For more detailed practices, the article points to LangChain's latest blog and video, which cover the mainstream methods of Context Engineering [29]
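The article itself stays at the level of principles, but the assembly step it describes (gather instructions, retrieved references, and history, then pack them into one budgeted input) can be sketched in plain Python. The function names, token heuristic, and budget below are illustrative assumptions, not details from the article:

```python
# A minimal sketch of the "assemble the super input" idea: instead of hand-tuning
# a single prompt, a small pipeline gathers instructions, retrieved documents,
# and recent history, then packs them into one input under a token budget.
# The token counter and field names are hypothetical stand-ins.

from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    instructions: str                                     # fixed task description
    retrieved: list[str] = field(default_factory=list)    # RAG snippets
    history: list[str] = field(default_factory=list)      # prior turns (a stateless model needs them every call)

def rough_token_count(text: str) -> int:
    # crude stand-in for a real tokenizer: roughly 4 characters per token
    return max(1, len(text) // 4)

def assemble_context(bundle: ContextBundle, question: str, budget: int = 3000) -> str:
    """Pack instructions, retrieved snippets, and history into one input string,
    dropping lower-priority pieces first when over budget."""
    parts = [f"# Instructions\n{bundle.instructions}"]
    for snippet in bundle.retrieved:
        parts.append(f"# Reference\n{snippet}")
    for turn in bundle.history[-6:]:        # keep only the most recent turns
        parts.append(f"# History\n{turn}")
    parts.append(f"# Question\n{question}")

    # trim from the middle (references/history) until the whole input fits the budget
    while sum(rough_token_count(p) for p in parts) > budget and len(parts) > 2:
        parts.pop(1)
    return "\n\n".join(parts)

if __name__ == "__main__":
    bundle = ContextBundle(
        instructions="Answer using only the provided references.",
        retrieved=["LangGraph supports human-in-the-loop workflows."],
        history=["User asked about ambient agents yesterday."],
    )
    print(assemble_context(bundle, "What is an ambient agent?"))
```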
LangChain Academy New Course: Building Ambient Agents with LangGraph
LangChain· 2025-06-26 15:38
Our latest LangChain Academy course – Building Ambient Agents with LangGraph – is now available! Most agents today handle one request at a time through chat interfaces. But as models have improved, agents can now run in the background – and take on long-running, complex tasks. LangGraph is built for these “ambient agents,” with support for human-in-the-loop workflows and memory. LangGraph Platform provides the infrastructure to run these agents at scale, and LangSmith helps you observe, evaluate, and improv ...
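A minimal sketch of what such an ambient agent can look like in LangGraph, assuming a recent release of the library: a background task drafts an action, pauses at a human-in-the-loop interrupt, and resumes later from a checkpointer. The state fields and node logic are placeholders, not course material:

```python
# Sketch of an "ambient" LangGraph agent with a human-in-the-loop pause and a
# checkpointer for memory. Assumes a recent LangGraph version; interfaces may
# differ across releases.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

class AgentState(TypedDict):
    task: str
    draft: str
    approved: bool

def draft_reply(state: AgentState) -> dict:
    # in a real agent this would call an LLM; here we fake a draft
    return {"draft": f"Proposed action for: {state['task']}"}

def human_review(state: AgentState) -> dict:
    # pause the graph and wait for a human decision (resumed later with Command)
    decision = interrupt({"draft": state["draft"]})
    return {"approved": bool(decision)}

builder = StateGraph(AgentState)
builder.add_node("draft_reply", draft_reply)
builder.add_node("human_review", human_review)
builder.add_edge(START, "draft_reply")
builder.add_edge("draft_reply", "human_review")
builder.add_edge("human_review", END)

# the checkpointer is what lets a long-running background agent pause and resume
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "ticket-42"}}
graph.invoke({"task": "Triage inbox", "draft": "", "approved": False}, config)
# later, a human approves and the run resumes from the interrupt
graph.invoke(Command(resume=True), config)
```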
Cisco TAC’s GenAI Transformation: Building Enterprise Support Agents with LangSmith and LangGraph
LangChain· 2025-06-23 15:30
[Music] My name is John Gutsinger. I work for Cisco. I'm a principal engineer and I work in the Technical Assistance Center, or TAC for short. Really I'm focused on AI engineering, agentic engineering in the space of customer support. We've been doing AI/ML for, you know, a couple of years now, maybe five or six years. Really it started with trying to figure out how do we handle these mass-scale issue-type problems, right, where some trending issue is going to pop up and we know we're going to have tens of thousan ...
Vizient’s Healthcare AI Platform: Scaling LLM Queries with LangSmith and LangGraph
LangChain· 2025-06-18 15:01
Company Overview
- Vizient serves 97% of academic medical centers in the US, over 69% of acute care hospitals, and more than 35% of the ambulatory market [1]
- Vizient is developing a generative AI platform to improve healthcare providers' data access and analysis [2]

Challenges Before LangGraph and LangSmith
- Scaling LLM queries using Azure OpenAI ran into token limit issues, hurting performance [3]
- Limited visibility into system performance made it difficult to track token usage, prompt efficiency, and reliability [3]
- Continuous testing was not feasible, leading to reactive problem-solving [4]
- The multi-agent architecture introduced complexity, requiring better orchestration [4]
- Lack of observability tools early on resulted in technical debt [4]

Impact of Integrating LangGraph and LangSmith
- Gained the ability to accurately estimate token usage, enabling proper capacity provisioning in Azure OpenAI [5]
- Real-time insights into system performance enabled faster issue diagnosis and resolution [6]
- LangGraph provided structure and orchestration for multi-agent workflows [6]
- Resolved LLM rate-limiting issues by optimizing token usage and throughput allocation [7]
- Development and debugging became significantly faster [8]
- The shift to automated continuous testing dramatically improved system quality and reliability [8]
- Beta user feedback could be rapidly turned into actionable improvements [8]

Recommendations
- Start with a slim proof of concept and model one high-impact user flow in LangGraph [9]
- Integrate with LangSmith from day one and treat every run as a data point [9]
- Define a handful of golden query/response pairs upfront and use them for acceptance testing (see the sketch below) [9]
- Budget a short weekly review of LangSmith's run history [9]
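A minimal sketch of the last two recommendations, assuming a recent langsmith SDK and a LANGSMITH_API_KEY in the environment: a tiny dataset of golden query/response pairs plus a simple exact-match evaluator used as an acceptance test. The dataset name, the stand-in app, and the evaluator are illustrative, not Vizient's actual setup:

```python
# Sketch: store golden query/response pairs in a LangSmith dataset and run the
# application against them as an acceptance test, treating every run as a data point.

from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

dataset = client.create_dataset(dataset_name="golden-queries-demo")
client.create_examples(
    inputs=[{"question": "Which hospitals does the platform cover?"}],
    outputs=[{"answer": "Academic medical centers and acute care hospitals."}],
    dataset_id=dataset.id,
)

def app(inputs: dict) -> dict:
    # stand-in for the real multi-agent pipeline under test
    return {"answer": "Academic medical centers and acute care hospitals."}

def exact_match(run, example) -> dict:
    # simple acceptance check: did the app reproduce the golden answer?
    got = run.outputs.get("answer", "")
    want = example.outputs.get("answer", "")
    return {"key": "exact_match", "score": int(got.strip() == want.strip())}

# each run is logged to LangSmith, so the weekly review can inspect failures
results = evaluate(app, data="golden-queries-demo", evaluators=[exact_match])
```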
Case Study + Deep Dive: Telemedicine Support Agents with LangGraph/MCP - Dan Mason
AI Engineer· 2025-06-17 18:58
Industry Focus: Autonomous Agents in Healthcare
- The workshop explores building autonomous agents to manage complex processes like multi-day medical treatments [1]
- The system aims to help patients self-administer medication regimens at home [1]
- A key challenge is enabling agents to adhere to protocols while handling unexpected patient situations [1]

Technology Stack
- The solution uses a hybrid system of code and prompts, leveraging LLM decision-making to drive a web application, message queue, and database [1]
- The stack includes LangGraph/LangSmith, Claude, MCP, Node.js, React, MongoDB, and Twilio [1]
- Treatment blueprints, designed in Google Docs, guide LLM-powered agents [1]

Agent Evaluation and Human Support
- The system incorporates an agent evaluation system that uses LLM-as-a-judge to assess interaction complexity (see the sketch below) [1]
- The evaluation system escalates complex interactions to human support when needed [1]

Key Learning Objectives
- How to build a hybrid system of code and prompts that leverages LLM decisioning [1]
- How to design and maintain flexible agentic workflow blueprints [1]
- How to create an agent evaluation system [1]
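A minimal sketch of the LLM-as-a-judge escalation idea, assuming langchain-anthropic for the Claude call and an ANTHROPIC_API_KEY in the environment: a judge model rates interaction complexity, and anything at or above a threshold is routed to human support. The prompt, threshold, and model alias are assumptions; the workshop's actual rubric is not reproduced here:

```python
# Sketch: an LLM-as-a-judge scores interaction complexity and the router
# escalates complex cases to human support.

from langchain_anthropic import ChatAnthropic

judge = ChatAnthropic(model="claude-3-5-haiku-latest", temperature=0)

JUDGE_PROMPT = """You are grading a telemedicine support interaction.
Rate its complexity from 1 (routine reminder) to 5 (medical judgment needed).
Reply with a single integer only.

Interaction:
{transcript}
"""

def complexity_score(transcript: str) -> int:
    reply = judge.invoke(JUDGE_PROMPT.format(transcript=transcript))
    try:
        return int(str(reply.content).strip())
    except ValueError:
        return 5  # if the judge's answer is unparseable, fail safe and escalate

def route(transcript: str, threshold: int = 3) -> str:
    # escalate anything the judge rates at or above the threshold
    return "human_support" if complexity_score(transcript) >= threshold else "agent"

if __name__ == "__main__":
    print(route("Patient reports dizziness after doubling their dose."))
```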
In Depth | Andrew Ng: Voice is a more natural, lighter-weight input method, especially well suited to Agentic applications; the most critical skill of the future is being able to tell the computer exactly what you want
Z Potentials· 2025-06-16 03:11
Core Insights
- The discussion at the LangChain Agent Conference highlighted the evolution of Agentic systems and the importance of focusing on the degree of Agentic capability rather than simply deciding whether a system counts as an "Agent" [2][3][4]
- Andrew Ng emphasized the need for practical skills in breaking complex processes down into manageable tasks and establishing effective evaluation systems for AI systems [8][10][12]

Group 1: Agentic Systems
- The conversation shifted from whether a system qualifies as an "Agent" to the spectrum of Agentic capabilities, suggesting that systems can be described as more or less Agentic regardless of their level of autonomy [4][5]
- There is a significant opportunity in automating simple, linear processes within enterprises, as many workflows remain manual and under-automated [6][7]

Group 2: Skills for Building Agents
- Key skills for building Agents include integrating tools like LangGraph and establishing a comprehensive data flow and evaluation system [8][9]
- A structured evaluation process is important, as many teams still rely on manual assessments, which can lead to inefficiencies [10][11]

Group 3: Emerging Technologies
- MCP (Model Context Protocol) is seen as a transformative standard that simplifies the integration of Agents with various data sources, aiming to reduce the complexity of data pipelines [21][22]
- Voice technology is identified as an underutilized component with significant potential, particularly in enterprise applications, where it can lower user interaction barriers [15][19]

Group 4: Future of AI Programming
- The concept of "Vibe Coding" reflects a shift in programming practice, where developers increasingly rely on AI assistants, underscoring the need for a solid grasp of programming fundamentals [23][24]
- AI Fund was established to accelerate startup growth by focusing on speed and deep technical knowledge as key success factors [26]
Agents or Workflows: Which Is Better? The LangChain Founder Takes On OpenAI
Founder Park· 2025-04-21 12:23
But LangChain founder Harrison Chase disagreed with some of the views in OpenAI's piece, especially the "let LLMs drive the Agent" route, and quickly published a long response. Harrison Chase argues that Agents should not be defined through a strict binary: most of the "Agentic systems" we see today combine Workflows and Agents. An ideal Agent framework should allow a gradual transition from "structured workflows" to "model-driven" behavior, with flexible switching between the two. Compared with OpenAI's article, Harrison Chase is more aligned with Anthropic's earlier piece on building effective Agents; for the definition of an Agent, Anthropic proposed the concept of "Agentic systems" and treats both Workflows and Agents as different manifestations of it. Overall, this is another clash between the Big Model camp and the Big Workflow camp: the former believes every model upgrade can instantly make a carefully designed workflow obsolete, and this "bitter lesson" makes them favor general-purpose, minimally structured agent systems, while the latter, represented by LangGraph, emphasizes ...
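A minimal sketch of that "spectrum" argument in LangGraph terms: a single graph mixing a deterministic workflow node with a model-driven routing step, so a system can sit anywhere between fixed workflow and autonomous agent. The router below is a hard-coded stand-in for an LLM decision, and the node names are illustrative:

```python
# Sketch: one LangGraph graph combining a fixed "workflow" step with a
# model-driven routing step, illustrating the workflow-to-agent spectrum.

from typing import Literal, TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    query: str
    cleaned: str
    answer: str

def normalize(state: State) -> dict:
    # deterministic "workflow" step: always runs, no model involved
    return {"cleaned": state["query"].strip().lower()}

def choose_path(state: State) -> Literal["lookup", "reason"]:
    # model-driven "agent" step: in practice an LLM would pick the branch
    return "lookup" if "status" in state["cleaned"] else "reason"

def lookup(state: State) -> dict:
    return {"answer": f"Looked up: {state['cleaned']}"}

def reason(state: State) -> dict:
    return {"answer": f"Reasoned about: {state['cleaned']}"}

builder = StateGraph(State)
builder.add_node("normalize", normalize)
builder.add_node("lookup", lookup)
builder.add_node("reason", reason)
builder.add_edge(START, "normalize")
builder.add_conditional_edges("normalize", choose_path)  # routing is where autonomy enters
builder.add_edge("lookup", END)
builder.add_edge("reason", END)

graph = builder.compile()
print(graph.invoke({"query": "  Order STATUS please ", "cleaned": "", "answer": ""}))
```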