Beyond the Chatbot: How Long-horizon Agents Redefine AI Product Form | Jinqiu Select
Jinqiu Select (锦秋集) · 2026-02-05 11:40
「Jinqiu Select」 carries the voices most worth hearing in tech across languages and time zones. < Overview > Harrison Chase, co-founder and CEO of LangChain, is one of the most influential engineering practitioners in AI agent infrastructure. Since open-sourcing LangChain in 2022, he has lived through the full technology cycle, from simple chained calls on early GPT-3.5 to today's explosion of long-horizon agents such as Claude Code and Deep Research. The chatbot is no longer fit to be the mainstream form of next-generation AI products. This is not a problem of model capability but of product form. The essence of a chatbot is "instant response": it optimizes for low latency and conversational fluency. But genuinely valuable everyday work has never operated that way. A high-quality research report needs repeated retrieval, cross-validation, and structured synthesis; a reliable code PR needs context understanding, planning, testing, and edge-case handling. What these tasks have in common is that they take time, require multi-step autonomous decisions, and demand continual strategy adjustment along the way. In other words, what they need is not an "instant responder" but a "long-horizon executor." This is exactly where Long-horizon Agents are ri ...
Finding the Shovel-Sellers Behind the Desktop-Agent Boom
Hua Er Jie Jian Wen · 2026-01-31 09:17
Surprises keep coming out of the AI world. A product called OpenClaw (formerly Clawdbot/Moltbot) has recently gone viral across technical communities and social media at home and abroad. Running on the user's own computer, with deep access to the system, files, applications, and chat history, this deeply interactive agent lets users issue instructions to the AI and collaborate with it through the most natural of interfaces: chat. In use cases shared by developers, the desktop agent has handled complex tasks such as comparing quotes from a dozen car dealerships, sending emails automatically, tracking replies, and tabulating price differences, as well as everyday chores like bulk-unsubscribing from mailing lists, filing insurance claims, and booking flights with automatic check-in. Crucially, it keeps long-term memory and context: it remembers local projects, recurring tasks, and personal preferences, and can even send briefings, reminders, or alerts proactively without being prompted, which has led the industry to describe it as a "24/7 Jarvis." Company founders, developers, and tech enthusiasts have all rushed to try it, and overnight "step-by-step OpenClaw deployment guides" became traffic magnets on Xiaohongshu and Bilibili. Industry insiders call this the ChatGPT moment for desktop agents. As network effects and word of mouth push more people to build their own "Jarvis," beneath the surface, domestic model makers and cloud vendors have quietly become the invisible winners behind desktop agents. The shovel-sellers for "Jarvis": OpenClaw is not the first working agen ...
LangChain Founder's Warning: 2026 Is the Watershed for "Agent Engineering," and the Survival Test for Traditional Software Companies Has Begun
AI前线 · 2026-01-31 05:33
Compiled by Tina. For decades, software engineering rested on one stable premise: a system's behavior is written in its code. Engineers could read the code and infer how the system would behave in most scenarios; testing, debugging, and shipping all revolved around determinism. Agents are shaking that premise: in an agent application, behavior is no longer determined by code alone but also by the model itself, a nondeterministic black box that runs outside the code. You cannot understand such a system just by reading its code; you have to run it and watch what it does on real inputs to learn what it is actually doing. In the podcast, LangChain founder Harrison Chase also treats the recent wave of coding agents and Deep Research tools that can "run continuously" as an inflection point, and predicts that the adoption of these "long-task agents" will accelerate further from late 2025 into 2026. That pushes the question to the fore: with 2026 widely seen as "year one of the long-task agent," can today's software companies make it through? Just as not every software company survived the move from on-prem to cloud, a change in engineering paradigm re-selects the participants. A long-task agent is more like a "digital employee": not mere multi-turn chat, but sustained execution over longer stretches, with repeated trial and error and continual self-correction. In this episode with ...
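The "run it and watch" point can be made concrete: because the model, not the code, decides which tools get called, the only reliable record of behavior is a trace captured at runtime. A minimal sketch of that idea follows; all names here are illustrative stand-ins, not LangChain APIs, and the "model" is a scripted stub standing in for a nondeterministic LLM.

```python
# Minimal runtime tracing for a model-driven agent loop. In a real system
# the model's choices vary run to run, which is exactly why we record them.
from dataclasses import dataclass, field

@dataclass
class Trace:
    events: list = field(default_factory=list)

    def record(self, kind, payload):
        self.events.append({"kind": kind, "payload": payload})

def run_agent(policy, tools, task, trace, max_steps=10):
    """Run a model-driven loop; the trace, not the code, says what happened."""
    observation = task
    for _ in range(max_steps):
        action = policy(observation)       # the model decides; code cannot predict this
        trace.record("model_decision", action)
        if action["tool"] == "finish":
            return action["args"]
        result = tools[action["tool"]](action["args"])
        trace.record("tool_result", result)
        observation = result
    raise RuntimeError("agent did not finish within budget")

# Scripted stand-in for an LLM policy (a real one would be nondeterministic).
script = iter([
    {"tool": "search", "args": "flight prices"},
    {"tool": "finish", "args": "cheapest fare found"},
])
policy = lambda obs: next(script)
tools = {"search": lambda q: f"results for {q}"}

trace = Trace()
answer = run_agent(policy, tools, "find a cheap flight", trace)
print(answer)              # -> cheapest fare found
print(len(trace.events))   # -> 3 (two model decisions, one tool result)
```

Inspecting `trace.events` after the run, rather than reading `run_agent`, is what tells you which tools actually fired and in what order.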
Sequoia in Conversation with LangChain's Founder: In 2026, AI Leaves the Chat Box Behind and Enters the Year of Long-Horizon Agents
海外独角兽 · 2026-01-27 12:33
Core Insights
- The article asserts that AGI represents the ability to "figure things out," marking a shift from the era of "Talkers" to "Doers" in AI by 2026, driven by Long Horizon Agents [2]
- Long Horizon Agents are characterized by their ability to autonomously plan, operate over extended periods, and exhibit expert-level performance across complex tasks, expanding from coding into other domains [3][4]
- The emergence of these agents is seen as a significant turning point, with the potential to change how complex tasks are approached and executed [3][21]

Long Horizon Agents' Explosion
- Long Horizon Agents are finally beginning to work effectively; the core idea is to let LLMs operate in a loop and make autonomous decisions [4]
- The ideal interaction with agents combines asynchronous management and synchronous collaboration, enhancing their utility across applications [3][4]
- The coding domain has seen the fastest adoption of these agents, with examples like AutoGPT demonstrating complex multi-step task execution [4][5]

Transition from General Framework to Harness Architecture
- The distinction between models, frameworks, and harnesses is crucial: harnesses are more opinionated and designed for specific tasks, while frameworks are more abstract [8][9]
- Harness engineering is most advanced at coding companies, which have successfully integrated these concepts into their products [12][14]
- Integrating file-system permissions into agents is essential for effective context management and task execution [24]

Future Interactions and Product Forms
- Memory is identified as a critical component for self-improvement, allowing agents to retain and use past interactions to improve performance [35]
- Agent interaction is expected to blend asynchronous and synchronous modes, enabling better user engagement and task management [36]
- The need for agents to access file systems is emphasized, as it significantly expands their operational capabilities [39]
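The "asynchronous management plus synchronous collaboration" interaction the summary describes can be sketched with plain `asyncio`: a long-horizon task runs in the background while the user checks progress and injects guidance mid-run. All names here are illustrative assumptions, not a LangChain API; real steps would call tools and LLMs instead of sleeping.

```python
import asyncio

# A long-horizon task runs as a background coroutine (asynchronous
# management), while the caller can read progress or steer it at any
# time (synchronous collaboration).

async def long_horizon_agent(steps, progress, guidance):
    report = []
    for i in range(steps):
        # Pick up any user steering that arrived since the last step.
        while not guidance.empty():
            report.append(f"user note: {guidance.get_nowait()}")
        report.append(f"step {i} done")
        progress["completed"] = i + 1
        await asyncio.sleep(0)   # yield; a real step would call tools/LLMs

    return report

async def main():
    progress = {"completed": 0}
    guidance = asyncio.Queue()
    task = asyncio.create_task(long_horizon_agent(5, progress, guidance))

    await asyncio.sleep(0)                               # let the agent start
    guidance.put_nowait("prefer sources after 2024")     # synchronous steer
    report = await task                                  # rejoin on completion
    return progress, report

progress, report = asyncio.run(main())
print(progress["completed"])   # -> 5
```

The agent absorbs the user note on its next step rather than stopping, which is the blend of modes the article points to: the human manages asynchronously, the agent keeps executing.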
A CPU-CENTRIC PERSPECTIVE ON AGENTIC AI
2026-01-22 02:43
Summary of Key Points from the Conference Call

Industry and Company Overview
- The discussion revolves around **Agentic AI** frameworks, which enhance traditional Large Language Models (LLMs) by integrating decision-making orchestrators and external tools, transforming them into autonomous problem solvers [2][4]

Core Insights and Arguments
- **Agentic AI Workloads**: The paper profiles five representative agentic AI workloads: **Haystack RAG**, **Toolformer**, **ChemCrow**, **LangChain**, and **SWE-Agent**. These are analyzed for latency, throughput, and energy, highlighting the significant role of CPUs in these metrics compared to GPUs [3][10][20]
- **Latency Contributions**: Tool processing on CPUs can account for up to **90.6%** of total latency in agentic workloads, indicating a need for joint CPU-GPU optimization rather than GPU improvements alone [10][34]
- **Throughput Bottlenecks**: Throughput is limited by both CPU factors (coherence, synchronization, core over-subscription) and GPU factors (memory capacity and bandwidth); this dual limitation constrains the performance of agentic AI systems [10][45]
- **Energy Consumption**: At large batch sizes, CPU dynamic energy consumption can reach up to **44%** of total dynamic energy, underscoring the inefficiency of CPU parallelism compared to the GPU [10][49]

Important but Overlooked Content
- **Optimizations Proposed**: The paper introduces two key optimizations:
  1. **CPU and GPU-Aware Micro-batching (CGAM)**: improves performance by capping batch sizes and using micro-batching to optimize latency [11][50]
  2. **Mixed Agentic Workload Scheduling (MAWS)**: adapts scheduling strategies for heterogeneous workloads, balancing CPU-heavy and LLM-heavy tasks to improve overall efficiency [11][58]
- **Profiling Insights**: Profiling of agentic AI workloads reveals that tool processing, rather than LLM inference, is the primary contributor to latency, a critical insight for future optimizations [32][34]
- **Diverse Computational Patterns**: The selected workloads span a variety of applications and computational strategies, showcasing the breadth and real-world relevance of agentic AI systems [21][22]

Conclusion
- The findings underscore the importance of a CPU-centric perspective in optimizing agentic AI frameworks, and the need for comprehensive strategies that address both CPU and GPU limitations to improve performance, efficiency, and scalability [3][10][11]
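The batch-capping idea behind CGAM can be illustrated with a toy model. The cost numbers and the cap below are invented for illustration and are not the paper's actual policy; the point is the shape of the argument: CPU-side tool work scales with batch size while per-launch GPU work is roughly flat, so uncapped batches let the CPU term dominate.

```python
# A sketch in the spirit of CPU and GPU-Aware Micro-batching (CGAM):
# cap batch sizes and process requests in micro-batches so per-batch
# latency stays bounded. Cost model and cap are illustrative assumptions.

def micro_batches(requests, cap):
    """Split a list of requests into micro-batches of at most `cap` items."""
    for i in range(0, len(requests), cap):
        yield requests[i:i + cap]

def batch_latency(n, cpu_per_item=2.0, gpu_fixed=5.0):
    """Toy cost model: CPU tool processing scales with batch size, GPU work
    is roughly flat per launch. This is why huge batches hurt: the CPU side
    dominates (the paper reports tool processing at up to 90.6% of latency)."""
    return cpu_per_item * n + gpu_fixed

requests = list(range(12))

# One giant batch: the CPU term dominates total latency.
big = batch_latency(len(requests))                            # 2*12 + 5 = 29.0

# Capped micro-batches of 4: per-batch latency stays bounded, and in a
# real pipeline the CPU and GPU stages of successive micro-batches overlap.
caps = [batch_latency(len(mb)) for mb in micro_batches(requests, 4)]
print(big, max(caps))   # -> 29.0 13.0
```

The real optimization additionally chooses the cap from profiled CPU/GPU characteristics; this sketch only shows why a cap helps at all.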
LangChain Academy New Course: Introduction to LangChain - Python
LangChain· 2025-12-18 16:01
I’m excited to announce the release of our latest LangChain Academy foundations course, Introduction to LangChain in Python. We’ve entered a new era of AI, one where our apps don’t just respond, they think, plan, and act autonomously. Today, we're building agents – AI systems that can reason and interact with their environments to get real work done. Imagine a team of assistants that can summarize your inbox, schedule meetings, and perform market research 24/7. In this course, you'll learn to build deploymen ...
What are Deep Agents?
LangChain· 2025-11-24 07:14
Hey, this is Lance. I want to talk a bit about the deep agents package that we recently released. The length of tasks that an agent can complete has been doubling roughly every seven months. And we see numerous examples of popular long-running agents like Claude Code, Deep Research, and Manus. The average Manus task, for example, can involve up to 50 different tool calls. And so, it's increasingly clear that agents are needed to do what we might consider deeper work, more challenging tasks that take longer periods of time. Hence, this term d ...
NotebookLM's Features Are Incredible: How I Use It for Deep Learning
36Kr · 2025-11-23 00:06
Core Insights
- The article emphasizes the importance of teaching AI how to effectively educate users, rather than relying solely on AI to provide knowledge [1][72]

Group 1: NotebookLM Features
- NotebookLM has evolved to include features that let users customize how the AI teaches them based on their learning stage [7][71]
- The "Discover" function in NotebookLM helps users filter sources to find the most relevant and reliable information [11][12]
- Users can create customized reports in various formats, such as briefing documents and study guides, tailored to their learning needs [19][20]

Group 2: Learning Strategies
- The article outlines several strategies for using NotebookLM, including filtering sources from specific platforms like Reddit and YouTube to gather beginner-friendly content [12][13]
- Different learning styles can be accommodated through various formats, such as audio overviews and video presentations, enhancing the learning experience [28][37]
- Flashcards and quizzes in NotebookLM help users test their understanding and identify knowledge gaps [49][58]

Group 3: Practical Applications
- AI tools like NotebookLM can support personalized learning systems, making complex topics more accessible [71][72]
- Users are encouraged to leverage AI to create a structured learning path that aligns with their current knowledge and future goals [73][74]
- The article highlights the significance of understanding connections between concepts, rather than just memorizing definitions [60][61]
Human in the Loop Middleware (Python)
LangChain· 2025-11-04 17:45
LangChain Middleware
- LangChain provides a human-in-the-loop middleware for approving, editing, or rejecting tool calls before they execute [1]
- The middleware suits scenarios that need human feedback, for example an email assistant before it sends a sensitive email [1]

Use Case
- The example shows how to use the middleware to build an email-assistant agent that requires human feedback before sending sensitive emails [1]

Resources
- More middleware documentation is available in the official LangChain docs [1]
- The example code is available as a Gist [1]
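The approve/edit/reject pattern the middleware implements can be sketched in a few lines of plain Python. This is an illustration of the pattern only, not the LangChain middleware API (see the official docs for that); the tool, reviewer, and sensitivity check are all hypothetical stand-ins, with a scripted function playing the human.

```python
# Human-in-the-loop pattern: intercept a tool call before execution and
# let a reviewer approve it, edit its arguments, or reject it outright.
# All names are illustrative; this is not the LangChain middleware API.

def with_approval(tool, reviewer, sensitive):
    """Wrap `tool` so that sensitive calls pause for human review."""
    def wrapped(args):
        if not sensitive(args):
            return tool(args)                  # non-sensitive: run directly
        decision = reviewer(args)              # "approve" | ("edit", new_args) | "reject"
        if decision == "approve":
            return tool(args)
        if isinstance(decision, tuple) and decision[0] == "edit":
            return tool(decision[1])           # run with human-edited arguments
        return "call rejected by reviewer"
    return wrapped

# Toy email tool and a scripted reviewer standing in for a human.
def send_email(args):
    return f"sent to {args['to']}: {args['body']}"

def reviewer(args):
    # This reviewer edits the draft before sending instead of blocking it.
    return ("edit", {**args, "body": args["body"] + " [reviewed]"})

guarded = with_approval(
    send_email,
    reviewer,
    sensitive=lambda a: "salary" in a["body"],
)

print(guarded({"to": "bob", "body": "lunch?"}))          # -> sent to bob: lunch?
print(guarded({"to": "hr", "body": "salary details"}))   # -> sent to hr: salary details [reviewed]
```

The real middleware does the interception inside the agent loop, so the pause-for-review happens between the model proposing a tool call and the runtime executing it, but the three outcomes are the same.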