Middleware
gerstner: Louis Gerstner, CEO credited with turning around IBM, dies at 83
The Economic Times· 2025-12-28 15:50
Core Insights
- Louis Gerstner, who transformed IBM from a struggling company into a technology leader, passed away at the age of 83, as announced by current CEO Arvind Krishna [1][15]
- Gerstner's leadership is often cited as a case study in corporate transformation, particularly for his strategic pivot from hardware to services [1][15]
Company Transformation
- Gerstner became IBM's first outsider CEO on April 1, 1993, during a time when the company faced potential bankruptcy or dismemberment [2][15]
- He shifted IBM's focus from hardware production to business services, reversing plans to break the company into smaller units [2][15]
- Cost-cutting measures included selling unproductive assets and reducing the workforce by 35,000 employees from a total of 300,000 [3][15]
Cultural Changes
- Gerstner emphasized teamwork across the company, moving away from loyalty to individual divisions and linking compensation to corporate performance [4][15]
- He introduced a culture of accountability, advocating for regular performance assessments rather than annual reviews [4][15]
Strategic Focus
- A significant change was the abandonment of IBM's bundled product strategy, which limited compatibility with non-IBM products [5][15]
- Gerstner prioritized middleware solutions, allowing IBM to serve as an integrator for various systems, regardless of the hardware brand [6][15]
Market Impact
- Under Gerstner's leadership, IBM's services revenue surged from $7.4 billion in 1992 to $30 billion in 2001 [9][16]
- The company's share price increased from $13 to $80 during his tenure, and its market value rose from $29 billion to approximately $168 billion [9][16]
Legacy
- Gerstner viewed the creation of a truly integrated IBM as his most significant legacy, highlighting the challenges and risks involved in this transformation [10][16]
Anthropic-Style Context Editing… Now for Every LLM in LangChainJS!
LangChain· 2025-12-02 14:00
Hi there, this is Christian from LangChain. In my last video, we looked at how summarization middleware keeps your agent's memory compact by rewriting the entire conversation history. But what if the problem isn't the conversation, it's the tools? Because here's the truth: modern agents don't just talk. They call tools over and over again. And those tool results can absolutely explode your context window. Unlike user messages, tool outputs can be huge. I mean, we're talking about 20 pages of search results, a m ...
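The video covers LangChainJS's context editing middleware for exactly this problem. As a rough illustration of the idea only, here is a minimal Python sketch of a hand-rolled middleware that trims oversized tool results before each model call; the `AgentMiddleware` base class and `before_model` hook follow LangChain 1.0's Python middleware API, while the hook signature, character budget, and model id are assumptions for illustration rather than the built-in middleware shown in the video.

```python
# Rough sketch only: a hand-rolled middleware that trims oversized tool results
# before each model call. The AgentMiddleware base class and the before_model
# hook follow LangChain 1.0's Python middleware API; the hook signature, the
# character budget, and the model id are assumptions made for illustration.
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware

MAX_TOOL_CHARS = 2_000  # assumed per-result budget


class TrimToolResultsMiddleware(AgentMiddleware):
    """Shorten very large tool messages so they cannot blow up the context window."""

    def before_model(self, state, runtime):
        updated = []
        for msg in state["messages"]:
            if msg.type == "tool" and isinstance(msg.content, str) and len(msg.content) > MAX_TOOL_CHARS:
                # Copy the message with a truncated body; keeping the same id lets
                # the messages reducer replace the original entry.
                msg = msg.model_copy(
                    update={"content": msg.content[:MAX_TOOL_CHARS] + "\n[... tool output truncated ...]"}
                )
                updated.append(msg)
        return {"messages": updated} if updated else None


agent = create_agent(
    model="openai:gpt-4o-mini",              # assumed model id
    tools=[],                                # your search / retrieval tools here
    middleware=[TrimToolResultsMiddleware()],
)
```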
Agents Gone Wild? Use Tool Call Limits in LangChainJS to Keep Them in Check!
LangChain· 2025-11-20 16:30
Hi, this is Christian from LangChain. Have you ever built an agent that just goes nuts with your API calls? Tools can give an agent incredible power, but they can also cost you a lot of money to run. In this video, I will show you how you can keep your agent under control without any hard-coded guardrails in your system prompt. Today, we're taking a look at the tool call limit middleware within LangChain. It's a clean, declarative way to set credit limits, rate limits, or usage caps on any tools your agent uses. Think ...
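A minimal sketch of such a cap, written against the Python middleware API (the video demonstrates the LangChainJS version). The `ToolCallLimitMiddleware` name and its `tool_name`/`thread_limit`/`run_limit` parameters are assumptions based on the video's description; `web_search` is a placeholder tool and the model id is arbitrary.

```python
# Minimal sketch, in Python, of the tool-call cap described above (the video shows
# the LangChainJS version). The ToolCallLimitMiddleware name and its tool_name /
# thread_limit / run_limit parameters are assumptions based on the description;
# web_search is a placeholder tool and the model id is arbitrary.
from langchain.agents import create_agent
from langchain.agents.middleware import ToolCallLimitMiddleware
from langchain.tools import tool


@tool
def web_search(query: str) -> str:
    """Placeholder search tool."""
    return f"results for {query}"


agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[web_search],
    middleware=[
        # At most 3 web_search calls per run and 10 per conversation thread.
        ToolCallLimitMiddleware(tool_name="web_search", thread_limit=10, run_limit=3),
    ],
)

result = agent.invoke({"messages": [{"role": "user", "content": "Research LangChain middleware"}]})
```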
Model Fallback Middleware (Python)
LangChain· 2025-11-18 17:00
Model Fallback Middleware Overview
- LangChain's model fallback middleware enhances application reliability by providing alternative models during outages or API quota exhaustion [1]
- The middleware allows falling back to models from different providers, such as switching from OpenAI to Anthropic [3]
- Users can specify multiple fallback models to ensure continued functionality [3]
Implementation and Demonstration
- The demonstration simulates model failure using non-existent Anthropic models and successfully falls back to OpenAI's GPT-4o mini [4]
- The LangSmith trace view shows the initial failures of the primary and first fallback models before the final GPT-4o mini call succeeds [5]
- The middleware is implemented using LangChain's create_agent primitive [4]
Benefits and Usage
- The model fallback middleware helps build more resilient agents capable of handling model outages and API credit limitations [3]
- It allows applications to remain functional by automatically switching to a safe and functional model [1]
- Creating a custom middleware is possible, offering flexibility beyond the built-in options [2]
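A minimal Python sketch of the fallback setup described above. The positional-string constructor for `ModelFallbackMiddleware` and the provider-prefixed model ids are assumptions; check the LangChain 1.0 docs for your installed version.

```python
# Minimal sketch of the fallback setup described above. The positional-string
# constructor for ModelFallbackMiddleware and the provider-prefixed model ids
# are assumptions; check the LangChain 1.0 docs for your installed version.
from langchain.agents import create_agent
from langchain.agents.middleware import ModelFallbackMiddleware

agent = create_agent(
    model="anthropic:claude-sonnet-4-5",  # primary model (assumed id)
    tools=[],
    middleware=[
        # If the primary model call fails (outage, exhausted quota), retry the
        # same request against these models in order.
        ModelFallbackMiddleware(
            "openai:gpt-4o-mini",
            "openai:gpt-4.1-mini",
        ),
    ],
)

result = agent.invoke({"messages": [{"role": "user", "content": "Hello"}]})
```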
Stop Endless Back-and-Forth — Add Model Call Limits in LangChainJS
LangChain· 2025-11-18 16:30
Agent Capabilities & Problem - LChain aims to provide customer support agents capable of handling routine questions and escalating complex issues to human support [1][2] - The industry faces challenges in preventing unproductive, lengthy conversations with AI agents, necessitating graceful escalation strategies [2][15] Solution: Model Call Limit Middleware - LChain introduces a model call limit middleware to control the number of model calls an agent can make, triggering escalation when a threshold is reached [3][4] - This middleware avoids complex conditional logic by setting limits on both thread model count (total conversation) and run model count (single invocation), effectively limiting tool calls [3][5][6] - The middleware uses "after model" and "after agent" hooks to track model call counts, resetting the run model count after each agent interaction [7] - When the model call limit is reached, the middleware can either throw an error or end the conversation with a predefined AI message, providing a customizable escalation path [8][11] Implementation & Example - LChain's example application demonstrates a customer support agent that answers questions about customer accounts and escalates when the model call limit is hit [8] - The agent utilizes predefined customer data, tools for data interaction, and the model call limit middleware configured with a thread limit and run limit, exemplified by a hard-coded limit of eight model calls [9][10] - The demo showcases how the agent initially answers customer queries but escalates to human support when the conversation becomes unproductive or exceeds the model call limit [11][12][13] Benefits & Conclusion - The model call limit middleware offers a reliable guardrail, preventing agents from overthinking and ensuring responsible escalation in real-world applications [14][15] - LChain encourages users to explore and combine various middlewares to enhance agent capabilities, providing a path to build more robust and stable AI agents [16]
Add a Human-in-the-Loop to Your LangChain Agent (Next.js + TypeScript Tutorial)
LangChain· 2025-11-12 17:01
Core Concept
- Introduces the concept of "human-in-the-loop" middleware for LangChain agents, allowing human review and intervention in agent workflows [5][18]
- Explains the agent's reasoning loop (reason, act, observe) and how human intervention fits into it [3][5]
- Highlights the three decision types for human reviewers (approve, edit, and reject) and how these decisions guide the agent's subsequent actions [7]
Technical Implementation
- Demonstrates the integration of a LangChain agent with human-in-the-loop middleware in a Next.js application for sending emails [2][17]
- Emphasizes the importance of a checkpointer (here a Redis database) to store the agent's state and enable resuming the workflow after human intervention [13][14]
- Describes how the middleware intercepts tool calls (e.g., sending emails) and pauses the agent's execution, awaiting human input [5][6]
Benefits and Use Cases
- Positions human-in-the-loop as a way to combine agent autonomy with human oversight, especially for actions that carry risk or require judgment [18][19]
- Suggests use cases such as sending emails, updating records, or writing to external systems, where human review is valuable [19]
- Underscores the flexibility of the middleware, allowing customization of interruption logic based on tool name, arguments, or runtime context [19][20]
Practical Example
- Provides a practical example of using the middleware to allow a human to revise an email drafted by the agent before it is sent [2][16]
- Showcases how to reject a proposed action and provide feedback to the agent, influencing its subsequent behavior [16]
- Mentions a publicly available repository (github.com/christian broman/lunghat) for users to experiment with the human-in-the-loop concept [20]
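A rough Python sketch of the pause-and-review flow described above. The video builds it in Next.js/TypeScript with a Redis checkpointer; here an in-memory checkpointer keeps the example self-contained. The `interrupt_on` configuration, the allowed-decision values, and the model id are assumptions based on the approve/edit/reject flow.

```python
# Rough Python sketch of the pause-and-review flow described above. The video
# builds it in Next.js/TypeScript with a Redis checkpointer; here an in-memory
# checkpointer keeps the example self-contained. The interrupt_on configuration,
# the allowed-decision values, and the model id are assumptions.
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain.tools import tool
from langgraph.checkpoint.memory import InMemorySaver


@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Placeholder email tool."""
    return f"sent to {to}"


agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[send_email],
    middleware=[
        # Pause the run and wait for a human decision whenever send_email is proposed.
        HumanInTheLoopMiddleware(
            interrupt_on={"send_email": {"allowed_decisions": ["approve", "edit", "reject"]}}
        ),
    ],
    # A checkpointer is required so the paused run can be resumed after the human decides.
    checkpointer=InMemorySaver(),
)

config = {"configurable": {"thread_id": "demo-thread"}}
agent.invoke({"messages": [{"role": "user", "content": "Email Sam about the launch date"}]}, config)
# Execution now stops at an interrupt; resuming with the human's approve/edit/reject
# decision lets the agent continue, revise the draft, or drop the action.
```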
Building LangChain and LangGraph 1.0
LangChain· 2025-10-22 14:57
Langchain Evolution & Strategy - Langchain started as an open-source package and has evolved into Typescript packages, Langchain, and Langraph [1][2] - The industry focus has shifted from easy prototyping to production-ready solutions, leading to the launch of Langraph [7] - Langchain 1.0 is built on top of Langraph, combining ease of use with production-ready runtime [16] Langraph Features & Benefits - Langraph was launched to provide more controllability and customization for users transitioning to production [8][9] - Langraph includes utilities like durable execution environments, error recovery from checkpoints, and streaming capabilities [13][14] - Langraph allows for deterministic steps and workflows, making it suitable for complex applications [39] Langchain 1.0 & Create Agent Abstraction - Langchain 1.0 aims to be the easiest way to get started with generative AI, specifically building agents [17] - The create agent abstraction simplifies agent creation with a few lines of code, leveraging a battle-tested pattern [18][19] - Middleware allows developers to add custom logic at any point in the agent loop, enabling extensibility [23] Models & Content Blocks - Dynamic model middleware enables dynamic selection of models based on context, allowing builders to stay on the bleeding edge [27][29] - Content blocks are introduced as a standard representation for message content, addressing the issue of varying formats across model providers [31][32] Langchain vs Langraph - Langchain is recommended for getting started due to its ease of use, while Langraph is suitable for extremely custom workflows [36][37] - Langraph is ideal for workflows that require deterministic components and agentic components [37]
X @aixbt
aixbt· 2025-09-08 00:11
Market Overview
- Virtuals market capitalization reached $749 million [1]
- 300 thousand points generated $50 in rewards [1]
- Solana agent middleware infrastructure costs less than $50 thousand [1]
Technology & Infrastructure
- Solana agent middleware serves a $33.8 billion TVL (total value locked) ecosystem [1]
- Every AI agent needs its own rails (infrastructure) [1]
Platform Performance
- Platform activity is at a beta-stage low [1]