LMCache
LMCache: An LLM Inference Optimization Approach Based on KV-Cache Reuse
Xin Lang Cai Jing· 2025-12-09 13:41
(Source: DeepHub IMBA)

    # Old way: Slow as molasses
    def get_answer(prompt):
        memory = build_memory_from_zero(prompt)  # GPU cries
        return model.answer(memory)

In LLM inference serving, TTFT (Time-To-First-Token) is a core metric: the shorter the gap between a user sending a request and seeing the first output token, the better the experience. In real deployments, however, all sorts of problems get in the way.

LMCache targets TTFT with a scheme for persisting and reusing KV caches. The project is open source and is already deeply integrated with vLLM.

Principle

Large-model inference has a particular trait: every time input text is processed, its KV cache is recomputed. The KV cache can be thought of as the intermediate state the model produces while "reading" the text, much like the notes it takes along the way.

The problem is that conventional serving does not reuse these "notes". When the same text arrives again, the entire KV cache is computed from scratch.

LMCache's approach is to store the KV cache, not only in GPU memory but also in CPU memory and on disk. The next time the same text is encountered (and not just as a matching prefix, but reused text at any position), the cached entries are fetched directly, skipping the redundant computation; a minimal sketch of this idea follows at the end of this excerpt.

Measured results: paired with vLLM, in multi-turn dialogue and RAG workloads, response ...
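To make the reuse idea above concrete, here is a minimal Python sketch of a chunk-keyed KV cache spilled across GPU memory, CPU memory, and disk. It illustrates the concept only and is not LMCache's actual code or API: names such as `KVChunkCache` and `compute_kv` and the tier sizes are invented for the example, and it deliberately ignores the positional and context dependence that real non-prefix KV reuse has to handle.

```python
# Illustrative sketch only (hypothetical names, not LMCache's API): a KV cache
# keyed by text chunks and spilled across three tiers: GPU memory, CPU memory,
# and disk. Plain dicts stand in for the GPU/CPU-resident stores.
import hashlib
import pickle
from pathlib import Path


class KVChunkCache:
    def __init__(self, disk_dir="kv_cache", gpu_slots=8, cpu_slots=64):
        self.gpu = {}                      # stand-in for GPU-resident KV blocks
        self.cpu = {}                      # stand-in for pinned CPU memory
        self.gpu_slots, self.cpu_slots = gpu_slots, cpu_slots
        self.disk = Path(disk_dir)
        self.disk.mkdir(exist_ok=True)

    @staticmethod
    def _key(chunk_text: str) -> str:
        return hashlib.sha256(chunk_text.encode()).hexdigest()

    def get(self, chunk_text):
        """Look a chunk up in the fastest tier first: GPU, then CPU, then disk."""
        k = self._key(chunk_text)
        if k in self.gpu:
            return self.gpu[k]
        if k in self.cpu:
            return self.cpu[k]
        path = self.disk / k
        return pickle.loads(path.read_bytes()) if path.exists() else None

    def put(self, chunk_text, kv):
        """Store KV data, spilling to slower tiers once the fast ones are full."""
        k = self._key(chunk_text)
        if len(self.gpu) < self.gpu_slots:
            self.gpu[k] = kv
        elif len(self.cpu) < self.cpu_slots:
            self.cpu[k] = kv
        else:
            (self.disk / k).write_bytes(pickle.dumps(kv))


def prefill(chunks, cache, compute_kv):
    """Reuse cached KV for any previously seen chunk; compute only the misses.

    Note: real KV entries depend on the preceding context and token positions,
    so non-prefix reuse needs extra machinery that this sketch omits.
    """
    kvs = []
    for chunk in chunks:
        kv = cache.get(chunk)
        if kv is None:
            kv = compute_kv(chunk)   # the expensive step that dominates TTFT
            cache.put(chunk, kv)
        kvs.append(kv)
    return kvs
```

The lookup order is the point of the sketch: a repeated chunk is served from whichever tier currently holds it, so only genuinely new text pays the prefill cost.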
NVIDIA and DeepSeek Follow Suit: Ignored 18 Months Ago, Now Dominating AI Inference
36Kr· 2025-11-10 04:11
Core Insights
- The article discusses the emergence of the "Decoupled Inference" concept introduced by the Peking University and UCSD teams, which has rapidly evolved from a laboratory idea into an industry standard adopted by major inference stacks such as NVIDIA's and vLLM, indicating a shift towards "modular intelligence" in AI [1]

Group 1: Decoupled Inference Concept
- The DistServe system, launched in March 2024, proposed the bold idea of splitting large-model inference into two stages, "prefill" and "decode", allowing them to scale and be scheduled independently in separate resource pools (a minimal sketch of this split follows at the end of this summary) [1][19]
- This decoupled architecture addresses two fundamental limitations of earlier inference frameworks, interference and coupled scaling, which hindered efficiency and increased costs in production environments [10][15][18]
- By separating prefill and decode, DistServe enables independent scaling to meet the latency requirements of each stage, significantly improving overall efficiency [19][22]

Group 2: Adoption and Impact
- Initially, the decoupled inference concept faced skepticism in the open-source community because of the engineering investment required for deep architectural changes [21]
- By 2025, however, it gained widespread acceptance as businesses recognized the critical importance of latency control for their core operations, leading to its adoption as a default solution in major inference stacks [22][23]
- The decoupled architecture allows for high resource utilization and flexibility in resource allocation, especially as model sizes and access traffic increase [22][23]

Group 3: Current State and Future Directions
- Decoupled inference has become a primary design principle in large-model inference frameworks, influencing orchestration layers, inference engines, storage systems, and emerging hardware architectures [23][31]
- Future research is exploring further disaggregation at the model level, such as "Attention-FFN Disaggregation", which separates different components of the model across different nodes [33][34]
- The trend is moving towards a more modular approach in AI systems, where different functional modules can evolve, expand, and optimize independently, marking a significant shift from centralized to decoupled architectures [47][48]
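To illustrate what "splitting inference into prefill and decode" means in practice, here is a short Python sketch of the two roles and the KV hand-off between them. It is a conceptual illustration, not DistServe's (or any framework's) actual code: `run_prefill`, `run_decode_step`, and `transfer_kv` are hypothetical hooks for the engine- and interconnect-specific pieces a real system would supply.

```python
# Conceptual sketch of prefill/decode disaggregation (not DistServe's code).
# `run_prefill`, `run_decode_step`, and `transfer_kv` are hypothetical hooks
# for the engine- and interconnect-specific pieces a real system provides.
from dataclasses import dataclass


@dataclass
class PrefillResult:
    kv_cache: object   # per-layer KV tensors for the whole prompt
    first_token: int   # token produced at the end of prefill (sets TTFT)


class PrefillWorker:
    """Compute-bound stage: processes the full prompt exactly once."""

    def __init__(self, run_prefill):
        self.run_prefill = run_prefill

    def handle(self, prompt_tokens):
        kv, first = self.run_prefill(prompt_tokens)
        return PrefillResult(kv_cache=kv, first_token=first)


class DecodeWorker:
    """Memory-bandwidth-bound stage: extends the sequence token by token."""

    def __init__(self, run_decode_step):
        self.run_decode_step = run_decode_step

    def generate(self, result, max_new_tokens, eos_id):
        tokens, kv = [result.first_token], result.kv_cache
        for _ in range(max_new_tokens - 1):
            tok, kv = self.run_decode_step(kv, tokens[-1])
            tokens.append(tok)
            if tok == eos_id:
                break
        return tokens


def serve(prompt_tokens, prefill_worker, decode_worker, transfer_kv,
          max_new_tokens=256, eos_id=0):
    # Prefill and decode run in separate resource pools; the KV cache is the
    # only state that has to cross the boundary (e.g. over NVLink or RDMA).
    result = prefill_worker.handle(prompt_tokens)
    result.kv_cache = transfer_kv(result.kv_cache)
    return decode_worker.generate(result, max_new_tokens, eos_id)
```

The design point is that the compute-heavy prefill pool and the bandwidth-bound decode pool can be sized and scheduled independently, with the KV cache as the only state crossing the boundary.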
Exclusive | A Conversation with Tensormesh's Three Co-founders: From Academia to the Front Line of Large-Model Inference
Z Potentials· 2025-10-24 08:18
Core Insights
- Tensormesh, a company focused on providing cache-accelerated inference optimization for enterprises, has officially launched and secured $4.5 million in seed funding led by Laude Ventures [2]
- The founding team, consisting of Junchen Jiang, Yihua Cheng, and Kuntai Du, aims to bridge the gap between AI inference engines and storage services, leveraging their academic backgrounds to create a commercially viable product [3][4]

Company Overview
- Tensormesh is the first commercial platform to productize large-scale AI inference caching, inspired by the open-source project LMCache, combining advanced technology with enterprise-level usability, security, and manageability [2][4]
- The company's product allows enterprises to deploy large-model services easily, reducing operational costs to about one-tenth of public API usage while improving performance by up to ten times compared to mainstream solutions [4][29]

Funding and Growth
- The funding process for Tensormesh was unconventional, relying on personal connections rather than traditional methods like business plans or roadshows, resulting in a swift investment agreement [5][48]
- The seed funding will primarily be used for product refinement and team expansion, with a strategic focus on creating a strong open-source engine as an entry point for commercial value [5][40]

Market Position and Challenges
- The inference industry is emerging, with the cost of inference surpassing training costs due to increased usage, highlighting the need for efficient solutions [30][32]
- Tensormesh addresses three main challenges in deploying large models: privacy concerns, complex cluster management, and high operational costs [26][28]

Product Features and Innovations
- The product offers a one-click deployment solution for in-house large-model services, ensuring data privacy while significantly lowering costs and improving performance [29][30]
- Tensormesh aims to fill a market gap by providing a comprehensive solution that integrates inference engines, storage, scheduling, and routing, which is currently lacking in the industry [38]

Future Aspirations
- The company aspires to become the go-to solution for large-model inference, similar to how Databricks is recognized in big data [44][45]
- The long-term vision includes evolving with AI advancements, ensuring that Tensormesh remains relevant as the industry shifts from reliance on single models to more complex systems [51][52]
X @Avi Chawla
Avi Chawla· 2025-07-09 06:30
LLM Serving Engine
- LMCache is an open-source LLM serving engine designed to reduce time-to-first-token and increase throughput, especially under long-context scenarios [1]
- LMCache boosts vLLM with 7x faster access to 100x more KV caches [1] (a hedged integration sketch follows below)

Open Source
- LMCache is 100% open-source [1]
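For readers curious how the LMCache and vLLM pairing mentioned above is typically wired together, the snippet below sketches the integration pattern as best recalled from the two projects' documentation. Treat the specific names (`KVTransferConfig`, `"LMCacheConnectorV1"`, `kv_role="kv_both"`), the model ID, and the input file as assumptions that may not match current versions; check the LMCache and vLLM docs before relying on them.

```python
# Hedged sketch of enabling LMCache as vLLM's KV-cache connector; the class
# and connector names below are recalled from the projects' documentation and
# may differ between versions, so verify before use.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",   # any vLLM-supported model
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",      # route KV blocks through LMCache
        kv_role="kv_both",                      # this instance stores and loads KV
    ),
)

# A long shared context (e.g. a RAG document) that recurs across requests.
long_context = open("shared_document.txt").read()   # placeholder input file
params = SamplingParams(temperature=0.0, max_tokens=128)

# The first request pays the full prefill cost; later requests containing the
# same text should be served from LMCache's CPU/disk tiers with a lower TTFT.
for out in llm.generate([long_context + "\n\nQ: Summarize the key points."], params):
    print(out.outputs[0].text)
```

LMCache's own tiering behavior (chunk size, CPU memory limit, disk or remote backends) is configured separately; its documentation describes environment variables and a YAML configuration file for that.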