Squeezing Every Last Drop of GPU Performance: ZTE's Mariana Breaks Through the Memory Wall
QbitAI · 2025-08-26 05:46

Core Insights
- The article examines the challenges of expanding Key-Value Cache (KV Cache) storage for large language models (LLMs), highlighting the tension between inference efficiency and memory cost [1]
- It emphasizes the need for innovative solutions that enlarge KV Cache capacity without compromising performance [1]

Industry Exploration
- Nvidia's Dynamo project implements a multi-level caching algorithm for storage systems, but faces complexity in data migration and latency issues [2]
- Microsoft's LMCache system is compatible with mainstream inference frameworks but has limited support for distributed storage and limited space capacity [3]
- Alibaba proposed a remote-storage solution that extends the KV Cache to the Tair database; it scales easily but struggles to meet the low-latency requirements of LLM inference [3]

Emerging Technologies
- CXL (Compute Express Link) is presented as a promising high-speed interconnect technology that could alleviate memory bottlenecks in AI and high-performance computing [5]
- Research on using CXL to accelerate LLM inference is still limited, indicating a significant opportunity for exploration [5]

Mariana Exploration
- ZTE Corporation and East China Normal University introduced Mariana, a distributed shared KV storage technology designed for high-performance distributed KV indexing [6]
- Mariana's architecture is tailored to GPU and KV Cache storage, achieving 1.7x higher throughput and 23% lower tail latency than existing solutions [6]

Key Innovations of Mariana
- The Multi-Slot lock-based Concurrency Scheme (MSCS) enables fine-grained concurrency control at the entry level, significantly reducing contention and improving throughput [8]
- The Tailored Leaf Node (TLN) design optimizes data layout for faster access, boosting read speed by allowing key arrays to be loaded into SIMD registers in one step [10]
- An adaptive caching strategy based on the Count-Min Sketch algorithm identifies and caches hot data efficiently, improving read performance [11]

Application Validation
- Mariana supports large-capacity storage by distributing data across remote memory pools, theoretically allowing unlimited storage space [13]
- Experimental results indicate that Mariana significantly improves read/write throughput and latency in KV Cache scenarios [14]

Future Prospects
- Mariana's design is compatible with future CXL hardware, allowing seamless migration and full use of CXL's advantages [18]
- The advances in Mariana and CXL technology could enable efficient operation of large models on standard hardware, democratizing AI capabilities across various applications [18]
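The entry-level locking idea behind MSCS can be illustrated with a minimal sketch: instead of one lock guarding a whole index node, each entry slot carries its own lock, so writers targeting different slots proceed in parallel. The class name `MultiSlotNode`, the slot count, and the hash-based slot assignment are illustrative assumptions, not details from the Mariana paper.

```python
import threading

SLOTS_PER_NODE = 8  # hypothetical slot count; not taken from the paper

class MultiSlotNode:
    """Illustrative index node with one lock per entry slot, so
    writers targeting different slots never block each other."""
    def __init__(self):
        self.slots = [None] * SLOTS_PER_NODE
        self.locks = [threading.Lock() for _ in range(SLOTS_PER_NODE)]

    def _slot_of(self, key):
        return hash(key) % SLOTS_PER_NODE

    def put(self, key, value):
        i = self._slot_of(key)
        with self.locks[i]:               # lock only the target slot,
            self.slots[i] = (key, value)  # not the whole node
            # (keys colliding on a slot overwrite; kept simple for brevity)

    def get(self, key):
        i = self._slot_of(key)
        with self.locks[i]:
            entry = self.slots[i]
            return entry[1] if entry and entry[0] == key else None

node = MultiSlotNode()
node.put("block_a", b"cached tensor bytes")
print(node.get("block_a"))
```

With a single node-wide lock, every concurrent write serializes; with per-slot locks, contention occurs only when two writers hash to the same slot, which is the source of the throughput gain the scheme targets.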
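The TLN idea of packing keys contiguously so a lookup can scan every slot at once can be approximated in a data-parallel sketch. Here NumPy's vectorized comparison stands in for loading the key array into SIMD registers; the leaf capacity, sentinel value, and helper names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical leaf layout: keys stored contiguously in a fixed-size
# array so one vectorized compare scans all slots at once -- the
# data-parallel analogue of a SIMD register load (numpy stands in
# for real SIMD intrinsics here).
LEAF_CAPACITY = 16
keys = np.full(LEAF_CAPACITY, -1, dtype=np.int64)   # -1 marks an empty slot
values = np.zeros(LEAF_CAPACITY, dtype=np.int64)

def leaf_insert(key, value):
    # Assumes a free slot exists; real leaves would split when full.
    free = np.nonzero(keys == -1)[0]
    keys[free[0]] = key
    values[free[0]] = value

def leaf_lookup(key):
    # One vectorized comparison across all slots, vs. a scalar loop.
    hits = np.nonzero(keys == key)[0]
    return int(values[hits[0]]) if hits.size else None

leaf_insert(42, 7)
leaf_insert(99, 3)
print(leaf_lookup(42))  # 7
print(leaf_lookup(5))   # None
```

The point of the layout is that a probe touches one contiguous cache line's worth of keys and compares them in a single instruction, rather than chasing pointers through scattered entries.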
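The Count-Min Sketch used for hot-data detection is a standard streaming algorithm, so its mechanics can be shown concretely. The key names, threshold, and sketch dimensions below are made up for the example; only the algorithm itself is from the article.

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counter over a depth x width grid of
    counters; estimates never undercount, so hot keys are never missed."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cols(self, key):
        # One independent hash per row, derived by salting blake2b.
        for row in range(self.depth):
            h = hashlib.blake2b(key.encode(), digest_size=8, salt=bytes([row]))
            yield int.from_bytes(h.digest(), "big") % self.width

    def add(self, key):
        for row, col in enumerate(self._cols(key)):
            self.table[row][col] += 1

    def estimate(self, key):
        # Collisions only inflate counters, so the minimum across rows
        # is the tightest available upper bound on the true count.
        return min(self.table[row][col]
                   for row, col in enumerate(self._cols(key)))

# Hypothetical admission policy: cache a KV block once its estimated
# access count crosses a threshold.
HOT_THRESHOLD = 10
cms = CountMinSketch()
for _ in range(50):
    cms.add("kv_block_7")   # frequently accessed block
cms.add("kv_block_3")       # rarely accessed block
print(cms.estimate("kv_block_7") >= HOT_THRESHOLD)  # True: promote to cache
```

The attraction for a caching layer is the memory footprint: the sketch tracks hot keys in a few kilobytes of counters regardless of how many distinct KV blocks flow through, at the cost of occasional overestimates.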