Core Insights
- Huawei introduced a new AI inference technology called UCM (Unified Cache Manager), aimed at optimizing how efficiently tokens flow across business processes and thereby reducing the inference cost per token [1][2]
- There is a significant gap in inference efficiency between leading Chinese internet companies and their overseas counterparts: foreign models reach user-facing output speeds of 200 tokens/s, while domestic models deliver fewer than 60 tokens/s [1]
- The industry currently lacks a universally applicable framework and acceleration mechanism for AI inference, prompting Huawei to seek collaboration with industry players to bring such frameworks to maturity [3]

Group 1
- UCM centers on KV Cache and memory management to accelerate inference, optimizing the flow of tokens through the system [1]
- Huawei's testing indicates that UCM can cut first-token latency by up to 90%, increase system throughput by a factor of 22, and expand context windows tenfold [2]
- Because high bandwidth memory (HBM) alone cannot meet the demands of AI inference, a multi-level, flexible resource system is essential; a minimal illustrative sketch of such a tiered KV Cache follows this summary [2]

Group 2
- Huawei plans to open-source UCM in September to foster collaboration among framework, storage, and GPU manufacturers [3]
- Optimizing system-level inference architecture requires a comprehensive approach spanning chip-level, software-level, and framework-level considerations [3]
- Domestic software solutions for AI inference, particularly those based on KV Cache, are not yet as mature or widely applicable as established foreign solutions [2]
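Huawei has not yet published UCM's internals (open-sourcing is planned for September), so the snippet below is only a minimal sketch of the general idea behind a multi-level KV Cache: evicting older attention key/value blocks from scarce HBM into larger, slower tiers (DRAM, then SSD) and promoting them back on reuse. All names here (`TieredKVCache`, `put`, `get`, the tier capacities) are hypothetical illustrations, not UCM's actual API.

```python
from collections import OrderedDict

# Illustrative sketch only: a three-tier KV Cache (HBM -> DRAM -> SSD) with
# LRU demotion. Names and capacities are hypothetical, not taken from UCM.
class TieredKVCache:
    def __init__(self, capacities=(4, 16, 64)):
        # One ordered dict per tier; index 0 is the fastest (HBM-like) tier.
        self.tiers = [OrderedDict() for _ in capacities]
        self.capacities = capacities

    def put(self, block_id, kv_block):
        """Insert a KV block into the fastest tier, demoting overflow downward."""
        self._insert(0, block_id, kv_block)

    def get(self, block_id):
        """Fetch a KV block; on a hit in a slow tier, promote it back to the top."""
        for level, tier in enumerate(self.tiers):
            if block_id in tier:
                kv_block = tier.pop(block_id)
                if level > 0:
                    self._insert(0, block_id, kv_block)  # promote on reuse
                else:
                    tier[block_id] = kv_block            # refresh LRU order
                return kv_block
        return None  # cache miss: the KV block must be recomputed

    def _insert(self, level, block_id, kv_block):
        if level >= len(self.tiers):
            return  # evicted entirely; recompute later if needed
        tier = self.tiers[level]
        tier[block_id] = kv_block
        if len(tier) > self.capacities[level]:
            old_id, old_block = tier.popitem(last=False)  # least recently used
            self._insert(level + 1, old_id, old_block)    # demote to next tier


# Usage example: cache per-layer KV blocks keyed by hypothetical block ids.
cache = TieredKVCache(capacities=(2, 4, 8))
for i in range(6):
    cache.put(f"req0-layer{i}", {"keys": f"K{i}", "values": f"V{i}"})
print(cache.get("req0-layer0"))  # hit in a lower tier, promoted back to the fast tier
```

The point of such tiering is that reusable KV blocks are fetched from cheaper memory instead of being recomputed, which is one plausible way a cache manager could lower first-token latency and stretch effective context length beyond what HBM alone can hold.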
Reducing Traditional Path Dependence: Huawei Launches New AI Inference Technology
Di Yi Cai Jing·2025-08-12 12:43