Core Insights
- Huawei has launched a new AI inference technology called UCM, aimed at significantly reducing inference latency and cost while improving the efficiency of AI interactions [1][2]

Group 1: Technology and Innovation
- UCM is built around a KVCache-centered architecture that integrates multiple caching acceleration algorithms to manage KVCache memory data, expanding the inference context window while delivering high throughput at low latency [1][2]
- Its hierarchical, adaptive global prefix caching lets KV prefix caches be reused across different physical locations and input combinations, cutting first-token latency by up to 90% (a minimal sketch of prefix reuse appears below) [2]
- UCM automatically tiers cached data by access heat across storage media (HBM, DRAM, SSD) and applies sparse attention algorithms to speed up processing, yielding a 2- to 22-fold increase in tokens processed per second (TPS); sketches of both ideas follow below [2]

Group 2: Market Context and Challenges
- Chinese internet companies currently invest only about one-tenth as much in AI as their US counterparts, and the inference experience of domestic large models lags international standards, which could lead to user attrition and slower investment [3]
- Growing user bases and request volumes in AI applications have driven exponential growth in token usage, with daily token calls projected to reach 16.4 trillion by May 2025, a 137-fold increase over the previous year [4]
- Balancing the high operational costs of increased token processing against the need for greater computational power is a critical challenge for the industry [4]

Group 3: Strategic Initiatives
- Huawei has launched pilot applications of UCM with China UnionPay in three business scenarios, focusing on accelerating AI inference for smart finance [3]
- The company plans to open-source UCM by September 2025, aiming to foster industry collaboration on inference frameworks and standards [4]
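
To make the prefix-caching idea in Group 1 concrete, here is a minimal sketch of KV prefix reuse. All names (PrefixKVCache, lookup_longest_prefix, the stubbed KV blobs) are illustrative assumptions, not UCM's actual API; the point is only why reusing a cached prefix shortens the prefill step and hence the first-token latency.

```python
# Minimal sketch of prefix KV-cache reuse: requests sharing a prompt prefix
# (e.g. a common system prompt) skip recomputing KV for the matched tokens.
from typing import Dict, List, Optional, Tuple


class PrefixKVCache:
    """Maps token-ID prefixes to previously computed KV data (stubbed as str)."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[int, ...], str] = {}

    def put(self, tokens: List[int], kv_blob: str) -> None:
        self._store[tuple(tokens)] = kv_blob

    def lookup_longest_prefix(self, tokens: List[int]) -> Tuple[int, Optional[str]]:
        """Return (matched_length, kv_blob) for the longest cached prefix of `tokens`."""
        for end in range(len(tokens), 0, -1):
            kv = self._store.get(tuple(tokens[:end]))
            if kv is not None:
                return end, kv
        return 0, None


cache = PrefixKVCache()
cache.put([1, 2, 3], "kv-for-[1,2,3]")  # e.g. a shared system prompt

prompt = [1, 2, 3, 4, 5]
matched, kv = cache.lookup_longest_prefix(prompt)
# Only prompt[matched:] needs a prefill pass; reusing KV for the matched
# prefix is where the reported first-token-latency reduction comes from.
print(f"reuse {matched}/{len(prompt)} tokens from cache: {kv}")
```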
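
The heat-based tiering across HBM, DRAM, and SSD described in Group 1 can be illustrated with a toy placement policy. The access-count heat metric and the thresholds below are assumptions for the sketch; the article does not specify UCM's actual policy.

```python
# Toy heat-based tiering: hot KV blocks stay in fast, scarce HBM; warm blocks
# in DRAM; cold blocks spill to SSD. Thresholds here are illustrative only.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class TieredKVCache:
    hot_threshold: int = 10   # accesses at or above this stay in HBM (assumed)
    warm_threshold: int = 3   # accesses at or above this stay in DRAM (assumed)
    heat: Dict[str, int] = field(default_factory=dict)

    def record_access(self, block_id: str) -> None:
        self.heat[block_id] = self.heat.get(block_id, 0) + 1

    def tier_for(self, block_id: str) -> str:
        count = self.heat.get(block_id, 0)
        if count >= self.hot_threshold:
            return "HBM"
        if count >= self.warm_threshold:
            return "DRAM"
        return "SSD"


cache = TieredKVCache()
for _ in range(12):
    cache.record_access("kv-block-A")
cache.record_access("kv-block-B")
print(cache.tier_for("kv-block-A"))  # HBM: frequently reused block
print(cache.tier_for("kv-block-B"))  # SSD: rarely touched block
```

The design point is capacity: tiering lets the context window grow beyond HBM limits while keeping the hottest KV data on the fastest medium.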
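
Group 1 also credits sparse attention for part of the TPS gain but does not name the algorithm, so the sketch below uses generic top-k score selection as one common sparsification strategy, standing in for whatever UCM actually uses.

```python
# Generic top-k sparse attention: each query attends only to its top_k
# highest-scoring keys, reducing the work spent on low-relevance positions.
import numpy as np


def topk_sparse_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                          top_k: int) -> np.ndarray:
    """Attend only to the top_k highest-scoring keys per query."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                          # (n_q, n_k)
    # Per-query k-th largest score; ties may keep a few extra keys.
    kth = np.partition(scores, -top_k, axis=-1)[:, [-top_k]]
    masked = np.where(scores >= kth, scores, -np.inf)      # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                     # (n_q, d_v)


rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(64, 8))
v = rng.normal(size=(64, 8))
out = topk_sparse_attention(q, k, v, top_k=8)  # scores 8 of 64 keys per query
print(out.shape)  # (4, 8)
```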
Huawei Releases Breakthrough AI Technology UCM, to Be Open-Sourced Next Month
Zheng Quan Shi Bao Wang·2025-08-12 09:23