DeepSeek Releases a New Paper in Collaboration with Peking University and Tsinghua University

Core Insights
- The article discusses a new academic paper released by the DeepSeek team in collaboration with Peking University and Tsinghua University, focusing on inference-speed optimization for large language models (LLMs) [1]

Group 1: Innovation and Technology
- The paper introduces an inference system named DualPath, designed to improve LLM inference performance under agent workloads [1]
- DualPath implements a "dual-path reading" mechanism for the KV-Cache that redistributes load across the storage network; a hedged sketch of the general idea follows the summary [1]

Group 2: Performance Improvements
- Offline inference throughput is reported to increase by up to 1.87 times [1]
- The average number of agent operations per second in online serving is reported to improve by 1.96 times [1]
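The article does not describe how the dual-path read actually works, so the following is only a minimal Python sketch of the general idea: splitting KV-Cache block reads across two storage paths and routing each read to the less-loaded one. The names `StoragePath` and `DualPathKVReader` and the least-loaded routing policy are assumptions for illustration, not the paper's actual design.

```python
"""Illustrative sketch only: all names and the routing policy here are
hypothetical; the paper's real DualPath mechanism is not detailed in
the article beyond "dual-path reading of the KV-Cache"."""
from dataclasses import dataclass


@dataclass
class StoragePath:
    """One candidate read path for KV-Cache blocks
    (e.g., a local SSD tier vs. a remote store over the network)."""
    name: str
    bytes_served: int = 0  # running total, used as a crude load signal

    def read_block(self, block_id: str, size: int) -> bytes:
        # Stand-in for a real fetch (mmap, RDMA, or a network call).
        self.bytes_served += size
        return bytes(size)


class DualPathKVReader:
    """Routes each KV-Cache block read to the less-loaded of two paths,
    spreading traffic so neither link becomes the sole bottleneck."""

    def __init__(self, primary: StoragePath, secondary: StoragePath) -> None:
        self.paths = (primary, secondary)

    def read(self, block_id: str, size: int) -> bytes:
        # Least-loaded routing: pick the path that has served fewer bytes.
        path = min(self.paths, key=lambda p: p.bytes_served)
        return path.read_block(block_id, size)


if __name__ == "__main__":
    reader = DualPathKVReader(StoragePath("local_ssd"), StoragePath("remote_store"))
    for i in range(4):
        reader.read(f"layer0/block{i}", size=4096)
    for p in reader.paths:
        print(p.name, p.bytes_served)  # reads end up split across both paths
```

In a production system the load signal would presumably be per-link in-flight bytes or queue depth rather than a cumulative counter; the cumulative total is used here only to keep the sketch deterministic and self-contained.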