
Core Insights
- Tencent's technical team has optimized the DeepEP communication framework, achieving significant performance gains across network environments: a 100% throughput improvement on RoCE networks and a 30% improvement on InfiniBand (IB) networks, enabling more efficient training of large AI models [2][3]
- The optimization addresses key bottlenecks in the original DeepEP framework, particularly limited bandwidth utilization and CPU control-plane latency, which had restricted its broader adoption [2][3]

Group 1
- The optimization introduces intelligent bandwidth allocation via topology-aware multi-QP (queue pair) chaining, fully utilizing the bandwidth of dual-port network cards and preventing bandwidth waste [3]
- Tencent resolved CPU control bottlenecks in GPU communication by optimizing control-plane operations to bypass the CPU as an intermediary, reducing both latency and energy consumption [3]
- A new "QP-internal sequencing lock" mechanism ensures accurate, in-order data transmission among multiple GPUs, even when handling more than 1,000 concurrent data-transfer tasks [3]

Group 2
- The optimized DeepEP framework has been fully open-sourced and successfully applied in Tencent's Hunyuan large-model training and inference projects, demonstrating strong versatility in high-performance environments built on Tencent's Xingmai network and H20 servers [3]
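The topology-aware multi-QP chaining described above can be pictured as splitting a large transfer into chunks and spreading them round-robin across queue pairs (QPs) assigned to both ports of a dual-port NIC, so neither port sits idle. The sketch below is purely illustrative (it is not Tencent's implementation; the function name and parameters are invented for this example):

```python
# Illustrative sketch of multi-QP chunk assignment (hypothetical, not the
# actual DeepEP code): chunks of a transfer are dealt out round-robin
# across QPs, and QPs are split evenly between the two NIC ports.

def assign_chunks_to_qps(total_bytes, chunk_bytes, qps_per_port, ports=2):
    """Return a list of (port, qp, offset, length) chunk assignments."""
    assignments = []
    qp_count = qps_per_port * ports
    offset = 0
    i = 0
    while offset < total_bytes:
        length = min(chunk_bytes, total_bytes - offset)
        qp = i % qp_count                 # round-robin over all QPs
        port = qp // qps_per_port         # QPs 0..n-1 on port 0, n..2n-1 on port 1
        assignments.append((port, qp, offset, length))
        offset += length
        i += 1
    return assignments

if __name__ == "__main__":
    for entry in assign_chunks_to_qps(total_bytes=10 * 1024,
                                      chunk_bytes=4096, qps_per_port=2):
        print(entry)
```

Because consecutive chunks alternate across QPs on both ports, the aggregate transfer drives both ports concurrently instead of saturating only one.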
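The "QP-internal sequencing lock" mentioned above can be modeled, at a high level, as a ticket lock: each transfer task takes a ticket when issued, and completions are applied strictly in ticket order even if tasks finish out of order. The class below is a hypothetical sketch of that idea in plain Python threading, not the actual DeepEP mechanism:

```python
# Hypothetical model of a sequencing lock (not the DeepEP source): tasks
# draw monotonically increasing tickets, and each completion blocks until
# its ticket is "now serving", guaranteeing in-order effects.

import threading

class SequencingLock:
    def __init__(self):
        self._next_ticket = 0
        self._now_serving = 0
        self._cond = threading.Condition()

    def take_ticket(self):
        """Reserve the next position in the completion order."""
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            return ticket

    def complete_in_order(self, ticket, action):
        """Block until `ticket` is next in line, then run `action`."""
        with self._cond:
            while self._now_serving != ticket:
                self._cond.wait()
            action()
            self._now_serving += 1
            self._cond.notify_all()
```

Even if 1,000+ tasks race to complete, each `complete_in_order` call waits its turn, so downstream consumers always observe results in issue order.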