MoE Training

Ascend + Kunpeng team up for a big move! Huawei overhauls MoE training: throughput up another 20%, memory down 70%
华尔街见闻· 2025-06-04 11:01
Core Insights
- Huawei has introduced new solutions for MoE training systems, achieving a 20% increase in system throughput and a 70% reduction in memory usage through three core operator optimizations [1][4][33]

Group 1: MoE Training System Enhancements
- MoE has become a preferred path for tech giants pursuing more powerful AI [2]
- As long as the scaling law holds, the parameter scale of large models will keep expanding, raising AI capability levels [3]
- Huawei's earlier Adaptive Pipe & EDPB framework improved distributed computing efficiency; the latest advances further raise training operator efficiency and memory utilization [4][5]

Group 2: Challenges in MoE Training
- MoE model training faces significant challenges, particularly in single-node efficiency [6][7]
- Low operator computation efficiency and frequent dispatch interruptions caused by the expert routing mechanism hold back overall throughput [8][10]
- The huge parameter count drives up memory demands, risking out-of-memory (OOM) errors during training [11][13][14]

Group 3: Solutions Proposed by Huawei
- Huawei has proposed a comprehensive solution to the challenges in MoE training [15]
- Ascend operator acceleration alone delivers a 15% increase in training throughput; the core operators FlashAttention, MatMul, and Vector account for over 75% of total computation time [16][18]
- Three optimization strategies, "Slimming," "Balancing," and "Transporting," have been applied to raise computation efficiency [17]

Group 4: Specific Operator Optimizations
- FlashAttention optimization improved forward and backward performance by 50% and 30%, respectively [24]
- MatMul optimization raised Cube utilization by 10% through improved data transport strategies [28]
- Vector operator performance rose by over 300% thanks to fewer data transport passes [32]

Group 5: Collaboration Between Ascend and Kunpeng
- The Ascend-Kunpeng collaboration achieves near-zero waiting time for operator dispatch and a 70% reduction in memory usage [33]
- Operator dispatch optimization and the Selective R/S "memory surgery" technique are the key innovations behind these gains [33][43]
- Effective task binding and scheduling strategies add a further 4% to training throughput [42]

Group 6: Selective R/S Memory Optimization
- The Selective R/S technique tailors memory management to the model, saving over 70% of activation memory during training (a minimal recompute sketch follows this summary) [43]
- It combines fine-grained recomputation with an adaptive memory management mechanism [45][51]
- The overall strategy maximizes the memory saved while minimizing the extra computation time incurred [52]

Group 7: Conclusion
- Huawei's deep Ascend-Kunpeng collaboration, combined with operator acceleration and memory optimization, provides an efficient and cost-effective solution for MoE training [53]
- These advances not only remove barriers to large-scale MoE model training but also offer a valuable reference path for the industry [54]
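Neither article discloses Huawei's Selective R/S implementation, but the recompute-for-memory trade it rests on can be illustrated with standard activation checkpointing. Below is a minimal PyTorch sketch under that assumption; the ExpertFFN module and all tensor sizes are hypothetical and are not drawn from the article.

```python
# Minimal sketch of trading recompute for activation memory, assuming PyTorch.
# ExpertFFN and all sizes are hypothetical; Huawei's Selective R/S reportedly
# makes a finer-grained, per-operator choice of what to recompute or keep.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class ExpertFFN(nn.Module):
    """A single expert MLP; its hidden activation dominates memory."""
    def __init__(self, d_model=1024, d_hidden=4096):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.act = nn.GELU()
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.down(self.act(self.up(x)))

def run(use_recompute: bool):
    layer = ExpertFFN()
    x = torch.randn(4, 1024, 1024, requires_grad=True)
    if use_recompute:
        # The hidden activation is discarded after the forward pass and rebuilt
        # during backward, trading one extra forward for activation memory.
        y = checkpoint(layer, x, use_reentrant=False)
    else:
        y = layer(x)
    y.sum().backward()

if __name__ == "__main__":
    run(use_recompute=True)
```

A full Selective R/S scheme would, per the summary above, decide per operator whether saving an activation or recomputing it gives the better memory-to-compute trade, rather than wrapping a whole block as done here.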
Ascend + Kunpeng dual-core strike! Huawei clears MoE training's bottlenecks: another 20% speedup, 70% memory savings
雷峰网· 2025-06-04 09:31
Core Viewpoint
- Huawei's advances in MoE (Mixture of Experts) training systems demonstrate its leading capabilities in AI foundational technology and engineering implementation [1][2]

Group 1: MoE Training System Enhancements
- Huawei has introduced new solutions for MoE training operators and memory optimization, achieving a 20% increase in system throughput and a 70% reduction in memory usage [2][7]
- The MoE architecture is becoming a preferred path for tech giants pursuing more powerful AI systems [3]
- MoE's unique architecture is key to overcoming the computational bottlenecks of large-scale model training [4]

Group 2: Challenges in MoE Training
- MoE model training faces significant challenges, particularly in single-node efficiency, due to low operator computation efficiency and memory constraints [10][11]
- The complexity of the expert routing mechanism causes frequent operator dispatch interruptions, creating a Host-Bound bottleneck (a minimal routing sketch follows this summary) [12]
- The huge parameter count drives high memory demand, often leading to out-of-memory (OOM) failures during training [13][15]

Group 3: Solutions and Innovations
- Huawei has developed a comprehensive solution that targets both operator computation efficiency and memory utilization [17]
- Collaboration between the Ascend and Kunpeng architectures significantly improves training operator efficiency and memory usage [6][34]
- The three optimization strategies, "Slimming," "Balancing," and "Transporting," raised overall training throughput of the Pangu Ultra MoE 718B model by 15% [20][21]

Group 4: Specific Operator Optimizations
- FlashAttention optimization improved forward performance by 50% and backward performance by 30% through a more efficient computation order and reduced redundancy [23][25]
- Matrix multiplication operator enhancements raised core (Cube) utilization by 10% through optimized data transport strategies [26][28]
- Vector operator optimizations delivered performance gains of more than 3x by minimizing data transport during reordering operations [30][32]

Group 5: Memory Optimization Techniques
- The Selective R/S memory optimization technique cuts activation memory during training by 70% through fine-grained recomputation and adaptive memory management [46][49]
- The adaptive memory optimization mechanism maximizes the memory saved per unit of additional computation time [55][56]

Group 6: Industry Implications
- Huawei's deep Ascend-Kunpeng collaboration, together with its operator acceleration and memory optimization techniques, provides an efficient and cost-effective solution for MoE training [58]
- These advances not only remove barriers to large-scale MoE model training but also offer a valuable reference path for the industry [59]
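Both summaries trace the Host-Bound bottleneck to the expert routing mechanism: each token's gating decision spawns small, data-dependent kernels that keep the host CPU on the critical path. The sketch below, a generic top-2 router in PyTorch, makes that launch pattern visible; it is a common MoE formulation, not Huawei's implementation, and every size in it is hypothetical.

```python
# Minimal top-k MoE routing sketch, assuming PyTorch. The per-expert masked
# loop below is the kind of small, data-dependent launch sequence that causes
# the host-side dispatch pressure the articles describe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Generic top-k MoE layer: route each token to its k best experts."""
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)])
        self.k = k

    def forward(self, x):                        # x: [tokens, d_model]
        probs = F.softmax(self.gate(x), dim=-1)  # [tokens, n_experts]
        weights, idx = torch.topk(probs, self.k, dim=-1)
        out = torch.zeros_like(x)
        # Each (expert, slot) pair triggers its own gather, expert FFN, and
        # scatter; the number of launches depends on the routing result, so
        # the host stays on the critical path (the Host-Bound effect).
        for e, expert in enumerate(self.experts):
            for slot in range(self.k):
                hit = idx[:, slot] == e
                if hit.any():
                    out[hit] += weights[hit, slot].unsqueeze(-1) * expert(x[hit])
        return out

if __name__ == "__main__":
    router = TopKRouter()
    tokens = torch.randn(64, 512)
    print(router(tokens).shape)  # torch.Size([64, 512])
```

The masked per-expert loop is the dispatch pattern that the operator dispatch optimization and Ascend-Kunpeng task binding described above are meant to keep off the critical path.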