Training Large Models Can Finally "Have It All"
Huxiu APP · 2025-05-29 10:34

Core Insights
- The article covers advances in the MoE (Mixture of Experts) architecture, focusing on Huawei's Pangu Ultra MoE, which aims to balance model performance and efficiency while addressing the challenges of training at very large scale [1][6][33]

Group 1: MoE Model Innovations
- Huawei's Pangu Ultra MoE has a parameter scale of 718 billion and is designed to optimize both the performance and the efficiency of large-scale MoE architectures [6][9]
- The model incorporates advanced components such as MLA (Multi-head Latent Attention) and MTP (Multi-token Prediction), strengthening both training and inference [6][7]
- The Depth-Scaled Sandwich-Norm (DSSN) and TinyInit methods are introduced to improve training stability, reducing gradient spikes by 51% and enabling stable long-run training on more than 10 trillion tokens (a hedged sketch follows this summary) [11][12][14]

Group 2: Load Balancing and Efficiency
- The EP (Expert Parallelism) group load-balancing method is designed to distribute tokens efficiently among experts, improving training throughput without compromising expert specialization [19][20]
- The EP-Group load-balancing loss allows more flexible routing choices, promoting expert specialization while preserving computational efficiency (see the loss sketch after this summary) [20][21]

Group 3: Training Techniques and Performance
- The pre-training phase uses dropless training, in which the router discards no tokens, and reaches a long-sequence capability of 128k, improving learning efficiency on the target data [8][14]
- MTP enables speculative inference, improving the acceptance length by 38% compared with single-token prediction (see the acceptance-length sketch after this summary) [24][27]
- The reinforcement learning system designed for post-training centers on iterative hard-example mining and multi-capability collaboration, keeping performance balanced across task types (a toy loop closes this piece) [28][31]

Group 4: Future Implications
- The advances presented in Pangu Ultra MoE chart a practical path for deploying sparse large models at scale, pushing both the performance limits and the engineering applicability of MoE architectures [33]
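The digest does not spell out how DSSN and TinyInit are implemented, so what follows is a minimal sketch, assuming a standard sandwich arrangement (LayerNorm before and after each sublayer) plus a depth-dependent residual scale and initialization standard deviation. `DSSNBlock`, `tiny_init_`, and both scaling formulas are illustrative assumptions, not Huawei's published recipe.

```python
# Hedged sketch of a Depth-Scaled Sandwich-Norm (DSSN) transformer block.
# The exact scaling rules in Pangu Ultra MoE are not given in this digest;
# the residual scale and TinyInit std below are illustrative assumptions.
import math
import torch
import torch.nn as nn

class DSSNBlock(nn.Module):
    """One sublayer wrapped in sandwich normalization:
    norm -> sublayer -> norm -> depth-scaled residual add."""
    def __init__(self, d_model: int, sublayer: nn.Module, num_layers: int):
        super().__init__()
        self.pre_norm = nn.LayerNorm(d_model)   # norm before the sublayer
        self.post_norm = nn.LayerNorm(d_model)  # second norm: the "sandwich"
        self.sublayer = sublayer
        # Assumed depth scaling: damp residual branches as depth grows,
        # which is one common way to suppress gradient spikes.
        self.res_scale = 1.0 / math.sqrt(2.0 * num_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.res_scale * self.post_norm(self.sublayer(self.pre_norm(x)))

def tiny_init_(module: nn.Module, d_model: int, num_layers: int) -> None:
    """Assumed TinyInit: smaller-than-usual Gaussian init, shrunk by width and depth."""
    std = math.sqrt(2.0 / (d_model * num_layers))  # illustrative choice
    for p in module.parameters():
        if p.dim() > 1:
            nn.init.normal_(p, mean=0.0, std=std)
```

The design intuition is that both knobs shrink the magnitude of early updates in very deep stacks, which is consistent with the article's claim of fewer gradient spikes over a 10-trillion-token run.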
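For the EP-Group load-balancing loss, here is a minimal sketch assuming the classic Switch-Transformer-style auxiliary loss, with routing statistics pooled over an expert-parallel group of micro-batches rather than a single one. The function name and the pooling choice are assumptions; the digest only states that balancing happens at EP-group granularity.

```python
# Hedged sketch of an EP-group load-balancing auxiliary loss. This is the
# standard Switch-style balance term computed over statistics that have been
# pooled across an assumed expert-parallel (EP) group, not Huawei's exact loss.
import torch
import torch.nn.functional as F

def ep_group_balance_loss(router_probs: torch.Tensor,
                          expert_index: torch.Tensor,
                          num_experts: int) -> torch.Tensor:
    """router_probs: [group_tokens, num_experts] softmax outputs gathered
    across the EP group; expert_index: [group_tokens] chosen expert ids."""
    # f_e: fraction of the group's tokens dispatched to each expert.
    one_hot = F.one_hot(expert_index, num_experts).float()
    tokens_per_expert = one_hot.mean(dim=0)
    # p_e: mean routing probability mass assigned to each expert.
    prob_per_expert = router_probs.mean(dim=0)
    # Balance objective: minimized when both distributions are uniform.
    return num_experts * torch.sum(tokens_per_expert * prob_per_expert)
```

Balancing over the group rather than per device leaves individual micro-batches free to route unevenly, which is one plausible reading of the article's "flexible routing choices" that still specialize experts.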
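How MTP drafts translate into a longer "acceptance length" can be seen in a greedy speculative-verification sketch: the draft heads propose k future tokens, the main model scores all of them in one forward pass, and the longest agreeing prefix is accepted. All names below are placeholders invented for illustration, not Pangu APIs.

```python
# Illustrative sketch of MTP-driven speculative decoding (greedy variant).
# `draft_tokens` and `verify_logits` are hypothetical names, not Pangu APIs.
import torch

def accepted_prefix_len(draft_tokens: torch.Tensor,
                        verify_logits: torch.Tensor) -> int:
    """draft_tokens: [k] token ids proposed by the MTP heads.
    verify_logits: [k, vocab] main-model logits at each drafted position."""
    verified = verify_logits.argmax(dim=-1)   # main model's own greedy choices
    agree = verified == draft_tokens          # positionwise agreement
    mismatch = torch.nonzero(~agree)
    # Accept every drafted token up to (not including) the first disagreement.
    return int(mismatch[0]) if mismatch.numel() else draft_tokens.numel()
```

A 38% longer accepted prefix means proportionally fewer full forward passes per generated token, which is where the inference speedup the article credits to MTP comes from.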
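The post-training loop is described only at a high level, so the self-contained toy below shows one plausible reading of iterative hard-example mining combined with rehearsal of the broader task pool ("multi-capability collaboration", loosely interpreted). `Task`, `ToyPolicy`, and the update rule are invented stand-ins; the article does not describe Huawei's actual interfaces.

```python
# Schematic, self-contained sketch of iterative hard-example mining for RL
# post-training. Every class and update rule here is a toy stand-in.
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]   # returns True if an answer is correct

class ToyPolicy:
    def __init__(self, skill: float = 0.3):
        self.skill = skill                      # toy proxy for capability
    def sample(self, prompt: str) -> str:
        return "ok" if random.random() < self.skill else "fail"
    def rl_update(self, batch: List[Task]) -> None:
        # Placeholder for a PPO/GRPO-style policy update.
        self.skill = min(1.0, self.skill + 0.05)

def mine_hard(policy: ToyPolicy, tasks: List[Task],
              n_samples: int = 8, threshold: float = 0.5) -> List[Task]:
    """Keep tasks the current policy passes less than `threshold` of the time."""
    hard = []
    for t in tasks:
        wins = sum(t.check(policy.sample(t.prompt)) for _ in range(n_samples))
        if wins / n_samples < threshold:
            hard.append(t)
    return hard

def iterative_rl(policy: ToyPolicy, tasks: List[Task], rounds: int = 3) -> ToyPolicy:
    for _ in range(rounds):
        hard = mine_hard(policy, tasks)
        # Mix hard cases with a sample of the full pool so other capabilities
        # keep being rehearsed while the hard ones are attacked.
        batch = hard + random.sample(tasks, min(len(tasks), max(1, len(hard))))
        policy.rl_update(batch)
    return policy
```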