How is Huawei's near-trillion-parameter large model trained?
虎嗅APP· 2025-05-30 10:18
Core Viewpoint - The article discusses Huawei's advances in AI training systems, focusing on the MoE (Mixture of Experts) architecture and its optimization through the MoGE (Mixture of Grouped Experts) framework, which improves efficiency and reduces cost in AI model training [1][2].

Summary by Sections

Introduction to MoE and Huawei's Innovations
- The MoE model, initially proposed by Canadian scholars, has evolved significantly, and Huawei is now optimizing the architecture to address its inefficiency and cost issues [1].
- Huawei's MoGE architecture aims to create a more balanced and efficient training setup for AI models, contributing to the ongoing AI competition [1] (a grouped-routing sketch follows this summary).

Performance Metrics and Achievements
- Huawei's training system, built on the "Ascend + Pangu Ultra MoE" combination, has delivered notable results, including 41% MFU (Model FLOPs Utilization) during pre-training and a throughput of 35K tokens/s during post-training on the CloudMatrix 384 super node [2][26][27] (an MFU calculation sketch also follows).

Challenges in MoE Training
- Six main challenges in the MoE training process are identified: difficulty in configuring parallel strategies, All-to-All communication bottlenecks, uneven system load distribution, excessive operator scheduling overhead, complex training process management, and limits on large-scale expansion [3][4].

Solutions and Innovations
- **First Strategy: Enhancing Training Cluster Utilization** - Huawei implemented intelligent parallel strategy selection and global dynamic load balancing to improve overall training efficiency [6][11].
- A modeling and simulation framework was developed to automate the selection of optimal parallel configurations for the Pangu Ultra MoE model [7].
- **Second Strategy: Releasing the Computing Power of Single Nodes** - The focus shifted to optimizing operator computation efficiency, achieving a twofold increase in micro-batch size (MBS) and reducing host-bound overhead to below 2% [15][16][17].
- **Third Strategy: High-Performance, Scalable RL Post-Training Technologies** - RL Fusion technology allows flexible deployment modes and significantly improves resource utilization during post-training [19][21].
- The system's design enables a 50% increase in overall training throughput while maintaining model accuracy [21].

Technical Specifications of Pangu Ultra MoE
- The Pangu Ultra MoE model has 718 billion parameters arranged in a 61-layer Transformer architecture, achieving high performance and scalability [26].
- Training used a large-scale cluster of 6K - 10K cards, demonstrating strong generalization capability and efficient scaling potential [26][27].
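To make the grouped-experts idea behind MoGE concrete, here is a minimal routing sketch in PyTorch: it selects the top-k experts within each equal-sized expert group, so every group (and the device hosting it) receives a comparable share of tokens. The function name, group and expert counts, and normalization choice are illustrative assumptions, not Huawei's implementation.

```python
import torch
import torch.nn.functional as F

def grouped_topk_routing(hidden, gate_weight, num_groups=8, k_per_group=1):
    """Pick the top-k experts *within each group* so that every group (and the
    device hosting it) receives a comparable share of tokens."""
    logits = hidden @ gate_weight                        # [tokens, num_experts]
    tokens, num_experts = logits.shape
    experts_per_group = num_experts // num_groups
    grouped = logits.view(tokens, num_groups, experts_per_group)
    topk_val, topk_idx = grouped.topk(k_per_group, dim=-1)   # group-local top-k
    # Convert group-local indices back to global expert ids.
    offset = torch.arange(num_groups, device=hidden.device) * experts_per_group
    expert_ids = (topk_idx + offset.view(1, num_groups, 1)).flatten(1)
    weights = F.softmax(topk_val.flatten(1), dim=-1)     # normalize across all picks
    return expert_ids, weights

# Toy usage: 4 tokens, 32 experts split into 8 groups of 4, one pick per group.
h = torch.randn(4, 64)
w = torch.randn(64, 32)
ids, wts = grouped_topk_routing(h, w)
print(ids.shape, wts.shape)   # torch.Size([4, 8]) torch.Size([4, 8])
```

Because each token activates the same number of experts per group, no group can be starved or flooded by the router, which is the balancing property the summary attributes to MoGE.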
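The 41% figure refers to Model FLOPs Utilization: the ratio of the training FLOPs a run actually sustains to the hardware's peak FLOPs. Below is a minimal calculation sketch using the common "6 x parameters x tokens" approximation for training FLOPs; the activated-parameter count, throughput, and peak figures are placeholders for illustration, not values from Huawei's report, and for a sparse MoE model the activated (not total) parameter count is normally the one that enters the formula.

```python
def model_flops_utilization(active_params, tokens_per_sec, peak_flops_per_sec):
    """MFU = sustained training FLOPs per second / peak hardware FLOPs per second.
    Training FLOPs are approximated as ~6 FLOPs per activated parameter per token
    (forward + backward), ignoring attention-length terms."""
    achieved_flops_per_sec = 6 * active_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# Placeholder numbers purely for illustration (not the report's figures):
# a hypothetical 39B activated parameters, 1.5M tokens/s cluster throughput,
# and 1e18 FLOP/s aggregate peak give the ratio printed below.
print(f"{model_flops_utilization(39e9, 1.5e6, 1e18):.2%}")
```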
One advanced-math problem digested every 2 seconds! Huawei finally reveals the full workflow of its near-trillion-parameter MoE Ascend training system
华尔街见闻· 2025-05-30 09:38
Core Viewpoint - Huawei has made significant advances in large-model training with its "Ascend + Pangu Ultra MoE" system, demonstrating a fully domestic, GPU-free training process that improves both computational efficiency and model performance [3][4][38].

Group 1: Technical Innovations
- Huawei's training system reached a model FLOPs utilization (MFU) of 41% during the pre-training phase on the Ascend Atlas 800T A2 cluster [4][38].
- The Pangu Ultra MoE model has 718 billion parameters in a 61-layer architecture, 58 of those layers being MoE layers, and is designed for high performance and scalability [38][39].
- The system sustains a throughput of 35K tokens/s during the reinforcement learning (RL) post-training phase, showing that it can work through complex tasks rapidly [39].

Group 2: Challenges Addressed
- The report identifies six key challenges in current MoE pre-training and RL post-training, including difficulty in configuring parallel strategies, communication bottlenecks, and uneven system load distribution [7][10][12][13].
- Huawei has developed a comprehensive end-to-end solution to these challenges, focused on raising training cluster utilization and improving communication efficiency [14][16][25].

Group 3: Specific Solutions
- The first strategy improves training cluster utilization through intelligent parallel strategy selection and global dynamic load balancing, significantly raising overall training efficiency [16][23] (a strategy-search sketch and a load-balancing sketch follow this summary).
- The second strategy releases computational power at the single-node level by optimizing training operators and memory management, achieving a twofold increase in micro-batch size [26][30].
- The third strategy introduces high-performance, scalable RL post-training technologies that allow flexible deployment modes and double the utilization of RL post-training clusters [33][34] (a co-location sketch also follows).
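"Intelligent parallel strategy selection" amounts to searching a space of parallel degrees with a cost model instead of hand-tuning them. The toy sketch below shows that idea; the candidate degrees, divisibility constraints, expert count, and cost function are stand-ins, not the modeling and simulation framework Huawei built.

```python
from itertools import product

def search_parallel_strategy(world_size, num_experts, step_time_model):
    """Enumerate (tensor, pipeline, expert, data) parallel degrees that tile the
    cluster, score each candidate with a predicted step time, keep the best."""
    best = None
    for tp, pp, ep in product([1, 2, 4, 8], [1, 2, 4, 8], [4, 8, 16, 32, 64]):
        if world_size % (tp * pp) != 0:
            continue
        dp = world_size // (tp * pp)
        # Expert parallelism is carved out of the data-parallel group here,
        # and every expert-parallel rank must hold a whole number of experts.
        if dp % ep != 0 or num_experts % ep != 0:
            continue
        cost = step_time_model(tp=tp, pp=pp, ep=ep, dp=dp)
        if best is None or cost < best[0]:
            best = (cost, {"tp": tp, "pp": pp, "ep": ep, "dp": dp})
    return best

# Stand-in cost model: rewards spreading experts and tensor shards,
# penalizes deep pipelines and small data-parallel groups.
toy_cost = lambda tp, pp, ep, dp: 1.0 / (tp * ep) + 0.05 * pp + 0.2 / dp
print(search_parallel_strategy(world_size=512, num_experts=256,
                               step_time_model=toy_cost))
```

A production version replaces the toy cost function with a simulator that models compute, memory, and communication per candidate, which is what makes the search worth automating.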
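"Global dynamic load balancing" addresses the fact that some experts receive far more tokens than others, leaving their host devices as stragglers. The minimal sketch below uses a greedy longest-processing-time placement to even out per-device load; real systems remap experts on the fly under many more constraints, so this only shows the shape of the problem.

```python
import heapq

def rebalance_experts(expert_load, num_devices):
    """Greedy rebalancing: repeatedly place the hottest remaining expert on the
    currently least-loaded device (longest-processing-time heuristic)."""
    heap = [(0.0, dev, []) for dev in range(num_devices)]   # (load, device, experts)
    heapq.heapify(heap)
    for expert, load in sorted(enumerate(expert_load), key=lambda x: -x[1]):
        total, dev, assigned = heapq.heappop(heap)
        assigned.append(expert)
        heapq.heappush(heap, (total + load, dev, assigned))
    return sorted(heap, key=lambda x: x[1])   # placement per device

# Tokens routed to each of 8 experts in the last window (made-up counts).
loads = [120, 30, 400, 75, 260, 90, 15, 210]
for total, dev, experts in rebalance_experts(loads, num_devices=4):
    print(f"device {dev}: experts {experts}, projected load {total}")
```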
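RL Fusion is described as letting the same devices serve both rollout generation and policy updates instead of dedicating separate clusters to each. The sketch below shows that time-sharing pattern with hypothetical engine interfaces; the class and method names are assumptions for illustration, not Huawei's API.

```python
class ColocatedRLStep:
    """Time-share one set of devices between rollout generation and policy
    updates: sync weights into the inference engine, generate, then hand the
    devices back to the trainer for the update."""

    def __init__(self, train_engine, infer_engine):
        self.train_engine = train_engine
        self.infer_engine = infer_engine

    def step(self, prompts):
        self.infer_engine.load_weights(self.train_engine.export_weights())
        rollouts = self.infer_engine.generate(prompts)   # inference phase
        return self.train_engine.update(rollouts)        # training phase

# Toy stand-in engines so the sketch runs end to end.
class ToyTrainer:
    def __init__(self):
        self.version = 0
    def export_weights(self):
        return {"version": self.version}
    def update(self, rollouts):
        self.version += 1
        return {"samples": len(rollouts), "weights_version": self.version}

class ToyInferencer:
    def load_weights(self, weights):
        self.weights = weights
    def generate(self, prompts):
        return [p + " -> rollout" for p in prompts]

print(ColocatedRLStep(ToyTrainer(), ToyInferencer()).step(["q1", "q2"]))
```

Because no devices sit idle while the other phase runs, this co-location pattern is what the summaries credit for roughly doubling post-training cluster utilization.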
Huawei's AI strength! Without GPUs, its large model digests an advanced-math problem every 2 seconds
第一财经· 2025-05-30 09:32
Core Viewpoint - Huawei has made significant advances in large-model training with its "Ascend + Pangu Ultra MoE" combination, enabling a fully controllable training process without GPUs and showing industry-leading performance in cluster training systems [2][3].

Group 1: Technical Innovations
- Huawei's training system markedly improved training efficiency, with model FLOPs utilization (MFU) reaching 41% in pre-training and a post-training throughput of 35K tokens/s on the CloudMatrix 384 super node [3][34].
- The company introduced a series of solutions to the challenges of MoE pre-training and reinforcement learning (RL) post-training, including intelligent parallel strategy selection and global dynamic load balancing [11][17].
- The training system uses a hierarchical All-to-All communication architecture that drives exposed communication overhead close to zero, improving the efficiency of expert-parallel communication [14][15] (a two-level exchange sketch follows this summary).

Group 2: Training Process Optimization
- Training cluster utilization was optimized through a simulation-driven intelligent parallel optimization framework that automates the selection of optimal deployment configurations [12][13].
- The team implemented a memory optimization framework that saves over 70% of activation memory, keeping long training runs reliable even under increased memory pressure [25] (a recomputation sketch also follows).
- RL Fusion technology allows flexible deployment modes, significantly improving resource scheduling during the inference phase and doubling utilization in RL post-training [27][28].

Group 3: Model Specifications
- The Pangu Ultra MoE model has 718 billion parameters in a 61-layer Transformer architecture designed for high sparsity and performance [32].
- Training used a cluster of 6K - 10K Ascend Atlas 800T A2 cards and achieved high model FLOPs utilization during the pre-training phase [32].
- The architecture is expected to scale efficiently to larger models and clusters, with an MFU above 50% targeted in future iterations [32].
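The hierarchical All-to-All point above refers to splitting expert-parallel token exchange into a cross-node phase and an intra-node phase, so the slower inter-node links carry each token only once before the fast local links finish the shuffle. A bare-bones torch.distributed sketch follows; it assumes equal split sizes, token chunks pre-sorted by destination, and a torchrun launch, and it omits the chunk re-ordering a real dispatcher needs between stages, so it is an illustration of the idea rather than Huawei's implementation.

```python
import torch
import torch.distributed as dist

def build_hierarchical_groups(ranks_per_node):
    """new_group is collective, so every rank builds every group, then picks its own."""
    world = dist.get_world_size()
    num_nodes = world // ranks_per_node
    node, local = divmod(dist.get_rank(), ranks_per_node)
    inter = [dist.new_group([n * ranks_per_node + l for n in range(num_nodes)])
             for l in range(ranks_per_node)]
    intra = [dist.new_group(list(range(n * ranks_per_node, (n + 1) * ranks_per_node)))
             for n in range(num_nodes)]
    return inter[local], intra[node]

def hierarchical_all_to_all(tokens, inter_group, intra_group):
    """Stage 1: exchange across nodes (one peer per node). Stage 2: redistribute
    inside the node over the fast local links. Assumes `tokens` is already
    chunked evenly by destination and skips the re-sort between stages."""
    staged = torch.empty_like(tokens)
    dist.all_to_all_single(staged, tokens, group=inter_group)
    out = torch.empty_like(staged)
    dist.all_to_all_single(out, staged, group=intra_group)
    return out
```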
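The 70% activation-memory saving is attributed to a combination of fine-grained recomputation and offloading. The snippet below shows only the generic recomputation half of that trade, using torch.utils.checkpoint, as a rough illustration of rebuilding activations in the backward pass instead of storing them; the block sizes are arbitrary, and the report's selective, operator-level scheme is more targeted than this whole-block version.

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    """Discard this block's intermediate activations in the forward pass and
    recompute them during backward, trading extra FLOPs for memory."""
    def __init__(self, d_model=1024, d_ff=4096):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(d_model, d_ff), torch.nn.GELU(),
            torch.nn.Linear(d_ff, d_model))

    def forward(self, x):
        return x + checkpoint(self.ff, x, use_reentrant=False)

block = CheckpointedBlock()
y = block(torch.randn(8, 128, 1024, requires_grad=True))
y.sum().backward()   # activations inside self.ff are rebuilt here, not stored
```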