Huawei's First Open-Source Large Model Is Here! Pangu Pro MoE With 72 Billion Parameters, Trained on 4,000 Ascend Chips
Hua Er Jie Jian Wen·2025-06-30 07:27

Core Insights
- Huawei has announced the open-sourcing of its Pangu models, including a 7 billion parameter dense model and the 72 billion parameter Pangu Pro MoE mixture-of-experts (MoE) model, marking a significant step in the domestic large-model open-source race [1][3][20]

Model Performance
- Pangu Pro MoE achieves a single-card inference throughput of 1,148 tokens/s on the Ascend 800I A2, which speculative acceleration can raise to 1,528 tokens/s, outperforming dense models of comparable size (the draft-and-verify idea behind speculative decoding is sketched below) [3][11]
- Pangu Pro MoE is built on the MoGE (Mixture of Grouped Experts) architecture, with 72 billion total parameters and 16 billion active parameters per token, and is optimized specifically for Ascend hardware (see the grouped-routing sketch below) [4][11]

Training and Evaluation
- Huawei used 4,000 Ascend NPUs to pre-train the model on a high-quality corpus of 13 trillion tokens, split into general, reasoning, and annealing phases to progressively strengthen model capabilities (a schematic of such a phased curriculum follows the sketches below) [11]
- Pangu Pro MoE posts strong results across benchmarks, including a score of 91.2 on DROP, close to the best current models [12][14]

Competitive Landscape
- The open-sourcing of the Pangu models coincides with a wave of domestic AI model releases: leading companies such as MiniMax and Alibaba have also upgraded their open-source models, and large-model prices have dropped by 60%-80% [3][20]
- Pangu Pro MoE ranks fifth on the SuperCLUE Chinese large-model benchmark, ahead of several established models, indicating a competitive position in the market [17][18]

Technological Integration
- Huawei's full-stack ecosystem of chips (Ascend NPU), frameworks (MindSpore), and models (Pangu) represents a significant technological achievement, offering a viable high-performance alternative to Nvidia's dominance in the industry [20]
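
To make the MoGE point concrete, below is a minimal, hypothetical sketch of grouped top-k routing in PyTorch: experts are partitioned into equal groups and the router activates the same number of experts in every group, which is the load-balancing idea MoGE is described as using so that each Ascend device hosting a group does equal work. All names and sizes (GroupedTopKRouter, the 64-expert/8-group layout) are illustrative assumptions, not Huawei's implementation.

```python
# Hypothetical sketch of grouped top-k expert routing in the spirit of MoGE:
# experts are split into equal groups and the router picks the same number of
# experts from every group, so every group carries an equal share of the
# active compute. Sizes below are placeholders, not Pangu Pro MoE's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupedTopKRouter(nn.Module):
    def __init__(self, hidden_size=1024, num_experts=64, num_groups=8, k_per_group=1):
        super().__init__()
        assert num_experts % num_groups == 0
        self.num_groups = num_groups
        self.experts_per_group = num_experts // num_groups
        self.k_per_group = k_per_group
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, x):
        # x: [tokens, hidden_size]
        logits = self.gate(x)                                  # [tokens, num_experts]
        logits = logits.view(x.size(0), self.num_groups, self.experts_per_group)
        # Select the top-k experts *within each group*, so every group
        # (e.g. every device group hosting those experts) activates equally often.
        topk_vals, topk_idx = logits.topk(self.k_per_group, dim=-1)
        weights = F.softmax(topk_vals.flatten(1), dim=-1)      # mixing weights over chosen experts
        # Convert within-group indices back to global expert ids.
        group_offsets = torch.arange(self.num_groups, device=x.device) * self.experts_per_group
        expert_ids = (topk_idx + group_offsets.view(1, -1, 1)).flatten(1)
        return expert_ids, weights


if __name__ == "__main__":
    router = GroupedTopKRouter()
    tokens = torch.randn(4, 1024)
    ids, w = router(tokens)
    print(ids.shape, w.shape)  # (4, 8), (4, 8): one expert per group per token
```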
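
The throughput gain from 1,148 to 1,528 tokens/s is attributed to speculative acceleration. The article does not specify the exact method Huawei uses, so the sketch below shows only the generic draft-and-verify pattern behind speculative decoding: a cheap draft model proposes several tokens, the expensive target model checks them, and only the agreed prefix is kept. The function names and toy models are assumptions for illustration.

```python
# Minimal sketch of the draft-and-verify pattern behind speculative decoding.
# The specific acceptance rule and draft model in Huawei's "speculative
# acceleration" are not described in the article; this is the generic idea.
from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],   # expensive model: next token for a prefix
    draft_next: Callable[[List[int]], int],    # cheap model: next token for a prefix
    prompt: List[int],
    max_new_tokens: int = 16,
    draft_len: int = 4,
) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new_tokens:
        # 1. Draft: let the small model run ahead for a few tokens.
        draft, ctx = [], list(out)
        for _ in range(draft_len):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Verify: the target model scores the same positions; keep the
        #    longest agreed prefix, plus one token of the target's own.
        accepted, ctx = [], list(out)
        for t in draft:
            t_target = target_next(ctx)
            if t_target == t:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(t_target)  # target's token replaces the first miss
                break
        out.extend(accepted)
    return out[: len(prompt) + max_new_tokens]

# Toy usage: both "models" just count upward, so every drafted token is accepted.
print(speculative_decode(lambda c: c[-1] + 1, lambda c: c[-1] + 1, [0], max_new_tokens=8))
```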
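
Finally, a hypothetical sketch of what a three-phase pre-training curriculum (general, reasoning, annealing) over a 13-trillion-token budget could look like. The token splits, learning rates, and data mixes below are placeholders invented for illustration; the article only states that the three phases exist.

```python
# Hypothetical three-phase curriculum (general -> reasoning -> annealing),
# only to illustrate "progressively enhancing capabilities" over 13T tokens.
# All numbers and data-mix keys are illustrative, not Huawei's schedule.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    tokens: float    # training tokens for this phase, in trillions
    peak_lr: float   # peak learning rate during the phase
    data_mix: dict   # sampling weights over data sources

CURRICULUM = [
    Phase("general",   tokens=9.0, peak_lr=3e-4,
          data_mix={"web": 0.7, "books": 0.2, "code": 0.1}),
    Phase("reasoning", tokens=3.0, peak_lr=1e-4,
          data_mix={"math": 0.3, "code": 0.3, "web": 0.4}),
    Phase("annealing", tokens=1.0, peak_lr=3e-5,   # lower LR, higher-quality data upweighted
          data_mix={"curated": 0.6, "math": 0.2, "code": 0.2}),
]

assert abs(sum(p.tokens for p in CURRICULUM) - 13.0) < 1e-9  # matches the 13T-token total
for p in CURRICULUM:
    print(f"{p.name:9s} {p.tokens:>4.1f}T tokens  lr={p.peak_lr}")
```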