Keeping the computing-power carrier on a steady course: Huawei discloses for the first time the ballast of its Ascend computing infrastructure
21世纪经济报道·2025-06-09 12:08

Core Viewpoint
- The article discusses advancements in AI computing clusters, emphasizing their critical role in enhancing the capabilities of AI models through innovative engineering solutions and fault tolerance mechanisms [1]

Group 1: Supernode High Availability
- AI training and inference require continuous operation; each computer in the cluster has a backup so that tasks continue seamlessly during failures [1]
- Huawei's fault tolerance solutions span system-level, business-level, and operational-level strategies to manage faults gracefully [1]

Group 2: Cluster Linearity
- The ideal computing cluster scales linearly, with performance increasing in proportion to the number of computers [1]
- Huawei employs advanced task allocation algorithms and technologies to achieve high linearity in model training, with results showing linearity rates of 96% for various configurations [1]

Group 3: Rapid Recovery in Large-Scale Training
- The system automatically saves training progress, allowing quick recovery from failures without starting over [1]
- Innovations include process-level rescheduling and online recovery techniques that reduce recovery times to under 3 minutes [1]

Group 4: Large-Scale MoE Model Inference Recovery
- The article outlines a three-tier fault tolerance strategy for large-scale MoE model inference that minimizes user impact during hardware failures [1]
- Techniques such as rapid instance restart and token-level retries have been validated to reduce recovery times significantly [1]

Group 5: Fault Management and Diagnostic Awareness
- A real-time monitoring system continuously tracks the health of each computer in the cluster, enabling quick fault detection and diagnosis [1]
- Huawei's comprehensive fault management solutions enhance reliability through advanced diagnostic capabilities and proactive maintenance strategies [1]
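The backup-takeover behavior described in Group 1 can be sketched as a simple primary/backup failover. This is an illustrative toy, not Huawei's actual mechanism; the node names and `run_task` function are invented for the example.

```python
def run_task(node: str, task: str) -> str:
    # Stand-in for dispatching work to one machine in the cluster;
    # a node named "down-..." models a hardware failure.
    if node.startswith("down"):
        raise ConnectionError(f"{node} is unreachable")
    return f"{task} done on {node}"

def run_with_failover(primary: str, backup: str, task: str) -> str:
    # If the primary fails mid-task, transparently re-run on the backup
    # so the overall job continues without operator intervention.
    try:
        return run_task(primary, task)
    except ConnectionError:
        return run_task(backup, task)

# The caller never sees the primary's failure.
assert run_with_failover("down-node-0", "node-1", "step") == "step done on node-1"
```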
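The linearity rate cited in Group 2 is commonly defined as achieved throughput divided by ideal (perfectly proportional) throughput. A minimal sketch of that metric, with made-up throughput figures:

```python
def cluster_linearity(throughput_1: float, throughput_n: float, n: int) -> float:
    # Linearity = throughput on n nodes / ideal throughput (n * single-node).
    # A value of 1.0 means perfectly proportional scaling.
    return throughput_n / (n * throughput_1)

# An 8-node cluster delivering 7.68x single-node throughput scales at 96%.
assert round(cluster_linearity(100.0, 768.0, 8), 2) == 0.96
```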
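The automatic progress saving in Group 3 is, at its core, periodic checkpointing: persist the training state so a restart resumes from the last saved step instead of from scratch. A minimal sketch, assuming a JSON file on disk (the file layout and step counts are illustrative, not the article's implementation):

```python
import json
import os
import tempfile

CKPT = os.path.join(tempfile.mkdtemp(), "ckpt.json")

def save_checkpoint(step: int, state: dict) -> None:
    # Write to a temp file, then rename: os.replace is atomic, so a crash
    # mid-save can never leave a corrupted checkpoint behind.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)

def load_checkpoint() -> tuple:
    if not os.path.exists(CKPT):
        return 0, {}                      # cold start: no checkpoint yet
    with open(CKPT) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]    # resume, not restart

start, state = load_checkpoint()
for step in range(start, 10):             # 10 stands in for the total step count
    state["loss"] = 1.0 / (step + 1)      # stand-in for one real training step
    save_checkpoint(step + 1, state)      # after a failure, training resumes here
```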
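The token-level retry mentioned in Group 4 means re-issuing only the failed decoding step on another instance, rather than restarting the whole request. A rough sketch under that assumption; `generate_token` and the instance names are hypothetical stand-ins:

```python
def generate_token(instance: str, prompt: str) -> str:
    # Stand-in for one decoding step on an inference instance;
    # the "faulty" instance models a hardware failure.
    if instance == "faulty":
        raise ConnectionError("instance lost")
    return prompt[-1] if prompt else "?"

def token_with_retry(instances: list, prompt: str, max_retries: int = 3) -> str:
    # On failure, retry just this token on the next instance in the pool,
    # so the user-visible request survives a single instance going down.
    last_err = None
    for attempt in range(max_retries):
        inst = instances[attempt % len(instances)]
        try:
            return generate_token(inst, prompt)
        except ConnectionError as err:
            last_err = err
    raise last_err

# The first instance fails; the retry on the healthy one succeeds.
assert token_with_retry(["faulty", "healthy-1"], "abc") == "c"
```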
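The real-time health tracking in Group 5 reduces, in its simplest form, to a heartbeat check: a node that has not reported within a timeout is flagged for diagnosis. A toy sketch with invented node names and timings:

```python
import time

def check_heartbeats(last_seen: dict, timeout: float, now: float) -> list:
    # A node whose last heartbeat is older than `timeout` seconds is flagged
    # as faulty, so diagnosis can start before the failure cascades.
    return [node for node, t in last_seen.items() if now - t > timeout]

now = time.time()
heartbeats = {"node-0": now, "node-1": now - 45.0}   # node-1 silent for 45 s
assert check_heartbeats(heartbeats, timeout=30.0, now=now) == ["node-1"]
```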
Group 6: Simulation Modeling
- The article introduces a Markov modeling simulation platform that pre-tests AI models in a virtual environment, identifying potential bottlenecks before real-world deployment [1]
- This approach optimizes resource allocation and enhances the overall efficiency of the computing cluster [1]

Group 7: Framework Migration
- Huawei's MindSpore framework supports seamless integration with mainstream ecosystems, facilitating the deployment of large models and improving inference performance [1]
- The framework includes tools for adapting third-party frameworks, ensuring compatibility and efficiency in AI model training and inference [1]
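Group 6's Markov-based simulation can be illustrated with the simplest possible case: a two-state (up/down) availability chain, simulated and compared against its closed-form stationary distribution. All rates here are invented for illustration and say nothing about the actual platform's models:

```python
import random

def steady_state_availability(p_fail: float, p_repair: float) -> float:
    # Stationary probability of the UP state in a two-state Markov chain.
    return p_repair / (p_fail + p_repair)

def simulate_availability(p_fail: float, p_repair: float,
                          steps: int, seed: int = 0) -> float:
    # Discrete-time simulation of the same chain: UP -> DOWN with prob p_fail,
    # DOWN -> UP with prob p_repair; returns the fraction of steps spent UP.
    rng = random.Random(seed)
    up, up_steps = True, 0
    for _ in range(steps):
        if up:
            up_steps += 1
            if rng.random() < p_fail:
                up = False
        elif rng.random() < p_repair:
            up = True
    return up_steps / steps

# The simulated fraction converges to the closed-form stationary value,
# which is how such a platform can predict availability before deployment.
analytic = steady_state_availability(0.01, 0.5)
simulated = simulate_availability(0.01, 0.5, 200_000)
assert abs(simulated - analytic) < 0.01
```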