The Super Node Era
Downloads Top 13 Million: 昇思 MindSpore Leads the AI Framework into the "Super Node Era"
AI前线 · 2025-12-30 05:32
Core Insights
- The MindSpore community has achieved significant growth: over 13 million cumulative downloads, more than 52,000 core contributors, and over 120,000 code contributions, serving users in over 150 countries and regions [2]
- MindSpore has developed three core capabilities in AI frameworks, focusing on collaboration with training acceleration libraries, model communities, and evaluation tools [3]
- The rise of large language models has shifted computational paradigms from single-machine to cluster-based approaches, driving the development of various parallelization techniques [4]

Group 1
- MindSpore supports over 25 model types, providing comprehensive out-of-the-box capabilities for script development, parallel training, fine-tuning, and deployment [3]
- The framework has achieved over 15% performance improvement in large-model inference scenarios through seamless integration with the vLLM community [3]
- MindSpore's HyperParallel architecture treats a super node as a single supercomputer, enhancing programming and scheduling capabilities [6]

Group 2
- The HyperParallel architecture introduces key technologies such as Hyperoffload, which separates computation and state to alleviate storage bottlenecks, improving training performance by approximately 20% and extending supported sequence length by about 70% in inference scenarios [4]
- MindSpore's native support for ultra-large-scale cluster parallelism covers tens of thousands of computing nodes and supports trillion-parameter models [5]
- The framework has been deployed across a wide range of devices, from data-center servers to small terminals, establishing itself as a foundational AI capability for numerous smart devices [5]

Group 3
- The official version of the HyperParallel architecture, along with acceleration suites for multimodal and reinforcement learning, will be released in the first half of next year [7]
- Future development of the MindSpore community will focus on edge intelligence, open architecture, and industry enablement, covering large models and agent acceleration [7]
- The introduction of HyperMPMD and HyperShard aims to improve resource utilization and significantly reduce the time needed to adapt models for parallel execution [11]
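The Hyperoffload bullet above describes separating computation from optimizer state so that device memory holds only what the current step needs. The following is a minimal, framework-free sketch of that idea in plain NumPy; `OffloadedSGDMomentum` and its "host store" are hypothetical illustrations of the compute/state split, not MindSpore APIs.

```python
import numpy as np

class OffloadedSGDMomentum:
    """Toy optimizer whose momentum state lives in a separate 'host'
    store: the device only ever holds parameters and gradients, while
    optimizer state is streamed in and evicted every step."""

    def __init__(self, lr=0.05, beta=0.9):
        self.lr = lr
        self.beta = beta
        self.host_state = {}  # stand-in for host (CPU/DRAM) memory

    def step(self, name, param, grad):
        # "Fetch" momentum from the host store; a real system would
        # overlap this transfer with computation to hide latency.
        m = self.host_state.get(name, np.zeros_like(param))
        m = self.beta * m + grad
        new_param = param - self.lr * m
        # "Evict" the updated state back to host, freeing device memory.
        self.host_state[name] = m
        return new_param

# Usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
opt = OffloadedSGDMomentum()
w = np.array([0.0])
for _ in range(200):
    grad = 2 * (w - 3.0)
    w = opt.step("w", w, grad)
print(float(w[0]))  # converges near 3.0
```

The design point is that peak device memory scales with parameters plus gradients only; the momentum (and, in a real Adam-style optimizer, both moment buffers) never occupies device memory between steps, which is where the reported capacity gains would come from.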
昇思 MindSpore AI Framework Downloads Surpass 13 Million
Huan Qiu Wang Zi Xun · 2025-12-26 00:56
Core Insights
- The MindSpore AI framework has surpassed 13 million downloads and is utilized in 156 countries, with over 52,000 community contributors, indicating its global reach and community engagement [1][3]
- The framework focuses on innovations in super-node technology, aiming to lead the AI framework into the "super node era" with advanced technology and user-friendly experiences [1][3]
- The development of the "Yufeng·Zhiying" intelligent design system for civil aircraft, based on the MindSpore framework, showcases its application in the aerospace industry [3]

Group 1
- The MindSpore framework is designed to be super-node friendly, integrating various scenarios and promoting an open architecture to facilitate intelligent transformation across industries [1][3]
- The HyperParallel architecture treats a super node as a "supercomputer," enabling advanced programming and scheduling capabilities that enhance the performance of new AI models [4]
- The conference recognized outstanding developers and evangelists, highlighting the community's commitment to fostering talent in the AI sector [4]

Group 2
- The evolution of AI models is moving towards long sequences and sparse trillion-level structures, presenting both challenges and opportunities for AI frameworks [3]
- The MindSpore open-source community emphasizes collaborative governance and aims to enhance the developer experience while supporting AI talent development [3][4]
- The event was co-hosted by various organizations, indicating a collaborative effort in advancing AI technology and community engagement [4]
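The "programming and scheduling" claim above can be made concrete with the MPMD (multiple-program, multiple-data) pattern that the companion reports attribute to HyperMPMD: unlike classic SPMD training, different ranks of one super node run different programs (for example, prefill versus decode workers in inference). Below is a toy thread-based sketch; the role names and `launch_mpmd` helper are entirely hypothetical and are not MindSpore APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Two distinct "programs" that different ranks may run in an
# MPMD deployment (both are stand-ins for real workloads).
def prefill(rank, prompt):
    return (rank, "prefill", len(prompt))    # pretend to build a KV cache

def decode(rank, prompt):
    return (rank, "decode", prompt.upper())  # pretend to generate tokens

ROLE_PROGRAMS = {"prefill": prefill, "decode": decode}

def launch_mpmd(role_of_rank, prompt):
    """Dispatch each rank to the program its assigned role demands,
    running all ranks concurrently (threads stand in for devices)."""
    with ThreadPoolExecutor(max_workers=len(role_of_rank)) as pool:
        futures = [
            pool.submit(ROLE_PROGRAMS[role], rank, prompt)
            for rank, role in enumerate(role_of_rank)
        ]
        return [f.result() for f in futures]

# Usage: an irregular 2-prefill / 1-decode split over three ranks.
results = launch_mpmd(["prefill", "prefill", "decode"], "hello")
print(results)
```

The point of such heterogeneous, irregular role assignment is that ranks with different workloads can be provisioned in whatever ratio keeps all of them busy, which is one route to the resource-utilization gains the reports describe.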
Five Years After Open-Sourcing, 昇思 MindSpore Downloads Exceed 13 Million as AI Frameworks Enter the "Super Node Era"
Xin Lang Cai Jing · 2025-12-25 12:14
Core Insights
- The conference focused on the theme "MindSpore for Super Nodes," highlighting innovation in super node technology and the introduction of the HyperParallel architecture to accelerate new model structures and training paradigms in AI frameworks [2][3]

Group 1: MindSpore Framework
- MindSpore aims to create an AI framework that is super-node friendly, fully integrated across various scenarios, open in architecture, and agile in enabling technology [2]
- Since its open-source launch on March 28, 2020, MindSpore has seen over 13 million downloads, covering 156 countries and regions, with more than 120,000 merge requests and over 52,000 community contributors [2]
- MindSpore supports over 25 large-model series, has 2,000+ community partners, and has facilitated over 3,100 industry application practices, contributing to nearly 2,500 academic papers, ranking first in China and second globally among AI frameworks [2]

Group 2: Super Node Technology
- The rapid development of large-model technology is leading to models with long sequences and sparse structures, transitioning AI infrastructure from the "server cluster era" to the "super node era" [3]
- The HyperParallel architecture treats a super node as a "supercomputer" for programming and scheduling, leveraging its advantages to achieve features such as HyperShard declarative parallel programming and HyperMPMD heterogeneous irregular parallelism [3]
- The framework enhances resource utilization, which is crucial for training large models and practical AI applications, improving task-scheduling efficiency compared to other AI frameworks [4]
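Declarative parallel programming in the HyperShard style means annotating tensors with a layout specification and letting the framework derive each device's local shard, instead of hand-rewriting model code for every parallel strategy. The sketch below illustrates the idea under an assumed 8-device mesh (4-way data parallel × 2-way tensor parallel); the spec format, mesh-axis names, and `local_shape` helper are hypothetical and are not MindSpore's API.

```python
# Assumed device mesh for one super node: 4-way data parallelism
# ("dp") times 2-way tensor parallelism ("tp") = 8 devices.
MESH = {"dp": 4, "tp": 2}

def local_shape(global_shape, spec, mesh=MESH):
    """Derive one device's shard shape from a declarative layout spec.

    spec[i] names the mesh axis that shards tensor dimension i,
    or None if that dimension is replicated on every device.
    """
    shape = []
    for dim, axis in zip(global_shape, spec):
        if axis is None:
            shape.append(dim)            # replicated dimension
        else:
            assert dim % mesh[axis] == 0, "dim must divide the mesh axis"
            shape.append(dim // mesh[axis])  # evenly sharded dimension
    return tuple(shape)

# A [batch=32, hidden=4096] activation sharded on the dp axis, and a
# [4096, 16384] weight sharded column-wise on the tp axis:
print(local_shape((32, 4096), ("dp", None)))    # (8, 4096)
print(local_shape((4096, 16384), (None, "tp"))) # (4096, 8192)
```

Changing the parallel strategy then amounts to editing the spec strings rather than the model code, which is the mechanism behind the claimed reduction in parallelization-modification time.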
ZTE Corporation (中兴通讯) 20251010
2025-10-13 01:00
Summary of ZTE Corporation Conference Call

Industry Overview
- The conference call discusses advancements in the AI computing industry, particularly the emergence of the "super node" era, which is characterized by the integration of computing, storage, and networking [3][4][7]
- Key players in this space include NVIDIA and Huawei, with NVIDIA's GB200 NVL72 and Huawei's CloudMatrix 384 and Atlas 900 SuperPod highlighted as leading solutions [2][5]

Key Points on ZTE Corporation
- ZTE has launched the "Intelligent Computing Super Node System," which won the Annual Major Breakthrough Achievement Award at the 2025 China Computing Conference; the system is based on high-speed interconnects between GPU cards and features self-developed AI switching chips [9]
- The Nebula 64 super node system supports multiple mainstream interconnection protocols, enabling large-scale interconnectivity [9]
- ZTE's server and storage revenue grew over 200% year-on-year in the first half of 2025, with AI servers accounting for 55% of this revenue [11]
- The company has secured multiple large procurement projects totaling over 16 billion yuan, ranking first in these bids [11]
- ZTE's self-developed components, such as microelectronics, motherboards, and network cards, enhance profit margins and ensure supply-chain security [10]

Competitive Landscape
- The "super node" era emphasizes system-level capabilities over individual card capabilities, with a focus on high bandwidth, low latency, and reliability [7]
- System vendors like ZTE and Huawei have a competitive advantage in the computing field due to their strong interconnect capabilities and experience in the telecommunications sector [8]
- ZTE's automated production facilities and intelligent-manufacturing technologies significantly improve production efficiency and quality [10]

Technological Developments
- ZTE holds over 5,000 patents in the AI field and has developed more than 130 types of self-developed chips, spanning computing, networking, and terminal products across the ICT industry [12][10]
- The company is actively involved in developing large-scale AI computing clusters, with plans to support up to 2,084 GPUs in a single server [11]

Future Outlook
- ZTE is positioned as a key player in the IT and computing sectors, with a strong focus on AI and intelligent computing; investors are encouraged to monitor ZTE's developments in these areas [17]