China's AI Highway: Huawei Offers an Open-Source, Open Blueprint
量子位· 2025-09-23 11:01
Core Viewpoint
- Huawei is leading the development of an open, shared AI computing ecosystem through its innovative supernode architecture, which aims to build an "AI highway" that benefits industries and players of all sizes [1][2][26].

Group 1: Supernode Technology
- Huawei unveiled the supernode architecture at the Huawei Connect conference, introducing a range of supernode products covering scenarios from data centers to workstations [3].
- The Atlas 950 SuperPoD is designed for large-scale AI computing tasks, with system-level innovations including zero-cable interconnect and enhanced cooling reliability [4].
- Compared with NVIDIA's upcoming products, the Atlas 950 supernode shows significant advantages in card scale, total computing power, memory capacity, and interconnect bandwidth, reaching 56.8x, 6.7x, 15x (1,152 TB), and 62x (16.3 PB/s) respectively [5].

Group 2: Open Source and Collaboration
- Huawei is fully opening its supernode technology to the industry, enabling shared technological benefits and collaborative innovation [16].
- The company is also opening its hardware components, including NPU modules and AI cards, so that customers and partners can build on them incrementally [18].
- On the software side, Huawei is open-sourcing its operating system components, allowing users to integrate and maintain versions according to their needs [20].

Group 3: Industry Impact and Ecosystem
- The supernode technology is designed to serve industries including internet, finance, telecommunications, and manufacturing, improving computing efficiency and business capabilities [29].
- The UnifiedBus protocol provides high-bandwidth, low-latency interconnectivity among computing and storage units, addressing traditional cluster reliability issues (a back-of-the-envelope sketch of why bandwidth and latency dominate follows this summary) [33].
- Huawei's approach fosters an open ecosystem in which different hardware manufacturers and software developers can collaborate, breaking down barriers in the AI computing landscape [42].

Group 4: Future Prospects
- The Atlas 950 SuperCluster is set to be 2.5x larger and 1.3x more powerful than the current largest global cluster, xAI Colossus, positioning Huawei as a leader in computing power [48].
- By promoting an open, collaborative AI computing environment, Huawei aims to establish a sustainable and secure foundation for China's AI industry, potentially leading to a new cycle of innovation [52][53].
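To make the bandwidth and latency comparison above more concrete, here is a minimal back-of-the-envelope sketch of how link bandwidth and per-hop latency drive the cost of a single gradient synchronization step in a large cluster. The payload size, bandwidth, and latency values are illustrative assumptions, not published Atlas or NVIDIA figures.

```python
# Back-of-the-envelope model of one ring all-reduce step across a large cluster.
# All numeric parameters are illustrative assumptions, not vendor specifications.

def ring_allreduce_seconds(num_cards: int, payload_bytes: float,
                           link_bandwidth_bytes_s: float,
                           per_hop_latency_s: float) -> float:
    """Classic ring all-reduce cost: 2*(N-1) hops, each moving ~payload/N bytes."""
    hops = 2 * (num_cards - 1)
    per_hop_bytes = payload_bytes / num_cards
    return hops * (per_hop_bytes / link_bandwidth_bytes_s + per_hop_latency_s)

if __name__ == "__main__":
    payload = 100e9  # assume 100 GB of gradients synchronized per step
    fabrics = [
        ("conventional fabric (assumed: 50 GB/s, 10 us/hop)", 50e9, 10e-6),
        ("supernode-class fabric (assumed: 1 TB/s, 200 ns/hop)", 1e12, 200e-9),
    ]
    for name, bandwidth, latency in fabrics:
        t = ring_allreduce_seconds(8192, payload, bandwidth, latency)
        print(f"{name}: ~{t:.2f} s per synchronization step")
```

Under these toy numbers, the same synchronization step drops from several seconds to a fraction of a second, which is the kind of gap the interconnect bandwidth figures cited above are meant to close.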
[招商电子] In-Depth Tracking of the Domestic Computing Chip Chain: Huawei Discloses a Three-Year AI Chip Roadmap as Domestic Self-Reliance Accelerates
招商电子· 2025-09-19 15:21
Core Viewpoint
- At Huawei Connect 2025, Huawei showcased the Lingqu (UnifiedBus) unified interconnection protocol and announced the Ascend 950/960/970 and Kunpeng 950/960 roadmaps for the next three years, underscoring the steady advance of domestic AI computing chip capabilities amid US-China tensions [9][58].

Group 1: AI Computing Chip Development
- The Ascend NPU roadmap includes the release of the Ascend 950 (PR and DT versions) in 2026, followed by the 960 in 2027 and the 970 in 2028, each with significant performance improvements [15][20].
- The Kunpeng CPU line will see the Kunpeng 950 launch in late 2026 and the Kunpeng 960 in early 2028, supporting advanced computing needs [20][34].
- Domestic chip makers such as Haiguang and Cambricon are projecting substantial revenue growth, with Haiguang targeting a CAGR of 44% over three years (a quick compound-growth calculation follows this summary) [3][58].

Group 2: Advanced Manufacturing and Semiconductor Industry
- The domestic lithography machine industry is focusing on complete machines and related components, with advanced-process capacity expansion expected by 2026 [3][59].
- The domestic semiconductor industry is expected to benefit from accelerating demand for independent, controllable supply chains, particularly in advanced logic and storage production lines [3][62].

Group 3: Storage and Edge Computing
- Demand for inference and edge-computing storage is rising, with significant growth expected in AI PCs, smartphones, and wearable devices by 2026 [4][58].
- Domestic manufacturers are expanding their enterprise storage product lines, with companies such as Jiangbolong and Baiwei Storage launching new enterprise-grade solutions [4][58].

Group 4: Investment Recommendations
- Investment opportunities are suggested in AI computing chips, high-end chip manufacturing, packaging, storage, and related equipment and materials [5].
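As a quick sanity check on the growth figure cited above, here is a one-liner of compound-growth arithmetic; the 44% three-year CAGR comes from the summary, and everything else is plain arithmetic.

```python
# What a 44% CAGR sustained over three years implies for cumulative revenue growth.
cagr = 0.44          # three-year compound annual growth rate cited in the summary
years = 3
multiplier = (1 + cagr) ** years
print(f"Revenue multiple after {years} years at {cagr:.0%} CAGR: {multiplier:.2f}x")
# Roughly 2.99x, i.e. revenue would approximately triple over the period.
```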
Huawei's Supernode: Driving 10,000-Card AI Clusters with the Logic of "a Single Machine"
机器之心· 2025-09-19 13:23
Core Viewpoint
- The article discusses Huawei's innovative "supernode" architecture, which aims to redefine large-scale effective computing power in AI by addressing the limitations of traditional server architectures and enhancing interconnectivity through the self-developed UnifiedBus protocol [3][4][12].

Group 1: Supernode Architecture
- The supernode architecture is a deep restructuring of computing system architecture, moving from a "stacked" model to a "fused" model in which multiple machines function as a single device [4][9].
- It aims to eliminate the communication bottlenecks of traditional server setups, where data exchange between servers introduces significant delays and inefficiencies [5][11].
- Huawei's supernode can reduce communication latency to the nanosecond level, significantly improving cluster utilization and lowering communication costs, with the goal of achieving linear scalability of effective computing power (a toy scaling model follows this summary) [11][12].

Group 2: Product Offerings
- Huawei introduced the Atlas 950 SuperPoD and Atlas 960 SuperPoD, which support 8,192 and 15,488 Ascend cards respectively and lead in key metrics such as card scale, total computing power, memory capacity, and interconnect bandwidth [17][20].
- The Atlas 850, an enterprise-grade air-cooled AI supernode server, lowers the barrier for enterprises to adopt the supernode architecture without requiring complex liquid-cooling retrofits [21].
- The TaiShan 950 SuperPoD extends the supernode architecture to general-purpose computing, offering ultra-low latency and memory pooling that benefit databases and big-data workloads [25].

Group 3: Ecosystem Strategy
- Huawei emphasizes an ecosystem strategy of "hardware openness and software open source," encouraging industry partners to pursue secondary development and enrich product offerings based on the UnifiedBus protocol [26][28].
- The company aims to build a unified, scalable computing foundation that delivers a consistent, high-performance computing experience across environments, from cloud to enterprise [28].
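To illustrate the "linear scalability of effective computing power" claim in the summary above, here is a toy scaling model in which each training step pays a fixed compute cost plus a communication cost that grows with cluster size. The per-card throughput and communication constants are assumptions chosen purely for illustration, not Huawei or vendor data.

```python
# Toy model: effective throughput of an N-card cluster when each step spends a
# fixed compute time plus a communication time that grows with cluster size.
# All constants are illustrative assumptions, not measured figures.

def effective_pflops(num_cards: int, per_card_pflops: float,
                     compute_s: float, comm_s_per_card: float) -> float:
    """Effective throughput = peak throughput * fraction of the step spent computing."""
    comm_s = comm_s_per_card * num_cards            # crude linear communication model
    utilization = compute_s / (compute_s + comm_s)
    return num_cards * per_card_pflops * utilization

if __name__ == "__main__":
    fabrics = [("conventional fabric (assumed)", 1e-4),
               ("low-latency fused fabric (assumed)", 2e-6)]
    for label, comm_cost in fabrics:
        for n in (1024, 4096, 8192):
            eff = effective_pflops(n, per_card_pflops=1.0,
                                   compute_s=1.0, comm_s_per_card=comm_cost)
            print(f"{label}, {n} cards: ~{eff:.0f} effective PFLOPS "
                  f"({eff / n:.0%} of linear)")
```

The closer the curve stays to 100% of linear as card count grows, the closer the cluster is to the "one big machine" behavior the supernode design targets.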
Huawei Announces a Supernode Architecture That Deeply Interconnects Multiple Physical Machines
Xin Lang Ke Ji· 2025-09-18 06:39
Core Viewpoint
- Huawei has introduced an innovative supernode architecture aimed at redefining large-scale effective computing power, emphasizing open source and hardware openness to foster industry collaboration and innovation [2][3].

Group 1: Supernode Architecture
- The supernode architecture deeply interconnects multiple physical machines so that they can learn, reason, and think as a single logical unit (an illustrative resource-pooling sketch follows this summary) [2].
- The architecture is designed to meet the computing needs of large data centers, enterprise data centers, and small workstations across industries [2].
- Its key features include resource pooling, scalable expansion, and reliable operation, enabling high-bandwidth, low-latency interconnection of computing and storage units [2].

Group 2: New Product Launch
- Huawei has launched several new products based on the supernode architecture, including the Atlas 950 SuperPoD AI supernode, the Atlas 850 and Atlas 860 enterprise-grade AI supernode servers, the Atlas 350 next-generation AI card, and the first general-purpose supernode, the TaiShan 950 SuperPoD [2].
- These products are designed to strengthen data center capabilities and support a wide range of computing scenarios [2].

Group 3: Open Source Commitment
- Huawei is fully opening its supernode technology to share the technological benefits with the industry and promote inclusive, collaborative innovation [3].
- The operating system components for the Lingqu (UnifiedBus) protocol will be open-sourced, with code contributed upstream to open-source communities such as openEuler [3].
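To illustrate the "single logical unit" and resource-pooling ideas described above, here is a deliberately simplified sketch of a pool that aggregates several physical nodes and places work across them as if they were one machine. The class and method names are hypothetical and do not correspond to any Huawei API; this is a conceptual toy, not an implementation of the supernode software stack.

```python
# Toy illustration of the "many machines, one logical device" idea: a pool that
# aggregates the NPUs and memory of several physical nodes and hands out work
# as if it were a single machine. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class PhysicalNode:
    name: str
    npus: int
    memory_gb: int

class LogicalSupernode:
    """Presents a set of physical nodes as one pooled compute/memory resource."""

    def __init__(self, nodes: list[PhysicalNode]):
        self.nodes = nodes

    @property
    def total_npus(self) -> int:
        return sum(n.npus for n in self.nodes)

    @property
    def total_memory_gb(self) -> int:
        return sum(n.memory_gb for n in self.nodes)

    def place(self, npus_needed: int) -> list[tuple[str, int]]:
        """Greedy placement of an NPU request across the pooled nodes."""
        plan, remaining = [], npus_needed
        for node in self.nodes:
            if remaining <= 0:
                break
            take = min(node.npus, remaining)
            plan.append((node.name, take))
            remaining -= take
        if remaining > 0:
            raise RuntimeError("request exceeds pooled capacity")
        return plan

if __name__ == "__main__":
    pod = LogicalSupernode([PhysicalNode(f"node-{i}", npus=8, memory_gb=1024)
                            for i in range(4)])
    print(pod.total_npus, "NPUs,", pod.total_memory_gb, "GB pooled memory")
    print(pod.place(20))  # a single request transparently spans three physical nodes
```

The point of the sketch is only that callers see aggregate capacity and a single placement interface; the hard part the supernode architecture addresses, fast and reliable interconnect between those nodes, is exactly what the UnifiedBus (Lingqu) protocol is described as providing.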