Core Viewpoint
- The recent release of the Torch-MUSA v2.7.0 deep learning framework by Moore Threads marks a significant advancement in the company's MUSA ecosystem, showcasing its commitment to rapid iteration and performance optimization in GPU computing [1][4].

Group 1: Product Development
- Moore Threads has updated the Torch-MUSA framework twice within a month, indicating strong investment in and development capability for the MUSA ecosystem [4].
- The new version, v2.7.0, supports more than 1,050 operators and adds features such as Dynamic Double Cast and Distributed Checkpoint, improving performance and stability for large-model training and inference [5][9] (an illustrative usage sketch follows this summary).
- The MUSA architecture integrates GPU hardware and software into a unified system, improving the efficiency of complex computational tasks and optimizing memory usage through support for Unified Memory [9][12].

Group 2: Market Position and Strategy
- Moore Threads positions the MUSA architecture as a potential alternative to the dominant GPU ecosystem built around NVIDIA and CUDA, offering developers a comprehensive software stack [15].
- The company is advancing its business development alongside its IPO process, which cleared approval in a swift 88 days, with expected fundraising of 8 billion yuan [16].
- A strategic partnership with the National Information Center aims to foster collaboration in the field of computing power and contribute to the development of a national integrated computing network [18].
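To make the operator and checkpointing claims more concrete, below is a minimal, hedged sketch of how an out-of-tree PyTorch backend such as Torch-MUSA is typically exercised. The article itself contains no code; the `torch_musa` module name and the `"musa"` device string are assumptions based on PyTorch's custom-backend convention, and the snippet falls back to CPU so it remains runnable without Moore Threads hardware.

```python
# A minimal sketch, not taken from the article: exercising an out-of-tree
# PyTorch backend. "torch_musa" and the "musa" device string are assumptions.
import torch

try:
    import torch_musa  # assumed import that registers the "musa" device with PyTorch
    device = "musa"
except ImportError:
    device = "cpu"  # fall back so the sketch still runs without MUSA hardware

# Build a toy model and move model + data onto the selected device.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).to(device)
inputs = torch.randn(4, 512, device=device)

# Forward pass; operator coverage (the "1,050+ operators" cited above) determines
# how much of a real workload runs on the device without falling back to CPU.
outputs = model(inputs)

# Plain state-dict checkpointing shown for illustration only; the "Distributed
# Checkpoint" feature refers to saving sharded state across ranks, which in
# stock PyTorch lives under torch.distributed.checkpoint.
torch.save(model.state_dict(), "model_ckpt.pt")
print(outputs.shape, "checkpoint written to model_ckpt.pt")
```

The CPU fallback is a deliberate design choice for the sketch: it keeps the example self-contained while still showing where a MUSA device would slot into an ordinary PyTorch training or inference loop.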
Moore Threads' Torch-MUSA Gets a Major Upgrade: 1,050+ Operators Supported as the Deep Learning Ecosystem Continues to Advance
Shanghai Securities News · 2025-11-28 09:43