Moore Threads Technology (688795)
Moore Threads' Zhang Jianzhong: intelligent computing clusters will scale to 500,000 and even 1,000,000 cards
Di Yi Cai Jing· 2025-12-20 08:37
Core Viewpoint
- Moore Threads launched its first-generation GPU cluster in 2024, aims to reach 10,000 cards this year, and plans further expansion to 100,000 cards; chairman Zhang Jianzhong says intelligent computing clusters will eventually scale to 500,000 and even 1,000,000 cards [1]

Group 1: Product Development
- Moore Threads held its first MUSA Developer Conference on December 20, announcing a new GPU architecture and three new chips based on it [1]
- The new architecture, named Huagang, improves computing density by 50% over the previous generation and supports full-precision computation from FP4 to FP64 [1]
- The three new chips are Huashan (an AI training and inference chip), Lushan (a graphics rendering chip), and Changjiang (a system-on-chip) [1]

Group 2: Performance Metrics
- The previous-generation S4000 card delivers 25 TFLOPS (FP32), 49 TFLOPS (TF32), 98 TFLOPS (FP16), and 196 TOPS (INT8) at a maximum power consumption of 450W [2]
- By comparison, NVIDIA's A100 delivers 19.5 TFLOPS (FP32), 156 TFLOPS (TF32), 312 TFLOPS (FP16), and 624 TOPS (INT8) at a maximum power consumption of 300W [2]
- In distributed inference scenarios, the new S5000 card's performance is reported to be roughly 2.5 times and 1.3 times that of commonly used chips, depending on the task [3]

Group 3: Market Position and Financials
- Moore Threads' stock debuted on the Sci-Tech Innovation Board at 114.28 CNY per share and, after significant fluctuations, closed at 664.1 CNY on December 19 [5]
- The company is not yet profitable, with cumulative losses of 1.6 billion CNY as of June this year, but it anticipates profitability by 2027 [5]
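As a rough cross-check of what the quoted specs imply per watt, here is a minimal Python sketch. It uses only the numbers in the summary above; the dictionary keys and the TFLOPS-per-watt framing are illustrative, and peak-spec arithmetic says nothing about real-world workload efficiency.

```python
# Rough TFLOPS-per-watt comparison built only from the figures quoted in the
# summary above. Peak-spec arithmetic only; real efficiency depends on workload.

S4000 = {"FP32": 25, "TF32": 49, "FP16": 98, "INT8": 196, "max_power_w": 450}
A100 = {"FP32": 19.5, "TF32": 156, "FP16": 312, "INT8": 624, "max_power_w": 300}

def per_watt(card):
    """Peak throughput (TFLOPS or TOPS) divided by the quoted maximum power."""
    power = card["max_power_w"]
    return {k: round(v / power, 3) for k, v in card.items() if k != "max_power_w"}

print("S4000 per watt:", per_watt(S4000))  # FP16: ~0.218 TFLOPS/W
print("A100  per watt:", per_watt(A100))   # FP16: ~1.04 TFLOPS/W
```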
51WORLD and Moore Threads deepen their cooperation, unlocking unlimited potential for robot testing and training
Ge Long Hui APP· 2025-12-20 08:20
Group 1
- The core focus of the collaboration between Moore Threads and 51WORLD's simulation platform 51Sim is to build a next-generation physical AI simulation system based on domestic GPU computing power and advanced simulation and world-model technologies [1][3]
- Traditional simulation methods suffer from long construction cycles, high costs, and limited generalization, leading to a "confidence gap" with the real world [3]
- 51Sim's approach of "4DGS reconstruction + generative world model" aims to shift simulation from manual construction to AI-driven generation, addressing the computational demands of physical AI simulation [3]

Group 2
- The partnership has already achieved large-scale applications in the intelligent driving sector, supporting closed-loop validation of end-to-end intelligent driving algorithms [3]
- Moore Threads is a strategic shareholder of 51WORLD, which aims to become the first publicly listed company in the physical AI sector on the Hong Kong Stock Exchange by December 30, 2025 [4]
- The collaboration is expected to bring additional funding for technology development and market expansion, strengthening the synergy and promoting the development of the domestic GPU and physical AI industry chains [4]
Moore Threads: a major release!
Zheng Quan Shi Bao· 2025-12-20 07:54
Core Viewpoint
- Moore Threads launched its new GPU architecture "Huagang" at the MUSA Developer Conference, showcasing significant advances in computing power and energy efficiency [1][2]

Group 1: Product Launch and Features
- The "Huagang" architecture delivers a 50% increase in computing density and a 10-fold improvement in energy efficiency, and supports large-scale intelligent computing clusters of over 100,000 cards [1][2]
- The full-featured GPU includes four main functional engines: AI computing acceleration, graphics rendering, physical simulation and scientific computing, and ultra-high-definition video encoding and decoding [1]
- Based on the new architecture, the company plans to release the high-performance AI training and inference chip "Huashan" and a chip focused on high-performance graphics rendering, "Lushan" [2]

Group 2: Market Position and Financial Performance
- Moore Threads, often described as the "Chinese version of Nvidia," recently went public and saw its stock price rise more than 400% on its first trading day [6]
- The stock has since fluctuated and currently trades at 664.1 yuan per share, down from a peak of 940 yuan [6]
- For the first nine months of 2025, the company reported revenue of 785 million yuan and a net loss of 724 million yuan, with projections indicating a continued net loss for the full year [7]
Global Markets Brace for AI Chip Scramble, EV Slowdown, and Japan’s Economic Resurgence
Stock Market News· 2025-12-20 07:38
AI and Semiconductor Market
- OpenAI has secured deals to purchase approximately 40% of the global raw, undiced DRAM wafer output until 2029, indicating significant demand for advanced memory chips to support AI infrastructure and data centers globally [2][7]
- Moore Threads Technology has emerged as a competitor to Nvidia in the AI chip market, completing a successful IPO in China with shares rising as much as 500%, and aims to reduce reliance on Nvidia's hardware [3][7]

Global EV Market
- The electric vehicle market is slowing, with Asian battery and car manufacturers adjusting strategies due to softening demand, high production costs, and slower consumer adoption [4][7]
- Ford Motor Company is recalling over 270,000 electric and hybrid vehicles in the U.S. due to a parking-function issue that poses a roll-away risk [5][7]

Japan's Corporate Landscape
- Japan is undergoing significant corporate governance reforms, driving a record $350 billion in mergers and acquisitions and signaling a shift toward improved shareholder returns and profitability [6][7]
- Companies and households in Japan are reevaluating their financial strategies following a recent rate hike by the Bank of Japan [8][7]

American Airlines Loyalty Program Changes
- American Airlines will no longer offer AAdvantage miles or Loyalty Points for basic economy fares starting December 17, 2025, aligning its policy with competitors and impacting budget travelers [9][7]
Moore Threads unveils its new GPU architecture and 10,000-card cluster
Guan Cha Zhe Wang· 2025-12-20 07:27
Core Insights
- The article covers the new GPU products Moore Threads launched at the first MUSA Developer Conference, highlighting advances in GPU architecture and AI training chips [1][7]

Group 1: Product Announcements
- Moore Threads unveiled its next-generation GPU architecture "Huagang," which supports full-precision computing from FP4 to FP64, with a 50% increase in computing density and a 10-fold improvement in efficiency [7]
- The company introduced the AI training and inference chip "Huashan" and the graphics rendering chip "Lushan," along with the "Kua'e" (KUAE) 10,000-card computing cluster [1][7]
- The Kua'e computing cluster offers a floating-point computing capability of 10 ExaFLOPS, with training utilization of 60% for dense models and 40% for MoE models, and a linear scaling efficiency of 95% [9]

Group 2: Industry Context and Challenges
- The development of "sovereign AI" is emphasized as crucial to national competitiveness, focusing on autonomy in computing power, algorithms, and the ecosystem [2]
- The performance gap between domestic graphics cards and leading international products is narrowing, although building large-scale intelligent computing systems remains a significant challenge [2]
- Competition among GPU companies is intense: NVIDIA and Huawei together hold 94.4% of the intelligent computing chip market, while the remaining share is fragmented across more than 15 other participants [20]

Group 3: Financial Performance and Market Outlook
- Moore Threads reported revenue of 785 million yuan and a net loss of 724 million yuan for the first three quarters of the year, with projections indicating a continued net loss in 2025 [17]
- The company's market capitalization initially exceeded 400 billion yuan and currently stands at around 310 billion yuan [17]
- Many GPU startups are running significant losses, with competitors such as Muxi and Biren Technology also facing financial pressure [19]

Group 4: Ecosystem Development
- Moore Threads' CEO stressed the importance of building a user-friendly development environment to foster a robust ecosystem, seen as a critical competitive advantage in the GPU industry [23]
- The company aims to step up research and development to overcome core technological challenges and to deepen collaboration with ecosystem partners [23]
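As a back-of-the-envelope reading of the cluster figures above, here is a minimal Python sketch. The 10 ExaFLOPS peak, the 60%/40% utilization values, and the 10,000-card count come from the summary; the even per-card split is illustrative arithmetic, and the article does not state which numeric precision the peak figure refers to.

```python
# Back-of-the-envelope reading of the Kua'e cluster figures quoted above.
# Peak (10 ExaFLOPS), utilization (60% dense / 40% MoE) and card count come
# from the summary; the per-card split is illustrative arithmetic only.

PEAK_EFLOPS = 10.0      # reported cluster-wide floating-point capability
CARDS = 10_000          # "10,000-card" cluster

per_card_pflops = PEAK_EFLOPS * 1000 / CARDS   # 1 ExaFLOPS = 1,000 PFLOPS
sustained_dense_eflops = PEAK_EFLOPS * 0.60    # utilization for dense models
sustained_moe_eflops = PEAK_EFLOPS * 0.40      # utilization for MoE models

print(f"Implied peak per card:  {per_card_pflops:.1f} PFLOPS")        # 1.0
print(f"Sustained, dense model: {sustained_dense_eflops:.1f} EFLOPS")  # 6.0
print(f"Sustained, MoE model:   {sustained_moe_eflops:.1f} EFLOPS")    # 4.0
```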
Moore Threads shows its full-stack technology hand: the new "Huagang" architecture and 10,000-card cluster take aim at the high-end GPU market
Huan Qiu Wang· 2025-12-20 07:00
Core Insights
- The article highlights the significant advances made by Moore Threads in the GPU sector, particularly the new "Huagang" architecture and the "Kua'e" ten-thousand-card intelligent computing cluster, which supports trillion-parameter model training [2][3]

Architecture Innovations
- The "Huagang" architecture delivers a 50% increase in computing density and up to 10 times better efficiency, fully supporting precision from FP4 to FP64; it integrates the self-developed MTLink high-speed interconnect technology, enabling cluster expansion beyond 100,000 cards [3][5]
- Two chips are planned on the "Huagang" architecture: "Huashan" for integrated AI training and inference, and "Lushan" aimed at high-performance graphics rendering, with performance improvements of 64 times for AI computation, 16 times for geometry processing, and 50 times for ray tracing [5]

Cluster Capabilities
- The "Kua'e" ten-thousand-card intelligent computing cluster has publicly disclosed key engineering-efficiency metrics: a model compute utilization (MFU) of 60% for dense models and 40% for mixture-of-experts (MoE) models, a linear scaling efficiency of 95%, and effective training time exceeding 90% [6]

Ecosystem Development
- Moore Threads announced the upgrade of its unified software architecture MUSA to version 5.0, with plans to gradually open-source core components, including compute acceleration libraries and system management frameworks [8]
- The "Moore Academy" platform has attracted nearly 200,000 learners and collaborates with over 200 universities nationwide, reflecting a comprehensive approach to ecosystem building through technology open-sourcing, developer tooling, and early talent cultivation [9]

Technological Integration and Exploration
- The release points to deep integration of graphics, AI, and high-performance computing, with hardware-level ray-tracing acceleration and the introduction of the AI generative rendering technology MTAGR 1.0 [10]
- The company is also exploring frontier fields such as embodied intelligence and AI for science, underscoring its ambition to redefine the GPU as a general computing platform [10]

Industry Context
- The comprehensive technology showcase reflects the current stage of domestic high-end computing development, moving from single-chip innovation to large-scale system engineering and application-ecosystem building [11]
- Disclosing the efficiency of the ten-thousand-card cluster signals that domestic computing infrastructure is beginning to face rigorous testing in large-scale, high-load scenarios, while the architecture iteration and graphics-AI integration show the company's intent to help define the next generation of computing architectures [11]
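For reference, the two efficiency metrics cited above (MFU and linear scaling efficiency) are conventionally defined as follows; this is the standard formulation, not a formula disclosed by Moore Threads, and the symbols are illustrative.

$$
\mathrm{MFU} = \frac{\text{model FLOPs executed per second}}{\text{aggregate peak FLOP/s of the cluster}},
\qquad
\eta_{\text{linear}}(N) = \frac{\text{Throughput}(N)}{N \cdot \text{Throughput}(1)}
$$

Under these definitions, a 60% MFU means a dense-model training run sustains about 60% of the cluster's peak arithmetic rate, and 95% linear scaling means an N-unit run delivers roughly 95% of N times the throughput of a single unit, for whatever baseline unit the measurement used.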
Pre-sale price of 9,999 yuan: Moore Threads launches an AI-compute notebook
Core Insights
- The article covers the launch of the MTT AIBOOK notebook by Moore Threads, now available for pre-sale on JD.com at 9,999 yuan for the 32 GB, 1 TB version [2]

Product Details
- The MTT AIBOOK is powered by Moore Threads' newly developed proprietary SoC "Changjiang" (Yangtze), which integrates a high-performance full-core CPU and a fully functional GPU [2]
- The notebook supports the MUSA unified architecture and delivers 50 TOPS of heterogeneous AI computing power [2]
- It is positioned for development, office work, and entertainment, supporting Windows virtual machines, Linux, Android containers, and all domestic operating systems [2]
Domestic computing power enters the "10,000-card" era: Moore Threads unveils a new-generation GPU architecture, Sugon unveils a 10,000-card supercluster
Jing Ji Guan Cha Wang· 2025-12-20 06:47
Core Insights
- The article discusses advances in the domestic GPU industry, highlighting the launch of the "Huagang" architecture by Moore Threads and the "scaleX" supercluster system by Sugon, signaling a shift in focus from individual GPU performance to building scalable systems for massive computational workloads [2][6]

Group 1: Moore Threads Developments
- Moore Threads unveiled its latest "Huagang" architecture, which offers a 50% increase in computing density and a 10-fold improvement in efficiency over the previous generation [3]
- The "Huagang" architecture supports full-precision computation from FP4 to FP64 and adds support for MTFP6, MTFP4, and mixed low precision [3]
- Future chip plans include "Huashan," aimed at AI training and inference, and "Lushan," focused on high-performance graphics rendering, with "Lushan" showing a 64-fold increase in AI computing performance and a 50% improvement in ray-tracing performance [4]

Group 2: Sugon Developments
- Sugon's "scaleX" supercluster system, making its public debut, consists of 16 scaleX640 supernodes interconnected via the scaleFabric high-speed network and can deploy 10,240 AI accelerator cards [10]
- The scaleX system uses immersion phase-change liquid cooling to address heat dissipation, achieving a 20-fold increase in per-rack computing density and a PUE (power usage effectiveness) of 1.04 [11][12]
- The system supports accelerator cards from multiple brands and has been optimized for compatibility with over 400 mainstream large models, reflecting a strategy of providing a versatile platform for diverse domestic computing resources [14]

Group 3: Industry Challenges and Solutions
- The industry faces challenges in scaling up computational power, particularly in managing heat, power supply, and physical space when deploying thousands of high-power chips in data centers [8][9]
- Both companies are addressing communication latency in distributed computing: Moore Threads integrates a new asynchronous programming model and its self-developed MTLink technology to support clusters exceeding 100,000 cards, while Sugon's scaleFabric network achieves 400 Gb/s bandwidth and sub-microsecond communication latency [12][13]

Group 4: Software Ecosystem and Compatibility
- As hardware specifications approach international standards, the focus is shifting to optimizing the software stack: Moore Threads announced an upgrade to its MUSA unified architecture and reports over 98% efficiency in its core computing libraries [13]
- Sugon emphasizes compatibility with accelerator cards from various brands, promoting an open-architecture strategy that allows multiple chips to coexist [14]
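A quick sanity check of the scaleX figures quoted above, as a minimal Python sketch: the node count and the 10,240-card total come from the summary, the cards-per-node value is inferred from the scaleX640 product name, and the PUE breakdown uses the standard definition (total facility power divided by IT power) rather than any vendor-specific methodology.

```python
# Sanity check of the Sugon scaleX figures quoted above. Node count and the
# 10,240-card total come from the summary; cards-per-node is inferred from the
# "scaleX640" name; the PUE interpretation is the standard definition.

SUPERNODES = 16
CARDS_PER_NODE = 640
total_cards = SUPERNODES * CARDS_PER_NODE
assert total_cards == 10_240  # matches the deployed-card figure in the article

PUE = 1.04
non_it_share = (PUE - 1) / PUE  # fraction of facility power spent outside IT gear

print(f"Total accelerator cards: {total_cards}")
print(f"Cooling/distribution share of facility power: {non_it_share:.1%}")  # ~3.8%
```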
Moore Threads releases a new-generation GPU architecture, building the MUSA ecosystem to rival Nvidia's CUDA
Xin Lang Cai Jing· 2025-12-20 06:42
Source: TMTPost (钛媒体)

After its listing on the A-share STAR Market set off a rally in domestic chip stocks, the market has paid increasing attention to the company's subsequent R&D, products, and operations.

Competition in the GPU industry is, in essence, also competition among developer ecosystems. To that end, Moore Threads is holding its first MUSA Developer Conference on December 20-21.

At this morning's (December 20) launch event, Moore Threads founder, chairman, and CEO Zhang Jianzhong unveiled the new-generation GPU architecture "Huagang", the new AI training-and-inference GPU "Huashan", the professional graphics GPU for gaming "Lushan", the intelligent SoC "Changjiang", and the KUAE 10,000-card intelligent computing cluster.

According to the on-site presentation, the related products, slated for mass production in 2026, deliver substantial performance gains over the previous generation. Behind this, continuing to benchmark against, catch up with, and even challenge the internationally leading chip products, architectures, and ecosystems represented by Nvidia became the implicit theme of the event.

Moore Threads has also consistently benchmarked itself against Nvidia in its business model, product portfolio, and development direction, especially in ecosystem and computing-infrastructure building, its layout in physical AI, and high gross margins, compared with peers among the "four little dragons of domestic GPUs" such as the newly listed Muxi and Biren Technology, which has announced a Hong Kong IPO.

However, Moore Threads is also attempting to go beyond Nvidia. Its loudly touted "full-featured GPU" is an attempt to integrate, on a single GPU chip, support for AI computation, graphics rendering ...
Moore Threads launches the new-generation "Huagang" architecture and chip roadmap
On December 20, at the first MUSA Developer Conference, Moore Threads (688795.SH) founder, chairman, and CEO Zhang Jianzhong released the new MUSA architecture "Huagang" and the chip roadmap, including the new generation of high-performance chips based on the "Huagang" architecture: Huashan and Lushan.

Zhang Jianzhong said the architecture offers the following capabilities and features: support for a new-generation instruction set; a 50% increase in compute density and a 10-fold improvement in energy efficiency; full-precision end-to-end acceleration technology; a new-generation asynchronous programming model; support for intelligent computing clusters of more than 100,000 cards; and a first-generation AI generative rendering architecture (AGR), among others.

According to Moore Threads' data, Huashan surpasses a certain international vendor's previous-generation shipping product in floating-point compute, memory bandwidth, and memory capacity; compared with the previous-generation S80 graphics card, Lushan delivers a 15-fold improvement in AAA game performance.

(Source: China Business Journal)