Sovereign AI
Moore Threads Shows Off Its Latest Achievements
Cai Lian She· 2025-12-20 11:18
Core Viewpoint
- The article highlights the rapid expansion of the domestic GPU leader, Moore Threads, and its significant advancements in GPU architecture and ecosystem development, particularly with the launch of the "Huagang" architecture and its associated products [1][2][17].

Group 1: Technological Advancements
- Moore Threads introduced the "Huagang" architecture, which boasts a 50% increase in computing density and a 10-fold improvement in energy efficiency over the previous generation, and is slated for mass production in the coming year [2].
- The "Huagang" architecture supports full precision from FP4 to FP64 and integrates an AI generative rendering architecture and hardware acceleration for ray tracing [2].
- Two core chips were announced based on the "Huagang" architecture: "Huashan," designed for AI training and inference, and "Lushan," focused on high-performance graphics rendering [3][4].

Group 2: Performance Metrics
- The "Huashan" chip features a new asynchronous programming model and achieves a 64-fold increase in AI computing performance and a 16-fold increase in geometric processing performance [4].
- The "Kua'e" supercomputing cluster was unveiled, achieving a floating-point computing capability of 10 ExaFLOPS, with a training efficiency of 60% for dense models and 40% for mixture-of-experts models [6].
- The MTT S5000 single card achieved a prefill throughput of over 4,000 tokens/s and a decode throughput of over 1,000 tokens/s on the DeepSeek R1 671B model, indicating substantial progress in serving models with very large parameter counts (see the back-of-envelope sketch after this summary) [7].

Group 3: Software Ecosystem
- The company announced a full-stack software upgrade for its self-developed MUSA architecture, with the core compute library muDNN achieving over 98% efficiency in GEMM/FlashAttention and 97% efficiency in communication [9].
- An open-source plan was introduced, aiming to gradually release core components of the compute acceleration library, communication library, and system management framework to the developer community [10].
- The company plans to launch an intermediate language, MTX, compatible with GPU instruction set architectures across generations, and a programming language, muLang, to ease developer adaptation [11].

Group 4: Market Position and Strategy
- Moore Threads is entering the personal intelligent-computing terminal hardware market with the launch of the MTT AIBOOK, priced at 9,999 yuan and featuring the self-developed SoC chip "Changjiang" [12][13].
- The MTT AIBOOK is designed as a ready-to-use tool for developers, integrating AI capabilities and supporting multiple operating systems to strengthen the MUSA ecosystem [14].
- The company aims to transition from a single-product hardware supplier to a platform-level computing infrastructure provider, reflecting a strategic shift in the evolving global computing market [17].

Group 5: Financial Performance
- The stock price of Moore Threads has shown significant volatility, closing at 664.10 yuan per share on December 19, a cumulative decline of 29.4% from its peak on December 11, although it remains up over 481% from its issue price [16].
- The company's market capitalization remains at a high level of 312.146 billion yuan [16].
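The prefill and decode throughput figures above translate directly into per-request latency. Below is a minimal back-of-envelope sketch, assuming the reported single-card numbers (4,000 tokens/s prefill, 1,000 tokens/s decode) and a simple additive prefill-plus-decode latency model; the request sizes and the model itself are illustrative assumptions, not figures published by the vendor.

```python
# Back-of-envelope latency estimate from reported single-card throughput.
# The throughput numbers come from the article; the request sizes and the
# additive prefill + decode model are illustrative assumptions.

PREFILL_TOKENS_PER_S = 4000.0  # reported prefill throughput (tokens/s)
DECODE_TOKENS_PER_S = 1000.0   # reported decode throughput (tokens/s)

def estimate_latency_s(prompt_tokens: int, output_tokens: int) -> float:
    """Rough end-to-end latency: time to ingest the prompt plus time to
    generate the output, ignoring batching, scheduling and network overhead."""
    prefill_time = prompt_tokens / PREFILL_TOKENS_PER_S
    decode_time = output_tokens / DECODE_TOKENS_PER_S
    return prefill_time + decode_time

if __name__ == "__main__":
    # Hypothetical request: 2,000-token prompt, 500-token answer.
    latency = estimate_latency_s(prompt_tokens=2000, output_tokens=500)
    print(f"Estimated latency: {latency:.2f} s")  # ~1.00 s under these assumptions
```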
Moore Threads Announces New GPU Architecture and Ten-Thousand-Card Cluster
Guan Cha Zhe Wang· 2025-12-20 07:27
On the morning of December 20, Moore Threads, the GPU newcomer that had only just listed on the STAR Market, held its first MUSA Developer Conference (MDC 2025). At the event, the company unveiled its next-generation GPU architecture "Huagang" (花港), the AI training-and-inference chip "Huashan" (华山), the "Lushan" (庐山) chip for gaming, graphics rendering, and similar workloads, and the "Kua'e" (夸娥) ten-thousand-card training cluster, among other products.

On site, Zheng Weimin, academician of the Chinese Academy of Engineering and professor in the Department of Computer Science at Tsinghua University, delivered a keynote speech. He argued that developing "sovereign AI" is key to raising future national competitiveness, and that its core lies in building a complete system of "self-reliant compute, self-strengthened algorithms, and a self-sustaining ecosystem."

[Keynote slide, "Why ten-thousand-card and even hundred-thousand-card systems are a must": in the era of large models the basic unit is total cluster compute, not single-card performance; pre-training ultra-large models and serving nation-scale inference demand continuously available ten-thousand-card training clusters, plus inference clusters and compute networks distributed across the country; from a sovereign-AI perspective, domestic 10,000- and 100,000-card systems are the "mother machine" and foundation for home-grown large models and industry models.]

Zheng Weimin believes the performance gap between domestic compute GPUs and mainstream foreign products is steadily narrowing. Although building domestic ultra-large intelligent-computing systems at the ten-thousand-card or even hundred-thousand-card scale is difficult, it is an industrial-infrastructure task that must be completed. He stressed in particular that developers are the key to ecosystem building, and that domestic chip platforms must offer a friendly, easy-to-use development environment to effectively serve the developer community ...
Moore Threads Plays Its Full-Stack Hand: New "Huagang" Architecture and Ten-Thousand-Card Cluster Take On the High-End GPU Market
Huan Qiu Wang· 2025-12-20 07:00
Core Insights
- The article highlights the significant advancements made by Moore Threads in the GPU sector, particularly through the introduction of the new "Huagang" architecture and the "Kua'e" ten-thousand-card intelligent computing cluster, which supports trillion-parameter model training [2][3].

Architecture Innovations
- The "Huagang" architecture delivers a 50% increase in computing density and up to a 10-fold improvement in energy efficiency, fully supporting precision calculations from FP4 to FP64. It integrates the self-developed MTLink high-speed interconnect technology, enabling cluster expansion beyond 100,000 cards [3][5].
- Two chips have been planned on the "Huagang" architecture: "Huashan" for integrated AI training and inference, and "Lushan" for high-performance graphics rendering, with performance improvements of 64x for AI computation, 16x for geometric processing, and 50x for ray tracing [5].

Cluster Capabilities
- The "Kua'e" ten-thousand-card intelligent computing cluster has publicly disclosed key engineering efficiency metrics, achieving a model FLOPs utilization (MFU) of 60% for dense models and 40% for mixture-of-experts (MoE) models, with a linear scaling efficiency of 95% and effective training time exceeding 90% (the definitions behind these metrics are sketched after this summary) [6].

Ecosystem Development
- Moore Threads announced the iteration of its unified software architecture MUSA to version 5.0, with plans to gradually open-source core components, including compute acceleration libraries and system management frameworks [8].
- The "Moore Academy" platform has attracted nearly 200,000 learners and collaborates with over 200 universities nationwide, reflecting a comprehensive approach to ecosystem building through open-sourcing technology, providing developer tools, and cultivating talent early [9].

Technological Integration and Exploration
- The release points to the deep integration of graphics, AI, and high-performance computing, with hardware-level ray tracing acceleration and the introduction of the AI generative rendering technology MTAGR 1.0 [10].
- The company is also exploring frontier fields such as embodied intelligence and AI for science, showcasing its ambition to redefine the value of the GPU as a general computing platform [10].

Industry Context
- The comprehensive technology showcase reflects the current stage of domestic high-end computing power development, transitioning from single-chip innovation to large-scale system engineering and the building of a thriving application ecosystem [11].
- The disclosure of ten-thousand-card cluster efficiency signals that domestic computing infrastructure is beginning to face rigorous testing in large-scale, high-load scenarios, while the architecture iteration and the integration of graphics and AI demonstrate the company's intent to define the next generation of computing architecture [11].
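The MFU and linear-scaling figures quoted above follow standard definitions that are worth making explicit: MFU is the ratio of useful model FLOPs actually executed to the cluster's theoretical peak over the same wall-clock window, and linear scaling efficiency compares measured N-card throughput to N times the single-card rate. The sketch below uses those standard definitions together with the common estimate of roughly 6 FLOPs per parameter per training token; all numeric inputs are hypothetical placeholders chosen to illustrate the arithmetic, not measurements disclosed by Moore Threads, and the company's own accounting may differ.

```python
# Minimal sketch of model FLOPs utilization (MFU) and linear scaling
# efficiency, using the standard definitions. All numeric inputs below
# are hypothetical placeholders, not figures disclosed by Moore Threads.

def mfu(model_flops_per_token: float, tokens_processed: float,
        wall_clock_s: float, peak_flops_cluster: float) -> float:
    """MFU = useful model FLOPs actually executed / theoretical peak FLOPs
    available over the same wall-clock window."""
    achieved_flops_per_s = model_flops_per_token * tokens_processed / wall_clock_s
    return achieved_flops_per_s / peak_flops_cluster

def linear_scaling_efficiency(throughput_n_cards: float,
                              throughput_1_card: float,
                              n_cards: int) -> float:
    """Ratio of measured N-card throughput to N times single-card throughput."""
    return throughput_n_cards / (n_cards * throughput_1_card)

if __name__ == "__main__":
    # Hypothetical example: a model costing ~6e12 training FLOPs per token
    # (roughly a 1-trillion-parameter dense model under the 6*N rule),
    # processing 2.88e9 tokens in one hour on a cluster with 8 EFLOPS peak.
    print(f"MFU ≈ {mfu(6e12, 2.88e9, 3600.0, 8e18):.2%}")
    # Hypothetical example: 10,000 cards deliver 9,500x the single-card rate.
    print(f"Scaling efficiency ≈ {linear_scaling_efficiency(9500.0, 1.0, 10000):.2%}")
```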
Moore Threads Releases New-Generation GPU Architecture, Building the MUSA Ecosystem to Rival NVIDIA's CUDA
Xin Lang Cai Jing· 2025-12-20 06:42
Source: TMTPost (钛媒体)

After its A-share STAR Market debut set off a frenzy in domestic chip stocks, the market is paying ever closer attention to the company's subsequent R&D, products, and operations.

Competition in the GPU industry is, at its core, competition between developer ecosystems. To that end, Moore Threads is holding its first MUSA Developer Conference on December 20-21. At this morning's (December 20) launch event, Moore Threads founder, chairman, and CEO Zhang Jianzhong announced the new-generation GPU architecture "Huagang", the new AI training-and-inference GPU "Huashan", the professional gaming graphics GPU "Lushan", the smart SoC chip "Changjiang", and the KUAE ten-thousand-card intelligent computing cluster.

According to the on-site introductions, the products slated for mass production in 2026 deliver large performance gains over the previous generation. Behind this, the implicit theme of the launch was continuing to benchmark against, catch up with, and even challenge the internationally leading chip products, architectures, and ecosystems represented by NVIDIA.

In its business model, product portfolio, and development direction, Moore Threads has consistently benchmarked itself against NVIDIA, especially in ecosystem and compute-infrastructure building, its positioning in physical AI, and high gross margins; this sets it apart from peers among the "four little dragons of domestic GPUs" such as the newly listed Muxi (沐曦股份) and Biren Technology (壁仞科技), which has announced plans for a Hong Kong IPO.

Moore Threads is also trying to go beyond NVIDIA. Its loudly promoted "full-function GPU" is an attempt to integrate, on a single GPU chip, support for AI computing, graphics rendering ...
Moore Threads Unveils Multiple Key Technical Achievements; Chairman Zhang Jianzhong: The Ecosystem Is the GPU Industry's Core Moat
Sou Hu Cai Jing· 2025-12-20 06:05
CNR News, Beijing, December 20 (reporter Qi Zhiying): At its first MUSA Developer Conference (MDC 2025), Moore Threads released a full-stack set of technical achievements centered on its self-developed MUSA unified architecture, giving a comprehensive view of the company's key breakthroughs and forward-looking layout in high-end full-function GPUs.

Zhang Ge, secretary of the Haidian District Committee and secretary of the Zhongguancun Science City Working Committee, said in the opening address: "As the core area of Beijing's international science and technology innovation center, Haidian District takes 'what the nation needs' as its orientation, persists in following the path of scientific and technological innovation in the new era, and has always placed the cultivation of hard-tech enterprises in an important position. Focusing on the goal of 'building a source of independent innovation and a cluster of emerging industries,' we will work hand in hand with Moore Threads and all developers to build the best GPU ecosystem in the country."

In his keynote, Zheng Weimin, academician of the Chinese Academy of Engineering and professor in the Department of Computer Science at Tsinghua University, noted that high-end AI chips have moved from the era of globalized division of labor into the era of "sovereign AI"; developing sovereign AI is key to raising future national competitiveness, and its core lies in achieving a complete system of "self-reliant compute, self-strengthened algorithms, and a self-sustaining ecosystem."

At the conference, Moore Threads released a series of technology and product updates, including the full-function GPU architecture "Huagang" and the "Kua'e" ten-thousand-card intelligent computing cluster. According to the company, "Huagang" supports full-precision computation from FP4 to FP64, with compute density up 50% and energy efficiency up 10x. Moore Threads said it will build on this architecture to launch the high-performance AI training-and-inference "Huashan" chip and the "Lushan" chip dedicated to high-performance graphics rendering. ...
Energy Efficiency Up 10x! Moore Threads' Latest Release
Xin Lang Cai Jing· 2025-12-20 05:43
On December 20, Moore Threads' first MUSA Developer Conference (MDC 2025) opened at the Zhongguancun International Innovation Center in Beijing, the first domestic developer event focused on full-function GPUs. Moore Threads said that in 2026 it will release the new-generation GPU architecture "Huagang," featuring a new-generation instruction set, with compute density up 50% and energy efficiency up 10x.

The conference venue includes a technology carnival exhibition area of nearly 1,000 square meters. The reporter saw crowds of visitors; the exhibits mainly showed the latest applications of domestic GPUs in large AI models, AI agents, scientific computing, spatial intelligence, digital twins, multimedia, 6G test networks, and other fields.

Moore Threads founder, chairman, and CEO Zhang Jianzhong delivered a keynote speech. He said that whether for the emerging industries or the future industries named in the proposals for the 15th Five-Year Plan, these new growth drivers and new productive forces all require AI infrastructure to empower researchers and developers in every industry so they can do their work better. AI infrastructure cannot exist without compute, and Moore Threads hopes to build such infrastructure with full-function GPUs.

He explained that Moore Threads' full-function GPU comprises four main function engines: an AI compute acceleration engine, a graphics rendering engine, a physics simulation and scientific ...
Wang Jiangping: Use the Eastern Wisdom of "Shangshan AI" to Balance the Radicalism and Anxiety of Technological Development
Nan Fang Du Shi Bao· 2025-12-20 05:26
Is there an AI bubble? Should regulation of AI technology be tight or loose? How can we ensure this technology always serves human well-being? These core debates around AI may find a direction for answers in the idea of "Shangshan AI" (AI of the highest good).

On the afternoon of December 18, Southern Metropolis Daily and the Nandu Digital Economy Governance Research Center held the 9th Woodpecker Data Governance Forum in Beijing, themed "AI safety boundaries: technology, trust, and a new order of governance." Wang Jiangping, member of the 14th National Committee of the CPPCC and former vice minister of the Ministry of Industry and Information Technology, delivered a keynote titled "Shangshan AI: Seeking Good Governance Through Alignment." Citing Laozi's idea that "the highest good is like water," he tried to sketch a vision for AI governance in China.

[Photo: On December 18, Nandu held the 9th Woodpecker Data Governance Forum in Beijing.]

Wang Jiangping argued that international views on AI governance currently diverge sharply and are visibly splitting into camps, so an ancient Eastern wisdom is urgently needed to unify the governance vision. Domestic AI regulation likewise needs a scientific, agile framework to balance the radicalism and anxiety that accompany AI's development.

AI systems are turning into "intelligent entities," but governance progress remains limited

As AI technology evolves and is applied in industry, governance problems are becoming ever more visible. From AI-generated "homeless man breaks into a home" hoaxes to AI impersonations of celebrities selling goods, from face-swap and voice-cloning scams to minors becoming excessively addicted to chatbots, these cases show that as AI capabilities keep improving, the safety boundary is blurring.

Wang Jiangping noted in his speech that as large models accelerate into deployment, content risks triggered by model hallucinations are increasingly becoming a hot ...
Microsoft Pledges Over $30 Billion in Canada and India to Build "Sovereign AI"
36Ke· 2025-12-10 04:43
To secure its hold on the global data ecosystem, Microsoft pledged on December 10 to invest more than $30 billion in Canada and India, launching large-scale infrastructure build-outs on two continents at once.

In Ottawa, Microsoft announced it will spend 19 billion Canadian dollars (about $13.4 billion) by 2027 to expand its local cloud in Canada. A key commitment: if a foreign judicial authority attempts to obtain data stored in Canada, Microsoft will fight back through legal action and other means.

At the same time, Microsoft sharply upgraded its India strategy, raising its pledged investment to $17.5 billion, nearly six times the target it set in January 2025. Under the plan, Microsoft's Azure AI services will be deeply integrated into the Indian government's welfare portal, covering roughly 310 million workers.

In June 2025, Microsoft published details of its European cloud sovereignty plan, confirming that customer data will stay within Europe and comply with European regulations. The new program, called "Microsoft Sovereign Cloud," offers enterprises across Europe three options: a sovereign public cloud, a sovereign private cloud, and national partner clouds.

Strategically, these moves mean Microsoft's center of gravity is shifting from US-concentrated "hyperscale factories" toward distributed, country-level infrastructure. This "sovereign AI" model directly answers governments' increasingly insistent demands that data must be stored in-country and that core AI systems must be under local control.

By localizing computing power, Microsoft is essentially building, in response to increasingly strict data localization laws, ...
Financial Watch: Data Center Construction Bottlenecks Constrain Japan's AI Plans
Huan Qiu Shi Bao· 2025-12-09 22:43
[Global Times special correspondent in Japan Shao Nan; Global Times special correspondent Chen Xin] Editor's note: The Japanese government is rolling out an ambitious artificial intelligence (AI) plan. According to a Jiji Press report on the 7th, the government plans to raise the public AI usage rate to 80%, having earlier positioned AI as the core engine of national economic growth. However, Nikkei Asia reported that "a data center construction bottleneck threatens Japan's AI ambitions." Reports from multiple Japanese media outlets and research institutions show that Japan's data center build-out faces rising construction costs, a lagging power grid layout, outdated construction methods, and other pressures.

Jiji Press reported that the Japanese government has drafted a basic plan for AI development and use, aiming to raise the public AI usage rate first to 50% and ultimately to 80%. The draft stresses the need to broaden AI adoption in order to drive the development of Japan's home-grown AI technology. It also sets a policy target of attracting about 1 trillion yen (100 yen is roughly 4.5 yuan) in private investment to strengthen R&D. In addition, the draft calls for all ministries of the central government to take the lead in using AI, eventually extending it to all government employees.

The Japan Times noted that the 1 trillion yen private-sector investment target covers projects to secure human resources, projects supporting the AI industry's expansion into emerging markets, and projects to improve R&D infrastructure. In her October policy speech, Prime Minister Sanae Takaichi declared that she wants to make Japan "the easiest country in the world to develop and use AI ...
AMD (AMD.US) Joins Forces with HPE and Broadcom! Three Powerhouses Build a Rack-Scale AI Compute Platform to Take On NVIDIA's Blackwell Line
Zhi Tong Cai Jing· 2025-12-03 07:12
Core Insights
- AMD is expanding its collaboration with HPE to focus on AI infrastructure and hybrid cloud platforms, aiming to build an open, rack-scale AI computing infrastructure for high-performance computing clusters and large AI data centers [1][2].
- The partnership with HPE and Broadcom is designed to create a competitive alternative to NVIDIA's integrated solutions, targeting cloud giants like Microsoft, Google, and Amazon, which are investing heavily in AI computing infrastructure [2][3].

Group 1: Collaboration Details
- The collaboration centers on AMD's Helios rack-scale AI computing architecture, with HPE becoming one of the first system vendors to adopt the technology [1][4].
- The integrated solution will include AMD's Instinct MI455X GPUs, EPYC "Venice" CPUs, and Pensando Vulcano NICs, providing a comprehensive AI computing platform [4][7].
- The partnership signals a shift from traditional GPU stacking to a more integrated rack-level product approach, improving efficiency and cost-effectiveness [3][6].

Group 2: Market Impact
- The collaboration is expected to significantly enhance AMD's share of the AI data center segment, moving the company from selling chips to offering complete rack solutions [3][6].
- AMD's stock has surged over 80% this year, driven by major contracts and optimistic market forecasts, including a significant deal with Saudi Arabia for a 1 GW AI chip computing cluster [6][8].
- Analysts are bullish on AMD's outlook, with target prices implying upside of at least 32% over the next 12 months and expectations of substantial revenue growth in the AI chip market [8].