MTT AIBOOK
Education Industry Weekly: AI Education, Vocational Education Upgrades, and Interstellar-Frontier Talent Cultivation Enter a New Phase - 20260201
Guolian Minsheng Securities· 2026-02-01 03:08
Education Industry Weekly | 2026-02-01 | Rating: Recommended (maintained)
Analysts: Su Duoyong (practising certificate S0590525110058; suduoyong@glms.com.cn); Zhang Jin (practising certificate S0590525120009; zhangjin_yj@glms.com.cn)
[Relative performance chart: consumer services sector vs. CSI 300, Feb 2025 - Jan 2026]
Contents: 1 Education industry policy developments | 2 Education stock news | 3 Education index and stock performance (3.1 Weekly market performance of the education sector; 3.2 Weekly stock performance) | 4 Investment recommendations (4.1 Sector investment recommendations) | 5 Risk warnings | List of figures | List of tables
Industry developments ...
The "AI Education Training Base" jointly established by Moore Threads and Beijing National Day School officially opens
Bei Ke Cai Jing· 2026-01-27 09:45
Beijing News Shell Finance (reporter Yan Xia) - On January 26, the reporter learned from Moore Threads that the "AI Education Training Base" created under the strategic partnership between Moore Threads and Beijing National Day School has officially opened. As the first AI training demonstration project to land in Beijing, the base is equipped with Moore Threads MTT AIBOOK devices and cloud computing power, bringing domestic compute support to the school's diverse artificial intelligence curriculum and marking the formal launch of teaching practice built on domestic full-featured GPUs.

The newly opened base will serve as a testing ground for the two parties to explore "compute-driven education". The school and Moore Threads will work to integrate a domestic device-cloud platform deeply into the secondary-school AI curriculum and to build a complete AI teaching solution on domestic compute, spanning infrastructure, course content, and hands-on platforms. Relying on the MTT AIBOOK compute notebook and cloud computing services, the base will establish AI-related courses such as Python programming, machine learning, and AI4S on domestic compute, and will organize students to carry out AI projects in areas such as computer vision, natural language processing, and speech processing, completing ...

Going forward, the two sides will continue to hold diverse teaching demonstrations and innovation activities at the base, aiming to build it into a nationally leading AI education platform. By deepening school-enterprise cooperation, they hope to bring more secondary schools into industry-education integration around the domestic computing ecosystem, jointly construct a forward-looking, professional talent-cultivation system, and provide sustainable support for nurturing future-oriented sci-tech innovation talent.

Editor: Chen Li; Proofreader: Jia Ning
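The curriculum described above (Python programming, machine learning, and project work in computer vision, NLP, and speech on domestic compute) maps onto a fairly standard PyTorch workflow. The sketch below is a minimal, hypothetical classroom-style exercise rather than material from the base's actual courseware; the torch_musa import and the "musa" device string are assumptions about how Moore Threads hardware might be selected, and the script falls back to CUDA or CPU so it runs anywhere.

```python
# Minimal image-classification exercise of the kind a secondary-school AI course
# might run locally. The "musa" device (via torch_musa) is an assumption; the
# script falls back to CUDA or CPU so it still runs on any machine.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pick_device() -> torch.device:
    try:
        import torch_musa  # assumed Moore Threads PyTorch extension  # noqa: F401
        return torch.device("musa")
    except ImportError:
        return torch.device("cuda" if torch.cuda.is_available() else "cpu")

class TinyCNN(nn.Module):
    """A small CNN for 28x28 grayscale images (MNIST-style classroom data)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))

if __name__ == "__main__":
    device = pick_device()
    model = TinyCNN().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # A synthetic batch stands in for a real dataset so the sketch is self-contained.
    images = torch.randn(64, 1, 28, 28, device=device)
    labels = torch.randint(0, 10, (64,), device=device)

    for step in range(5):   # a real lesson would iterate over a DataLoader
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss = {loss.item():.4f}")
```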
Domestic computing power empowers innovative talent cultivation: Beijing National Day School and Moore Threads jointly build an AI Education Training Base
Xin Lang Cai Jing· 2026-01-27 08:31
Source: Huanqiu.com

On January 23, 2026, Beijing National Day School and Moore Threads jointly announced in Beijing that the "AI Education Training Base", the result of their strategic partnership, has officially opened. As an AI training demonstration project landed in Beijing, the base is equipped with Moore Threads MTT AIBOOK devices and cloud computing power, providing solid domestic compute support for the school's diverse artificial intelligence curriculum and marking the formal arrival, at a leading secondary school, of teaching practice built on domestic full-featured GPUs.

Du Rongzhen, Secretary of the Education Working Committee and Director of the Education Commission of Haidian District, Beijing; Meng Qingwen, Director of the Fifth Industry Promotion Division of the Zhongguancun Science City Administrative Committee; Yuan Ye, Deputy Director of the Resource Construction and Service Department of the Beijing Digital Education Center; Tian Jun, Principal of Beijing National Day School; and Zhou Yuan, co-founder and Chief Operating Officer of Moore Threads, attended the launch ceremony. Technology teachers from more than 20 secondary schools, including Beijing Shiyi Experimental Middle School, Beijing No. 8 High School, and Keystone Academy, gathered on site to observe the ceremony and hold in-depth exchanges on teaching models, jointly witnessing an important moment in the deep integration of a domestic integrated software-hardware platform with foundational AI education.

As a benchmark school for AI education innovation in Beijing, Beijing National Day School has long been at the national forefront of cultivating AI and sci-tech innovation talent. It has built sustained teaching practice in core AI areas such as machine learning, deep learning, and embodied intelligence, designed and implemented a "data-driven" AI4S course ecosystem for the secondary-school setting, and explored representative cases of AI-empowered teaching across subjects school-wide. The newly completed ...
Targeting NVIDIA: the domestic computing power industry moves toward a "closed loop"
36Kr· 2026-01-09 12:39
Core Insights
- The Chinese computing power industry is seeing rapid growth in capital operations, highlighted by significant IPOs and market enthusiasm for domestic semiconductor companies [1][2]
- The focus of competition in the domestic computing power sector is shifting from hardware specifications to system stability, software ecosystem usability, and cost-effectiveness [3][4]

Capital Market Activity
- Tianshu Zhixin Semiconductor Co., Ltd. went public on January 8, 2026, with its offering oversubscribed more than 400 times, indicating strong market interest [1]
- Other domestic GPU companies, such as Moore Threads and MuXi, saw their stock prices surge on debut, with Moore Threads' market capitalization exceeding 305.5 billion yuan and MuXi's reaching 330 billion yuan [1]
- ChangXin Technology submitted its IPO application on December 30, 2025, reporting revenue of 32.084 billion yuan for the first three quarters of 2025, showcasing the scale of domestic DRAM production [1]

Technological Developments
- The "ten-thousand-card cluster" concept is becoming a benchmark for evaluating domestic computing power, but it also raises reliability challenges as system scale increases (a generic illustration of one mitigation follows this entry) [3][4]
- The scaleX ten-thousand-card super cluster introduced by Zhongke Shuguang, featuring 10,240 AI accelerator cards, represents a significant advance in system architecture [3][4]
- High-quality, low-latency data transmission networks are critical for supercomputing, with domestic products now matching international standards [5][6]

Storage Solutions
- ChangXin Technology and Changjiang Storage are positioned in the core areas of DRAM and NAND Flash, respectively, with ChangXin reporting compound annual revenue growth of over 70% from 2022 to 2024 [6][7]
- Changjiang Storage's adoption of advanced technologies such as Xtacking in NAND Flash production marks a significant technological breakthrough [7]

Software Ecosystem
- The transition to a robust software ecosystem is complex, with developers facing high switching costs when moving off established platforms such as NVIDIA's CUDA [10][11]
- Moore Threads is addressing this by launching the MTT AIBOOK, which ships with development tools intended to ease adoption of its platform [10]
- Cloud service providers play a crucial role in integrating hardware from multiple vendors into a unified software environment, addressing compatibility issues [11][12]

Market Dynamics
- The industry is moving toward collaborative ecosystems, with companies recognizing the need for specialization rather than attempting to cover the entire supply chain independently [9][12]
- Customized products from companies like Haiguang aim to meet the specific needs of large enterprises, reflecting a trend toward more open architectures [15]

Future Outlook
- The domestic computing power industry is expected to face challenges from global supply chain fluctuations, particularly in DRAM and NAND supply [13]
- The successful deployment of domestic computing solutions in high-stakes environments, such as the National High Energy Physics Data Center, indicates growing confidence in local technologies [14]
- A potential easing of export restrictions on NVIDIA's H200 chip could affect the domestic ecosystem, but the established supply chain and customers' preference for supply security are likely to mitigate the risk [17]
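One concrete reason reliability dominates at ten-thousand-card scale is that the expected time between component failures shrinks as the card count grows, so long training jobs must assume interruption. The sketch below is a generic periodic-checkpoint pattern in PyTorch, offered only as an illustration of that mitigation under stated assumptions (checkpoint path, interval, single process); it is not the scaleX cluster's or any vendor's actual fault-tolerance mechanism.

```python
# Generic periodic checkpointing so a long training job can resume after a node
# failure. Illustrative pattern only; the path and interval are made-up values.
import os
import torch
import torch.nn as nn

CKPT_PATH = "checkpoint.pt"   # hypothetical location; clusters would use shared storage
CKPT_EVERY = 100              # steps between checkpoints

def save_checkpoint(step, model, optimizer):
    tmp = CKPT_PATH + ".tmp"
    torch.save({"step": step,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict()}, tmp)
    os.replace(tmp, CKPT_PATH)   # atomic rename: a crash never leaves a torn file

def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT_PATH):
        return 0
    state = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"] + 1

if __name__ == "__main__":
    model = nn.Linear(1024, 1024)              # stand-in for a real model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    start = load_checkpoint(model, optimizer)  # resume if a previous run was interrupted

    for step in range(start, 1000):
        x = torch.randn(32, 1024)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % CKPT_EVERY == 0:
            save_checkpoint(step, model, optimizer)
```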
Targeting NVIDIA! The domestic computing power industry moves toward a "closed loop"
Jing Ji Guan Cha Bao· 2026-01-09 10:28
Core Viewpoint
- The rapid advance of China's computing power industry is highlighted by significant capital market activity, including successful IPOs of domestic semiconductor companies, while challenges remain in practical applications and system integration [2][4]

Group 1: Capital Market Activities
- Shanghai Tianshu Zhixin Semiconductor Co., Ltd. went public on January 8, 2026, with its offering oversubscribed more than 400 times, indicating strong market enthusiasm [2]
- Other domestic GPU companies, such as Moore Threads and MuXi, saw their stock prices surge by 468.78% and 692.95% respectively on debut, with market capitalizations exceeding 305.5 billion and 330 billion yuan [2]
- ChangXin Technology submitted its IPO application on December 30, 2025, reporting revenue of 32.084 billion yuan for the first three quarters of 2025, showcasing the scale of domestic DRAM production [2][10]

Group 2: Technological Advancements
- The focus of competition in the computing power sector is shifting from hardware specifications to system stability, software ecosystem usability, and cost-effectiveness in commercial applications [6]
- The scaleX super cluster introduced by Zhongke Shuguang, featuring 10,240 AI accelerator cards, underscores the need for high reliability in large-scale systems [6][7]
- Zhongke Shuguang's development of a native 400G RDMA network aims to improve data transmission quality and reduce latency, both crucial for supercomputing applications [7][8]

Group 3: Software Ecosystem Development
- Moore Threads is addressing the challenge of moving developers onto domestic computing platforms by launching the MTT AIBOOK, which includes essential development tools [13]
- The company also introduced a code migration model, MUSACode, to ease the transition from CUDA to its own platform, targeting a 93% compilation success rate (a toy illustration of the underlying task follows this entry) [13]
- Cloud service providers are playing a critical role in integrating hardware from different vendors, thereby mitigating compatibility issues and improving resource management [15][16]

Group 4: Supply Chain and Market Dynamics
- The supply chain for DRAM and NAND flash is under pressure, prompting cloud vendors to adjust procurement strategies to secure resources [17]
- The adoption of domestic computing facilities by institutions such as the Chinese Academy of Sciences indicates growing confidence in local technology, despite some performance gaps relative to international counterparts [19]
- The emergence of customized products from companies like Haiguang reflects a shift toward meeting specific client needs and enhancing market competitiveness [20]

Group 5: Industry Ecosystem and Future Outlook
- The domestic computing power industry is forming a closed-loop ecosystem, integrating components from the storage, computing, and application layers [21][22]
- The rise of domestic large models, such as DeepSeek, is redefining hardware competition standards and requires support for mixed-precision computing [21]
- Concerns about potential disruption from international competitors, such as NVIDIA's H200 chip, are countered by the established supply chain and ecosystem resilience of domestic firms [21][22]
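To make the CUDA-migration problem concrete: much host-side CUDA code can in principle be ported by systematic renaming, while libraries, inline assembly, and performance tuning are the hard residue that model-based tools such as the MUSACode translator described above are meant to handle. The toy script below performs only the trivial renaming step, as an illustration of the task's shape; the musa* identifiers are assumptions about naming symmetry, not a documented API list.

```python
# Toy illustration of the mechanical part of CUDA -> MUSA source migration:
# renaming API prefixes. Real migration tooling must also handle libraries,
# inline assembly and tuning; the musa* names here are assumptions.
import re

RENAMES = {
    r"\bcudaMalloc\b":              "musaMalloc",
    r"\bcudaMemcpy\b":              "musaMemcpy",
    r"\bcudaMemcpyHostToDevice\b":  "musaMemcpyHostToDevice",
    r"\bcudaFree\b":                "musaFree",
    r"\bcudaDeviceSynchronize\b":   "musaDeviceSynchronize",
    r"\bcuda_runtime\.h\b":         "musa_runtime.h",
}

def migrate(source: str) -> str:
    """Apply the naive prefix renames to a CUDA C++ source string."""
    for pattern, replacement in RENAMES.items():
        source = re.sub(pattern, replacement, source)
    return source

if __name__ == "__main__":
    cuda_snippet = """
    #include <cuda_runtime.h>
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaDeviceSynchronize();
    cudaFree(d_x);
    """
    print(migrate(cuda_snippet))
```

Running it prints the same snippet with the renamed calls; anything the table does not cover is left untouched, which is exactly the part where a model-based tool would have to earn its keep.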
The domestic computing power industry moves toward a "closed loop"
Jing Ji Guan Cha Wang· 2026-01-09 08:41
Core Insights
- The Chinese computing power industry is seeing rapid acceleration in capital operations, highlighted by significant IPOs and market enthusiasm for domestic semiconductor companies [1][2]
- The focus of competition in the domestic computing power sector is shifting from hardware specifications to system stability, software ecosystem usability, and cost-effectiveness in commercial applications [3][4]

Capital Market Activity
- Tianshu Zhixin Semiconductor Co., Ltd. went public on January 8, 2026, with its public offering oversubscribed more than 400 times, indicating strong market interest [1]
- Other domestic GPU companies, such as Moore Threads and MuXi, saw substantial stock price gains upon listing, with Moore Threads' stock rising by 468.78% on its debut [1]
- ChangXin Technology submitted its IPO application on December 30, 2025, reporting revenue of 32.084 billion yuan for the first three quarters of 2025, showcasing the scale of domestic DRAM production [1]

Technological Developments
- The "ten-thousand-card cluster" concept is becoming a benchmark for evaluating computing power systems, with reliability and fault tolerance becoming harder as system scale increases [3][4]
- The scaleX ten-thousand-card super cluster introduced by Zhongke Shuguang, featuring 10,240 AI acceleration cards, represents a significant technological advance [3][5]
- High-quality, low-latency data transmission networks are critical for supercomputing, with domestic products now matching international standards [6][7]

Storage Solutions
- ChangXin Technology and Changjiang Storage are positioned in key storage sectors, with ChangXin reporting compound annual revenue growth of over 70% from 2022 to 2024 [7][8]
- The introduction of DDR5 memory and advances in NAND Flash technology are crucial for supporting AI computing needs [8]

Software Ecosystem Challenges
- Moving developers from NVIDIA's CUDA to domestic platforms presents significant challenges due to high code-restructuring costs [11]
- Moore Threads is addressing this by launching tools that ease migration to its ecosystem, aiming to cultivate a developer base [11][12]

Cloud Services and Integration
- Cloud service providers such as UCloud play a vital role in integrating domestic chips from multiple vendors, addressing compatibility issues and improving resource management (a minimal sketch of such an abstraction layer follows this entry) [12][13]
- The need for localized solutions is growing, as latency and privacy concerns with cloud-based AI drive demand for on-premises systems [14]

Market Dynamics and Future Outlook
- The domestic computing power industry is forming a closed-loop ecosystem, with companies collaborating across the supply chain to enhance competitiveness [17]
- A potential easing of export restrictions on NVIDIA's H200 chip raises concerns about its impact on the nascent domestic ecosystem, but domestic clients prioritize supply chain security [17]
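The integration role attributed to cloud providers such as UCloud above is essentially an abstraction problem: presenting one scheduling and programming surface across chips from several vendors. The sketch below is a minimal, hypothetical backend registry of the kind such a layer might use internally; the vendor entries, capability fields, and placement rule are illustrative inventions, not any provider's real API.

```python
# Minimal sketch of a backend registry a cloud layer might use to present one
# scheduling surface over heterogeneous accelerators. All entries and fields
# are illustrative placeholders, not a real provider's API.
from dataclasses import dataclass, field

@dataclass
class Backend:
    name: str
    memory_gb: int
    supports_bf16: bool
    runtime: str   # e.g. "cuda", "musa", "rocm" -- illustrative labels only

@dataclass
class Scheduler:
    backends: list = field(default_factory=list)

    def register(self, backend: Backend) -> None:
        self.backends.append(backend)

    def place(self, needed_memory_gb: int, need_bf16: bool) -> Backend:
        """Pick the smallest backend that satisfies the request (best-fit)."""
        candidates = [b for b in self.backends
                      if b.memory_gb >= needed_memory_gb
                      and (b.supports_bf16 or not need_bf16)]
        if not candidates:
            raise RuntimeError("no backend satisfies the request")
        return min(candidates, key=lambda b: b.memory_gb)

if __name__ == "__main__":
    sched = Scheduler()
    sched.register(Backend("vendor-a-card", memory_gb=48, supports_bf16=True, runtime="musa"))
    sched.register(Backend("vendor-b-card", memory_gb=32, supports_bf16=True, runtime="cuda"))
    sched.register(Backend("vendor-c-card", memory_gb=64, supports_bf16=False, runtime="rocm"))

    choice = sched.place(needed_memory_gb=40, need_bf16=True)
    print(f"placed job on {choice.name} ({choice.runtime}, {choice.memory_gb} GB)")
```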
Moore Threads' Zhou Yuan: Expanding the advantages of Beijing's new-generation information technology industry cluster
Xin Jing Bao· 2025-12-29 08:52
Core Viewpoint
- The article discusses the upcoming five-year plan for China's economy, emphasizing the importance of innovation and the development of new growth drivers, particularly in the context of the domestic GPU industry and its role in the broader economic landscape [1][3][7]

Group 1: Economic Context and Policy Direction
- The Central Economic Work Conference highlighted the need for steady progress and quality improvement in economic work for 2026, focusing on stabilizing employment, enterprises, markets, and expectations [1]
- The conference outlined eight key tasks for 2026, including accelerating innovation-driven development and cultivating new economic drivers [3][7]

Group 2: GPU Industry and Company Strategy
- The domestic GPU industry is undergoing a transformation, with companies like Moore Threads making significant strides, as evidenced by its recent listing on the STAR Market [3]
- Moore Threads aims to establish a comprehensive computing power platform through its MUSA architecture, focusing on full-stack development and continuous innovation [5][6]
- The company emphasizes the importance of building a developer ecosystem, which is crucial in the competitive landscape of the GPU industry [4][6]

Group 3: Technological and Ecological Development
- Moore Threads' strategy includes a commitment to open-source innovation and empowering developers, seen as essential for creating a robust AI ecosystem [6][8]
- The company is expanding its product offerings from data centers to edge and endpoint solutions, broadening its market reach and validating its technology in real-world applications [6][9]

Group 4: Regional Development and Industry Positioning
- Moore Threads positions itself as a key player in Beijing's integrated circuit industry, benefiting from the city's rich resources and supportive policies [9][10]
- The company advocates enhanced collaboration across the entire supply chain and the establishment of a resilient integrated circuit industry in Beijing [11]
Looking Ahead to 2026 | Moore Threads' Zhou Yuan: Expanding the advantages of Beijing's new-generation information technology industry cluster
Bei Ke Cai Jing· 2025-12-29 08:40
Group 1
- The core message emphasizes the importance of innovation and the development of new driving forces in China's economy as it transitions into the "15th Five-Year Plan" period, with a focus on enhancing quality and efficiency [3][5][19]
- The Central Economic Work Conference highlighted the need to stabilize employment, businesses, and market expectations while promoting effective qualitative improvement and reasonable quantitative growth [3][5]
- The upcoming five-year journey starting in 2026 is seen as a critical period for economic development, with a call for collaboration among regulatory bodies, scholars, and leading entrepreneurs to interpret policy trends and changes [4][5]

Group 2
- The domestic GPU industry is experiencing a surge, with companies like Moore Threads successfully entering capital markets, indicating a significant shift toward self-reliant computing capabilities [5][16]
- Moore Threads aims to establish a comprehensive computing power platform through its MUSA architecture, focusing on continuous innovation and development across various applications, including AI and scientific computing [16][18]
- The company is committed to fostering a developer ecosystem by hosting events and creating platforms to lower barriers for developers, which is crucial for building a robust AI ecosystem [17][24]

Group 3
- The strategic focus of Moore Threads includes enhancing its core competitiveness through a full-stack GPU approach, emphasizing independent architecture and continuous R&D investment [16][20]
- The company recognizes the significance of regional integration in fostering innovation, particularly in areas like Beijing, which has a complete integrated circuit industry chain [21][22]
- Moore Threads plans to leverage its position in Beijing to contribute to the development of new-generation information technology and the AI ecosystem, aligning with national policies [23][26]

Group 4
- Recommendations for optimizing Beijing's integrated circuit industry ecosystem include strengthening the entire supply chain, enhancing application-driven technology iteration, and fostering an open innovation environment for talent development [28][29][30]
- The company believes that collaborative efforts across design, manufacturing, and application sectors will enhance the resilience and self-sufficiency of the integrated circuit industry [28]
- By promoting real-world applications and feedback, Moore Threads aims to accelerate the maturation of domestic chips and AI models, driving innovation and industry growth [29]
Targeting the top-tier AI and graphics battlegrounds: Moore Threads stages a showcase of hardcore domestic GPU strength
Ji Qi Zhi Xin· 2025-12-22 04:23
Core Viewpoint
- The article highlights the unveiling of Moore Threads' latest AI computing card, the S5000, showcasing significant advances in AI computing capability, and the introduction of the MUSA architecture, which aims to support a wide range of AI and graphics computing needs [1][3][5]

Group 1: MUSA Architecture Overview
- MUSA (Meta-computing Unified System Architecture) is a comprehensive technology stack developed by Moore Threads, covering chip architecture, instruction sets, programming models, and software frameworks, and serving as the foundation for all of its products [7]
- The architecture features the new "Huagang" design, which improves computing density by 50% and energy efficiency by 10 times compared with the previous generation [9]
- MUSA supports mainstream GPU ecosystems and various CPU systems, and ensures security through a hardware-based protection mechanism [9][10]

Group 2: New Chip Developments
- The upcoming "Huashan" and "Lushan" chips are designed for AI computing and professional graphics rendering, respectively, with "Huashan" positioned to compete with top international AI chips [18][21]
- "Huashan" features a dedicated large-language-model acceleration engine and supports high-speed interconnects for large-scale clusters, reaching a performance level comparable to leading global products [22][23]
- "Lushan" aims to address performance bottlenecks in gaming and professional design, with a claimed 15-fold increase in AAA game performance over the previous generation [25]

Group 3: High-Performance Computing Infrastructure
- Moore Threads introduced the KUAE 2.0 super AI infrastructure, capable of 10 exaFLOPS and supporting trillion-parameter model training at over 60% utilization efficiency (a back-of-the-envelope check of these figures follows this entry) [31]
- The company plans to launch the MTT C256 super-node product, increasing GPU deployment density and reducing bandwidth loss [31][33]

Group 4: Future Directions and Ecosystem Development
- The company is expanding its focus beyond large models to embodied intelligence, AI for Science, quantum computing, and AI for 6G, indicating a broad vision for future computing applications [35][36]
- Moore Threads has launched the "Moore Academy" to train GPU developers and researchers, engaging over 100,000 students across more than 200 universities [40]
- The MTT AIBOOK, an AI compute notebook, is designed to lower the development barrier for AI applications, integrating multiple processing units and supporting several operating systems [42][44]
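The headline figures above (10 EFLOPS of cluster compute, trillion-parameter training, over 60% utilization) can be sanity-checked with the common ≈6·N·D estimate of training FLOPs for dense transformers. The token count in the sketch below is an assumed illustrative value, not a published benchmark, so the result is only an order-of-magnitude check.

```python
# Back-of-the-envelope training-time estimate using the common ~6*N*D FLOPs
# approximation for dense transformer training. Everything besides the cited
# 10 EFLOPS peak and ">60% utilization" is an assumption for illustration.
N_PARAMS    = 1.0e12     # one trillion parameters (cited scale)
N_TOKENS    = 10.0e12    # assumed training tokens -- not from the article
PEAK_FLOPS  = 10.0e18    # 10 EFLOPS cluster peak (cited)
UTILIZATION = 0.60       # ">60% utilization" (cited lower bound)

total_flops     = 6 * N_PARAMS * N_TOKENS      # ~6*N*D rule of thumb
effective_flops = PEAK_FLOPS * UTILIZATION
seconds         = total_flops / effective_flops

print(f"total training compute : {total_flops:.2e} FLOPs")
print(f"effective throughput   : {effective_flops:.2e} FLOP/s")
print(f"estimated wall-clock   : {seconds / 86400:.1f} days")
```

Under these assumptions the run lands on the order of a few months of wall-clock time, which is the right ballpark for a frontier-scale training job and shows why utilization, not just peak FLOPS, is the number that matters.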
Tencent Research Institute AI Express 20251222
Tencent Research Institute· 2025-12-21 16:01
Group 1: Moore Threads Technology Roadmap
- Moore Threads has unveiled its new-generation full-featured GPU architecture "Huagang", which delivers a 50% increase in computing density and a 10-fold improvement in energy efficiency, supports full-precision computation from FP4 to FP64, and can scale to intelligent computing clusters of over 100,000 cards [1]
- The company is set to release the "Huashan" AI training-and-inference chip and the "Lushan" high-performance graphics rendering GPU; its ten-thousand-card intelligent computing cluster reaches 10 EFLOPS, and single-card inference on the S5000 sets a new record for domestic GPU performance [1]
- The MTT AIBOOK AI compute notebook, equipped with the "Yangtze River" SoC, offers 50 TOPS of heterogeneous AI computing power and can locally run large models with up to 30 billion parameters (a rough memory estimate for this claim follows this digest); it is now available for pre-sale on JD.com [1]

Group 2: OpenAI's GPT-5.2-Codex Launch
- OpenAI has launched GPT-5.2-Codex, described as its most advanced coding model to date, achieving state-of-the-art performance on the SWE-Bench Pro and Terminal-Bench 2.0 benchmarks [2]
- Compared with GPT-5.2, it improves instruction following, long-context understanding, and network security capabilities, performs better in Windows environments, and is significantly more token-efficient at mid-to-high reasoning levels [2]
- The model is now available to paid ChatGPT users across all Codex platforms, with plans to open access to API users in the coming weeks and to provide more lenient access for defensive cybersecurity professionals [2]

Group 3: Google's Gemma Models
- Google has open-sourced two models in the Gemma 3 family, T5Gemma 2 and FunctionGemma; T5Gemma 2 is the first multimodal long-context encoder-decoder model in the line, available in 270M-270M, 1B-1B, and 4B-4B sizes [3]
- FunctionGemma is optimized for function calling, runs on just 270 million parameters, suits mobile and browser devices, and supports precise structured output for external API calls, making it well suited to edge AI agent applications [3]
- T5Gemma 2 returns to the classic encoder-decoder architecture, surpassing similarly sized Gemma 3 models in multimodal performance, code reasoning, and long-context capability, while FunctionGemma can be quantized down to 135 MB for deployment [3]

Group 4: NVIDIA's NitroGen Model
- NVIDIA has open-sourced the NitroGen foundation model, designed to play over 1,000 games; it takes game video frames as input, outputs real controller signals, and supports rapid adaptation to new games through post-training [4]
- The model is based on the GR00T N1.5 architecture with 500 million parameters, trained on action labels automatically extracted from 40,000 hours of publicly available game video covering genres including RPGs, platformers, and racing [4]
- It can complete non-trivial tasks without fine-tuning, improving task success rates by up to 52% over models trained from scratch; the dataset, evaluation suite, and model weights have been open-sourced [4]

Group 5: OpenAI's Codex Agent Skills Support
- OpenAI has announced that Codex now fully supports Agent Skills, aligning with the industry specification led by Anthropic, which combines markdown commands with optional script resources [5]
- Skills can be invoked explicitly (via the /skills command or $selection) or implicitly (matched automatically to task descriptions), with skill storage resolved from the current working directory before the user's personal directory [5]
- Built-in tools such as $skill-creator and $skill-installer can automatically generate skill scaffolding or install skills from third-party repositories like GitHub, and OpenAI has released an official skill library [5]

Group 6: Luma AI's Ray3 Modify
- Luma AI has launched the Ray3 Modify feature, emphasizing a "real person first, AI follows" approach to video production, in which actor performances and camera movements serve as the foundational input for AI processing [6]
- It supports keyframe control (start and end frames) and character references, preserves the integrity of performances, and lets the same performance be placed in different scenes to create multiple content versions without reshooting [6]
- Integrated into the Dream Machine platform, it targets film production, advertising creative work, and post-production, giving creators control without repeated filming [6]

Group 7: METR Report on Claude Opus 4.5
- The METR report indicates that Claude Opus 4.5 can sustain coding for approximately 4 hours and 49 minutes, the longest span reported to date, surpassing GPT-5.1-Codex-Max's 2 hours and 53 minutes [9]
- The task horizon of AI coding agents is growing exponentially, doubling every 7 months from 2019 to 2024 and expected to double every 4 months from 2024 to 2025, with predictions that AI will complete a full workday's tasks by April 2026 [9]
- The industry views long-term memory as the final challenge on the path to AGI, as current models rely on retrieval tools and context compression and lack true self-learning and persistent memory [9]

Group 8: Google AI's Success Story
- Josh Woodward, head of Google AI products, has driven the Gemini application's monthly active users from 350 million in March to 650 million in October, surpassing ChatGPT to top the App Store rankings [10]
- Aged 42 and from Oklahoma, he joined Google through an internship in 2009, contributed to Chromebook development, founded the NBU initiative, and led the expansion of Google Pay before taking over the Gemini application in April 2025 [10]
- He pushed the NotebookLM project to break with Google's traditional practices by engaging the community on Discord, establishing a "Block" ticketing system to remove bureaucratic obstacles, and launching the "Papercuts" plan to address minor issues, while emphasizing the balance between AI innovation and social responsibility [10]
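Whether a 30-billion-parameter model fits on a notebook-class device comes down largely to weight precision, which is why the FP4-to-FP64 range mentioned above matters. The arithmetic below is a rough, assumption-laden estimate covering weights only (KV cache, activations, and runtime overhead are ignored); it is not a specification of the MTT AIBOOK.

```python
# Rough weight-memory estimate for running a 30B-parameter model locally at
# different precisions. Weights only -- KV cache, activations and runtime
# overhead are ignored, so real requirements are higher. Not a device spec.
PARAMS = 30e9

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "INT8":      1.0,
    "FP4/INT4":  0.5,
}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{precision:>10}: ~{gib:.1f} GiB of weights")
```

At 4-bit precision the weights alone come to roughly 14 GiB, the kind of budget a notebook-class device could plausibly hold, whereas FP16 needs about four times as much; that gap is what makes aggressive quantization central to on-device claims like the one above.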