Moore Threads releases its first earnings forecast since its IPO

Core Viewpoint
- Moore Threads (688795) has released its first performance forecast since its IPO, projecting a sharp revenue increase for 2025 while still expecting a net loss, albeit a narrower one than the year before [2][4].

Financial Performance
- The company expects 2025 revenue of 1.45 billion to 1.52 billion yuan, a year-on-year increase of 230.70% to 246.67% [2].
- The net loss attributable to shareholders is expected to be 950 million to 1.06 billion yuan, narrowing the loss by 34.50% to 41.30% from the previous year [2]. (An illustrative back-calculation of the implied 2024 baselines appears after this summary.)

Product Development and Market Position
- Moore Threads has focused its R&D on full-featured GPUs and has launched its flagship MTT S5000 GPU, which it describes as delivering market-leading performance and which has entered mass production [4].
- The company has built large-scale clusters on this product that efficiently support the training of trillion-parameter large models, with performance matching that of comparable international GPU clusters [4].
- Despite these advances, the company acknowledges that it still trails international giants in R&D strength, accumulated core technology, and product ecosystem [4].

Market Trends and Future Outlook
- Growth in the artificial intelligence industry and strong demand for high-performance GPUs have strengthened the company's competitive position, bringing greater market attention and revenue growth [4].
- The company continues to invest heavily in R&D, remains in a phase of sustained investment without profitability, and carries accumulated uncovered losses [4].

Recent Developments
- On January 8, the company released version 1.1 of SimuMax, its open-source distributed training simulation tool, with significant upgrades for large-model training [6].
- The new version adds a more user-friendly interface, intelligent parallel-strategy search, and more accurate modeling of the complex communication behavior in mixed-parallel training [6]. (A generic sketch of what a parallel-strategy search involves follows at the end of this section.)
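As a rough check on the forecast figures above, the implied 2024 baselines can be back-calculated from the 2025 ranges and the stated year-on-year changes. The short Python sketch below performs only that arithmetic on the numbers quoted in the summary; the results are approximations derived here, not company-reported 2024 figures.

```python
# Illustrative back-calculation of the implied 2024 baselines from the
# 2025 forecast ranges and the stated year-on-year changes.
# All figures are in billions of yuan; results are rough approximations,
# not company-reported numbers.

# 2025 revenue forecast and stated year-on-year growth
rev_2025 = (1.45, 1.52)          # billion yuan
rev_growth = (2.3070, 2.4667)    # 230.70% .. 246.67%

# 2025 net-loss forecast and stated narrowing of the loss
loss_2025 = (0.95, 1.06)         # billion yuan
loss_cut = (0.4130, 0.3450)      # 41.30% .. 34.50%, paired low loss / high cut

# Implied 2024 revenue: 2025 revenue / (1 + growth)
implied_rev_2024 = [r / (1 + g) for r, g in zip(rev_2025, rev_growth)]

# Implied 2024 net loss: 2025 loss / (1 - reduction)
implied_loss_2024 = [l / (1 - c) for l, c in zip(loss_2025, loss_cut)]

print(f"implied 2024 revenue  ~ {implied_rev_2024[0]:.2f}-{implied_rev_2024[1]:.2f} bn yuan")
print(f"implied 2024 net loss ~ {implied_loss_2024[0]:.2f}-{implied_loss_2024[1]:.2f} bn yuan")
# Both ends of each range back-solve to roughly the same baseline
# (about 0.44 bn yuan of revenue and about 1.62 bn yuan of net loss),
# which serves as a consistency check on the forecast percentages.
```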
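"Parallel strategy search" in a training simulator generally means enumerating candidate data/tensor/pipeline parallel layouts and scoring each with a cost model instead of running real training. The sketch below is a generic, toy illustration of that idea; it is not SimuMax's API, and the function names and cost formula are hypothetical assumptions made purely for illustration.

```python
from itertools import product

def candidate_layouts(world_size: int):
    """Yield (dp, tp, pp) degrees whose product equals the GPU count."""
    for dp, tp, pp in product(range(1, world_size + 1), repeat=3):
        if dp * tp * pp == world_size:
            yield dp, tp, pp

def toy_cost(dp: int, tp: int, pp: int, layers: int = 80) -> float:
    """Hypothetical per-step cost proxy: tensor-parallel all-reduces,
    pipeline bubbles that grow with pp, and a data-parallel gradient-sync
    term. The weights are illustrative only, not a calibrated model."""
    tp_comm = 2.0 * (tp - 1) / tp                  # intra-layer all-reduce traffic
    pp_bubble = (pp - 1) / (layers / max(pp, 1))   # pipeline bubble fraction
    dp_sync = 0.5 * (dp - 1) / dp                  # gradient all-reduce traffic
    return tp_comm + pp_bubble + dp_sync

def search(world_size: int):
    """Brute-force 'strategy search': pick the layout with the lowest toy cost."""
    return min(candidate_layouts(world_size), key=lambda c: toy_cost(*c))

if __name__ == "__main__":
    print("lowest-cost layout (dp, tp, pp):", search(64))
```

A real simulator would replace the toy cost with detailed models of compute, memory, and interconnect behavior, which is the kind of communication modeling the SimuMax 1.1 release notes describe.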