Core Viewpoint
- Moore Threads is launching its first generation of GPU clusters in 2024, aiming to reach 10,000 cards this year, with plans for future expansion to 100,000 cards [1]

Group 1: Product Development
- Moore Threads held its first MUSA Developer Conference on December 20, announcing a new GPU architecture and three new chips based on it [1]
- The new architecture, named Huagang, improves compute density by 50% over the previous generation and supports full-precision computation from FP4 to FP64 [1]
- The three new chips are Huashan (an AI training and inference chip), Lushan (a graphics rendering chip), and Changjiang (a system-on-chip) [1]

Group 2: Performance Metrics
- The previous-generation S4000 card delivers 25 TFLOPS (FP32), 49 TFLOPS (TF32), 98 TFLOPS (FP16), and 196 TOPS (INT8) at a maximum power consumption of 450 W [2]
- By comparison, NVIDIA's A100 delivers 19.5 TFLOPS (FP32), 156 TFLOPS (TF32), 312 TFLOPS (FP16), and 624 TOPS (INT8) at a maximum power consumption of 300 W [2]
- The new S5000 card is reported to deliver roughly 2.5x and 1.3x the performance of comparable chips on specific distributed-inference tasks [3]

Group 3: Market Position and Financials
- Moore Threads' stock debuted on the Sci-Tech Innovation Board at 114.28 CNY per share and, after significant fluctuation, closed at 664.1 CNY on December 19 [5]
- The company is not yet profitable, with cumulative losses of 1.6 billion CNY as of June this year, but it anticipates profitability by 2027 [5]
Source headline: Moore Threads' Zhang Jianzhong: AI computing clusters will scale to 500,000 and 1,000,000 cards
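The raw throughput figures above are hard to compare directly because the two cards have different power envelopes. A minimal sketch of a derived perf-per-watt comparison, using only the numbers quoted in the article (TFLOPS/W itself is an illustrative metric the article does not report):

```python
# Throughput and power figures as quoted in the article [2].
chips = {
    "S4000": {"fp32": 25.0, "tf32": 49.0, "fp16": 98.0, "int8_tops": 196.0, "watts": 450.0},
    "A100":  {"fp32": 19.5, "tf32": 156.0, "fp16": 312.0, "int8_tops": 624.0, "watts": 300.0},
}

for name, c in chips.items():
    # FP16 TFLOPS per watt: peak FP16 throughput divided by max power draw.
    eff = c["fp16"] / c["watts"]
    print(f"{name}: {eff:.2f} FP16 TFLOPS/W")
# S4000: 0.22 FP16 TFLOPS/W
# A100: 1.04 FP16 TFLOPS/W
```

On these quoted figures, the A100 delivers roughly 4-5x the FP16 throughput per watt of the S4000, which puts the raw TFLOPS gap in context.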