A complete exit after the failed acquisition: Nvidia (NVDA.US) liquidates its Arm stake, shifting its investment map toward the AI computing ecosystem
Zhitong Finance · 2026-02-18 08:20
Group 1
- Nvidia has sold all of its shares in Arm, totaling 1.1 million shares valued at approximately $140 million based on Arm's closing price [1]
- Nvidia's attempt to acquire Arm for $40 billion in 2020 was thwarted by regulatory and customer opposition, leading to the termination of the agreement in February 2022 [1]
- Arm, which is crucial for advanced semiconductor technology, is now pursuing its IPO plans under the ownership of SoftBank [1]

Group 2
- Nvidia has made strategic investments in Intel, Synopsys, and Nokia, enhancing its presence in key areas such as chip manufacturing, EDA tools, and communication infrastructure [2]
- Nvidia's investment in Intel amounts to approximately $5 billion, representing a 4.91% stake, aimed at strengthening AI computing capabilities [2]
- The investment in Synopsys is around $2 billion, focusing on securing the chip design tool ecosystem [2]
- Nvidia holds about 2.9% of Nokia, targeting the integration of AI with next-generation 5G/6G networks [2]
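The stake figures above imply some back-of-the-envelope valuations. A minimal sketch in Python, using only the article's rounded numbers, so the outputs are approximations rather than market data:

```python
# Back-of-the-envelope figures implied by the article's rounded numbers.
# All inputs are approximate values quoted above, not live market data.

arm_shares = 1.1e6          # shares of Arm that Nvidia sold
arm_proceeds = 140e6        # ~$140M total, at Arm's closing price
implied_arm_price = arm_proceeds / arm_shares
print(f"Implied Arm share price: ~${implied_arm_price:.0f}")

intel_stake_value = 5e9     # ~$5B investment in Intel
intel_stake_pct = 0.0491    # 4.91% stake
implied_intel_cap = intel_stake_value / intel_stake_pct
print(f"Implied Intel valuation: ~${implied_intel_cap / 1e9:.0f}B")
```

Dividing the ~$140M proceeds by 1.1 million shares gives an implied Arm price around $127 per share, and grossing up the $5 billion Intel stake from 4.91% implies a valuation around $102 billion, both consistent with the article's rounded figures.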
Moore Threads unveils its new "Huagang" GPU architecture, with 10,000-card AI training and inference capability, taking aim at Nvidia
Feng Huang Wang · 2025-12-21 06:18
Core Insights
- The company unveiled its next-generation GPU architecture "Huagang" at the first MUSA Developer Conference (MDC2025) in Beijing, showcasing advancements in AI training clusters and various technologies [1][2]
- The new architecture supports full-precision computing from FP4 to FP64, with a 50% increase in computing density and a 10x improvement in energy efficiency [1]
- The company plans to launch the "Huashan" chip focused on AI training and inference, and the "Lushan" chip aimed at graphics rendering [1]

Architecture and Performance
- The "Huagang" architecture enhances training cluster capabilities with the "Kua'e" 10,000-card intelligent computing cluster, achieving 60% training utilization on dense models and 40% on mixture-of-experts models, with a linear scaling efficiency of 95% [1]
- In inference, the company collaborated with SiliconFlow to achieve single-card Prefill throughput exceeding 4,000 tokens/s and Decode throughput over 1,000 tokens/s on the DeepSeek R1 671B model [1]

Software Ecosystem
- The MUSA 5.0 release optimizes programming models, computing libraries, and compilers, with the core computing library muDNN's GEMM and FlashAttention efficiency exceeding 98% and communication efficiency reaching 97% [1]
- The company plans to gradually open-source some core components, including computing acceleration libraries and system management frameworks [1]

Graphics and AI Integration
- The new architecture integrates hardware ray-tracing acceleration engines and supports self-developed AI generative rendering technology [2]
- The company introduced the MTLambda simulation training platform and the MTT AIBOOK based on the "Yangtze" SoC, focusing on cutting-edge fields such as embodied intelligence and AI for Science [2]

Future Infrastructure
- The company announced the MTTC256 super-node architecture design for next-generation large-scale intelligent computing centers, emphasizing high-density hardware and energy-efficiency optimization [2]
- The comprehensive technology layout, from chip architecture to cluster infrastructure and edge devices, aims to support the development of the domestic AI computing ecosystem [2]
- Industry experts believe the company is positioning itself to compete directly with Nvidia by releasing its architecture early to build confidence in its software ecosystem [2]
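The headline cluster metrics combine in a simple way. A hedged Python sketch of how such metrics are conventionally defined; only the 10,000-card scale, 95% linear scaling, 60% dense-model utilization, and 1,000 tokens/s single-card decode figures come from the article, while the per-card peak FLOPS is a made-up placeholder:

```python
# Illustrative cluster-throughput arithmetic using the article's headline
# figures. The per-card peak FLOPS below is a placeholder, NOT a published
# spec, and applying scaling efficiency to inference is an assumption.

num_cards = 10_000
scaling_efficiency = 0.95        # linear scaling efficiency (article)
train_utilization = 0.60         # dense-model training utilization (article)
decode_tps_per_card = 1_000      # single-card Decode throughput (article)

# Hypothetical aggregate decode throughput, if the 95% scaling efficiency
# carried over to inference (the article quotes only single-card numbers):
cluster_decode_tps = num_cards * decode_tps_per_card * scaling_efficiency
print(f"Hypothetical cluster decode: {cluster_decode_tps:.2e} tokens/s")

# Training utilization is typically achieved FLOPS / peak FLOPS (MFU-style):
peak_flops_per_card = 1e15       # placeholder peak per card
achieved_flops = num_cards * peak_flops_per_card * train_utilization
print(f"Achieved cluster FLOPS at 60% utilization: {achieved_flops:.2e}")
```

Under these assumptions the 10,000-card cluster would sustain roughly 9.5 million decode tokens/s in aggregate, which is why linear scaling efficiency, not just single-card throughput, is the figure vendors emphasize at this scale.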