On the Eve of the Bandwidth War, a "Chinese Groq" Surfaces
半导体芯闻· 2026-01-16 10:27
In the AI compute race, NVIDIA has long since built a nearly unassailable technical moat and market position in AI training with its Hopper, Blackwell, and Rubin GPU architectures. But as demand for real-time AI scenarios has exploded, the latency shortcomings of traditional GPUs in low-batch, high-frequency interactive inference tasks have become increasingly apparent.

To address this pain point, NVIDIA made a bold move, spending $20 billion to acquire Groq's core technology and get a head start in the AI inference market. The sum not only marks the largest deal in NVIDIA's history and sets a new valuation record for the inference-chip sector, it also plainly signals NVIDIA's determination to transform from "compute hegemon" into "king of inference".

On the heels of this move, tech blogger AGF has further disclosed that NVIDIA plans to launch a next-generation Feynman-architecture GPU in 2028, built on TSMC's A16 process and SoIC 3D stacking. Its core purpose is to deeply integrate Groq's inference-optimized LPU (Language Processing Unit) inside the GPU, in effect bolting a dedicated engine for language-inference workloads onto the GPU, taking direct aim at the long-standing "bandwidth wall" and "latency bottleneck" in AI inference performance.

Looking back at the Chinese market, the AI wave has propelled domestic large models to breakthroughs on multiple fronts; local AI chip companies are surging collectively and crowding toward IPOs, and capital enthusiasm remains high. Yet NVIDIA's choice of the Feynman architecture to shore up its inference weakness means that whoever can first solve the "...
On the Eve of the Bandwidth War, a "Chinese Groq" Surfaces
半导体行业观察· 2026-01-15 01:38
Core Viewpoint
- NVIDIA is transitioning from a "computing powerhouse" to a "king of inference" by acquiring Groq's core technology for $20 billion, aiming to dominate the AI inference market [2][6].

Group 1: NVIDIA's Strategy and Market Position
- NVIDIA has established a strong technical barrier in AI training with GPU architectures like Hopper and Blackwell, but faces challenges in low-batch, high-frequency inference tasks due to traditional GPU latency issues [1].
- The acquisition of Groq's technology signals NVIDIA's intent to strengthen its AI inference capabilities, in particular by integrating Groq's Language Processing Unit (LPU) into its upcoming Feynman-architecture GPU [2][4].
- Competition in the AI industry is shifting from raw computing power to maximizing bandwidth per unit area, consistent with NVIDIA's finding that a significant share of inference latency stems from data movement [4].

Group 2: Emergence of Domestic Competitors
- In the Chinese market, the AI wave has fueled the rise of domestic AI chip companies, with ICY Technology (寒序科技) highlighted as a potential "Chinese Groq" for its focus on ultra-high-bandwidth inference chips [6][7].
- ICY Technology has been developing a streaming inference chip with 0.1 TB/mm²/s bandwidth density, competing directly with Groq's technology [7].
- The company pursues a dual-track strategy, covering both magnetic probabilistic-computing chips and high-bandwidth magnetic logic chips aimed at accelerating large-model inference [7][9].

Group 3: Technical Innovations and Advantages
- ICY Technology's choice of on-chip MRAM (Magnetic Random Access Memory) over traditional DRAM or SRAM solutions is seen as a more innovative and sustainable approach that addresses the limitations of existing technologies [9][11].
- MRAM offers significant advantages, including higher storage density and lower cost, making it a viable alternative to SRAM and HBM in AI applications [11][20].
- The SpinPU-E chip architecture targets a bandwidth density of 0.1–0.3 TB/mm²/s, significantly outperforming NVIDIA's H100 [12].

Group 4: Industry Trends and Future Outlook
- The global MRAM market is projected to grow from $4.22 billion in 2024 to approximately $84.77 billion by 2034, a compound annual growth rate of 34.99% [30].
- MRAM's strategic importance is heightened by geopolitical factors and the need for supply-chain independence, positioning it as a critical technology for China's semiconductor industry [21][22].
- The industry is shifting toward MRAM as a mainstream solution, with major semiconductor companies actively investing in its development [23][26].
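The two headline numbers above can be sanity-checked with quick arithmetic. This is a rough sketch, not from the article: the H100 baseline figures (~3.35 TB/s HBM3 bandwidth, ~814 mm² GH100 die) are assumptions taken from NVIDIA's public specifications, used only to put the 0.1 TB/mm²/s claim in context; the second check verifies that the quoted 34.99% CAGR actually reproduces the quoted 2034 market size.

```python
# Claim 1: SpinPU-E targets 0.1-0.3 TB/mm^2/s bandwidth density,
# "significantly outperforming" the H100. Rough H100 baseline from
# public specs (assumption, not from the article):
h100_bandwidth_tb_s = 3.35   # HBM3 peak bandwidth, H100 SXM
h100_die_area_mm2 = 814      # GH100 die area
h100_density = h100_bandwidth_tb_s / h100_die_area_mm2  # ~0.004 TB/mm^2/s
speedup_low = 0.1 / h100_density                        # ~24x at the low end
print(f"H100 bandwidth density: {h100_density:.4f} TB/mm^2/s")
print(f"SpinPU-E low-end target vs H100: {speedup_low:.0f}x")

# Claim 2: MRAM market grows from $4.22B (2024) to ~$84.77B (2034)
# at a 34.99% CAGR. Compounding 10 years should reproduce the end value:
projected_2034 = 4.22 * (1 + 0.3499) ** 10
print(f"Implied 2034 market size: ${projected_2034:.2f}B")
```

Both numbers hold up: the low end of the SpinPU-E target is roughly 24x the H100's bandwidth per unit die area, and the stated CAGR compounds to about $84.8B, matching the quoted projection to rounding.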