【点金互动易】Memory chips + Nvidia: the company's DDR products have entered small-batch evaluation, and its chip component products are already used in volume in Nvidia GPUs
财联社 (Cailian Press) · 2026-01-29 00:50
Preface: 《电报解读》 is an instant news-interpretation product focused on timeliness and professionalism. It emphasizes mining the investment value of major events, analyzing industry-chain companies, and interpreting the key points of major policies, providing users with timely references on how breaking news may affect the market, presented through a professional lens in plain language with text and graphics.
① Memory chips + Nvidia: this company's DDR products have entered small-batch evaluation, its chip component products are already used in volume in Nvidia GPUs, and its products have won recognition from customers in data centers, AI GPUs, and related fields.
② Edge AI + optical communications + Google: this company is a core chip supplier for Google's Gemini AI edge hardware, and is building out FTTR fiber technology to construct a multi-dimensional communications matrix supporting edge-cloud interconnection. ...
The HBM battle: China accelerates breaking through the wall, while Nvidia moves into base-die design
Hu Xiu · 2025-08-18 01:43
Group 1
- The core technology of HBM (High Bandwidth Memory) is becoming increasingly strategic for high-end AI chips and the computing-power supply chain [1][2]
- Competition in AI computing power is intensifying, and the next generation of HBM technology is crucial for breakthroughs in large AI models and low-cost large-scale deployment [2][6]
- China's domestic replacement of HBM technology is accelerating, with the technology gap versus leading companies shrinking from 8 years to 4 years [3][14]

Group 2
- China has begun mass production of HBM2 ahead of schedule; HBM3 samples were delivered to customers in June, with mass-production verification expected by the end of the year [4][12]
- Major players are moving toward HBM4, which will bring a technological leap; Nvidia has started designing HBM base dies, with mass production expected to begin in 2027 [5][24]
- HBM capacity and bandwidth have increased significantly (capacity up 2.4x and bandwidth up 2.6x from H100 to GB200), but model parameters and context lengths are growing faster, increasing storage pressure [6][20]

Group 3
- The lack of domestic HBM is a significant challenge for China's AI competitiveness, especially under US restrictions on access to advanced HBM technology [7][19]
- Mainstream domestic AI chips primarily use HBM2E from the three major memory suppliers, while global competitors have moved to HBM3E [8][9]
- Domestic HBM replacement is proceeding faster than previously expected, with companies such as Changxin Storage and Wuhan Xinxin rapidly catching up [10][13]

Group 4
- Changxin Storage is actively expanding HBM capacity, with HBM2 already in mass production and HBM3 mass production planned for next year [12][14]
- The gap between Chinese manufacturers and the leading memory companies is expected to shrink to about 3-4 years by 2027 [14][15]
- The transition to HBM4 is expected to introduce new technical paths, making the HBM2-to-HBM3 catch-up difficult to replicate [20][25]

Group 5
- Future HBM will not be fully standardized; the trend is toward customization to reduce power consumption and performance loss [24][25]
- Nvidia designing its own HBM base die is a critical development, with a 3nm-process base die expected to enter small-scale production in 2027 [35][36]
- The integration of storage and compute architectures is becoming a key factor for future AI chips, with HBM a decisive element [34][44]
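The "storage pressure" point above can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the per-GPU HBM capacities (roughly 80 GB for an H100-class part, 192 GB for a GB200-class part) and the hypothetical 70B-parameter model configuration are assumptions, not figures from the article. It shows why weights plus KV cache at long context can outrun even a 2.4x capacity jump.

```python
# Back-of-envelope: model memory demand vs. per-GPU HBM capacity.
# All hardware figures are assumed/illustrative, not from the article.

def model_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory for the weights alone, FP16/BF16 (2 bytes per parameter)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, batch: int = 1,
                bytes_per_val: int = 2) -> float:
    """KV cache size: 2 (K and V) * layers * kv_heads * head_dim * tokens."""
    return (2 * layers * kv_heads * head_dim
            * context_len * batch * bytes_per_val) / 1e9

HBM_H100_GB = 80.0    # assumed H100-class capacity
HBM_B200_GB = 192.0   # assumed GB200-class capacity (~2.4x, matching the article)

# Hypothetical 70B-parameter model served at a 128k-token context:
weights = model_memory_gb(70)
cache = kv_cache_gb(layers=80, kv_heads=8, head_dim=128, context_len=128_000)

print(f"weights: {weights:.0f} GB, kv cache: {cache:.1f} GB")
print(f"fraction of one GB200-class GPU: {(weights + cache) / HBM_B200_GB:.2f}")
```

Even under these generous assumptions, a single long-context model nearly fills one next-generation GPU's HBM, which is the pressure the article describes: parameter counts and context lengths scale faster than the 2.4x/2.6x capacity and bandwidth gains per hardware generation.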