Compute Express Link (CXL)
HBM4 Prototype Makes Its First Appearance
半导体芯闻· 2025-10-22 10:30
Group 1
- The Korea Semiconductor Industry Association will hold SEDEX 2025 from October 22 to 24 in Seoul, under the theme "Beyond Limits, Connected Innovation" [1]
- The main highlight of the exhibition will be the next-generation AI semiconductor roadmaps presented by Samsung Electronics and SK Hynix, including a prototype of the 12-layer HBM4 product intended for NVIDIA's next AI accelerator, "Rubin" [2]
- Samsung Electronics plans to demonstrate its capabilities as a full-solution provider, with a lineup that includes HBM4, the next-generation mobile application processor Exynos 2600 built on a 2nm process, system semiconductors, and wafer foundry offerings [2]

Group 2
- SK Hynix aims to emphasize its vision as an "AI memory full-stack" supplier, with a portfolio spanning HBM4, high-capacity DDR5 memory, Compute Express Link (CXL) products, and high-performance enterprise SSDs to meet the explosive data-processing demands of the AI era [3]
- SK Hynix's core strategy is to solidify its market leadership by supplying the complete set of core memory products required to operate AI data centers [3]
Father of HBM: Memory Determines AI Performance!
半导体芯闻· 2025-04-23 10:02
Group 1
- The core argument of the article is the critical role of High Bandwidth Memory (HBM) in determining the performance of artificial intelligence (AI) systems, as highlighted by Professor Kim Jong-ho of KAIST during a recent forum [1][2]
- Professor Kim suggests that the South Korean government should establish ten government-funded semiconductor departments and a dedicated HBM research center to maintain the country's technological leadership [2]
- The article discusses the relationship between HBM and Compute Express Link (CXL), clarifying that CXL is not a replacement for HBM but a complementary technology for addressing latency issues (see the tiering sketch after this summary) [2]

Group 2
- Concerns are raised about the future development of HBM technology, particularly the risk that its ecosystem could come under external control through partnerships with companies such as NVIDIA and TSMC [2]
- The article notes that Samsung Electronics must keep its wafer foundry business active for its HBM business to succeed [2]
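To make the "complement, not replacement" relationship between CXL and HBM concrete, here is a minimal back-of-envelope sketch (not from the article) of a two-tier memory system in which a hot working set is served from HBM and the overflow from CXL-attached DRAM. The bandwidth figures and the serialized-access model are illustrative assumptions, not vendor specifications.

```python
# Illustrative two-tier memory model: HBM (fast, limited capacity) plus a
# CXL-attached DRAM expander (slower, large capacity). Bandwidth numbers are
# assumed round figures for the sketch, not vendor specs.

HBM_BW_GBS = 3000.0  # assumed aggregate HBM bandwidth, GB/s
CXL_BW_GBS = 60.0    # assumed CXL-attached DRAM bandwidth, GB/s

def effective_bandwidth(hbm_fraction: float) -> float:
    """Effective GB/s when a serialized access stream is split between tiers:
    total time per GB is the traffic-weighted sum of each tier's time per GB."""
    time_per_gb = hbm_fraction / HBM_BW_GBS + (1.0 - hbm_fraction) / CXL_BW_GBS
    return 1.0 / time_per_gb

if __name__ == "__main__":
    for hit in (1.00, 0.99, 0.90, 0.50, 0.00):
        print(f"{hit:5.0%} of traffic from HBM -> ~{effective_bandwidth(hit):6.0f} GB/s effective")
```

Even a small share of traffic spilling to the CXL tier drags effective bandwidth far below HBM levels, which is why CXL is positioned as a capacity expander behind HBM rather than a substitute for it.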
Professor Kim emphasized that memory determines AI performance in his keynote at the "AI G3 Powerhouse New Technology Strategy Breakfast Forum," held on April 23 in Conference Room 9 of the National Assembly Members' Office Building. The forum was co-hosted by Democratic Party of Korea lawmaker Chung Dong-young and People Power Party lawmaker Choi Hyung-doo under the theme "HBM Saves Korea."

Professor Kim opened with the recent, widely discussed remarks of OpenAI CEO Sam Altman. "Rather than saying the GPUs are melting because of the wildly popular Miyazaki-style image conversion service, it would be more accurate to say the HBM is melting," he said, "because memory handling inside AI accelerators is mainly the job of HBM."

He also pointed out: "At the current stage, the bandwidth between the GPU and HBM determines AI performance. Fortunately, Korea is highly competitive in HBM."

Professor Kim is known as the "father of HBM." When SK Hynix partnered with AMD to develop HBM in 2013, he breathed life into the technology and has been a key figure in its development ever since, working on system applications, performance optimization, and the expansion of AI use cases, and driving the spread and evolution of HBM technology.

He stressed that "AI is coming to dominate humanity" and called on the government and the National Assembly to play an important role in maintaining leadership in HBM technology. ...
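Professor Kim's claim that the bandwidth between the GPU and HBM determines AI performance can be illustrated with a standard back-of-envelope, roofline-style calculation for autoregressive LLM decoding, where each generated token has to stream roughly the full set of model weights from HBM. The model size, weight precision, and bandwidth figure below are assumptions chosen for illustration, not numbers from the article.

```python
# Back-of-envelope sketch (assumed numbers): why HBM bandwidth caps the speed
# of autoregressive LLM decoding. For a single request with KV-cache traffic
# ignored, every generated token streams roughly all model weights from HBM
# once, so the memory system, not peak FLOPs, sets the ceiling.

PARAMS = 70e9             # assumed model size: 70B parameters
BYTES_PER_PARAM = 2       # assumed 16-bit (FP16/BF16) weights
HBM_BW_BYTES_S = 3.35e12  # assumed ~3.35 TB/s of HBM bandwidth on one accelerator

bytes_per_token = PARAMS * BYTES_PER_PARAM           # weight traffic per generated token
max_tokens_per_s = HBM_BW_BYTES_S / bytes_per_token  # bandwidth-imposed ceiling

print(f"Weight traffic per token: {bytes_per_token / 1e9:.0f} GB")
print(f"Bandwidth-bound ceiling : ~{max_tokens_per_s:.0f} tokens/s per accelerator")
```

Under these assumed figures the ceiling is about 24 tokens per second no matter how many FLOPs the GPU can deliver, and doubling HBM bandwidth roughly doubles it; this is the sense in which GPU-to-HBM bandwidth, rather than compute, governs this workload.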