Samsung Unveils New HBM Roadmap
半导体行业观察· 2026-02-12 00:56
Song Jai-hyuk, President and CTO of Samsung Electronics' Device Solutions (DS) division, has unveiled the company's next-generation product roadmap. In his keynote at SEMICON Korea 2026, held on February 11 at COEX in Gangnam, Seoul, Song said: "As artificial intelligence evolves from agentic AI toward physical AI, we expect workloads (the volume of data computation) to increase sharply. Samsung Electronics is developing technologies that can significantly ease memory bandwidth constraints." Song stressed that Samsung Electronics is the only integrated device manufacturer (IDM) spanning memory, foundry (contract chip manufacturing), and packaging, adding: "We plan to demonstrate the powerful co-optimization capabilities unique to Samsung semiconductors." He explained that Samsung aims to lead advanced technology through integrated solutions covering design, process, memory, and packaging. Song also described progress on the next-generation HBM architectures "cHBM" and "zHBM", noting that "we are in discussions with customers." Sharing results from the development of "Samsung custom HBM (cHBM)", he said: "We are developing custom HBM that secures higher bandwidth by proactively adopting die-to-die interface IP." cHBM is an application-specific integrated circuit (ASIC) designed to be customized for AI semiconductor customers to maximize ...
Samsung CTO Voices Confidence in Company's HBM4 Leadership
Sina Finance · 2026-02-11 04:57
Samsung Electronics CTO Song Jai-hyuk said on Wednesday that he is confident of the company's leadership in sixth-generation high bandwidth memory, HBM4, the first units of which will ship later this month. Song made the remarks at Semicon Korea 2026, the annual semiconductor industry exhibition, which opened in Seoul that day for a three-day run. Asked about the strength of the Korean tech giant's HBM4 products, he said: "Samsung, which has always met the market with world-leading technology, is simply showing its true self." "Samsung has the (portfolio of) memory, foundry, and packaging, an environment optimized for producing what the AI field needs, and these are now generating synergy," Song said. He noted that customers are satisfied with Samsung's HBM4 products and added that the company will continue working to lead in the next-generation HBM4E and HBM5. Samsung Electronics is widely expected to begin shipping HBM4 to Nvidia after next week's Lunar New Year holiday.
Samsung Accelerates Custom HBM4E Design, Targeting Mid-2026 Completion; SK Hynix and Micron on Similar Timelines
Wallstreetcn · 2026-01-23 12:33
Core Insights
- The competition in high bandwidth memory (HBM) technology is intensifying, with major storage chip manufacturers accelerating their focus on customized HBM4E solutions [1]
- Samsung Electronics is significantly increasing its R&D investment, aiming to complete the design of its customized HBM4E by mid-2026, indicating a shift from standardized products to high-performance customized solutions [1]
- The industry anticipates that HBM4E will launch in 2027, followed by HBM5 in 2029, with major manufacturers such as SK Hynix and Micron progressing on similar timelines [1][4]

Group 1: Samsung's Strategy
- Samsung has established dedicated teams for both standardized and customized HBM designs and has recently hired 250 engineers specifically for customized projects, targeting major tech clients such as Google, Meta, and NVIDIA [1]
- Samsung is currently in the backend design phase of HBM4E, which constitutes 60% to 70% of the overall design cycle and focuses on physical design after RTL logic development [3]
- The company plans to use a 2nm process for its customized HBM logic die, aiming for higher performance than the 4nm process used for its current HBM4 logic die [3]

Group 2: Competitors' Approaches
- SK Hynix and Micron are relying on deeper collaboration with TSMC to address the challenges of customization; both companies are expected to complete customized HBM4E development around the same time as Samsung [4]
- SK Hynix is working closely with TSMC on next-generation HBM logic dies, adopting a 12nm process for mainstream server logic dies and upgrading to a 3nm process for high-end designs [4]
- Micron has commissioned TSMC to manufacture its HBM4E logic dies, targeting production in 2027, but faces structural disadvantages from its decision to stick with existing DRAM processes [4]
A 10,000-Word Breakdown of the 371-Page HBM Roadmap
半导体行业观察· 2025-12-17 01:38
Core Insights
- The article emphasizes the critical role of High Bandwidth Memory (HBM) in supporting AI technologies, highlighting its evolution from a niche technology to a necessity for AI performance [1][2][15]

Understanding HBM
- HBM is designed to address the limitations of traditional memory, which struggles to keep up with the computational demands of AI models [4][7]
- Traditional memory types like DDR5 and LPDDR5 have significant drawbacks, including limited bandwidth, high latency, and inefficient data transfer [4][10]

HBM Advantages
- HBM offers three main advantages: significantly higher bandwidth, reduced power consumption, and a compact form factor suited to high-density AI servers [11][12][14]
- For instance, HBM3 delivers 819GB/s of bandwidth, while HBM4 is expected to more than double that to 2TB/s, enabling faster AI model training [12][15]

HBM Generational Roadmap
- The KAIST report outlines a roadmap for HBM development from HBM4 to HBM8, detailing the technological advancements and their implications for AI [15][17]
- Each generation of HBM is tailored to the evolving needs of AI applications, with HBM4 targeting mid-range AI servers and HBM5 addressing the computational demands of large models [17][27]

HBM Technical Innovations
- HBM's architecture uses a "sandwich" 3D stacking design that enhances data transfer efficiency [8][9]
- Innovations such as Near Memory Computing (NMC) in HBM5 allow memory to perform computations, reducing the workload on GPUs and improving processing speed [27][28]

Market Dynamics
- The global HBM market is dominated by three major players: SK Hynix, Samsung, and Micron, which together control over 90% of market share [80][81]
- These companies have secured long-term contracts with major clients, ensuring steady demand for HBM products [83][84]

Future Challenges
- The article identifies key challenges for HBM's widespread adoption, including high costs, thermal management, and the need for a robust ecosystem [80]
- Addressing these challenges is crucial for transitioning HBM from a high-end product to a more accessible solution for various applications [80]
HBM4: Jensen Huang Confirms
半导体行业观察· 2025-11-10 01:12
Core Insights
- Nvidia CEO Jensen Huang announced the receipt of advanced memory samples from Samsung Electronics and SK Hynix, indicating strong support for Nvidia's growth amid AI chip demand [3][4]
- Huang expressed concern about potential memory supply shortages given robust business growth across sectors, suggesting that memory prices may rise depending on operating conditions [3]
- TSMC CEO C.C. Wei acknowledged Nvidia's significant wafer demand, emphasizing TSMC's critical role in Nvidia's success [3]

Memory Market Dynamics
- SK Hynix, Micron, and Samsung are competing fiercely to dominate the HBM4 market, estimated to be worth $100 billion [6]
- Micron has begun shipping its next-generation HBM4 memory, claiming record performance and efficiency, with bandwidth exceeding 2.8TB/s [6][7]
- SK Hynix has also delivered 12-Hi HBM4 samples to major clients, including Nvidia, and plans to ramp up production [7][8]

Future HBM Generations
- The latest generation, HBM4, supports bandwidth up to 2TB/s and stacks of up to 16 DRAM dies (16-Hi), with capacity of up to 64GB [10]
- Future generations, HBM5 through HBM8, are projected to significantly increase bandwidth and capacity, with HBM8 expected to reach 64TB/s by 2038 [11][12][15]
- HBM technology is evolving with new stacking techniques and cooling methods that enhance performance and efficiency [12][13]
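The HBM4 capacity figure quoted above follows from simple stack arithmetic. A minimal sketch, assuming 32 Gbit DRAM dies (not stated in the article; inferred from 64 GB across a 16-Hi stack):

```python
# Back-of-the-envelope check of HBM stack capacity.
# Assumption: die density in Gbit is hypothetical, chosen to
# reproduce the figures quoted in the article.

def stack_capacity_gb(die_density_gbit: int, num_dies: int) -> int:
    """Capacity of one HBM stack in gigabytes (8 Gbit = 1 GB)."""
    return die_density_gbit * num_dies // 8

# 16-Hi stack of 32 Gbit dies -> the 64 GB HBM4 maximum quoted above
print(stack_capacity_gb(32, 16))  # 64
# 12-Hi stack of 24 Gbit dies -> 36 GB, a common HBM4 configuration
print(stack_capacity_gb(24, 12))  # 36
```

The same arithmetic explains the capacity jumps across later generations: higher die densities and taller stacks multiply together.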
HBM, Like Never Before
半导体行业观察· 2025-09-07 02:06
Core Insights
- The article discusses the rapid growth of High Bandwidth Memory (HBM), driven by increasing demand for artificial intelligence (AI) and accelerated GPU development by companies such as NVIDIA [1][2][5]
- HBM is a high-end memory technology that is difficult to implement, and customization is crucial for it to keep benefiting from the widespread adoption of GPUs and accelerators [1][2]

Market Trends
- According to Dell'Oro Group, the server and storage components market is expected to grow 62% year-over-year by Q1 2025 on surging demand for HBM, accelerators, and network interface cards (NICs) [1]
- AI server sales have risen from 20% to approximately 60% of the total market, significantly boosting GPU performance requirements and HBM capacity [2]

Competitive Landscape
- SK Hynix leads the HBM market with a 64% sales share, followed by Samsung and Micron [1][2]
- Micron plans to begin mass production of next-generation HBM4 with a 2048-bit interface in 2026, expecting 50% quarter-over-quarter HBM revenue growth by Q3 FY2025 and annual revenue of $6 billion [2]

Technological Challenges
- Demand for HBM is rising rapidly, and manufacturers face challenges from accelerated GPU release cycles, now every 2 to 2.5 years versus the traditional 4 to 5 years for standard memory technologies [3][4]
- The complexity of HBM5 architecture poses challenges for standardization and widespread adoption, requiring a balance between high memory bandwidth and increased capacity for next-generation AI and computing hardware [5][6]

Future Developments
- Marvell Technology is collaborating with major HBM suppliers on a custom HBM computing architecture, expected in the second half of 2024, integrating advanced 2.5D packaging technology and custom interfaces for AI accelerators [4][6]
- HBM memory bandwidth and I/O count are expected to double with each generation, necessitating innovative packaging technologies to accommodate the increased density and complexity [4][6]
Hybrid Bonding: The Next Focus
36Kr · 2025-06-30 10:29
Group 1
- Hybrid bonding technology is gaining traction among major semiconductor companies such as TSMC and Samsung, as it is seen as key to advancing packaging technology over the next decade [2][4][10]
- Hybrid bonding enables high-density, high-performance interconnections between different chips, significantly improving signal transmission speed and reducing power consumption compared to traditional methods [5][11]
- The technology is particularly relevant for high bandwidth memory (HBM) products, with leading manufacturers SK Hynix, Samsung, and Micron planning to adopt hybrid bonding in their upcoming HBM5 products to meet rising bandwidth demands [10][12]

Group 2
- TSMC's SoIC technology uses hybrid bonding to achieve a 15-fold increase in chip connection density over traditional methods, enhancing performance and reducing size [14][15]
- Intel has also entered the hybrid bonding space with its Foveros 3D technology, which significantly increases the number of interconnections per square millimeter, enhancing integration capabilities [19]
- SK Hynix and Samsung are actively testing hybrid bonding for their next-generation HBM products, with Samsung emphasizing the need for the technology to meet height restrictions in memory packaging [20][22]

Group 3
- The global hybrid bonding market is projected to grow from $123.49 million in 2023 to $618.42 million by 2030, a compound annual growth rate (CAGR) of 24.7%, with particular strength in the Asia-Pacific region [22]
HBM8: The Latest Outlook
半导体行业观察· 2025-06-13 00:46
Core Viewpoint
- Cooling technology will become a key competitive factor in the high bandwidth memory (HBM) market as HBM5 reaches commercialization around 2029, shifting the focus from packaging to cooling methods [1][2]

HBM Technology Roadmap
- The roadmap from HBM4 to HBM8 spans 2025 to 2040, detailing advances in HBM architecture, cooling methods, TSV density, and interposer technologies [1]
- HBM4, expected in 2026, is projected to run at an 8 Gbps data rate with 2.0 TB/s of bandwidth and 36/48 GB of capacity per stack, using liquid cooling [3]
- HBM5, expected in 2029, will keep the 8 Gbps data rate but double bandwidth to 4 TB/s and increase capacity to 80 GB [3]
- HBM6 will introduce a 16 Gbps data rate and 8 TB/s of bandwidth, with 96/120 GB of capacity [3]
- HBM7 is expected to reach 24 TB/s of bandwidth and 160/192 GB of capacity, while HBM8 will achieve a 32 Gbps data rate, 64 TB/s of bandwidth, and 200/240 GB of capacity by 2038 [3]

Cooling Technologies
- HBM5 will use immersion cooling, submerging the substrate and package in cooling liquid to overcome the limits of current liquid cooling methods [1]
- HBM7 will require embedded cooling systems that inject coolant between DRAM dies, introducing fluid TSVs; new TSV types such as thermal TSVs and power TSVs will support the cooling needs of future generations [2]
- Cooling becomes critical because, starting with HBM4, the base die will take on part of the GPU workload, raising temperatures [1][2]

Bonding and Performance Factors
- Bonding will also play a significant role in HBM performance, with hybrid glass-silicon interposers introduced from HBM6 onward [2]
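The quoted data rates and bandwidths are consistent with each generation widening its interface. A minimal sanity check, where the interface widths are an assumption (they are not given in the summary; they are chosen so the quoted per-pin rates and total bandwidths agree):

```python
# Total stack bandwidth from per-pin data rate and interface width.
# bandwidth (TB/s) = rate (Gbps/pin) * width (bits) / 8 (bits/byte) / 1000
# The I/O widths below are hypothetical, inferred from the quoted figures.

def bandwidth_tbps(data_rate_gbps: float, io_width_bits: int) -> float:
    return data_rate_gbps * io_width_bits / 8 / 1000

roadmap = {                # generation: (Gbps per pin, assumed I/O width)
    "HBM4": (8, 2048),     # -> 2.0 TB/s
    "HBM5": (8, 4096),     # -> 4.1 TB/s (quoted as ~4 TB/s)
    "HBM6": (16, 4096),    # -> 8.2 TB/s
    "HBM7": (24, 8192),    # -> 24.6 TB/s (quoted as ~24 TB/s)
    "HBM8": (32, 16384),   # -> 65.5 TB/s (quoted as ~64 TB/s)
}
for gen, (rate, width) in roadmap.items():
    print(f"{gen}: {bandwidth_tbps(rate, width):.1f} TB/s")
```

Note how HBM5 and HBM6 hold or double the interface width while the data rate carries the rest of the gain, which is why packaging (more TSVs and wider interposers) and cooling dominate the later roadmap.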