DDR Memory
CounterPoint: Global memory prices up 50% this year, may rise another 50% in 2026
Huan Qiu Wang Zi Xun· 2025-11-20 04:25
Source: 环球网

The Counterpoint report notes that chip giant NVIDIA's strategic shift carries broader long-term implications. Traditional servers generally use DDR memory with error-correcting code (ECC) to ensure data reliability, but NVIDIA, aiming to reduce power consumption, is moving to adopt LPDDR memory at scale in its server products and plans to handle error correction at the CPU level. Research Director MS Hwang said this shift makes NVIDIA's memory demand comparable to that of a major smartphone maker, a "seismic" change for the existing supply chain that will be hard to absorb in the short term.

[环球网 Tech Report] November 20 — Market research firm CounterPoint Research recently published a report showing that the global memory market is under significant upward pricing pressure. After prices surged 50% this year, dynamic random-access memory (DRAM) prices are expected to keep rising: possibly another 30% in Q4 2025 and a further 20% in early 2026, for a cumulative increase of around 50% by Q2 2026.

The turbulence in the memory market will ripple broadly across the consumer electronics ecosystem. Senior analyst Ivan Lam said the initial impact will be concentrated among low-end smartphone makers using LPDDR4, with the effects spreading from there. The report forecasts that the bill of materials (BoM) cost of mid- to high-end smartphones could rise by more than 25%, which may erode manufacturers' margins, force them to raise prices, and bring uncertainty to the industry ...
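For readers who want to see how the quarterly forecasts above relate to the cumulative figure, here is a minimal arithmetic sketch in Python. Whether the roughly 50% cumulative rise should be read as a simple sum of the 30% and 20% steps or as compounded growth is my own framing, not something the report spells out.

```python
# Illustrative check of the Counterpoint forecast percentages quoted above.
q4_2025 = 0.30     # expected QoQ DRAM price rise in Q4 2025
early_2026 = 0.20  # further rise expected in early 2026

simple_sum = q4_2025 + early_2026                  # 0.50 -> "about 50%"
compounded = (1 + q4_2025) * (1 + early_2026) - 1  # 0.56 -> ~56% if the steps compound

print(f"simple sum: {simple_sum:.0%}, compounded: {compounded:.0%}")
```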
[Global Finance] AI bubble fears resurface; South Korean stocks tumble
Sou Hu Cai Jing· 2025-11-05 11:58
Reposted from: Xinhua Finance (新华财经)

Based on the strong price outlook, UBS raised its 12-month target price for SK Hynix from 640,000 won to 710,000 won, maintained its "Buy" rating, and called the stock its "top pick" in memory. It also raised its target price for Samsung Electronics from 118,000 won to 128,000 won.

Accelerating AI investment revives the "AI bubble" debate

Xinhua Finance, Shanghai, November 5 (Ge Jiaming) — Mounting concern that the US artificial intelligence (AI) sector may be in a bubble sent South Korean equities sharply lower. The KOSPI fell more than 5% intraday on the 5th before paring losses to close down 2.92%.

Analysts noted that after a strong run-up, worries about stretched valuations in AI, chip, and other tech stocks have intensified. Samsung Electronics and SK Hynix carry heavy weights in the Korean index, and their valuations were already elevated after consecutive rallies, so the overnight pullback in US AI stocks dragged the Korean market lower.

Behind the Korean market's sustained rally

Driven by the AI boom and expectations of corporate governance reform, the KOSPI has gained more than 66% year to date, among the strongest performances of any national index.

The rally has been powered mainly by the two heavyweight stocks, Samsung Electronics and SK Hynix, which together account for nearly 30% of the index's weight. Samsung shares have risen more than 88% this year and SK Hynix has surged nearly 240%; the Korea Exchange recently issued an "investment caution" alert on SK Hynix shares.

Benefiting from growing memory demand from data centers and elsewhere, ...
Memory chips in "severe shortage", DRAM and DDR price hikes "unstoppable"; UBS raises Samsung and SK Hynix price targets
美股IPO· 2025-11-04 12:42
The memory chip industry is facing a "severe shortage". A UBS report says DDR memory contract prices are expected to rise 21% or more quarter-on-quarter in Q4 2025. On that basis, UBS raised its price targets for Samsung Electronics and SK Hynix and expects the supply shortage to last at least through the end of 2026, with price gains likely to extend into Q1 2027.

The memory chip industry is entering a period of "severe shortage", with strong demand and limited capacity expansion pushing DRAM prices into a powerful up-cycle. Against this backdrop, UBS significantly raised its forecasts for DRAM contract prices and lifted its price targets for industry giants Samsung Electronics and SK Hynix, arguing the upward price momentum will last at least until the end of 2026.

According to 追风交易台, UBS said in a research note published on November 3 that its latest industry survey shows Q4 2025 DDR memory contract price negotiations proceeding with positive momentum, with quarter-on-quarter increases expected to reach 21% or more. The report states plainly that "DRAM suppliers clearly have the upper hand."

Based on the strong price outlook, UBS raised its 12-month target price for SK Hynix from 640,000 won to 710,000 won, maintained its "Buy" rating, and called the stock its "top pick" in memory. It also raised its target for Samsung Electronics from 118,000 won to 128,000 won, and lifted its revenue and earnings forecasts for both companies for 2026 and 2027.

Behind this round of price increases is artificial intelligence (AI)-driven high-band ...
What kind of DRAM does artificial intelligence need?
半导体行业观察· 2025-06-13 00:40
Core Viewpoint
- The article discusses the critical role of different types of DRAM in meeting the growing computational demands of artificial intelligence (AI), emphasizing the importance of memory bandwidth and access methods in system performance [1][4][10].

DRAM Types and Characteristics
- Synchronous DRAM (SDRAM) is categorized into four types: DDR, LPDDR, GDDR, and HBM, each with distinct purposes and advantages [1][4].
- DDR memory is optimized for complex operations and is the most versatile architecture, featuring low latency and moderate bandwidth [1].
- Low Power DDR (LPDDR) includes features to reduce power consumption while maintaining performance, such as lower voltage and temperature compensation [2][3].
- GDDR is designed for graphics processing with higher bandwidth than DDR but higher latency [4][6].
- High Bandwidth Memory (HBM) provides extremely high bandwidth necessary for data-intensive computations, making it ideal for data centers [4][7].

Market Dynamics and Trends
- HBM is primarily used in data centers due to its high cost and energy consumption, limiting its application in cost-sensitive edge devices [7][8].
- The trend is shifting towards hybrid memory solutions, combining HBM with LPDDR or GDDR to balance performance and cost [8][9].
- LPDDR is gaining traction in various systems, especially in battery-powered devices, due to its excellent bandwidth-to-power ratio [14][15].
- GDDR is less common in AI systems, often overlooked despite its high throughput, as it does not meet specific system requirements [16].

Future Developments
- LPDDR6 is expected to launch soon, promising improvements in clock speed and error correction capabilities [18].
- HBM4 is anticipated to double the bandwidth and channel count compared to HBM3, with a release expected in 2026 [19].
- Custom HBM solutions are emerging, allowing bulk buyers to collaborate with manufacturers for optimized performance [8].

System Design Considerations
- Ensuring high-quality access signals is crucial for system performance, as different suppliers may offer varying speeds for the same DRAM type [22].
- System designers must carefully select the appropriate memory type to meet specific performance needs while considering cost and power constraints [22].
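As a rough illustration of the trade-offs summarized above, the sketch below encodes the qualitative characteristics from the article (bandwidth, latency, power, cost) in a small Python table and filters candidates against a system's constraints. The 1-4 rankings and the select_memory helper are illustrative assumptions, not figures or tooling from the article.

```python
# Illustrative sketch: qualitative DRAM trade-offs as summarized above.
# The 1-4 rankings (1 = lowest, 4 = highest) are assumptions for illustration,
# not measured values from the article.
DRAM_TYPES = {
    "DDR":   dict(bandwidth=2, latency=1, power=3, cost=2),  # versatile, low latency, moderate bandwidth
    "LPDDR": dict(bandwidth=2, latency=2, power=1, cost=2),  # best bandwidth-per-watt for battery devices
    "GDDR":  dict(bandwidth=3, latency=3, power=3, cost=3),  # graphics-oriented, high throughput
    "HBM":   dict(bandwidth=4, latency=2, power=4, cost=4),  # extreme bandwidth, data-center class cost/power
}

def select_memory(min_bandwidth: int, max_power: int, max_cost: int) -> list[str]:
    """Return memory types meeting hypothetical bandwidth, power, and cost constraints."""
    return [
        name for name, c in DRAM_TYPES.items()
        if c["bandwidth"] >= min_bandwidth and c["power"] <= max_power and c["cost"] <= max_cost
    ]

# Example: a battery-powered edge device wants decent bandwidth at low power and cost.
print(select_memory(min_bandwidth=2, max_power=2, max_cost=3))  # -> ['LPDDR']
```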
The progress of CXL: not yet mature
半导体行业观察· 2025-05-27 01:25
Core Insights
- CXL is gaining attention as a new standard for memory subsystems, with significant advancements in hardware and performance data [2][17].
- The CXL standard has evolved through multiple versions, with CXL 1.0 and 1.1 released in 2019, followed by CXL 2.0 in late 2020, and further versions expected between 2022 and 2024 [2][17].
- The adoption of CXL memory subsystems is projected to take time, with significant sales growth not expected until 2027 due to current software support limitations [14][17].

Group 1: CXL Development and Features
- CXL has expanded its coverage from single CPU systems to rack and rack-to-rack connections, enhancing memory consistency across various processors [3][6].
- Key features of CXL include eliminating idle memory in data centers, increasing memory bandwidth, and supporting persistent memory [3][11].
- The interest in CXL varies among system developers, with large data center systems showing more interest compared to PC OEMs [11][17].

Group 2: Performance Comparisons
- CXL memory exhibits higher latency compared to traditional DDR memory, with delays ranging from 170 to 210 nanoseconds for CXL memory controllers, compared to approximately 100 nanoseconds for DDR [14][15].
- In terms of bandwidth, CXL memory can achieve a transfer speed of 64 GB/s through PCIe Gen5, compared to 51.2 GB/s for DDR5-6400 SDRAM DIMM [15].
- CXL memory has shown significant performance improvements in specific applications, such as a 23% performance increase in Microsoft SQL database tests and over 100% improvement in Apache Spark machine learning tests [16][17].

Group 3: Market Outlook
- The current lack of software support for CXL features is a barrier to widespread adoption, with predictions indicating that significant sales will not occur until 2027 [14][17].
- Despite the challenges, CXL memory subsystems are already in production and have demonstrated tangible benefits in certain applications, particularly for large system developers [17].
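As a quick check on the bandwidth figures quoted above, the sketch below reproduces the usual back-of-the-envelope numbers for a DDR5-6400 DIMM and a PCIe Gen5 link. The 64-bit DIMM data bus and the x16 lane width are assumptions on my part rather than details stated in the article.

```python
# Back-of-the-envelope peak bandwidths behind the 51.2 GB/s and ~64 GB/s figures above.
# Assumptions (not stated in the article): a 64-bit DDR5 DIMM data bus, a PCIe Gen5 x16 link.

# DDR5-6400 DIMM: 6400 MT/s on a 64-bit (8-byte) channel.
ddr5_6400_gbs = 6400e6 * 8 / 1e9                          # = 51.2 GB/s

# PCIe Gen5: 32 GT/s per lane with 128b/130b encoding, 16 lanes, per direction.
pcie_gen5_x16_gbs = 32e9 * (128 / 130) * 16 / 8 / 1e9     # ~63.0 GB/s, commonly rounded to 64 GB/s

print(f"DDR5-6400 DIMM: {ddr5_6400_gbs:.1f} GB/s")
print(f"PCIe Gen5 x16:  {pcie_gen5_x16_gbs:.1f} GB/s")
```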