Unknown institution: Changjiang Electronics, Montage Technology fourth call: positioning on the industry trend, ramb-20260204
Unknown institution · 2026-02-04 02:00
According to supply-chain checks, Google's TPUv8p is expected to introduce memory pooling to expand memory capacity; under this scheme, each TPU could gain several hundred GB of additional memory, and memory pooling is expected to move toward volume commercialization.

[Changjiang Electronics] #Montage Technology (澜起科技) fourth call: positioning on the industry trend

#Rambus guided below expectations due to its own assembly-and-test quality issue, but the industry trend is intact. Rambus's Q1 product deliveries were affected by an in-house assembly-and-test quality problem, and its Q1 guidance missed expectations, causing a sharp after-hours drop. The company nonetheless gave positive responses on the future growth of server CPU memory modules and on the volume ramp of MRDIMM and other products; the industry trend continues upward.

#Google TPUv8p may introduce memory pooling; a CXL volume ramp is in sight. As CXL expansion chips, MRCD/MDB, PCIe Retimers, and PCIe Switches ramp at scale, we believe the company's long-term profit could reach 100-15 ...
Astera Labs (NasdaqGS:ALAB) FY Conference Transcript
2026-01-14 16:17
Summary of Astera Labs Conference Call

Company Overview
- **Company**: Astera Labs
- **Founded**: 2017
- **Headquarters**: San Jose, California
- **Industry**: Semiconductor, focused on rack-scale AI infrastructure and connectivity solutions [1][3]

Core Products and Technologies
- **Scorpio P and X Family of Fabric Switches**:
  - Scorpio P for PCI Express connectivity in scale-out applications
  - Scorpio X for GPU-to-GPU scale-up connectivity [4][5]
- **Aries Retimers**: used in both scale-out and scale-up applications [5]
- **Taurus Products**: signal conditioning for Ethernet, deployed as active electrical cables [5]
- **Leo Products**: address memory bottlenecks in AI systems, enabling DDR5 memory expansion over CXL [5]
- **Software**: the Cosmos software suite ties the components together for diagnostics and customization [8]

Competitive Advantages
- **Architecture**: a software-first architecture allows flexibility and customization for end customers [7]
- **Customer Trust**: strong customer relationships provide insight into future needs, shaping Astera's product roadmap [9]
- **Market Position**: leading position in PCIe retimers and rapidly gaining share in PCIe switches [10][11]

Market Trends and Customer Insights
- **AI Spending Environment**: strong demand for AI systems, with customers reporting ROI on their investments; no signs of slowdown expected in 2026 or 2027 [12][14]
- **Engagements**: over 10 engagements for PCIe scale-up switches, with growing traction for the Scorpio X family [28]

Industry Developments
- **AWS Announcements**: transition to a PCIe-based switch fabric and support for UALink, both beneficial for Astera [15][17]
- **CPO Solutions**: co-packaged optics (CPO) is seen as a net increase in total addressable market (TAM), with plans to develop optical solutions [21][22]
- **CXL Market**: ramp expected in 2026, particularly for general-purpose compute applications [55]

Future Outlook
- **Product Development**: anticipated growth in 800G AECs and continued development of UALink switches [49][32]
- **M&A Strategy**: strategic acquisitions planned to bolster capabilities and capture market opportunities [57][58]

Key Challenges
- **Competition**: competing with companies like Marvell and Broadcom in the UALink and Ethernet spaces [32][10]
- **Adoption of New Technologies**: the transition from PCIe to UALink and NVLink may take time, with gradual adoption expected [30][31]

Conclusion
Astera Labs is strongly positioned in the semiconductor industry, particularly in AI infrastructure, with a robust product portfolio and strategic customer relationships. The company anticipates continued growth driven by strong demand for AI solutions and plans to expand its offerings in optical and CXL technologies.
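Leo-class devices attach DDR5 behind a CXL controller, and host software typically treats that capacity as a larger, slower memory tier alongside local DRAM. The placement idea can be sketched in a few lines of Python; the tier sizes and the hot/cold policy below are purely illustrative assumptions, not Astera's implementation:

```python
class TieredAllocator:
    """Toy two-tier page placement: a small fast 'local DRAM' tier and a
    larger 'CXL-attached' tier. Capacities and policy are illustrative."""

    def __init__(self, local_pages=4, cxl_pages=16):
        self.local_free = local_pages
        self.cxl_free = cxl_pages
        self.placement = {}  # page_id -> "local" | "cxl"

    def allocate(self, page_id, hot: bool) -> str:
        # Hot pages prefer local DRAM; cold pages, or hot pages that
        # overflow local DRAM, land in the CXL-expanded tier.
        if hot and self.local_free > 0:
            self.local_free -= 1
            self.placement[page_id] = "local"
        elif self.cxl_free > 0:
            self.cxl_free -= 1
            self.placement[page_id] = "cxl"
        else:
            raise MemoryError("both tiers exhausted")
        return self.placement[page_id]
```

The point of the sketch is the overflow path: CXL expansion does not make DRAM faster, it makes running out of DRAM survivable.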
Squeezing every drop of GPU performance: ZTE's Mariana breaks through the VRAM barrier
QbitAI (量子位) · 2025-08-26 05:46
Core Insights
- The article discusses the challenges of expanding Key-Value Cache (KV Cache) storage in large language models (LLMs), highlighting the conflict between reasoning efficiency and memory cost [1]
- It emphasizes the need for innovative solutions to enhance KV Cache storage without compromising performance [1]

Industry Exploration
- Nvidia's Dynamo project implements a multi-level caching algorithm for storage systems but faces complexity in data migration and latency issues [2]
- Microsoft's LMCache system is compatible with inference frameworks but has limited distributed-storage support and space capacity [3]
- Alibaba proposed a remote storage solution extending KV Cache to the Tair database, which scales easily but struggles to meet the low-latency requirements of LLM inference [3]

Emerging Technologies
- CXL (Compute Express Link) is presented as a promising high-speed interconnect that could alleviate memory bottlenecks in AI and high-performance computing [5]
- Research on using CXL to accelerate LLM inference is still limited, indicating a significant opportunity for exploration [5]

Mariana Exploration
- ZTE Corporation and East China Normal University introduced Mariana, a distributed shared KV storage technology designed for high-performance distributed KV indexing [6]
- Mariana's architecture is tailored for GPU and KV Cache storage, achieving 1.7x higher throughput and 23% lower tail latency than existing solutions [6]

Key Innovations of Mariana
- The Multi-Slot lock-based Concurrency Scheme (MSCS) enables fine-grained concurrency control at the entry level, significantly reducing contention and improving throughput [8]
- The Tailored Leaf Node (TLN) design optimizes data layout for faster access, speeding up reads by loading key arrays into SIMD registers in parallel [10]
- An adaptive caching strategy based on the Count-Min Sketch algorithm identifies and caches hot data efficiently, improving read performance [11]

Application Validation
- Mariana supports large-capacity storage by distributing data across remote memory pools, theoretically allowing unlimited storage space [13]
- Experimental results indicate that Mariana significantly improves read/write throughput and latency in KV Cache scenarios [14]

Future Prospects
- Mariana's design is compatible with future CXL hardware, allowing seamless migration and use of CXL's advantages [18]
- Advances in Mariana and CXL technology could let large models run efficiently on standard hardware, democratizing AI capabilities across applications [18]
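The MSCS idea, a lock per entry slot rather than one lock per node, can be sketched compactly. The slot count, the hash-to-slot mapping, and the overwrite-on-collision behavior below are simplifying assumptions for illustration, not Mariana's actual design:

```python
import threading

class MultiSlotNode:
    """A fixed-fanout node where each entry slot carries its own lock,
    so writers hitting different slots of the same node never block
    each other. Illustrative analogue of Mariana's MSCS only."""
    SLOTS = 8

    def __init__(self):
        self.keys = [None] * self.SLOTS
        self.values = [None] * self.SLOTS
        self.locks = [threading.Lock() for _ in range(self.SLOTS)]

    def _slot(self, key) -> int:
        return hash(key) % self.SLOTS

    def put(self, key, value):
        i = self._slot(key)
        with self.locks[i]:  # entry-level lock, not a node-wide lock
            self.keys[i], self.values[i] = key, value

    def get(self, key):
        i = self._slot(key)
        with self.locks[i]:
            return self.values[i] if self.keys[i] == key else None
```

Because each slot is guarded independently, two threads writing keys that land in different slots of the same node proceed in parallel, which is the contention reduction the summary attributes to MSCS.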
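The Count-Min Sketch behind the adaptive caching strategy admits a short sketch as well. The width, depth, and promotion threshold here are illustrative choices, not the paper's parameters:

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counter: `depth` hash rows of `width`
    counters each. Estimates never undercount; collisions only inflate."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, key: str):
        # One independent hash position per row, derived from row index + key.
        for row in range(self.depth):
            digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
            yield row, int.from_bytes(digest, "big") % self.width

    def add(self, key: str):
        for row, col in self._cells(key):
            self.table[row][col] += 1

    def estimate(self, key: str) -> int:
        # Minimum across rows filters out most collision inflation.
        return min(self.table[row][col] for row, col in self._cells(key))

# Promote a key to the local cache once its estimated access count
# crosses a (hypothetical) hotness threshold.
HOT_THRESHOLD = 3
sketch = CountMinSketch()
cache = {}

def on_access(key: str, value):
    sketch.add(key)
    if sketch.estimate(key) >= HOT_THRESHOLD:
        cache[key] = value  # hot entry: keep a local copy
```

The sketch costs O(width x depth) memory regardless of how many distinct keys pass through, which is why it suits hot-data detection over a large remote KV space.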
SK Hynix accelerates CXL development
半导体芯闻 · 2025-04-23 10:02
Group 1
- SK Hynix announced the completion of customer certification for its CXL 2.0-based DRAM solution, the CMM-DDR5 96GB product, which offers 50% more capacity and 30% more bandwidth than existing DDR5 modules, enabling data processing of 36GB per second [1]
- The company is also certifying a 128GB product, built on 10nm-class fifth-generation 32Gb DDR5 DRAM, which delivers better power efficiency [1]
- SK Hynix aims to expand the CXL ecosystem and has developed its own software, HMSDK, to optimize the product, improving system performance through efficient interleaving between conventional DRAM modules and CMM-DDR5 [2]

Group 2
- HMSDK support was upstreamed to Linux, the largest open-source operating system, in September of the previous year, improving the performance of CXL applications [2]
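HMSDK's headline optimization is bandwidth-aware placement across plain DRAM and CMM-DDR5, so that the aggregate bandwidth of both tiers is used instead of filling DRAM first. A toy weighted interleaver illustrates the idea; the 2:1 weight ratio is an assumption for illustration, not HMSDK's actual tuning:

```python
from itertools import cycle

def weighted_interleave(pages, dram_weight=2, cxl_weight=1):
    """Assign pages across DRAM and CXL memory in proportion to an
    assumed bandwidth ratio (here 2:1), so sequential allocations
    spread load over both tiers. Purely illustrative of the policy."""
    pattern = cycle(["dram"] * dram_weight + ["cxl"] * cxl_weight)
    return {page: tier for page, tier in zip(pages, pattern)}
```

With the 2:1 pattern, every third page lands on the CXL module, which is how a bandwidth-limited workload can exceed what the DRAM channels alone provide.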
Source: compiled from chosun. https://biz.chosun.com/it-science/ict/2025/04/23/4VTOCIILZNFAFNOKZWQ6477UUM/

SK Hynix announced on the 23rd that it has completed customer certification of its CXL 2.0-based DRAM solution, the CMM (CXL Memory Module)-DDR5 96GB product.

CXL is a next-generation solution that supports large-capacity, ultra-high-speed computing by efficiently connecting the central processing unit (CPU), graphics processing unit (GPU), and memory within a computing system. Built on the PCIe interface, it offers pooling, enabling fast data transfer and efficient memory utilization.

SK Hynix stated, "If this product is applied to a server system, capacity will increase by 50% compared with existing DDR5 modules, and the product's own bandwidth will expand by 30%, enabling it to process 36GB of data per second ...