Compute Express Link (CXL)
SK Telecom and Panmnesia Sign Partnership to Innovate AI Data Center Architecture, Enhancing Cost Efficiency and Performance
Businesswire· 2026-03-03 23:00
Core Viewpoint
- Panmnesia and SK Telecom have formed a strategic partnership to develop a next-generation AI data center architecture based on Compute Express Link (CXL) technology, aiming to enhance cost efficiency and performance in AI data centers [1][2].

Group 1: Partnership and Objectives
- The partnership was announced at MWC26 in Barcelona and focuses on creating a CXL-based AI data center architecture [1].
- The collaboration aims to address the rising costs of GPU deployments in large-scale AI services by improving the utilization of existing computing resources [2].

Group 2: Challenges in Current AI Data Center Architectures
- Current AI data centers are constrained by fixed ratios of CPUs, GPUs, and memory, leading to inefficiencies and increased costs when resources are underutilized [4].
- The conventional architecture requires additional GPUs to be deployed whenever memory capacity is insufficient, which lowers GPU utilization and raises operational expenditures [4].

Group 3: Proposed Solutions
- SKT and Panmnesia propose a disaggregated architecture that separates computing resources by type, allowing flexible composition and minimizing resource waste [5].
- The new architecture will use a CXL Fabric Switch to interconnect resources at the rack level, enabling dynamic allocation based on workload requirements [5].

Group 4: Enhancements in Computational Efficiency
- The collaboration aims to improve computational efficiency by replacing traditional network-based interconnects with CXL, eliminating the need for data copies and software intervention [7][8].
- The architecture will feature a Link Controller that enables direct GPU-to-GPU and GPU-to-memory communication over CXL without software intervention, improving processing efficiency [9].
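The fixed-ratio problem described above can be made concrete with a small back-of-envelope model. This is a hypothetical illustration, not SKT/Panmnesia's actual design: the per-GPU memory figure and the workload numbers are assumptions chosen only to show why bundling memory with GPUs inflates GPU counts for memory-bound workloads.

```python
# Hypothetical sketch: fixed GPU:memory ratio vs. CXL-style disaggregation.
# All figures below are illustrative assumptions, not vendor specifications.

FIXED_MEM_PER_GPU_GB = 80  # assumed memory bundled with each GPU in a fixed server


def gpus_needed_fixed(gpu_demand: int, mem_demand_gb: int) -> int:
    """In a fixed-ratio server, memory only arrives with GPUs, so a
    memory-bound workload must over-provision GPUs just to reach the
    required memory capacity."""
    gpus_for_memory = -(-mem_demand_gb // FIXED_MEM_PER_GPU_GB)  # ceil division
    return max(gpu_demand, gpus_for_memory)


def gpus_needed_disaggregated(gpu_demand: int, mem_demand_gb: int) -> int:
    """With a disaggregated pool behind a CXL fabric, memory is allocated
    independently, so GPU count tracks compute demand only."""
    return gpu_demand


# Memory-heavy workload: 4 GPUs' worth of compute, but 1 TB of memory.
print(gpus_needed_fixed(4, 1024))          # 13 GPUs deployed just to get 1 TB
print(gpus_needed_disaggregated(4, 1024))  # 4 GPUs; the 1 TB comes from the pool
```

Under these assumed numbers, the fixed-ratio rack deploys more than three times the GPUs the workload actually needs, which is the utilization and cost gap the disaggregated architecture targets.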
Group 5: Implementation and Future Plans
- SKT will lead the design of the architecture, leveraging its expertise in AI data center construction and operational management [11].
- Panmnesia will implement the CXL-based AI Rack, extending the link architecture beyond individual servers to the rack level [12].
- The companies plan to validate the architecture by running real AI models and evaluating performance metrics by the end of the year, followed by proof-of-concept deployments [13].

Group 6: Industry Impact
- The collaboration is expected to enhance the competitiveness of AI data centers by addressing the "Memory Wall" bottleneck and optimizing system-level performance [14].
- Companies utilizing Panmnesia's link technology in their devices are anticipated to strengthen their market position in the AI data center sector [17].
Astera Labs' Leo CXL Smart Memory Controllers on Microsoft Azure M-series Virtual Machines Overcome the Memory Wall
Globenewswire· 2025-11-18 21:30
Core Insights
- Astera Labs has introduced its Leo CXL Smart Memory Controllers, enabling evaluation of Compute Express Link (CXL) memory expansion capabilities for Azure M-series virtual machines [1][2]
- Microsoft's Azure M-series VMs represent the first deployment of CXL-attached memory, addressing the limitations of traditional server architectures in handling memory-intensive workloads [2][3]
- The Leo CXL Smart Memory Controllers support CXL 2.0 and up to 2TB of memory capacity per controller, which can expand server memory capacity by more than 1.5x [3]

Company Overview
- Astera Labs specializes in semiconductor-based connectivity solutions for rack-scale AI infrastructure, focusing on open standards and collaboration with hyperscalers [6]
- The company's Intelligent Connectivity Platform integrates various semiconductor technologies to create flexible systems that enhance connectivity and scalability [6]

Industry Context
- CXL technology is crucial for overcoming the "memory wall" bottleneck faced by organizations processing large datasets, enabling greater memory capacity and performance [2][3]
- The collaboration between Astera Labs and Microsoft highlights the importance of addressing memory capacity constraints in cloud infrastructure through innovative solutions [4]
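The 1.5x capacity claim is straightforward arithmetic once the per-controller limit is known. The sketch below uses the 2TB-per-controller figure from the article; the host's native DRAM capacity is an assumed example value, not an Azure M-series specification.

```python
# Back-of-envelope capacity math for CXL memory expansion.
# LEO_MAX_TB comes from the article; the native capacity is an assumption.

LEO_MAX_TB = 2.0  # stated maximum CXL-attached capacity per Leo controller


def expanded_capacity_tb(native_tb: float, controllers: int) -> float:
    """Total addressable memory = native DIMM capacity plus the
    CXL-attached capacity contributed by each controller."""
    return native_tb + controllers * LEO_MAX_TB


native = 4.0  # assumed native DRAM capacity of the host, in TB
print(expanded_capacity_tb(native, 1) / native)  # expansion factor with one controller
```

With these figures a single controller takes a 4TB host to 6TB, a 1.5x expansion; hosts with less native DRAM, or multiple controllers, see a larger factor, consistent with the "over 1.5 times" wording.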
Marvell Extends CXL Ecosystem Leadership with Structera Interoperability Across All Major Memory and CPU Platforms
Prnewswire· 2025-09-02 13:00
Core Insights
- Marvell Technology, Inc. has successfully completed interoperability testing of its Structera CXL memory-expansion controllers and near-memory compute accelerators with DDR4 and DDR5 memory solutions from Micron, Samsung, and SK hynix, making Structera the only CXL 2.0 product family with such comprehensive testing [1][2][3]

Group 1: Product Development and Features
- The Structera product line includes two CXL device families: Structera A near-memory accelerators, which integrate 16 Arm Neoverse V2 cores and multiple memory channels, and Structera X memory-expansion controllers, which enable terabytes of memory to be added to general-purpose servers [5]
- Structera supports four memory channels and inline LZ4 compression, and is built on a 5nm manufacturing process, addressing high-bandwidth and high-capacity memory applications [5]

Group 2: Market Demand and Strategic Importance
- As data-centric applications become more complex, interoperability is critical for scalable system design and reduced integration risk [2]
- Marvell's flexible business engagement model allows tailored product configurations that align with specific workload requirements, supporting both standard and custom deployment models [3][4]

Group 3: Industry Collaboration
- Collaboration with major memory suppliers Micron, Samsung, and SK hynix aims to ensure reliable, high-performance systems and to facilitate deployment of Structera with their respective memory technologies [5]
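The inline LZ4 compression mentioned above is interesting because it raises the *effective* capacity a memory expander presents, not its physical capacity. The sketch below shows the relationship; the 2:1 compression ratio is an assumed example, since the ratio actually achieved by LZ4 depends entirely on the data being stored.

```python
# Hypothetical sketch: effective capacity of a memory expander with inline
# compression. The compression ratio is an assumption, not a Structera figure.


def effective_capacity_tb(physical_tb: float, compression_ratio: float) -> float:
    """Effective capacity = physical capacity * achieved compression ratio.
    A ratio of 1.0 means incompressible data, i.e. no capacity gain."""
    return physical_tb * compression_ratio


# e.g. a 2 TB expander holding data that LZ4 compresses 2:1 presents ~4 TB
print(effective_capacity_tb(2.0, 2.0))
# Incompressible data yields no gain:
print(effective_capacity_tb(2.0, 1.0))
```

This is why compression ratio matters for capacity planning: vendors can guarantee the physical terabytes, but the effective multiplier is workload-dependent.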