Core Insights
- CXL technology is improving computing efficiency through memory pooling and high-speed interconnects, with significant implications for AI applications [1]
- Major players such as NVIDIA and Alibaba Cloud are actively building out CXL capabilities to improve performance and resource utilization in AI systems [2][3]

Group 1: CXL Technology Overview
- CXL (Compute Express Link) is an open, cache-coherent, high-speed serial interconnect protocol that links CPUs, memory, and GPUs, delivering higher data throughput and lower latency [1]
- The technology lets accelerators such as GPUs and FPGAs work efficiently alongside host processors, easing memory-bandwidth bottlenecks and raising computational efficiency; a brief allocation sketch follows Group 3 below [1]

Group 2: NVIDIA's Strategic Moves
- NVIDIA has invested $5 billion in Intel to develop customized x86 CPUs for its AI infrastructure, leveraging Intel's role in the CXL Consortium to improve interoperability between NVLink and CXL [2]
- The acquisition of Enfabrica gives NVIDIA advanced AI interconnect technology, including low-latency data paths and high-capacity memory support, to optimize GPU and CPU interconnects [2]

Group 3: Alibaba Cloud's Innovations
- Alibaba Cloud has launched the world's first PolarDB database server built on a CXL 2.0 switch, achieving ultra-low latency and high bandwidth for remote memory access [3]
- The server raises resource utilization and inference throughput by creating collaborative pathways between GPUs, CPUs, and shared memory pools, positioning it as a robust foundation for AI-driven data solutions [3]
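For context on how pooled CXL memory is typically consumed in software: CXL Type 3 memory expanders generally surface to the host OS as CPU-less NUMA nodes, so applications can reach the pool with ordinary NUMA-aware allocation. The following is a minimal sketch in C using libnuma, assuming the CXL-attached pool appears as NUMA node 1 (the node id is an assumption; check `numactl --hardware` on the actual system). It is illustrative only and is not drawn from the cited reports.

```c
// Minimal sketch: allocate a buffer from a CXL-attached memory pool that the
// kernel exposes as a CPU-less NUMA node. Assumes the pool is node 1; verify
// the real node id before use.
// Build: gcc cxl_alloc.c -lnuma

#include <stdio.h>
#include <string.h>
#include <numa.h>

#define CXL_NODE 1                      /* assumed id of the CXL memory node */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t len = 64UL << 20;            /* 64 MiB working buffer */

    /* Place the allocation on the CXL node; its latency is higher than local
     * DRAM, so this suits capacity-bound data such as inference KV caches or
     * cold database pages rather than hot, latency-critical structures. */
    void *buf = numa_alloc_onnode(len, CXL_NODE);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);                /* touch pages so they are actually placed */
    printf("allocated %zu bytes on NUMA node %d\n", len, CXL_NODE);

    numa_free(buf, len);
    return 0;
}
```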
广发证券 (GF Securities): CXL memory pooling supports AI inference; the firm suggests watching vendors of CXL interconnect chips