A Step-by-Step Guide to Chiplet Design
半导体行业观察· 2025-09-04 01:24
Core Viewpoint
- Chiplet technology meets growing demands for computing power and I/O bandwidth by splitting SoC functions into smaller heterogeneous or homogeneous dies integrated into a single system-in-package (SiP) [1]

Group 1: System Partitioning
- Design teams must decide which functional blocks to include and how to partition those functions across chiplets, while also selecting the most efficient semiconductor process node for each block [2]
- A common high-level partitioning separates compute, I/O, and memory functions into different chiplets, weighing factors such as latency, bandwidth, and power consumption against the chosen process nodes and partitioning [2]

Group 2: Process Node Selection
- On the latest process nodes, AI accelerators can be optimized for performance and power, but implementing cache there may not be efficient; SRAM is better implemented on lower-cost, mature nodes [3]
- A 3D implementation can place compute dies on the latest node and SRAM and I/O on older nodes, as exemplified by AMD's Ryzen 7000X3D processors with second-generation 3D V-Cache [3]

Group 3: Chip-to-Chip Connection Considerations
- UCIe has become the de facto standard for die-to-die connections; design teams need to understand workload-driven bandwidth requirements, covering both data and control bandwidth [4]
- Designers have a range of data rates and configurations to choose from, balancing per-lane data rate (from 16 GT/s to 64 GT/s) against the number of lanes to meet die constraints [4]

Group 4: Advanced Packaging Challenges
- Attention to packaging technology has intensified, presenting both opportunities and challenges in multi-chip designs [6]
- Designers must decide how to interconnect dies in a multi-die design, weighing cost, design speed, and interconnect density [6][7]

Group 5: Testing and Security Design
- Test planning involves wafer probing to provide known good die (KGD) and using protocols such as IEEE 1838 to access dies that are not directly reachable [9]
- Security considerations arise with IP integration, requiring authentication features and potentially support for secure-computing architectures to protect sensitive data [10]
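The data-rate versus lane-count trade-off in die-to-die budgeting can be sketched numerically. A minimal example follows; the module widths (x16 per standard-package module, x64 per advanced-package module) come from the UCIe specification, while the 8 Tb/s workload target is a made-up illustration, not a figure from the article:

```python
import math

# Rough die-to-die bandwidth budgeting for a UCIe-style link.
# Module widths follow the UCIe spec (x16 standard package, x64 advanced
# package); the 8 Tb/s target below is a hypothetical example workload.

def module_bw_gbps(lanes: int, rate_gtps: float) -> float:
    """Raw unidirectional bandwidth of one module: lanes x per-lane rate."""
    return lanes * rate_gtps

def modules_needed(target_gbps: float, lanes: int, rate_gtps: float) -> int:
    """Smallest module count that covers the target bandwidth."""
    return math.ceil(target_gbps / module_bw_gbps(lanes, rate_gtps))

target_gbps = 8_000.0  # 8 Tb/s of raw die-to-die bandwidth (example target)

for pkg, lanes in [("standard (x16)", 16), ("advanced (x64)", 64)]:
    for rate in [16.0, 32.0, 64.0]:  # GT/s per lane, spanning the quoted range
        n = modules_needed(target_gbps, lanes, rate)
        print(f"{pkg} @ {rate:>4} GT/s -> {n} module(s)")
```

The sketch shows why the choice is a genuine trade-off: a faster per-lane rate cuts the module count (and die-edge usage) but raises SerDes complexity and power, while wider or additional modules keep the rate modest at the cost of beachfront area.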
Relief at Last for AI Chip Bandwidth
半导体芯闻· 2025-04-02 10:50
Core Viewpoint
- Lightmatter has launched two silicon photonic interconnect products to meet the growing bandwidth demands of AI deployments, specifically targeting high-bandwidth multi-chip switching for XPU applications [1][2]

Group 1: Product Overview
- The first product, Passage M1000, is an optical interposer designed for high-bandwidth communication between ASIC or GPU dies stacked on top of it, with a total bandwidth capacity of up to 14.25 TB/s [2]
- The M1000 uses 56 Gb/s NRZ modulation with wavelength-division multiplexing; each fiber carries eight wavelengths, yielding 56 GB/s of bandwidth per fiber [2]
- Following the M1000 launch, Lightmatter plans to release a pair of smaller co-packaged optics designs in 2026 [2]

Group 2: Additional Products
- The Passage L200 and L200X are more traditional co-packaged optical devices, promising bidirectional bandwidths of 32 Tb/s and 64 Tb/s, respectively [3]
- The L200 uses 56 Gb/s NRZ SerDes while the L200X uses 112 Gb/s PAM4 SerDes; both series use a 3D packaging method to support external communication speeds exceeding 200 Tb/s [3]
- These chips incorporate several technologies from Alphawave Semi, including low-power, low-latency UCIe and optics-ready SerDes [3]
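The per-fiber figure above follows directly from the modulation rate and wavelength count; a quick arithmetic check using only the numbers quoted in the article:

```python
# Per-fiber bandwidth of the Passage M1000 as described above:
# 8 WDM wavelengths per fiber, each carrying a 56 Gb/s NRZ stream.
wavelengths_per_fiber = 8
rate_per_wavelength_gbps = 56  # Gb/s, NRZ

fiber_bw_gbps = wavelengths_per_fiber * rate_per_wavelength_gbps
fiber_bw_gBps = fiber_bw_gbps / 8  # bits -> bytes

print(fiber_bw_gbps)  # 448 Gb/s per fiber
print(fiber_bw_gBps)  # 56.0 GB/s per fiber, matching the quoted figure
```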