Nvidia Moves Further From "Selling Chips" to "Delivering AI Factories"

Core Insights
- Nvidia has introduced the Vera Rubin platform, a next-generation AI computing system designed for the intelligent-agent AI era, consisting of 40 cabinets and various specialized racks to optimize performance and efficiency [2]
- The platform achieves a total computing power of 3.6 EFLOPS and increases throughput per megawatt by 35 times, marking a significant advancement in AI supercomputing capabilities [2]
- The introduction of NemoClaw as an enterprise-level software stack for OpenClaw positions Nvidia as a standard setter in AI software infrastructure, enhancing the reliability and scalability of AI agents [1][4]

Group 1: Vera Rubin Platform
- The Vera Rubin platform comprises 40 cabinets in a configuration aimed at optimizing inference performance, including 16 CPU cabinets and 10 storage cabinets [2]
- It features a fully liquid-cooled design and uses NVLink6 technology to redefine the CPU, storage, network, and security components [2]
- Revenue from Blackwell and Rubin AI chips is projected to reach $1 trillion over 2025-2027, doubling the previous forecast of $500 billion [2]

Group 2: LPU and CPO Technologies
- The Groq3 LPU is designed to optimize the decoding phase of large-model inference, significantly reducing latency with a bandwidth of up to 150 TB/s [3]
- CPO technology has emerged as a critical solution for high-density AI networks, with the launch of the Spectrum-X CPO switch, which drastically reduces energy consumption and transmission loss [4]
- Nvidia's strategy includes parallel development of copper and optical solutions, alleviating market concerns about a complete transition from copper to optical technologies [4]

Group 3: Investment Recommendations
- Companies to watch in the LPU space include Huadian Co., Shenghong Technology, and Shenzhen South Circuit [6]
- In the CPO sector, potential investments include Himax, AIXTRON, Lumentum, and others [6]
- Storage-related companies of interest are Samsung, SK Hynix, Micron, and several others [6]