Nvidia Launches a Reasoning "Brain" for Robots: The Upgraded Cosmos World Model Arrives

Core Viewpoint
- Nvidia is significantly advancing its robotics development infrastructure, focusing on the integration of AI and computer graphics to enhance robotic capabilities and reduce training costs [17][20][21].

Group 1: Product and Technology Updates
- Nvidia introduced the upgraded Cosmos world model at the SIGGRAPH conference, designed to generate synthetic data that adheres to real-world physics [2][3].
- The upgrade emphasizes planning capabilities and generation speed, with enhancements across software and hardware, including the new Omniverse libraries and RTX PRO Blackwell servers [4][8].
- The new Cosmos Reason model features 7 billion parameters and reasoning capabilities, aiding robots in task planning [6][10].
- Cosmos Transfer-2 and its lightweight variant accelerate the conversion of virtual scenes into training data, significantly reducing the time this process requires [12][13].

Group 2: Integration of AI and Graphics
- Nvidia's vice president of AI research highlighted the synergy between simulation capabilities and AI system development, a combination rare in the industry [5].
- Together, Cosmos and Omniverse aim to create a realistic, scalable "virtual parallel universe" in which robots can safely experiment and evolve [22][23].
- Building this virtual environment depends on integrating real-time rendering, computer vision, and physical simulation [23].

Group 3: Market Strategy and Collaborations
- Nvidia is strategically positioning itself in the robotics sector, viewing the merger of computer graphics with AI as a transformative force in the industry [20][21].
- The company is collaborating with various Chinese firms, including Alibaba Cloud and several robotics companies, to expand its influence in the domestic market [26][27].
- Nvidia's approach mirrors its previous strategies, where it provided computational resources to emerging AI companies, indicating a similar trajectory in the robotics field [25][26].