This Is Not a Graphics Card, It's a 2-Ton AI Factory
华尔街见闻· 2026-01-06 03:53
Core Insights
- NVIDIA has announced full production of the Vera Rubin platform, which weighs nearly 2 tons and integrates six new chips, significantly reducing inference cost and improving training efficiency; it performs AI computation at a trillion operations per second, marking it as a true AI factory [2]
- The company has also open-sourced its first reasoning VLA (Vision-Language-Action) model, Alpamayo 1, designed to let vehicles "think" and solve problems in unexpected situations, built on a 10-billion-parameter architecture [3]
- The first vehicles equipped with NVIDIA technology are set to hit the roads in the US in Q1, Europe in Q2, and Asia later in the year [4][17]

Production and Performance
- The new Rubin platform delivers 5 times the performance of the previous Blackwell generation, with 3.5 times the training performance [5][8]
- The Rubin platform can reduce inference token-generation costs by up to 10 times and cut the number of GPUs required to train mixture-of-experts models by 4 times [8]
- The Vera CPU in the Rubin platform features 88 cores, doubling the performance of its predecessor; it is designed for agent inference and is billed as the most energy-efficient processor in large-scale AI factories [8]

Ecosystem and Deployment
- Major cloud providers, including Microsoft, are expected to be among the first to deploy the new hardware in the second half of the year, with Microsoft's next-generation Fairwater AI super factory set to use NVIDIA's Vera Rubin NVL72 systems [6]
- NVIDIA maintains a long-term bullish outlook, predicting the total market could reach several trillion dollars despite concerns about increasing competition and the sustainability of AI spending [7]

Innovations and Technologies
- The Rubin platform incorporates five innovative technologies, including sixth-generation NVLink interconnect and a third-generation transformer engine that provides 50 petaflops of NVFP4 computing power for AI inference [9]
- The platform's modular design allows for faster assembly and maintenance, an 18 times quicker process compared to Blackwell [9]

Open Source Initiatives
- NVIDIA has released the Alpamayo model as part of a complete open ecosystem for autonomous driving development, which includes simulation frameworks and datasets [14][21]
- The Alpamayo model is aimed at the autonomous driving research community, allowing developers to adapt it for vehicle development and use it as a foundational tool for autonomous driving technology [15][18]
- NVIDIA has also launched various open-source models and tools across sectors, including the Nemotron family for agent AI and the Cosmos platform for physical AI [26][27]
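The multipliers quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only; the baseline figures ($2.00 per million tokens, 4,096 GPUs) are hypothetical placeholders, not NVIDIA numbers:

```python
def rubin_inference_cost(blackwell_cost_per_mtok: float, reduction: float = 10.0) -> float:
    """Token-generation cost on Rubin, assuming the claimed up-to-10x reduction."""
    return blackwell_cost_per_mtok / reduction

def rubin_training_gpus(blackwell_gpus: int, reduction: int = 4) -> int:
    """GPUs needed to train a mixture-of-experts model, assuming the claimed 4x cut."""
    return blackwell_gpus // reduction

# Hypothetical baseline: $2.00 per million tokens, 4,096 GPUs on Blackwell.
print(rubin_inference_cost(2.00))  # 0.2
print(rubin_training_gpus(4096))   # 1024
```

The point of the sketch is simply that the two claims compound: a workload that is both trained and served on Rubin would see both reductions, each applied to its own half of the budget.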
Physical AI's ChatGPT Moment! NVIDIA-Powered Self-Driving Cars Are Coming, Hitting US Roads in Q1
硬AI· 2026-01-06 01:40
Core Viewpoint
- NVIDIA has announced the open-source release of its first reasoning VLA (Vision-Language-Action) model, Alpamayo 1, aimed at advancing autonomous driving by enabling vehicles to "think" and solve problems in unexpected situations [2][3][5]

Group 1: Model and Technology Overview
- The Alpamayo 1 model features a 10-billion-parameter architecture that processes video input to generate driving trajectories together with the reasoning behind them [2][5][9]
- The model addresses the long-tail problem in autonomous driving by applying human-like reasoning to complex driving scenarios [3][5]
- It is not intended to run directly in vehicles; instead, it serves as a large-scale teacher model that developers fine-tune and integrate into their own autonomous driving stacks [11]

Group 2: Industry Support and Collaboration
- Major companies and research institutions, including Jaguar Land Rover, Lucid, Uber, and UC Berkeley's DeepDrive, have expressed support for the Alpamayo model, signaling its potential to accelerate the deployment of Level 4 autonomous driving [5][16]
- Industry leaders emphasize that open and transparent AI development is key to responsible autonomous mobility, with Alpamayo providing new tools for developers [16]

Group 3: Ecosystem and Additional Tools
- NVIDIA has built a comprehensive open ecosystem around the Alpamayo model, including simulation tools and datasets, to support developers building autonomous driving solutions [9][15]
- The AlpaSim framework, a fully open-source end-to-end simulation tool, has been released to enable high-fidelity autonomous driving development [15]
- NVIDIA also provides a large-scale open dataset with over 1,700 hours of driving data covering a wide range of geographies and conditions, crucial for advancing reasoning architectures [15]

Group 4: Broader AI Developments
- Beyond Alpamayo, NVIDIA has launched other open-source models and tools across sectors, including the Nemotron family for agent AI, the Cosmos platform for physical AI, and the Isaac GR00T model for robotics [21][22]
- These models and frameworks are available on platforms such as GitHub and Hugging Face, enabling broad access and deployment across AI infrastructures [22]
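The teacher-model workflow described above (a large model that developers fine-tune and shrink into in-vehicle runtime models) is commonly implemented via knowledge distillation, where a small student is trained against the teacher's softened output distribution. A minimal plain-Python sketch of that loss, a generic textbook formulation rather than NVIDIA's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, optionally softened."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.

    Minimized when the student reproduces the teacher's (tempered) probabilities,
    which is how a compact runtime model inherits the large model's behavior.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = [math.log(p) for p in softmax(student_logits, temperature)]
    return -sum(t * s for t, s in zip(teacher_probs, student_log_probs))
```

A higher temperature exposes more of the teacher's "dark knowledge" (the relative ranking of wrong answers), which is usually where most of the transfer happens.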
Physical AI's ChatGPT Moment! NVIDIA-Powered Self-Driving Cars Are Coming; First Chain-of-Thought Reasoning VLA Model Released
美股IPO· 2026-01-05 23:38
Core Viewpoint
- Nvidia has announced the open-source release of its first reasoning VLA (Vision-Language-Action) model, Alpamayo 1, aimed at enabling autonomous vehicles to "think" and solve problems in unexpected situations, built on a 10-billion-parameter architecture [1][3][4]

Group 1: Model and Technology Overview
- The Alpamayo model is designed to handle complex driving scenarios with human-like reasoning, offering a new path to the long-tail problem in autonomous driving [1][3]
- The release rests on three foundational pillars: open-source models, simulation frameworks, and datasets, forming a comprehensive open ecosystem for automotive developers and research teams [4]
- The model is now available on the Hugging Face platform, and developers can adapt it into smaller runtime models or use it as a foundation for autonomous driving development [4][10]

Group 2: Industry Support and Collaboration
- Major mobility companies, including Jaguar Land Rover, Lucid, and Uber, have expressed strong interest in using the Alpamayo model to build reasoning-based autonomous driving stacks [3][11]
- Nvidia's CEO highlighted that Alpamayo enables autonomous vehicles to navigate rare scenarios safely and explain their driving decisions, which is crucial for scaling autonomous driving [6][11]

Group 3: Simulation and Data Resources
- Alongside the Alpamayo model, Nvidia has released AlpaSim, a fully open-source end-to-end simulation framework for high-fidelity autonomous driving development, available on GitHub [9][10]
- Nvidia provides a large-scale open dataset containing over 1,700 hours of driving data covering a wide range of geographies and conditions, essential for advancing reasoning architectures [9][10]

Group 4: Broader AI Model Releases
- Nvidia has also launched new open-source models, data, and tools across industries, including the Nemotron family for agent AI, the Cosmos platform for physical AI, and Isaac GR00T for robotics [12][14]
- These releases include extensive datasets, such as 100 trillion language training tokens and 100TB of vehicle sensor data, aimed at accelerating AI development across sectors [14][15]
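The trajectory-plus-reasoning output that a chain-of-thought VLA model produces can be pictured with a small sketch. All type and field names below are hypothetical illustrations, not Alpamayo's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryPoint:
    t: float  # seconds from now
    x: float  # meters ahead, vehicle frame
    y: float  # meters lateral, vehicle frame

@dataclass
class VLADecision:
    """One chain-of-thought driving decision: reasoning steps plus a planned path."""
    reasoning: list[str] = field(default_factory=list)
    trajectory: list[TrajectoryPoint] = field(default_factory=list)

    def explain(self) -> str:
        """Render the reasoning chain as a human-readable explanation."""
        return " -> ".join(self.reasoning)

# Hypothetical scenario from the articles: a dark traffic signal at an intersection.
decision = VLADecision(
    reasoning=[
        "traffic signal is unlit",
        "treat the intersection as an all-way stop",
        "yield to the vehicle on the right",
    ],
    trajectory=[TrajectoryPoint(0.0, 0.0, 0.0), TrajectoryPoint(1.0, 2.0, 0.0)],
)
print(decision.explain())
```

The pairing is the key idea: the same forward pass that emits the trajectory also emits the interpretable chain, which is what lets the vehicle "explain its driving decisions" as described above.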
Physical AI's ChatGPT Moment! NVIDIA-Powered Self-Driving Cars Are Coming; First Chain-of-Thought Reasoning VLA Model Released
Xin Lang Cai Jing· 2026-01-05 23:14
Core Insights
- Nvidia has made a significant advance in autonomous driving by open-sourcing its first reasoning VLA (Vision-Language-Action) model, Alpamayo, aimed at accelerating the development of safe autonomous driving technology [1][16][13]
- The model handles complex driving scenarios with human-like reasoning, offering a new path to the long-tail problem in autonomous driving [1][14]

Model Release and Features
- The Alpamayo platform was unveiled by Nvidia CEO Jensen Huang at CES, with the first vehicles equipped with Nvidia technology expected to hit US roads in Q1 [3][16]
- The Alpamayo model is free for users to retrain and is designed to let vehicles "think" and propose solutions in unexpected situations, such as traffic signal failures [3][16]
- The model features a 10-billion-parameter architecture that takes video input and generates trajectories along with reasoning paths, exposing the logic behind each decision [4][17]

Ecosystem and Support
- Nvidia has created a comprehensive open ecosystem, spanning the Alpamayo model, simulation frameworks, and datasets, open to any automotive developer or research team [3][16]
- The open-source initiative has won broad support from industry leaders, including Jaguar Land Rover, Lucid, Uber, and UC Berkeley's DeepDrive, who plan to use Alpamayo to build reasoning-based autonomous driving stacks [3][8][21]

Technical Principles
- The reasoning VLA model combines visual perception, language understanding, and action generation with step-by-step reasoning, distinguishing it from standard VLA models [5][19]
- It breaks complex tasks down into manageable sub-problems and exposes an interpretable reasoning process, improving accuracy in problem-solving and task execution [5][19]

Simulation Tools and Datasets
- Alongside the Alpamayo model, Nvidia released AlpaSim, an open-source end-to-end simulation framework for high-fidelity autonomous driving development, available on GitHub [20]
- The company also offers a large-scale open dataset with over 1,700 hours of driving data covering diverse geographies and conditions, crucial for advancing the reasoning architecture [20]

Industry Reactions
- Industry leaders have expressed strong interest in Alpamayo, pointing to the growing need for AI systems that reason about real-world behavior rather than merely processing data [21]
- Alpamayo's open-source release is seen as a catalyst for innovation across the autonomous driving ecosystem, giving developers and researchers new tools to navigate complex real-world scenarios safely [21][8]
NVIDIA (NVDA.US) Joins Forces with Multiple Partners to Advance Onshoring of US AI Manufacturing
智通财经网· 2025-04-14 13:47
Core Viewpoint
- Nvidia is collaborating with manufacturing partners to design and build its first AI supercomputer production facilities in the United States, aiming to establish $500 billion of AI infrastructure over the next four years [1][2]

Group 1: Manufacturing and Infrastructure
- Nvidia plans to build out US AI infrastructure with a projected value of $500 billion through partnerships with TSMC, Foxconn, Wistron, Amkor, and SPIL [1]
- The company is currently producing Blackwell chips at TSMC's facility in Phoenix, Arizona, and is working with Foxconn and Wistron to build supercomputer manufacturing plants in Houston and Dallas, expected to reach mass production within 12-15 months [1]
- Amkor and SPIL are partnering with Nvidia on packaging and testing operations in Arizona [1]

Group 2: Supply Chain and Technology
- Nvidia's CEO emphasized that expanding US manufacturing will help meet growing demand for AI chips and supercomputers while strengthening supply-chain resilience [2]
- The company will use advanced AI, robotics, and digital twin technologies to design and operate these facilities, including Nvidia Omniverse for digital twin modeling and Nvidia Isaac GR00T for manufacturing automation [2]