Physical AI
Nvidia Wants to Be the "Android" of "Physical AI"
Hua Er Jie Jian Wen· 2026-01-06 04:01
Core Insights
- Nvidia is establishing a default platform in the robotics sector, aiming to replicate Android's dominance in smartphone operating systems [1]
- The company has released multiple open-source foundation models that let robots reason, plan, and adapt across tasks and environments, all available on the Hugging Face platform [1]
- Nvidia's new Jetson T4000 module and the open-source command center OSMO are designed to support the entire robotics development workflow [1][4]
- AI is visibly migrating from the cloud into the physical world, driven by falling sensor costs, advances in simulation technology, and improved generalization in AI models [1][6]

Model Matrix Construction
- The foundation models released by Nvidia form the core capability layer of physical AI [2]

Data Generation and Evaluation
- Cosmos Transfer 2.5 and Cosmos Predict 2.5 handle data synthesis and robot policy evaluation, allowing robot behavior to be validated in simulated environments [3]
- Cosmos Reason 2 is a reasoning-focused vision-language model that lets AI systems observe, understand, and act in the physical world [3]
- Isaac GR00T N1.6 is a vision-language-action model developed specifically for humanoid robots, using Cosmos Reason for whole-body control [3]
- Isaac Lab-Arena, launched at CES, is an open-source simulation framework hosted on GitHub that addresses the industry's pain points in validating robot capabilities [3]

Hardware Accessibility
- The Jetson T4000, part of the Thor series, offers a cost-effective upgrade with 1,200 TFLOPS (1.2 PFLOPS) of AI compute and 64GB of memory while keeping power consumption between 40 and 70 watts [4]

Strategic Partnerships
- Nvidia has deepened its collaboration with Hugging Face, integrating Isaac and GR00T technologies into the LeRobot framework and connecting 2 million robot developers with 13 million AI builders [5]
- The open-source humanoid robot Reachy 2 now supports Nvidia's Jetson Thor chips, letting developers test various AI models without being locked into proprietary systems [5]
- Early signs suggest the strategy is working: robotics has become the fastest-growing category on the Hugging Face platform, and Nvidia's models lead in downloads [5]
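The generate-then-evaluate loop the article describes (Cosmos-style models synthesizing scenarios, robot policies validated in an Isaac Lab-Arena-style simulator before touching hardware) can be sketched schematically. Every function and field name below is an invented placeholder for illustration, not a real Nvidia API:

```python
# Schematic of a sim-first robotics workflow: synthesize scenarios, roll out a
# policy in each one, and score the results in simulation. All names here are
# hypothetical stand-ins, not real Nvidia (Cosmos / Isaac Lab-Arena) APIs.
import random

random.seed(0)  # deterministic placeholder data

def synthesize_scenarios(n: int) -> list[dict]:
    """Stand-in for Cosmos-style scenario synthesis (random placeholder scenes)."""
    return [{"scene_id": i, "difficulty": random.random()} for i in range(n)]

def run_policy(scene: dict) -> bool:
    """Stand-in for rolling out a robot policy in an Isaac-Lab-style simulator."""
    return scene["difficulty"] < 0.8  # pretend the policy fails on hard scenes

scenes = synthesize_scenarios(100)
success_rate = sum(run_policy(s) for s in scenes) / len(scenes)
print(f"policy success rate across synthetic scenes: {success_rate:.0%}")
```

The point of the pattern is that the evaluation loop is cheap and repeatable: a policy's success rate can be measured across thousands of synthetic scenes before any physical deployment.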
Thousands Packed the Venue! Nvidia Unveils Its Next-Generation GPU and Mentions Three Chinese Large Models
Mei Ri Jing Ji Xin Wen· 2026-01-06 03:48
Huang believes that open models trail the most cutting-edge AI models by roughly six months, but that gap is steadily narrowing; open-source models have completely changed artificial intelligence and drawn everyone into the field. "We now know that when open source, open innovation, and the innovation of every company in every industry around the world are activated, AI will be everywhere. Meanwhile, open models really took off last year. The reasoning abilities of today's AI models are incredibly strong," Huang said.

At 11 a.m. local time on January 5, two hours before Nvidia founder and CEO Jensen Huang's CES 2026 keynote, long lines had already formed; a National Business Daily reporter on site counted roughly 3,000 people in the hall.

[Photo: poster for Jensen Huang's keynote. Credit: NBD reporter Yang Hui]

Huang appeared in his usual leather jacket, greeted the audience with "Happy New Year," and then devoted the entire talk to AI. "Artificial intelligence is changing the world, and it is spreading at an astonishing pace. We are proud to keep driving this transformation," he said.

Huang said one of the most important developments of 2025 was the progress of open models. His slides showed open-source large models including Kimi K2, DeepSeek V3.2, and Qwen, and he also mentioned DeepSeek during the talk.

Huang noted that Nvidia began working on autonomous vehicles eight years ago, revealing: "Our vision is that one day, every car ...
Full Text of Jensen Huang's CES Keynote! Rubin in Full Production, Compute Up 5x, Smashing the Smart-Driving Barrier, All In on the Physical World
Hua Er Jie Jian Wen· 2026-01-06 03:19
At 5 a.m. Beijing time on the 6th, in Las Vegas, under the spotlights of CES, the global "tech Spring Festival Gala," Nvidia CEO Jensen Huang jogged onstage in his signature crocodile-textured black leather jacket.

"The AI race has begun, and everyone is striving to reach the next level... If you don't do extreme co-design across the full stack, there is simply no way to keep up with models growing 10x every year." Facing capital markets' worries about an "AI bubble" and anxiety over the breakdown of Moore's Law, Huang answered with a brand-new architecture named Vera Rubin, demonstrating that Nvidia still holds the absolute power to define AI's future.

Unlike past keynotes built around graphics-card launches, Huang brought no new GeForce products this time; instead, with an "all in AI, all in physical AI" posture, he presented the capital markets with a complete picture stretching from atomic-level chip design to robots deployed in the physical world.

The keynote's three main threads and key points:

On infrastructure and compute, Nvidia brute-forces past physical limits through "extreme co-design," restructuring the data center's cost logic. Facing a bottleneck of transistor counts growing only 1.6x, Nvidia pushes inference performance up 5x through the Vera Rubin platform, NVLink 6 interconnect, and a BlueField-4-driven inference-context memory storage platform, while driving token-generation costs down to one-tenth. The core goal at this layer is to solve Agentic AI ...
Nvidia's Blockbuster Launch! Jensen Huang: An Important Moment Is Coming
Di Yi Cai Jing· 2026-01-06 03:17
Core Viewpoint
- The article highlights NVIDIA's advances in AI and computing architecture, emphasizing a dual transformation, in AI and in computing, that is reshaping the entire technology stack and creating new applications and ecosystems [6][7]

Group 1: AI and Computing Transformation
- Huang emphasized that the computing industry undergoes a platform change every 10 to 15 years, and that the current shift is driven by AI and computing architecture evolving simultaneously [6]
- AI is both an application and a new platform, driving a paradigm shift in software development from writing code to training models [6][7]
- The modernization of a $10 trillion computing infrastructure is underway, with billions in venture capital flowing into AI as industries shift R&D budgets toward it [7]

Group 2: Open Source Models
- Huang noted that one of the industry's most significant changes last year was the rise of open-source models, specifically citing China's DeepSeek R1 as a remarkable contributor to this global movement [7][8]
- Multiple open-source models were showcased, including three from China: Kimi K2, Qwen, and DeepSeek V3.2 [8]

Group 3: Physical AI and Autonomous Driving
- Huang said the next phase of AI development is entering the physical world, which requires AI to learn common sense about physical properties [10]
- NVIDIA is building a system that lets AI learn about the physical world, which is crucial for applications like autonomous driving [10][12]
- Huang believes the transition from non-autonomous to autonomous vehicles is imminent, with a significant share of cars expected to be autonomous within the next decade [14]

Group 4: New Chip Platform - Rubin
- The Rubin platform comprises six new chips, with the Rubin GPU delivering 50 PFLOPS of inference performance, five times that of the previous Blackwell platform [21]
- The platform's design cuts inference token costs tenfold and reduces the number of GPUs needed for training fourfold [21][22]
- The new Vera Rubin NVL72 rack-scale system is expected to raise performance substantially, with inference and training throughput reaching 3.6 EFLOPS and 2.5 EFLOPS, respectively [24]

Group 5: Collaborations and Future Developments
- NVIDIA announced a deepened collaboration with Siemens to integrate its physical AI models into Siemens' industrial software, covering the entire lifecycle from chip design to production [16]
- The first autonomous vehicles using NVIDIA's DRIVE AV software are set to hit U.S. roads in the first quarter of this year, with expansions planned for Europe and Asia [16]
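The NVL72 figures quoted in the article can be sanity-checked with simple arithmetic. This sketch uses only the article's own numbers (50 PFLOPS per Rubin GPU, 72 GPUs per rack, a 5x generational gain over Blackwell); none of the values are independently measured:

```python
# Back-of-the-envelope check of the Vera Rubin NVL72 numbers quoted above.
# All inputs are the article's own figures, not measurements.
PFLOPS_PER_GPU = 50        # quoted Rubin GPU inference performance
GPUS_PER_RACK = 72         # NVL72 rack size
RUBIN_VS_BLACKWELL = 5     # quoted generational speedup

rack_inference_eflops = PFLOPS_PER_GPU * GPUS_PER_RACK / 1000  # PFLOPS -> EFLOPS
implied_blackwell_pflops = PFLOPS_PER_GPU / RUBIN_VS_BLACKWELL

print(f"NVL72 aggregate inference: {rack_inference_eflops:.1f} EFLOPS")   # 3.6 EFLOPS
print(f"Implied Blackwell per-GPU: {implied_blackwell_pflops:.0f} PFLOPS")  # 10 PFLOPS
```

The 72 x 50 PFLOPS product matching the quoted 3.6 EFLOPS aggregate suggests the article's per-GPU and per-rack numbers are mutually consistent.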
Live from CES | Jensen Huang's First Launch of the New Year: Physical AI's ChatGPT Moment Is Near
Di Yi Cai Jing· 2026-01-06 02:20
Core Insights
- NVIDIA CEO Jensen Huang announced multiple open-source models related to physical AI and detailed the performance of the new Rubin chip platform during his CES keynote [1]
- The event drew a full audience of 3,000 people, indicating strong interest in NVIDIA's AI advances [1]

Group 1: Product Announcements
- NVIDIA introduced several open-source models focused on physical AI; on the hardware side, the company is shifting from relying solely on transistor density gains toward stronger networking and low-precision floating-point performance [1]
- The Rubin chip platform includes six new chips, among them the Vera CPU and Rubin GPU, with the Rubin GPU reaching 50 PFLOPS of inference performance, five times that of the previous Blackwell platform [18][20]
- The new platform's design cuts inference token costs tenfold and reduces the number of GPUs required to train MoE models to a quarter of what Blackwell needs [20]

Group 2: AI Development and Trends
- Huang emphasized that AI and computing architecture are transforming simultaneously, with AI serving as both an application and a new platform [6]
- The shift in software development from writing code to training models signals a complete restructuring of the computing technology stack [6]
- Industry worldwide is reallocating R&D budgets toward AI, driven by the modernization of roughly $10 trillion of computing infrastructure built over the past decade [7]

Group 3: Future of AI and Autonomous Vehicles
- Huang highlighted that the next phase of AI development brings AI into the physical world, with a focus on teaching it common sense about physical properties [9]
- The transition from non-autonomous to autonomous vehicles is expected within the next decade, with a significant share of cars becoming fully or highly autonomous [12]
- NVIDIA's DRIVE AV software will ship in Mercedes-Benz vehicles, with the first autonomous vehicle expected on U.S. roads in Q1 2026 [16]

Group 4: Collaborations and Industrial Applications
- NVIDIA announced a deepened collaboration with Siemens to integrate its physical AI models and Omniverse simulation platform into Siemens' industrial software, covering the entire lifecycle from chip design to production operations [16]
- The company positions itself at the forefront of a new industrial revolution, leveraging physical AI to advance chip design and manufacturing automation [16]

Group 5: Open-Source Models and Global Impact
- Huang noted the significant rise of open-source models, specifically citing China's DeepSeek R1 as a model that surprised the world and activated a global open-source movement [7][8]
- The presentation featured several open-source models from China, such as Kimi K2 and Qwen, showcasing competitive advances in AI technology [8]
Jensen Huang Praises DeepSeek Again; the Next-Generation "Compute Beast" Is in Volume Production with a 5x Performance Surge!
Feng Huang Wang· 2026-01-06 02:19
Core Insights
- Jensen Huang's CES 2026 keynote marked a significant shift in AI development from "digital intelligence" to "physical AI," underscoring NVIDIA's ambition to build the foundations of this new era [1][14]
- NVIDIA is responding to surging demand for AI computing power by upgrading its computing platforms, aiming to capture a substantial share of the digital world's computational base [1]

Group 1: AI Evolution and Open Source
- Huang emphasized that the computing industry undergoes a platform shift every 10 to 15 years; the current transition centers on AI-first applications and a redefined software development paradigm [2]
- The transformation of the computing industry, representing roughly $10 trillion of infrastructure built over the past decade, is driven by global R&D budgets shifting toward AI and by heavy venture capital investment [4]
- Open-source models are revolutionizing AI, with NVIDIA investing billions in supercomputing clusters to advance open-source model development and achieving breakthroughs across scientific fields [4]

Group 2: Intelligent Agents and Physical AI
- The next stage of AI capability moves from large language models to intelligent agents that can reason and act, tackling problems they were never explicitly trained on [5][7]
- Huang highlighted the challenge of enabling AI to understand the physical world, which requires a complete system spanning training, real-time inference, and high-precision physical simulation [7]

Group 3: Innovations in AI Models
- NVIDIA introduced Cosmos, an open-source world model that learns from vast amounts of video and real-world driving data, enabling realistic video generation and causal reasoning [8]
- The first end-to-end autonomous driving system, NVIDIA Alpamayo, was developed using data generated by Cosmos and includes reasoning capabilities that let it explain its actions to passengers [8]

Group 4: Vera Rubin Chip Architecture
- The new NVIDIA Vera Rubin architecture addresses the explosive growth in AI model size and computing demand, with every chip redesigned to work as one cohesive system [9][12]
- The Vera CPU and Rubin GPU deliver major performance gains, with the GPU's AI floating-point performance reaching five times that of the previous Blackwell architecture [11]
- The design cuts the number of systems required to train a 100-trillion-parameter model to a quarter of what Blackwell needs, while also sharply lowering inference costs [13]

Group 5: Industry Collaboration and Future Vision
- NVIDIA announced a partnership with Siemens to integrate its physical AI models with Siemens' digital twin platform, targeting comprehensive digital transformation across industrial processes [14]
- Huang concluded that autonomous vehicles are just the beginning of the physical AI market, with similar technologies poised to drive a robotics revolution [14]
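To make the quoted Rubin-vs-Blackwell ratios concrete (a quarter of the systems for a 100-trillion-parameter training run, a tenth of the inference token cost), here is a minimal arithmetic sketch. The absolute baseline values are hypothetical placeholders chosen only for illustration; the article supplies the ratios, not the absolutes:

```python
# Making the quoted Rubin-vs-Blackwell ratios concrete. The baselines below
# (system count, $ per million tokens) are hypothetical placeholders; only the
# 1/4 and 1/10 ratios come from the article.
blackwell_systems = 1000                            # hypothetical baseline fleet
rubin_systems = blackwell_systems / 4               # article: one-quarter the systems

blackwell_cost_per_mtok = 1.00                      # hypothetical $ per 1M tokens
rubin_cost_per_mtok = blackwell_cost_per_mtok / 10  # article: one-tenth the cost

print(f"systems needed: {rubin_systems:.0f}")
print(f"cost per 1M tokens: ${rubin_cost_per_mtok:.2f}")
```

Whatever the true baseline, the claimed ratios compound: a workload that needed 1000 Blackwell systems would, by the article's figures, need 250 Rubin systems while serving tokens at a tenth of the unit cost.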
Jensen Huang: Rubin in Volume Production Ahead of Schedule; Physical AI's "ChatGPT Moment" Has Arrived
Tai Mei Ti APP· 2026-01-06 01:53
Core Insights
- NVIDIA is set for unprecedented spending in 2026, showcasing its absolute strength and lead in the AI field [2]
- The company announced full production of its next-generation Rubin chip architecture, significantly ahead of the expected timeline [3]

Group 1: Rubin Chip Architecture
- The Rubin architecture aims to create an extraordinary AI supercomputer for the next generation of artificial intelligence [3]
- The architecture consists of six chips, including the NVIDIA Vera CPU and NVIDIA Rubin GPU, built around extreme co-design to raise efficiency and performance in large-model training and inference [4]
- Compared with the current Blackwell architecture, Rubin needs only a quarter as many GPUs for parallel training of mixture-of-experts (MoE) models, cuts average inference costs by up to 10 times, and increases training speed 3.5 times [4]

Group 2: Market Position and Competition
- NVIDIA faces strong competition from Google's TPU and other ASIC chips, which are perceived to offer lower total cost of ownership (TCO) at comparable or better performance [5]
- Despite the competitive landscape, NVIDIA's CEO expressed confidence that Rubin will improve the value the company delivers and its data-center market share [5]
- Major cloud providers and AI developers, including AWS, Google, and Microsoft, are interested in deploying Rubin, indicating strong customer demand [5]

Group 3: Future Trends in AI
- AI computing demand is expected to surge, with Morgan Stanley predicting a 26% year-over-year increase in data-center AI chip shipments in 2026 [6]
- NVIDIA wants Rubin to counter predictions that ASICs will significantly outpace GPU growth, with ASIC market share projected to rise from under 41% to over 46% [6]
- The company is positioning for the transition from generative AI to agent-based AI, which is expected to transform enterprise AI usage [6]

Group 4: Physical AI Developments
- NVIDIA is investing heavily in physical AI, having previously introduced the NVIDIA Cosmos model and now unveiling new products in robotics and autonomous driving [6][7]
- Collaborations with leading companies such as Boston Dynamics and Caterpillar are underway to build new AI robots on NVIDIA's technology [7]
- The CEO declared that physical AI's "ChatGPT moment" has arrived, signaling a significant industry shift [7]
Qualcomm Launches a Full Robotics Portfolio, Powering Embodied Intelligence from Home Robots to Full-Size Humanoid Robots
硬AI· 2026-01-06 01:40
Core Viewpoint
- Qualcomm has launched a comprehensive universal robot stack that integrates hardware, software, and AI capabilities, aiming to accelerate automation across industries such as retail, logistics, and manufacturing [2][3]

Group 1: Architecture and Technology
- The new architecture supports robots ranging from personal service robots to full-sized humanoid robots, emphasizing energy efficiency and scalability [2][3]
- Qualcomm introduced the Dragonwing™ IQ10 series, a flagship robot processor designed for humanoid robots and advanced autonomous mobile robots (AMRs), extending the existing robot product roadmap [3][9]
- The architecture combines heterogeneous edge computing, edge AI, hybrid safety-critical systems, software, machine learning operations, and AI data flywheels, redefining what robotic technology can do [11]

Group 2: Industry Collaboration and Applications
- Qualcomm is working with companies across its robot platform ecosystem, including Advantech, AutoCore, and Kuka Robotics, to enable deployment-ready robotic applications at scale [6][9]
- Its partnership with Figure aims to develop advanced AI-driven humanoid robots that improve productivity across industries and enhance human well-being [6]
- The Dragonwing industrial processor roadmap supports multiple universal robot forms, including humanoid robots from global manufacturers such as Booster and VinMotion [9]

Group 3: Performance and Capabilities
- The architecture enables advanced perception and combines end-to-end AI models for motion planning, achieving generalized manipulation and human-robot interaction capabilities [9][11]
- A demonstration of VinMotion's Motion 2 robot showcased physical tasks that highlight the potential of the Dragonwing IQ10 [9]
Physical AI's ChatGPT Moment! Nvidia-Powered Driverless Cars Are Coming, Hitting U.S. Roads in Q1
硬AI· 2026-01-06 01:40
Core Viewpoint
- NVIDIA has open-sourced its first reasoning VLA (vision-language-action) model, Alpamayo 1, aimed at advancing autonomous driving by enabling vehicles to "think" through unexpected situations [2][3][5]

Group 1: Model and Technology Overview
- Alpamayo 1 is a 10-billion-parameter model that processes video input to generate driving trajectories along with its reasoning process [2][5][9]
- The model targets the long-tail problem in autonomous driving by applying human-like reasoning to complex driving scenarios [3][5]
- It is not intended to run directly in vehicles; instead it serves as a large teacher model for developers to fine-tune and integrate into their own autonomous driving stacks [11]

Group 2: Industry Support and Collaboration
- Major companies and research institutions, including Jaguar Land Rover, Lucid, Uber, and UC Berkeley's DeepDrive, have voiced support for Alpamayo, citing its potential to accelerate deployment of Level 4 autonomous driving [5][16]
- Industry leaders stress that open, transparent AI development is essential for responsible autonomous mobility, with Alpamayo providing new tools for developers [16]

Group 3: Ecosystem and Additional Tools
- NVIDIA has built a comprehensive open ecosystem around Alpamayo, including simulation tools and datasets, to support developers building autonomous driving solutions [9][15]
- The AlpaSim framework, a fully open-source end-to-end simulation tool, has been released to enable high-fidelity autonomous driving development [15]
- NVIDIA also provides a large open dataset with over 1,700 hours of driving data spanning a wide range of geographies and conditions, crucial for advancing reasoning architectures [15]

Group 4: Broader AI Developments
- Beyond Alpamayo, NVIDIA launched several other open-source models and tools, including the Nemotron family for agentic AI, the Cosmos platform for physical AI, and the Isaac GR00T model for robotics [21][22]
- These models and frameworks are available on platforms such as GitHub and Hugging Face, enabling broad access and deployment across AI infrastructures [22]
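The article's description of Alpamayo 1 (video in, a trajectory plus a natural-language rationale out) follows a common vision-language-action pattern, sketched below. Every name here (`Trajectory`, `predict`, the sample rationale) is invented for illustration; the real Alpamayo interface may differ entirely:

```python
# Hypothetical sketch of the reasoning-VLA pattern described for Alpamayo 1:
# video frames in, a driving trajectory plus an explanation out. All names are
# invented placeholders, not the real Alpamayo API.
from dataclasses import dataclass

@dataclass
class Trajectory:
    waypoints: list[tuple[float, float]]  # (x, y) points in the vehicle frame
    rationale: str                        # model's explanation of the plan

def predict(video_frames: list) -> Trajectory:
    """Stand-in for a vision-language-action model's forward pass."""
    # A real model would encode the frames, reason over the scene, and decode
    # both a trajectory and a reasoning trace; here we return a fixed plan.
    return Trajectory(
        waypoints=[(0.0, 0.0), (1.0, 0.2), (2.0, 0.5)],
        rationale="Nudging left to give the stopped delivery van clearance.",
    )

plan = predict(video_frames=[])
print(len(plan.waypoints), "waypoints:", plan.rationale)
```

Pairing the trajectory with a rationale is what distinguishes a reasoning VLA from a plain end-to-end planner: the same output the car acts on can be surfaced to passengers or used to supervise a smaller in-vehicle student model.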
The Battle Jensen Huang Most Wants to Win Is Still Treading Water After Four Years
36Ke· 2026-01-06 01:35
Core Insights
- Nvidia's AI chip business has grown remarkably, with revenue soaring from $27.5 billion in the first nine months of 2023 to nearly $148 billion in the same period of 2024, a growth rate rare in tech industry history [1]
- CEO Jensen Huang is not satisfied with this growth and is betting Nvidia's next phase on robotics and manufacturing through the Omniverse platform [2][4]
- The Omniverse initiative has fallen short of expectations, however, to Huang's frustration [3][9]

Group 1: Omniverse Overview
- Omniverse launched with high ambitions, with Huang emphasizing its strategic importance and its potential to capture a share of the $50 trillion manufacturing and logistics market [4][6]
- Despite high-profile endorsements and partnerships, insiders say Omniverse has made little substantial progress in four years, with very few companies actually using its cloud services for large-scale simulations [7][10]
- Developers have criticized the Omniverse tools as hard to use and crash-prone, with one developer noting that the platform fails when attempting complex simulations [8][12]

Group 2: Challenges and Limitations
- Simulating physical behavior for robotics and manufacturing has proven far harder than anticipated, particularly for flexible materials and fluid dynamics [11][12]
- Omniverse's original vision of a universal simulation platform has proven inefficient; purpose-built simulations for specific scenarios work better [13][14]
- Many companies prefer to build their own simulation software, as Tesla does, signaling reluctance to adopt Nvidia's offering [15][19]

Group 3: Strategic Implications
- The Omniverse setbacks could have broader implications for Nvidia's strategic position as it tries to move from hardware manufacturer to provider of comprehensive ecosystems [20][21]
- If Omniverse fails, Nvidia risks losing the chance to define the next generation of standards in manufacturing and robotics, relegating it to a mere hardware supplier [22][23]
- Competitors are already encroaching, with Unity Technologies' engine and the open-source Gazebo simulator gaining traction, which could threaten Nvidia's market share [18][22]

Group 4: Future Outlook
- Huang's concern about slow Omniverse adoption by large companies reflects a broader anxiety about establishing a unified standard in a fragmented market [27][28]
- The robotics industry's rapid development gives Nvidia a critical window to set its standards; missing it may limit its influence over future technology landscapes [30][31]
- Market demand for simulation technology exists, but the timing of its breakout remains uncertain, and Nvidia's ability to define the ecosystem will be crucial to its long-term success [31][33]