[Tianfeng Auto] Nvidia Open-Sources Autonomous Driving Large Model, Commercialization Accelerating - 0106
Nvidia (US:NVDA) · Ge Long Hui · 2026-01-06 09:39

Group 1
- Nvidia has launched Alpamayo 1, an open-source vision-language-action (VLA) model for autonomous driving with 10 billion parameters; a significant step in openness, it provides complete development resources from data to deployment [1]
- The model is a reasoning-based VLA that can reason about causal relationships before making decisions, predict the intentions of other road users, and handle multi-step decisions, yielding a 12% improvement in planning accuracy, a 37% increase in action consistency, and a 25% reduction in collision rate [1]
- Nvidia's DRIVE system is entering mass production; the first model, the new Mercedes-Benz CLA, is set to reach US roads in 2026, with plans to test Robotaxi services with partners in 2027 [1]

Group 2
- The open-source end-to-end model is expected to reshape the intelligent-driving ecosystem and further lower the barrier to Level 4 (L4) autonomy; the note recommends Robotaxi algorithm companies such as Pony.ai (Xiaoma) and WeRide (Wenyuan), suggests watching unmanned mining-truck names such as Xidi Zhijia and Boleton, and flags the upcoming listing of Yikong Zhijia [2]
- Recommended safety-redundancy components include steering systems from Nexteer (Cybercab's exclusive steering supplier), braking systems from Bethel and Ruikem, lidar from Hesai and RoboSense (Suteng), and chips from Horizon Robotics and Black Sesame [2]
- Beyond intelligent driving, Nvidia's Isaac platform and GR00T foundation model cover industrial, humanoid, and consumer-grade robots; the note is positive on the resonance between intelligent driving and robotics, favoring domain-controller companies such as Kobot, Desay SV, and Jingwei Hengrun [2]