Vision-Language-Action
Toward the Convergence and Unification of VLA and World Models
自动驾驶之心· 2025-12-23 09:29
Two frontier directions in autonomous driving, VLA and world models, have recently shown a clear trend toward fusion. The idea took shape in October after seeing DriveVLA-W0 from the Chinese Academy of Sciences, so the author took the opportunity to survey VLA and World Model work separately and to think through how the two could be combined.

TL;DR: VLA and world models are not in conflict; their ultimate goals are the same. A world model can serve as a data engine and a closed-loop engine, and can even take part in training the VLA model itself. Fusion is the broad trend, and for deployment the answer is: use both.

After a few weeks of survey and analysis, the author organized the findings and takeaways to share with the 自动驾驶之心 community, in the following parts:

Input side: fusing multimodal perception. The VLA input integrates multimodal information from vision, sensors, and language. The core visual input builds BEV or voxel representations from multi-camera images to capture spatial structure; sensors such as LiDAR and millimeter-wave radar contribute complementary geometry and dynamics; the language input is the key innovation, supporting navigation instructions, interactive Q&A, and rule descriptions, so the system can understand human intent and common sense and build an environment understanding that goes beyond traditional vision-only perception.

From the birth of autonomous driving technology to its development ...
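To make the input-fusion description concrete, here is a minimal sketch, assuming a PyTorch-style frontend, of how flattened camera BEV features, LiDAR voxel/pillar features, and instruction-token embeddings could be projected into one shared token sequence for a VLA backbone. All module names, dimensions, and the fusion-by-concatenation choice are illustrative assumptions, not the design of any specific system discussed in the article.

```python
# Minimal sketch of multimodal input fusion for a driving VLA.
# All names and dimensions are illustrative assumptions, not the article's design.
import torch
import torch.nn as nn


class MultimodalFusionFrontend(nn.Module):
    """Projects BEV, LiDAR, and language features into one token sequence."""

    def __init__(self, bev_dim=256, lidar_dim=128, text_dim=512, d_model=512):
        super().__init__()
        self.bev_proj = nn.Linear(bev_dim, d_model)      # camera BEV grid cells -> tokens
        self.lidar_proj = nn.Linear(lidar_dim, d_model)  # LiDAR pillar/voxel features -> tokens
        self.text_proj = nn.Linear(text_dim, d_model)    # instruction embeddings -> tokens
        # Learned embeddings tagging each token with its source modality.
        self.modality_embed = nn.Embedding(3, d_model)

    def forward(self, bev_feats, lidar_feats, text_feats):
        # bev_feats:   (B, N_bev, bev_dim)     flattened BEV grid cells
        # lidar_feats: (B, N_lidar, lidar_dim) pooled pillar/voxel features
        # text_feats:  (B, N_text, text_dim)   instruction token embeddings
        tokens = [
            self.bev_proj(bev_feats) + self.modality_embed.weight[0],
            self.lidar_proj(lidar_feats) + self.modality_embed.weight[1],
            self.text_proj(text_feats) + self.modality_embed.weight[2],
        ]
        # Concatenate along the sequence axis; a downstream transformer
        # (the VLA backbone) would attend across all modalities jointly.
        return torch.cat(tokens, dim=1)


if __name__ == "__main__":
    frontend = MultimodalFusionFrontend()
    bev = torch.randn(1, 200, 256)   # e.g. a 10x20 BEV grid, flattened
    lidar = torch.randn(1, 64, 128)  # e.g. 64 pooled pillars
    text = torch.randn(1, 12, 512)   # e.g. a 12-token navigation instruction
    print(frontend(bev, lidar, text).shape)  # torch.Size([1, 276, 512])
```

Concatenation with modality embeddings is only one option; cross-attention from language queries into the BEV tokens is an equally common pattern.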
Making robots "not just think, but act accurately": VLA-R1 brings "reasoning + action" into the real world
机器之心· 2025-10-25 05:14
Core Insights
- The article discusses the VLA-R1 model, which enhances reasoning in Vision-Language-Action (VLA) models by integrating chain-of-thought (CoT) supervision with reinforcement learning (RL) to improve both reasoning quality and execution accuracy [4][5].

Group 1: VLA-R1 Overview
- VLA-R1 is a foundational model that emphasizes "reasoning first, then executing" [4].
- It combines CoT supervision with verifiable rewards from RL to optimize the reasoning and execution processes [4][5].

Group 2: Key Innovations
- Two-stage training: the model first undergoes supervised fine-tuning (SFT) with explicit CoT supervision, followed by reinforcement learning based on GRPO to stabilize the transition from reasoning to action [6][8].
- Three types of verifiable rewards (RLVR) are introduced to ensure accurate perception, trajectory execution, and structured output [9][11]; a minimal sketch of this reward design appears after this summary.
- The VLA-CoT data engine generates a structured dataset of 13,000 vision-language-action samples to provide high-quality supervision signals for SFT [12][19].

Group 3: Experimental Results
- VLA-R1 was evaluated at four levels: in-domain testing, out-of-domain testing, simulation platforms, and real-robot experiments [16][17].
- On the in-domain benchmark, VLA-R1 achieved a perception IoU of 36.51, improving by 17.78% over the baseline [22].
- In real-robot experiments, VLA-R1 achieved a success rate of 62.5% for affordance perception and 75% for trajectory execution under varying environmental complexity [26].

Group 4: Applications
- VLA-R1 is applicable to home-automation tasks such as object retrieval and organization in cluttered environments, reasoning over similar targets and multiple container options [34].
- It can also be used in warehouse picking and light industrial assembly, where it clarifies the relationships between parts, tools, and containers [34].
- The model's structured output format suits educational demonstrations and automated assessment, making reasoning and execution steps easy to evaluate [34].
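To illustrate the "verifiable rewards + GRPO" recipe summarized above, here is a minimal sketch of three reward terms (perception IoU, trajectory similarity, structured-output format) combined per sample and converted into group-relative advantages. The function names, equal weighting, distance-based trajectory score, and JSON output schema are assumptions for illustration, not VLA-R1's actual reward definitions.

```python
# Hypothetical sketch of verifiable rewards and GRPO-style advantages.
# Weights, output schema, and function names are illustrative assumptions.
import json
import numpy as np


def iou_reward(pred_box, gt_box):
    """Perception reward: IoU between predicted and ground-truth boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    ix2, iy2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gt_box) - inter
    return inter / union if union > 0 else 0.0


def trajectory_reward(pred_traj, gt_traj):
    """Execution reward: decays exponentially with mean point-wise distance between trajectories."""
    pred, gt = np.asarray(pred_traj), np.asarray(gt_traj)
    dist = np.linalg.norm(pred - gt, axis=-1).mean()
    return float(np.exp(-dist))  # in (0, 1], higher means closer


def format_reward(output_text):
    """Structure reward: 1 if the output parses as JSON with the expected keys, else 0."""
    try:
        obj = json.loads(output_text)
    except json.JSONDecodeError:
        return 0.0
    return 1.0 if isinstance(obj, dict) and {"reasoning", "box", "trajectory"} <= obj.keys() else 0.0


def total_reward(sample, gt):
    # Equal weighting of the three terms is an assumption, not the paper's choice.
    return (iou_reward(sample["box"], gt["box"])
            + trajectory_reward(sample["trajectory"], gt["trajectory"])
            + format_reward(sample["raw_text"]))


def grpo_advantages(rewards):
    """Group-relative advantages: standardize rewards within one prompt's sample group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)
```

A GRPO-style update would then scale each sampled response's token log-probabilities by its advantage within the group; only the reward and advantage computation is sketched here.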