Vision-Language-Action Models
VLA²: Zhejiang University and Westlake University Propose an Agentic VLA Framework, Substantially Improving Manipulation Generalization
具身智能之心· 2025-10-24 00:40
Authors: Han Zhao et al. | Editor: 具身智能之心

In robot manipulation tasks, vision-language-action (VLA) models already exhibit a degree of generalization, yet when an instruction involves a completely unseen concept, these models still fail to transfer previously acquired manipulation experience to the new scenario (see the related OpenVLA experiments). To address this, we propose the Vision-Language-Action Agent (VLA²), which strengthens a VLA system by letting it invoke a diverse set of tools. The framework integrates modules such as task planning, web search, and object detection, pushing beyond the execution-capability boundary of existing VLA models.

Paper title: VLA²: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
Paper link: https://arxiv.org/abs/2510.14902
Project page: http ...
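The excerpt does not specify VLA²'s tool interface; the following is a minimal Python sketch of the agentic dispatch idea, in which the Tool class, the toolbox keys (plan, search, detect), and the stub implementations are hypothetical illustrations rather than the paper's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tool:
    """Hypothetical tool wrapper: maps a natural-language query to a string result."""
    name: str
    run: Callable[[str], str]

def make_toolbox() -> Dict[str, Tool]:
    # Stub implementations; a real framework would back these with a planner
    # model, a web-search API, and an open-vocabulary object detector.
    return {
        "plan": Tool("plan", lambda instr: f"sub-tasks for: {instr}"),
        "search": Tool("search", lambda query: f"web facts about: {query}"),
        "detect": Tool("detect", lambda obj: f"bounding box of: {obj}"),
    }

def agentic_episode(instruction: str, toolbox: Dict[str, Tool]) -> List[str]:
    """Enrich an instruction mentioning an unseen concept before the VLA acts."""
    context = []
    context.append(toolbox["plan"].run(instruction))    # 1. decompose the task
    context.append(toolbox["search"].run(instruction))  # 2. ground the unknown concept
    context.append(toolbox["detect"].run(instruction))  # 3. localize the target object
    # The enriched context would then condition the low-level VLA policy.
    return context

if __name__ == "__main__":
    print(agentic_episode("pick up the matcha whisk", make_toolbox()))
```

In the actual framework, the context produced by such tools (sub-task plans, retrieved concept descriptions, detected bounding boxes) is what lets the low-level policy act on concepts it never saw during training.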
SOTA Even Without Much Data? Tsinghua & Shanghai AI Lab Crack Two Major Bottlenecks in Robot RL
量子位· 2025-09-26 02:08
Core Viewpoint
- The article discusses the development of SimpleVLA-RL, an end-to-end online training solution for Vision-Language-Action (VLA) models, aimed at enhancing the flexibility and performance of robots in complex environments while addressing existing training bottlenecks [3][12].

Group 1: Key Challenges in Existing Training Paradigms
- Current training paradigms face significant challenges, including high data collection costs and insufficient generalization capabilities [2][8].
- The reliance on large-scale, high-quality robot operation trajectories limits scalability and increases costs, making data acquisition a major hurdle [8].
- The models struggle with generalization, particularly on out-of-distribution tasks and in new environments, leading to performance drops on long-sequence dependencies and compositional tasks [8][9].

Group 2: SimpleVLA-RL Framework
- SimpleVLA-RL employs a combination of interactive trajectory sampling, result-based rewards, and enhanced exploration to tackle the three core challenges of VLA model training (a toy sketch of this loop follows this summary) [5][6].
- The framework demonstrates state-of-the-art (SoTA) performance on standard benchmarks like LIBERO and RoboTwin, achieving significant improvements even with limited data [5][21].
- With only a single demonstration per task, the average success rate on LIBERO increased from 48.9% to 96.9% after applying SimpleVLA-RL [5].

Group 3: Performance Metrics and Results
- SimpleVLA-RL achieved an average success rate of 99.1% on LIBERO, with long-sequence tasks improving by 12.0 percentage points [21].
- On RoboTwin1.0, the average success rate rose from 39.8% to 70.4%, with specific tasks like "Blocks Stack" improving by 33.1 percentage points [23].
- The framework also demonstrated a significant performance increase on RoboTwin2.0, with average success rates improving from 38.3% to 68.8% [25].

Group 4: Innovations and Discoveries
- The training process led to the emergence of new operational strategies, such as the "Pushcut" phenomenon, where the model autonomously discovers more efficient methods beyond human demonstrations [10][31].
- This phenomenon indicates that reinforcement learning can enable VLA models to surpass the limitations of human demonstration patterns, paving the way for future adaptive VLA model development [31].
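The summary names three ingredients: interactive trajectory sampling, result-based (outcome) rewards, and enhanced exploration. Below is a toy, self-contained sketch of such an outcome-reward online RL loop; ToyEnv, ToyPolicy, and the update rule are stand-ins invented for illustration, not the SimpleVLA-RL implementation:

```python
import random

class ToyEnv:
    """Stand-in for a manipulation benchmark: an episode succeeds if the
    scalar actions sum above a threshold within a fixed horizon."""
    def reset(self):
        self.t, self.total = 0, 0.0
        return 0.0
    def step(self, action):
        self.t += 1
        self.total += action
        done = self.t >= 5
        return self.total, done, done and self.total > 2.5

class ToyPolicy:
    """Stand-in VLA policy with a single learnable action mean."""
    def __init__(self):
        self.mean = 0.0
    def sample(self, obs, temperature):
        return random.gauss(self.mean, temperature)  # temperature drives exploration
    def update(self, obs, action, advantage, lr=0.05):
        # Policy-gradient-style nudge toward actions with positive advantage.
        self.mean += lr * advantage * (action - self.mean)

def rl_step(policy, env, batch_size=16, temperature=1.0):
    """Sample trajectories interactively, score each with a binary outcome
    reward, and update with a baseline-subtracted advantage."""
    batch = []
    for _ in range(batch_size):
        obs, done, traj = env.reset(), False, []
        while not done:
            a = policy.sample(obs, temperature)
            obs, done, success = env.step(a)
            traj.append((obs, a))
        batch.append((traj, 1.0 if success else 0.0))
    baseline = sum(r for _, r in batch) / batch_size  # variance reduction
    for traj, r in batch:
        for obs, a in traj:
            policy.update(obs, a, r - baseline)

if __name__ == "__main__":
    policy, env = ToyPolicy(), ToyEnv()
    for _ in range(200):
        rl_step(policy, env)
    print(f"learned action mean: {policy.mean:.2f}")
```

The key property mirrored here is that the reward is purely the episode outcome (success or failure), so no dense reward engineering is needed; the sampling temperature is the knob for enhanced exploration.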
A Survey of 313 VLA Papers, with a 1,661-Character Condensed Version
理想TOP2· 2025-09-25 13:33
Core Insights
- The emergence of Vision-Language-Action (VLA) models signifies a paradigm shift in robotics from traditional strategy-based control to general robotic technology, enabling active decision-making in complex environments [12][22]
- The review categorizes VLA methods into five paradigms: autoregressive, diffusion-based, reinforcement learning, hybrid, and specialized methods, providing a comprehensive overview of their design motivations and core strategies [17][20]

Summary by Categories

Autoregressive Models
- Autoregressive models generate action sequences as time-dependent processes, leveraging historical context and sensory inputs to produce actions step by step [44][46]
- Key innovations include unified multimodal Transformers that tokenize the various modalities, enhancing cross-task action generation [48][49]
- Challenges include safety, interpretability, and alignment with human values [47][56]

Diffusion-Based Models
- Diffusion models frame action generation as a conditional denoising process, allowing for probabilistic action generation and modeling multimodal action distributions (illustrated by the sketch after this summary) [59][60]
- Innovations include modular optimization and dynamic adaptive reasoning to improve efficiency and reduce computational costs [61][62]
- Limitations involve maintaining temporal consistency in dynamic environments and high computational resource demands [5][60]

Reinforcement Learning Models
- Reinforcement learning models integrate VLMs with reinforcement learning to generate context-aware actions in interactive environments [6]
- Innovations focus on reward-function design and safety-alignment mechanisms that prevent high-risk behaviors while maintaining task performance [6][7]
- Challenges include the complexity of reward engineering and the high computational costs of scaling to high-dimensional real-world environments [6][9]

Hybrid and Specialized Methods
- Hybrid methods combine different paradigms to leverage the strengths of each, such as using diffusion for smooth trajectory generation while retaining autoregressive reasoning capabilities [7]
- Specialized methods adapt VLA frameworks to specific domains like autonomous driving and humanoid robot control, enhancing practical applications [7][8]
- The focus is on efficiency, safety, and human-robot collaboration in real-time inference and interactive learning [7][8]

Data and Simulation Support
- The development of VLA models relies heavily on high-quality datasets and simulation platforms to address data scarcity and testing risks [8][34]
- Real-world datasets like Open X-Embodiment and simulation tools such as MuJoCo and CARLA are crucial for training and evaluating VLA models [8][36]
- High annotation costs and insufficient coverage of rare scenarios still limit the generalization capabilities of VLA models [8][35]

Future Opportunities
- The integration of world models and cross-modal unification aims to evolve VLA into a comprehensive framework for environment modeling, reasoning, and interaction [10]
- Causal reasoning and real interaction models are expected to overcome the limitations of "pseudo-interaction" [10]
- Establishing standardized frameworks for risk assessment and accountability will transition VLA from experimental tools to trusted partners in society [10]
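To make the autoregressive/diffusion contrast concrete: an autoregressive policy emits one action token at a time conditioned on the prefix, while a diffusion policy denoises a whole action chunk jointly. Below is a toy NumPy sketch of the conditional-denoising view; the "noise predictor" is a stub (a real diffusion policy learns a network for it), so treat this as an illustration of the paradigm, not of any surveyed method:

```python
import numpy as np

def denoise_action_chunk(obs_embedding, n_steps=10, horizon=8, action_dim=7, seed=0):
    """Toy conditional denoising: start from Gaussian noise and iteratively
    pull a whole action chunk toward a stub target derived from the observation.
    A real diffusion policy replaces the stub with a learned noise-prediction net."""
    rng = np.random.default_rng(seed)
    actions = rng.normal(size=(horizon, action_dim))   # pure noise at step T
    for t in range(n_steps, 0, -1):
        predicted_noise = actions - obs_embedding       # stub for eps_theta(actions, t, obs)
        step_noise = 0.01 * np.sqrt(t / n_steps) * rng.normal(size=actions.shape)
        actions = actions - predicted_noise / n_steps + step_noise
    return actions

# Usage: a 7-D "observation embedding" acts as the (broadcast) denoising target.
chunk = denoise_action_chunk(obs_embedding=np.full(7, 0.5))
print(chunk.shape)  # (8, 7): horizon x action_dim, refined jointly at every step
```

Because the whole chunk is refined jointly, diffusion-style decoders naturally produce coherent multi-step trajectories, at the cost of running the denoiser many times per chunk.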
A New Paradigm! LLaDA-VLA: The First VLA Model Built on a Large Language Diffusion Model
具身智能之心· 2025-09-12 00:05
Core Viewpoint
- The article discusses the advancements in Vision-Language Models (VLMs) and introduces LLaDA-VLA, the first Vision-Language-Action model developed using large language diffusion models, which demonstrates superior multi-task performance in robotic action generation [1][5][19].

Group 1: Introduction to LLaDA-VLA
- LLaDA-VLA integrates Masked Diffusion Models (MDMs) into robotic action generation, leveraging pre-trained multimodal large language diffusion models for fine-tuning and enabling parallel action-trajectory prediction (a toy sketch of this decoding scheme follows this summary) [5][19].
- The model architecture consists of three core modules: a vision encoder for RGB feature extraction, a language-diffusion backbone for integrating visual and language information, and a projector for mapping visual features into the language token space [10][7].

Group 2: Key Technical Innovations
- Two major breakthroughs are highlighted:
- Localized Special-token Classification (LSC), which reduces cross-domain transfer difficulty by classifying only action-related special tokens, thus improving training efficiency [8][12].
- Hierarchical Action-Structured Decoding (HAD), which explicitly models hierarchical dependencies between actions, resulting in smoother and more reasonable generated trajectories [9][13].

Group 3: Performance Evaluation
- LLaDA-VLA outperforms state-of-the-art methods across various environments, including SimplerEnv, CALVIN, and the real WidowX robot, achieving significant improvements in success rates and task-completion metrics [4][21].
- In specific task evaluations, LLaDA-VLA achieved an average success rate of 58% across multiple tasks, surpassing previous models [15].

Group 4: Experimental Results
- The model demonstrated notable increases in task-completion rates and average task lengths compared to baseline models, validating the effectiveness of the proposed LSC and HAD strategies [18][14].
- In a comparative analysis, LLaDA-VLA achieved a success rate of 95.6% on a specific task, significantly higher than other models [14][18].

Group 5: Research Significance and Future Directions
- The introduction of LLaDA-VLA establishes a solid foundation for applying large language diffusion models in robotic operations, paving the way for future research in this domain [19][21].
- The design strategies employed in LLaDA-VLA not only enhance model performance but also open new avenues for exploration in the field of embodied intelligence [19].
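A masked diffusion model decodes by starting from a fully masked sequence and unmasking positions over several parallel rounds. The toy sketch below combines that schedule with an LSC-flavored restriction, scoring only a small action-token vocabulary; the token names, confidence scores, and unmasking schedule are invented for illustration and are not LLaDA-VLA's implementation:

```python
import random

# Toy vocabulary of "action special tokens" (as in LSC, where classification
# is restricted to action-related tokens rather than the full LM vocabulary).
ACTION_TOKENS = ["<dx+>", "<dx->", "<dy+>", "<dy->", "<grip>", "<open>"]
MASK = "<mask>"

def masked_diffusion_decode(seq_len=8, n_rounds=4, seed=0):
    """Parallel decoding a la masked diffusion: start fully masked, and in each
    round commit the positions the (stub) model is most confident about."""
    rng = random.Random(seed)
    seq = [MASK] * seq_len
    for _ in range(n_rounds):
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        if not masked:
            break
        # Stub model: score every masked position; a real model would condition
        # on the image, the instruction, and the already-committed tokens.
        scores = {i: (rng.random(), rng.choice(ACTION_TOKENS)) for i in masked}
        # Unmask roughly half of the remaining positions per round.
        keep = sorted(masked, key=lambda i: -scores[i][0])[: max(1, len(masked) // 2)]
        for i in keep:
            seq[i] = scores[i][1]
    return seq

print(masked_diffusion_decode())  # e.g. ['<dy->', '<grip>', ...]
```

One way to read HAD in this setting is as biasing the unmasking order so that higher-level action slots are committed before the fine-grained ones they constrain, though the excerpt does not detail the mechanism.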