Vision-Language-Action (VLA) Models

Toward Production-Ready VLA! FastDriveVLA: A Plug-and-Play Pruning Module with Nearly 4× Inference Speedup
自动驾驶之心· 2025-08-23 16:03
Today 自动驾驶之心 shares the latest work from Peking University and XPeng Motors. FastDriveVLA: adversarial visual token pruning that retains 97.3% of performance at a 50% compression rate. Paper authors | Jiajun Cao et al. Editor | 自动驾驶之心

Foreword & the author's understanding: End-to-end autonomous driving research has advanced rapidly in recent years, and every team is busily promoting its own end-to-end solution. Unlike the traditional modular pipeline (perception → prediction → planning), end-to-end methods complete the entire perception-to-planning process within a single model, which effectively reduces information loss between modules and, to some extent, simplifies the system architecture. Progress has not stopped there: as large vision-language models (VLMs) have shown remarkable reasoning ability on visual question answering, many researchers and engineering teams have begun extending them to embodied intelligence and autonomous driving by adding action generation, forming vision-language-action (VLA) models. Compared with traditional modular approaches, VLA models show advantages in complex scene understanding and ...
Latest from Tianjin University & Tsinghua! GeoVLA: Enhancing the 3D Feature Extraction of VLA Models with Clear Robustness Gains (SOTA)
具身智能之心· 2025-08-15 00:05
Core Insights
- The article introduces GeoVLA, a novel framework that integrates 3D information into Vision-Language-Action (VLA) models, enhancing robots' spatial perception and adaptability [3][9][10].

Group 1: Background and Motivation
- The advancement of robotic operations requires intelligent interaction and precise physical control in real-world environments. Recent VLA models have gained attention for their ability to follow instructions and execute actions [7].
- Current VLA models primarily rely on 2D visual inputs, neglecting the rich geometric information inherent in the 3D physical world, which limits their spatial perception capabilities [8].

Group 2: GeoVLA Framework
- GeoVLA employs a vision-language model (VLM) to process images and language instructions, extracting fused visual-language embeddings. It converts depth maps into point clouds and uses a custom point embedding network to generate 3D geometric embeddings (a minimal point-cloud embedding sketch follows this summary) [3][10][12].
- The framework consists of three key components: a VLM for general understanding, a point embedding network (PEN) for extracting fine-grained 3D features, and a 3D enhanced action expert (3DAE) for generating action sequences [12][13].

Group 3: Performance Evaluation
- GeoVLA was evaluated on the LIBERO and ManiSkill2 benchmarks, achieving state-of-the-art results. It demonstrated significant robustness in real-world tasks requiring high adaptability and spatial awareness [15][27].
- On LIBERO, GeoVLA achieved an average success rate of 97.7%, outperforming models such as CogACT (93.2%) and OpenVLA-OFT (95.3%) [27].
- On the ManiSkill2 benchmark, GeoVLA achieved a success rate of 77%, surpassing CogACT (69%) and Dita (66%) [27].

Group 4: Ablation Studies
- Ablation studies indicated that the PEN encoder outperformed traditional encoders, achieving a success rate of 97.7% compared to 95.8% for an MLP and 95.2% for PointNet [30].
- The use of static routing in the MoE architecture improved performance, demonstrating the effectiveness of the design in leveraging multimodal information [30][20].

Group 5: Real-World Experiments
- Real-world experiments showcased GeoVLA's robustness and generalization across various 3D manipulation tasks, maintaining high performance despite changes in camera perspective, height, and object size [36][34].
- GeoVLA achieved an average success rate of 86.3% across basic and 3D-perception tasks, outperforming other models by significant margins [36].
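The way GeoVLA lifts a depth map into 3D features is described only at a high level here, so the following is a minimal sketch of the general pattern: back-project a depth map into a camera-frame point cloud using pinhole intrinsics, then run the points through a small per-point MLP with max pooling to obtain a fixed-size geometric embedding. The intrinsics, layer sizes, and the `PointEmbeddingNet` name are illustrative assumptions, not the paper's actual PEN architecture.

```python
# Minimal sketch (not GeoVLA's actual PEN): depth map -> point cloud -> pooled 3D embedding.
import numpy as np
import torch
import torch.nn as nn


def depth_to_pointcloud(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (meters) into an (N, 3) camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels


class PointEmbeddingNet(nn.Module):
    """Tiny PointNet-style encoder: shared per-point MLP followed by max pooling."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point features (B, N, D) -> global embedding (B, D)
        return self.mlp(points).max(dim=1).values


if __name__ == "__main__":
    depth = np.random.uniform(0.5, 2.0, size=(120, 160)).astype(np.float32)  # fake depth map
    pts = depth_to_pointcloud(depth, fx=200.0, fy=200.0, cx=80.0, cy=60.0)
    pts_t = torch.from_numpy(pts).float().unsqueeze(0)  # (1, N, 3)
    emb = PointEmbeddingNet()(pts_t)
    print(emb.shape)  # torch.Size([1, 256]) -- geometric embedding fed to the action expert
```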
Keep the Accuracy, Boost the Speed! Spec-VLA: The First Speculative Decoding Framework Designed for VLA Inference Acceleration
具身智能之心· 2025-08-14 00:03
Core Viewpoint
- The article introduces the Spec-VLA framework, which uses speculative decoding to accelerate the inference of Vision-Language-Action (VLA) models, achieving significant speedups without fine-tuning the VLA validation model [2][6].

Group 1: Spec-VLA Framework
- Spec-VLA is the first speculative decoding framework designed specifically for accelerating VLA inference [2].
- The framework achieves a 42% acceleration over the OpenVLA baseline while training only the draft model [6].
- The proposed mechanism increases acceptance length by 44% while maintaining task success rate [2].

Group 2: Technical Details
- The article highlights the challenges posed by the large parameter scale and autoregressive decoding of vision-language models (VLMs) [2].
- Speculative decoding (SD) lets large language models (LLMs) commit multiple tokens per expensive forward pass, effectively speeding up inference (see the draft-and-verify sketch after this summary) [2].
- The framework employs a relaxed acceptance mechanism based on the relative distances between the actions represented by action tokens in VLA models [2].

Group 3: Live Broadcast Insights
- The live broadcast covers speculative decoding as an acceleration method for large language models, an introduction to VLA models, and implementation details of the Spec-VLA framework [7].
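The draft-and-verify loop at the heart of speculative decoding can be summarized in a few lines. The sketch below is a generic greedy version with toy `draft_next`/`verify_next` callables, not the Spec-VLA implementation; it shows why several draft tokens can be committed per verifier pass.

```python
# Generic greedy speculative-decoding sketch (toy models, not the Spec-VLA code).
from typing import Callable, List


def speculative_decode(
    draft_next: Callable[[List[int]], int],     # cheap draft model: context -> next token
    verify_next: Callable[[List[int]], int],    # expensive verifier: context -> next token
    prompt: List[int],
    num_tokens: int,
    draft_len: int = 4,
) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < num_tokens:
        # 1) The cheap draft model proposes a short block of tokens autoregressively.
        draft, ctx = [], list(out)
        for _ in range(draft_len):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) The verifier checks the drafted tokens; in a real system this is ONE batched
        #    forward pass over all drafted positions, which is where the speedup comes from.
        accepted, corrected = [], None
        for t in draft:
            target = verify_next(out + accepted)
            if target == t:
                accepted.append(t)
            else:
                corrected = target   # first disagreement: keep the verifier's token, drop the rest
                break
        out.extend(accepted)
        if corrected is not None:
            out.append(corrected)
    return out[: len(prompt) + num_tokens]


if __name__ == "__main__":
    # Toy models over integer tokens: the draft guesses "previous + 1"; the verifier
    # agrees except when the context length is a multiple of 5.
    draft = lambda ctx: ctx[-1] + 1
    verify = lambda ctx: ctx[-1] + (2 if len(ctx) % 5 == 0 else 1)
    print(speculative_decode(draft, verify, prompt=[0], num_tokens=10))
```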
Interleave-VLA: The First VLA Framework Supporting Interleaved Image-Text Instructions, with 2-3× Better Cross-Domain Generalization
具身智能之心· 2025-08-05 00:03
Core Viewpoint
- The article introduces the Interleave-VLA framework, which enhances robot manipulation by utilizing interleaved image-text instructions, demonstrating significant improvements in performance over existing models [2][3][7].

Group 1: Interleave-VLA Framework
- Interleave-VLA is the first framework capable of understanding interleaved image-text instructions and generating continuous action sequences in the physical world [2].
- The framework is model-agnostic and requires minimal modifications to current state-of-the-art VLA models, providing strong zero-shot generalization capabilities [2][3].

Group 2: Dataset Development
- A major challenge in implementing Interleave-VLA was the lack of a large-scale interleaved embodied dataset. To address this, an automated process was developed to convert pure-text instructions from the Open X-Embodiment dataset into interleaved image-text instructions (see the sketch after this summary) [2].
- The resulting dataset contains 210,000 interaction data points and 13 million frames of images, marking the first large-scale real-world interleaved embodied dataset [2].

Group 3: Performance Evaluation
- Comprehensive evaluations on simulation benchmarks and real-robot experiments show that Interleave-VLA enhances cross-domain generalization by 2-3× over state-of-the-art baselines [3].
- The framework supports flexible task interfaces and can handle various user-provided image instructions, such as hand-drawn sketches, in a zero-shot manner [3].

Group 4: Advantages of Interleaved Instructions
- The interleaved instruction paradigm effectively utilizes heterogeneous datasets and diverse instruction images, including images sourced from the internet, showcasing its substantial scalability potential [3][7].
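As a rough illustration of what an interleaved instruction looks like, the sketch below represents an instruction as an ordered list of text and image segments, and shows a toy converter that swaps an object noun mentioned in a text instruction for a cropped reference image. The segment dataclasses, the object name, and the crop source are all assumptions chosen for illustration; the paper's automated conversion pipeline is more involved.

```python
# Toy sketch of interleaved image-text instructions (illustrative, not the paper's pipeline).
from dataclasses import dataclass
from typing import List, Union

import numpy as np

Image = np.ndarray  # H x W x 3 RGB crop


@dataclass
class TextSeg:
    text: str


@dataclass
class ImageSeg:
    image: Image
    alt: str  # what the image depicts, useful for logging/debugging


InterleavedInstruction = List[Union[TextSeg, ImageSeg]]


def to_interleaved(instruction: str, object_name: str, object_crop: Image) -> InterleavedInstruction:
    """Replace the first mention of `object_name` in a text instruction with an image segment."""
    idx = instruction.lower().find(object_name.lower())
    if idx < 0:
        return [TextSeg(instruction)]  # object not mentioned: keep the instruction as pure text
    before = instruction[:idx].rstrip()
    after = instruction[idx + len(object_name):].lstrip()
    segments: InterleavedInstruction = []
    if before:
        segments.append(TextSeg(before))
    segments.append(ImageSeg(object_crop, alt=object_name))
    if after:
        segments.append(TextSeg(after))
    return segments


if __name__ == "__main__":
    crop = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in for a cropped reference image
    inst = to_interleaved("pick up the red mug and place it on the tray", "red mug", crop)
    print([s.alt if isinstance(s, ImageSeg) else s.text for s in inst])
    # ['pick up the', 'red mug', 'and place it on the tray']
```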
Toward a Production-Ready VLA Solution! FastDriveVLA: A Plug-and-Play Pruning Module with Nearly 4× Inference Speedup (Peking University & XPeng)
自动驾驶之心· 2025-08-04 23:33
Core Viewpoint
- The article discusses FastDriveVLA, a novel visual token pruning framework for autonomous driving that achieves a 50% compression rate while retaining 97.3% of performance [2][3][43].

Group 1: End-to-End Autonomous Driving
- Recent advances in end-to-end autonomous driving have led to methods that complete perception to planning in a single model, reducing information loss between modules [3].
- The introduction of Vision-Language-Action (VLA) models enhances decision-making in complex scenarios, making them increasingly popular in autonomous driving systems [3][10].

Group 2: Visual Token Pruning
- Existing VLM/VLA models encode images into large numbers of visual tokens, resulting in high computational cost. Current research explores two main directions for visual token pruning: attention-based methods and similarity-based methods [4][14].
- FastDriveVLA proposes a reconstruction-based visual token pruning framework that focuses on retaining tokens related to foreground information, significantly reducing computational cost while maintaining performance [5][13].

Group 3: FastDriveVLA Framework
- FastDriveVLA includes a plug-and-play pruner called ReconPruner, trained with a pixel-reconstruction task to focus on foreground areas and assign higher significance scores to key tokens (see the sketch after this summary) [6][17].
- The framework utilizes a large-scale dataset, nuScenes-FG, containing 241,000 image-mask pairs, to strengthen the model's ability to distinguish foreground from background [6][12].

Group 4: Experimental Results
- FastDriveVLA achieved state-of-the-art results on the nuScenes closed-loop planning benchmark, demonstrating its effectiveness and practicality [13][34].
- The framework outperforms existing methods, improving L2 error and collision rates at various pruning ratios [30][34].

Group 5: Efficiency Analysis
- FastDriveVLA reduces FLOPs by roughly 7.5× and lowers prefill and decode latencies, enhancing inference efficiency for real-time deployment [36][40].
- The lightweight design of ReconPruner yields lower CUDA latency than several comparable methods, making it suitable for practical applications [36][40].
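To make "significance-scored token pruning" concrete, here is a minimal sketch: a small scorer assigns each visual token a significance score, and only the top-k tokens (e.g., 50%) are passed on to the language model. The scorer architecture and the `keep_ratio` value are assumptions for illustration; FastDriveVLA's ReconPruner is additionally trained with an adversarial foreground pixel-reconstruction objective, which is not reproduced here.

```python
# Minimal token-pruning sketch: keep the top-k visual tokens by a learned significance score.
# Illustrative only -- ReconPruner's adversarial reconstruction training is not shown.
import torch
import torch.nn as nn


class TokenScorer(nn.Module):
    """Scores each visual token; higher score = more likely to be kept (e.g., foreground)."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim // 2), nn.GELU(), nn.Linear(dim // 2, 1))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) -> scores: (B, N)
        return self.net(tokens).squeeze(-1)


def prune_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float = 0.5):
    """Keep the highest-scoring keep_ratio fraction of tokens (original order preserved)."""
    b, n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    idx = scores.topk(k, dim=1).indices.sort(dim=1).values          # (B, k), sorted back to image order
    kept = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(b, k, d))
    return kept, idx


if __name__ == "__main__":
    vis = torch.randn(2, 576, 1024)          # e.g., 576 ViT patch tokens per image
    scorer = TokenScorer(1024)
    kept, idx = prune_tokens(vis, scorer(vis), keep_ratio=0.5)
    print(kept.shape)                        # torch.Size([2, 288, 1024]) -- half the tokens remain
```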
Spec-VLA: The First Speculative Decoding Framework Designed for VLA Inference Acceleration
具身智能之心· 2025-08-02 16:02
Core Viewpoint
- The article discusses Spec-VLA, a speculative decoding framework designed to accelerate Vision-Language-Action (VLA) models, addressing challenges related to computational demands and decoding latency [3][4][16].

Research Background and Motivation
- VLA models have made significant progress in generating robot action sequences from language instructions, but they face challenges such as the large parameter size of backbone vision-language models (VLMs) and the decoding latency introduced by autoregressive decoding strategies [3].
- Existing acceleration methods have limitations, motivating an approach tailored to VLA models [3].

Core Framework: Spec-VLA
- Spec-VLA introduces a collaborative mechanism between a draft model and a validation model to enhance inference speed: the draft model predicts action tokens and the validation model ensures output quality [4][5].

Key Mechanism: Relaxed Acceptance
- The relaxed acceptance mechanism defines a threshold on the acceptable distance between draft and validation model predictions, enabling more efficient decoding without significant computational overhead (see the sketch after this summary) [7][10].

Experimental Validation
- The framework was evaluated on the LIBERO simulation benchmark across four task sets, demonstrating significant improvements in speed and acceptance length while maintaining success rates [9][10].
- Relaxed acceptance yielded acceleration factors of 1.22× to 1.42×, with acceptance length increasing by 25%-44% [10][11].

Key Results
- As the relaxation threshold increases, acceptance length improves significantly while success rates remain stable across datasets [10][11].
- Case studies show that relaxed acceptance reduces the number of iterations needed to complete action sequences, validating the effectiveness of the mechanism [13].

Conclusion and Limitations
- Spec-VLA demonstrates the potential of speculative execution for VLA prediction tasks, achieving a 1.42× speedup and a 44% increase in acceptance length without compromising success rates [16].
- Limitations include the lack of real-world robot testing and of exploration of action chunking strategies [16].
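The relaxed acceptance rule can be illustrated with discretized action tokens: instead of requiring the draft token to exactly match the verifier's top choice, a draft token is accepted if the two tokens decode to action values within a tolerance. The bin count, tolerance, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative relaxed-acceptance check for discretized action tokens (not the paper's exact rule).

# Assume continuous actions in [-1, 1] are discretized into 256 uniform bins, so each
# action-token id maps back to a representative continuous value.
NUM_BINS = 256


def token_to_action(token_id: int) -> float:
    """Map a discrete action-token id back to its continuous bin center in [-1, 1]."""
    return -1.0 + (token_id + 0.5) * (2.0 / NUM_BINS)


def relaxed_accept(draft_token: int, verify_token: int, tolerance: float = 0.02) -> bool:
    """Accept the draft token if its decoded action is within `tolerance` of the verifier's."""
    return abs(token_to_action(draft_token) - token_to_action(verify_token)) <= tolerance


if __name__ == "__main__":
    # Strict acceptance would reject token 130 vs 131; relaxed acceptance keeps it,
    # because the two tokens decode to nearly identical joint/gripper commands.
    print(relaxed_accept(130, 131))        # True  (actions differ by ~0.008 < 0.02)
    print(relaxed_accept(130, 140))        # False (actions differ by ~0.078 > 0.02)
```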
Is RL + VLA Really the Future? A Roundup of Related Work
具身智能之心· 2025-08-01 00:03
Core Viewpoint
- The integration of Vision-Language-Action (VLA) models with reinforcement learning (RL) presents a promising new paradigm that leverages both environmental trial-and-error interactions and pre-collected suboptimal data for enhanced performance [2].

Group 1: Offline RL Training without Environment
- "MoRE: Unlocking Scalability in Reinforcement Learning for Quadruped Vision-Language-Action Models" discusses scalability in RL applications [3].
- "Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions" focuses on offline RL techniques [3].

Group 2: Online RL Training with Environment
- Online RL training enhances VLA models through trial-and-error interactions in real-time environments, leading to performance improvements (a minimal fine-tuning loop is sketched after this summary) [4].
- "ReinboT: Amplifying Robot Visual-Language Manipulation with Reinforcement Learning" explores this concept [5].
- "GeRM: A Generalist Robotic Model with Mixture-of-experts for Quadruped Robot" presents a generalist approach to robotic models [5].

Group 3: Simulator-Based Approaches
- Various projects aim to improve VLA models using simulation environments, such as "OctoNav: Towards Generalist Embodied Navigation" [6].
- "TGRPO: Fine-tuning Vision-Language-Action Model via Trajectory-wise Group Relative Policy Optimization" optimizes VLA models through trajectory-based methods [6].
- "VLA-RL: Towards Masterful and General Robotic Manipulation with Scalable Reinforcement Learning" emphasizes scalable RL for robotic manipulation [6].

Group 4: Real-World Applications
- The deployment phase of RL training is crucial for testing VLA models in real-world scenarios [8].
- "Dynamism v1 (DYNA-1) Model: A Breakthrough in Performance and Production-Ready Embodied AI" highlights advances in embodied AI [9].
- "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy" discusses fine-tuning methods for VLA models [9].

Group 5: RL Alignment Training
- "GRAPE: Generalizing Robot Policy via Preference Alignment" addresses the alignment of robot policies with user preferences [11].
- "SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning" focuses on safety in VLA model training [12].
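As a rough picture of what online RL fine-tuning of a policy involves, the sketch below runs a REINFORCE-style loop: roll out the current policy, collect an episodic return, and update the policy head with a policy-gradient loss. The environment interface, policy module, and hyperparameters are all placeholder assumptions; the papers listed above use considerably more sophisticated algorithms (offline Q-learning, GRPO variants, consistency policies, etc.).

```python
# REINFORCE-style sketch of online RL fine-tuning for a small policy head (placeholder setup).
import torch
import torch.nn as nn

OBS_DIM, NUM_ACTIONS = 32, 8   # toy observation/action sizes, stand-ins for VLA features/tokens


class PolicyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, NUM_ACTIONS))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))


def rollout(policy, env_step, horizon=16):
    """Collect one episode: env_step(action) -> (next_obs, reward). Returns log-probs and rewards."""
    obs = torch.zeros(OBS_DIM)
    log_probs, rewards = [], []
    for _ in range(horizon):
        dist = policy(obs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, r = env_step(action.item())
        rewards.append(r)
    return torch.stack(log_probs), torch.tensor(rewards)


def train(policy, env_step, episodes=100, lr=1e-3):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        log_probs, rewards = rollout(policy, env_step)
        ret = rewards.sum()                                    # episodic return (no discounting)
        loss = -(log_probs * (ret - rewards.mean())).sum()     # REINFORCE with a crude baseline
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    # Toy environment: reward 1.0 whenever action 3 is chosen, else 0.
    env_step = lambda a: (torch.zeros(OBS_DIM), 1.0 if a == 3 else 0.0)
    train(PolicyHead(), env_step, episodes=20)
```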
Hundred-Million-Scale Short-Video Data Breaks Through the Embodied-Intelligence Scaling Law! Being-H0 Proposes a New VLA Training Paradigm
量子位· 2025-07-24 07:28
Core Viewpoint
- The article discusses advances in embodied intelligence, focusing on the Being-H0 model, which uses human hand-movement data to enhance robot action capabilities and to address the data-scarcity problem in vision-language-action (VLA) models [1][30].

Group 1: Data Scarcity and Solutions
- The lack of real-world data is hindering the development of VLA models: existing data falls short of the required scale of over one hundred million training samples by roughly three orders of magnitude [2].
- The research team from Peking University and BeingBeyond addressed this by building a dataset from human operation videos, reaching a size in the hundreds of millions of samples [3][17].

Group 2: Being-H0 Model and Innovations
- Being-H0 is the first large-scale pre-trained VLA model based on human hand data from videos, using a novel "physical instruction tuning" framework to map human hand movements into robot action spaces (a simplified retargeting sketch follows this summary) [5][10].
- The model builds on the premise that human hand movements serve as the most complete execution template for various robotic end-effectors, allowing robots to benefit from human motion knowledge [6][10].

Group 3: Training Framework
- The physical instruction tuning framework consists of three key components: pre-training on millions of human operation videos, physical space alignment to eliminate data-source heterogeneity, and post-training for effective skill transfer to real robots [12][13][14].
- The framework addresses the heterogeneity between 2D multimodal data and 3D robot action spaces, enhancing the model's ability to learn and generate actions [12].

Group 4: UniHand Dataset
- The UniHand dataset, comprising over 150 million human hand gesture and action samples, was systematically constructed to meet the training-data needs of the physical instruction tuning framework [20][21].
- Even with just 2.5 million samples from this dataset, the model demonstrated significant performance improvements in gesture/action prediction and real-robot tasks [21].

Group 5: Experimental Validation
- Comprehensive real-robot experiments validated the effectiveness of Being-H0, showing it outperformed both its base model InternVL3 and NVIDIA's GR00T N1.5 across various tasks [22][24].
- The experiments confirmed that the data-construction strategy significantly enhances the model's ability to learn human action knowledge from video data, leading to higher task success rates [24].

Group 6: Future Directions
- The BeingBeyond team is focused on advancing core technologies in embodied intelligence, dexterous manipulation, and whole-body motion control, aiming to bring robots into everyday life [30].
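To give a sense of the "map human hand motion into a robot action space" idea, the sketch below retargets a sequence of 3D hand keyframes (wrist position plus thumb-index pinch distance) into a parallel-gripper action trajectory (end-effector position plus normalized gripper opening). The keypoint layout, scaling, and clipping are illustrative assumptions; Being-H0's physical space alignment is a learned, far richer mapping.

```python
# Simplified hand-to-robot retargeting sketch (illustrative, not Being-H0's learned alignment).
import numpy as np


def retarget_hand_to_gripper(
    wrist_xyz: np.ndarray,        # (T, 3) wrist positions in meters, camera/world frame
    thumb_xyz: np.ndarray,        # (T, 3) thumb-tip positions
    index_xyz: np.ndarray,        # (T, 3) index-tip positions
    max_gripper_width: float = 0.08,   # assumed 8 cm maximum opening of a parallel gripper
) -> np.ndarray:
    """Return (T, 4) actions: end-effector xyz target + normalized gripper opening in [0, 1]."""
    # End-effector target: follow the wrist trajectory directly.
    ee_xyz = wrist_xyz
    # Gripper opening: distance between thumb and index tips, clipped to the gripper's range.
    pinch = np.linalg.norm(thumb_xyz - index_xyz, axis=-1)          # (T,)
    opening = np.clip(pinch / max_gripper_width, 0.0, 1.0)
    return np.concatenate([ee_xyz, opening[:, None]], axis=-1)


if __name__ == "__main__":
    T = 5
    wrist = np.linspace([0.3, 0.0, 0.2], [0.3, 0.1, 0.1], T)        # hand moves down and sideways
    thumb = wrist + np.array([0.00, 0.02, 0.0])
    index = wrist + np.array([0.00, -0.02, 0.0])                    # 4 cm pinch -> half open
    actions = retarget_hand_to_gripper(wrist, thumb, index)
    print(actions.shape, actions[0])   # (5, 4)  [0.3, 0.0, 0.2, 0.5]
```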
All in One Read! A Roundup of Outstanding Autonomous Driving VLA Work from the Past Year
自动驾驶之心· 2025-07-15 12:30
Core Insights
- The article surveys advances in Vision-Language-Action (VLA) models for autonomous driving, highlighting the integration of navigation information and reinforcement learning to extend reasoning beyond the visual range [2][3][6].

Group 1: NavigScene
- NavigScene is introduced as a novel auxiliary dataset that pairs local multi-view sensor inputs with global natural-language navigation guidance, addressing the critical gap between local perception and global navigation context in autonomous driving [6].
- Three complementary paradigms are implemented in NavigScene: navigation-guided reasoning, navigation-guided preference optimization, and navigation-guided VLA models, enhancing the reasoning and generalization capabilities of autonomous driving systems [6].
- Comprehensive experiments demonstrate significant performance improvements in perception, prediction, and planning tasks when global navigation knowledge is integrated into autonomous driving systems [6].

Group 2: AutoVLA
- AutoVLA is an end-to-end autonomous driving framework that integrates physical action tokens with a pre-trained VLM backbone, enabling direct policy learning and semantic reasoning from raw visual observations and language instructions [12].
- A reinforcement-learning post-training method based on Group Relative Policy Optimization (GRPO) enables adaptive reasoning and further improves end-to-end driving performance (a GRPO advantage sketch follows this summary) [12].
- AutoVLA achieves competitive performance across multiple autonomous driving benchmarks, including open-loop and closed-loop tests [12].

Group 3: ReCogDrive
- ReCogDrive is an end-to-end autonomous driving system that couples a VLM with a diffusion planner, employing a three-stage training paradigm to address performance drops in rare and long-tail scenarios [13][16].
- The first stage fine-tunes the VLM on a large-scale driving Q&A dataset to mitigate the domain gap between general content and real-world driving scenarios [16].
- The method achieves a state-of-the-art PDMS score of 89.6 on the NAVSIM benchmark, highlighting its effectiveness and feasibility [16].

Group 4: Impromptu VLA
- Impromptu VLA introduces a large-scale, richly annotated dataset aimed at the limitations of existing benchmarks for autonomous driving VLA models [22].
- The dataset is designed to enhance VLA performance in unstructured extreme scenarios, yielding significant improvements on established benchmarks [22].
- Experiments show that training on the Impromptu VLA dataset leads to notable improvements in closed-loop NeuroNCAP scores and collision rates [22].

Group 5: DriveMoE
- DriveMoE is a novel end-to-end autonomous driving framework that incorporates a mixture-of-experts (MoE) architecture to effectively handle multi-view sensor data and complex driving scenarios [28].
- The framework features a scene-specific visual MoE and a skill-specific action MoE, addressing the challenges of multi-view redundancy and skill specialization [28].
- DriveMoE achieves state-of-the-art performance in closed-loop evaluations on the Bench2Drive benchmark, demonstrating the effectiveness of combining visual and action MoE in autonomous driving tasks [28].
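Group Relative Policy Optimization, as used for post-training, compares several sampled rollouts for the same input and normalizes each rollout's reward against the group's mean and standard deviation to obtain an advantage. The sketch below shows only that group-relative advantage computation and a clipped policy-ratio loss; the reward definition, sampling procedure, and all hyperparameters are assumptions, not AutoVLA's actual recipe.

```python
# GRPO-style group-relative advantage + clipped surrogate loss (illustrative sketch only).
import torch


def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (G,) rewards of G rollouts sampled for the SAME prompt/scene.
    Each rollout's advantage is its reward normalized by the group mean and std."""
    return (rewards - rewards.mean()) / (rewards.std(unbiased=False) + eps)


def grpo_loss(logp_new: torch.Tensor, logp_old: torch.Tensor, advantages: torch.Tensor,
              clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped surrogate applied with group-relative advantages.
    logp_new / logp_old: (G,) sequence log-probs of each rollout under the new / old policy."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()


if __name__ == "__main__":
    rewards = torch.tensor([1.0, 0.2, 0.8, 0.0])     # e.g., driving scores of 4 sampled trajectories
    adv = group_relative_advantages(rewards)
    logp_old = torch.tensor([-5.0, -4.5, -5.2, -4.8])
    logp_new = logp_old + 0.1 * torch.randn(4)        # pretend the policy changed slightly
    print(adv, grpo_loss(logp_new, logp_old, adv))
```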
DreamVLA: The World's First "World-Knowledge Prediction" VLA Model, with Nearly 80% Manipulation Success Rate
具身智能之心· 2025-07-10 13:16
Core Insights
- The article discusses the potential of Vision-Language-Action (VLA) models that couple image generation with action prediction for robotic manipulation, highlighting the limitations of existing methods in forming a closed perception-prediction-action loop [3][16].
- DreamVLA is introduced as a model that predicts comprehensive world knowledge to improve robotic performance, focusing on dynamic regions, depth perception, and high-level semantic features [4][5][16].

Research Background and Motivation
- Current VLA models are limited by image-based predictions, leading to information redundancy and a lack of critical world knowledge such as dynamics, spatial structure, and semantics [3].
- DreamVLA aims to construct a more effective perception-prediction-action loop by predicting comprehensive world knowledge, thereby enhancing the interaction between robots and their environment [3].

Model Design Core Ideas
- DreamVLA focuses on three core features: dynamic-region prediction, depth perception, and high-level semantic features, which are essential for task execution [4][5].
- Dynamic-region prediction utilizes optical-flow models to identify moving regions in a scene, focusing the model on task-critical areas [4].
- Depth perception is achieved through depth-estimation algorithms, providing 3D spatial context, while high-level semantic features are integrated from various visual models to enhance future-state understanding [5].

Structural Attention and Action Generation
- A block structural attention mechanism separates queries into dynamic, depth, and semantic sub-queries, preventing cross-type knowledge leakage and maintaining clean representations (see the mask sketch after this summary) [6].
- A diffusion Transformer decoder separates action representations from shared latent features, transforming Gaussian noise into action sequences through iterative self-attention and denoising [8].

Experimental Results and Analysis
- In benchmark tests, DreamVLA achieved an average task length of 4.44, outperforming methods such as RoboVLM and Seer [9][10].
- Real-world experiments with a Franka Panda robotic arm showed an average success rate of 76.7%, significantly higher than baseline models [10].

Ablation Study Insights
- Analyzing the contribution of each knowledge type showed that dynamic-region prediction provides the largest performance gain, while depth and semantic cues offer smaller, yet valuable, improvements [11].
- Predicting future knowledge outperformed merely reconstructing current information, indicating that prediction provides better guidance for actions [12].
- The block structural attention mechanism improved average task length from 3.75 to 4.44, demonstrating its effectiveness in reducing cross-signal interference [13].

Core Contributions and Limitations
- DreamVLA recasts VLA models as a perception-prediction-action framework, providing comprehensive foresight for planning through the prediction of dynamic, spatial, and high-level semantic information [16].
- The model is currently limited to parallel-gripper manipulation and relies on RGB data; future work plans to incorporate more diverse data types and enhance generalization and robustness [15][16].
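A block structural attention mask of the kind described can be built as a boolean matrix in which each knowledge-query group attends to the shared context and to queries of its own type, but not to the other query types. The token layout and sizes below are assumptions chosen for illustration, not DreamVLA's actual configuration.

```python
# Sketch of a block structural attention mask: dynamic / depth / semantic queries
# attend to the shared context and to themselves, but not to each other (illustrative sizes).
import torch


def block_structural_mask(ctx_len: int, group_sizes: dict) -> torch.Tensor:
    """Return an (L, L) boolean mask where True = attention allowed.
    Token layout: [context | dynamic queries | depth queries | semantic queries]."""
    total = ctx_len + sum(group_sizes.values())
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:ctx_len, :ctx_len] = True                 # context tokens attend among themselves
    start = ctx_len
    for size in group_sizes.values():
        end = start + size
        mask[start:end, :ctx_len] = True            # each query group reads the shared context
        mask[start:end, start:end] = True           # ...and attends within its own group only
        start = end
    return mask


if __name__ == "__main__":
    mask = block_structural_mask(ctx_len=4, group_sizes={"dynamic": 2, "depth": 2, "semantic": 2})
    print(mask.int())
    # The off-diagonal query-group blocks are 0, so e.g. depth queries never attend
    # to dynamic or semantic queries -- preventing cross-type knowledge leakage.
```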