Latest VLA Survey | A Detailed Look from the Chinese Academy of Sciences: Model Architectures and Evolution for Embodied Manipulation
自动驾驶之心·2025-08-30 16:03

Core Insights

The article reviews the emergence and development of Vision-Language-Action (VLA) models, which integrate visual perception, natural language understanding, and action control, marking a significant milestone in the pursuit of general robotic intelligence [3][5].

Development Stages

The development of VLA models is divided into three stages:
1. Emergence Stage: Initial attempts to connect vision, language, and action before a formal VLA concept existed, centered on visual imitation learning and language annotation [7].
2. Exploration Stage: By mid-2023 the VLA concept had been formally introduced, and the Transformer architecture became mainstream, improving model generalization in open scenarios [8].
3. Rapid Development Stage: Since late 2024, VLA models have iterated rapidly, tackling generalization and inference-efficiency problems and evolving from single-layer to multi-layer (hierarchical) architectures [9].

Core Dimensions of VLA Models

VLA models consist of three main components (minimal code sketches of several of these ideas follow the Pre-training and Post-training Methods section below):
1. Observation Encoding: Moving from CNN and RNN structures to unified architectures such as ViT and cross-modal Transformers, which fuse multi-modal information for richer environmental perception [12].
2. Feature Inference: The Transformer has become the backbone, with newer designs such as the Diffusion Transformer and Mixture of Experts strengthening inference capability [14].
3. Action Decoding: Evolving from discrete token representations to continuous control prediction, improving operational precision in real environments [15].

Training Data for VLA Models

VLA training data falls into four categories:
1. Internet Image-Text Data: Provides rich visual and linguistic priors but lacks dynamic-environment understanding [17].
2. Video Data: Captures the temporal structure of human activity, aiding the learning of complex manipulation skills, though it usually lacks precise action annotations [17].
3. Simulation Data: Offers low-cost, scalable, well-annotated data for pre-training and policy exploration, but requires sim-to-real adaptation before real-world use [19].
4. Real Robot Data: Directly reflects sensor noise and environmental complexity, which is crucial for VLA generalization and reliability, albeit costly to collect [19].

Pre-training and Post-training Methods

Common pre-training strategies include:
1. Single-Domain Data Training: Early methods trained on data from a single domain, providing initial perception and action-representation capabilities [21].
2. Cross-Domain Staged Training: Models are pre-trained on large general datasets and then fine-tuned on robot manipulation data, effectively exploiting large-scale data priors [21].
3. Cross-Domain Joint Training: Multiple data types are used simultaneously to learn the relationships among perception, language, and action [21].
4. Chain-of-Thought Enhancement: Reasoning chains are introduced to enable task decomposition and logical reasoning [21].

Post-training methods optimize a pre-trained VLA model for specific tasks (sketches of supervised fine-tuning and inference-time expansion appear below):
1. Supervised Fine-tuning: End-to-end training on labeled trajectory data, strengthening the observation-to-action mapping [22].
2. Reinforcement Fine-tuning: Optimizes the model's policy through interaction data, improving adaptability and task performance [22].
3. Inference-Time Expansion: Improves performance by expanding the inference process (e.g., sampling or search) without modifying model parameters [22].
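To make the observation-encoding shift concrete, here is a minimal PyTorch sketch of the unified cross-modal idea: image patches (ViT-style) and language tokens are projected into one embedding space and fused by a single Transformer. All dimensions, layer counts, and the toy vocabulary are illustrative assumptions, not the survey's specification.

```python
import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    """Minimal sketch of unified observation encoding: image patches and
    language tokens share one embedding space and one fusion Transformer.
    Sizes and layer counts are illustrative assumptions."""
    def __init__(self, dim=256, patch=16, vocab=1000):
        super().__init__()
        # ViT-style patch embedding: a strided conv slices the image into patches.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.text_embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, token_ids):
        # image: (B, 3, H, W) -> (B, num_patches, dim)
        patches = self.patch_embed(image).flatten(2).transpose(1, 2)
        words = self.text_embed(token_ids)          # (B, seq, dim)
        fused = torch.cat([patches, words], dim=1)  # single joint token stream
        return self.fusion(fused)

enc = CrossModalEncoder()
out = enc(torch.randn(1, 3, 224, 224), torch.randint(0, 1000, (1, 12)))
print(out.shape)  # torch.Size([1, 208, 256]) -> 196 image patches + 12 text tokens
```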
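The survey names the Diffusion Transformer among the newer inference backbones. Below is a deliberately crude, untrained sketch of the core idea behind a diffusion-style action head: an action is recovered from Gaussian noise by iterative denoising. The tiny MLP denoiser, the 10-step schedule, and the Euler-style update are toy assumptions, not the survey's architecture.

```python
import torch
import torch.nn as nn

ACTION_DIM, STEPS = 7, 10  # 7-DoF action and a 10-step schedule (assumptions)

# Toy stand-in for a Diffusion Transformer: a small MLP that predicts the
# noise component of a partially denoised action, conditioned on the timestep.
denoiser = nn.Sequential(
    nn.Linear(ACTION_DIM + 1, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM)
)

@torch.no_grad()
def sample_action() -> torch.Tensor:
    a = torch.randn(1, ACTION_DIM)                  # start from pure noise
    for t in reversed(range(STEPS)):
        t_embed = torch.full((1, 1), t / STEPS)     # normalized timestep input
        eps = denoiser(torch.cat([a, t_embed], 1))  # predicted noise
        a = a - eps / STEPS                         # crude Euler-style denoising
    return a

print(sample_action().shape)  # torch.Size([1, 7])
```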
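The move from discrete token decoding to continuous control can also be illustrated. The numpy sketch below shows the binning scheme that discrete action decoders rely on, and the quantization error that continuous heads avoid; the 7-DoF layout and 256-bin count are assumptions for illustration, not the survey's exact scheme.

```python
import numpy as np

# Hypothetical 7-DoF action (6 joint deltas + gripper), normalized to [-1, 1].
NUM_BINS = 256  # bin count is an assumption for illustration

def discretize_action(action: np.ndarray, num_bins: int = NUM_BINS) -> np.ndarray:
    """Map each continuous action dimension to a discrete token id."""
    clipped = np.clip(action, -1.0, 1.0)
    # Linearly bucket [-1, 1] into num_bins intervals.
    return np.floor((clipped + 1.0) / 2.0 * (num_bins - 1)).astype(np.int64)

def undiscretize_action(tokens: np.ndarray, num_bins: int = NUM_BINS) -> np.ndarray:
    """Recover an approximate continuous action from token ids."""
    return tokens.astype(np.float64) / (num_bins - 1) * 2.0 - 1.0

action = np.array([0.12, -0.83, 0.0, 0.5, -0.25, 0.99, 1.0])
tokens = discretize_action(action)
recovered = undiscretize_action(tokens)
print(tokens)                      # e.g. [142  21 127 191  95 253 255]
print(np.abs(action - recovered))  # quantization error bounded by bin width
```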
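Supervised fine-tuning on labeled trajectories reduces, in its simplest form, to behavior cloning: regress the expert's action from the observation. A minimal sketch, assuming random stand-in data and a toy policy head:

```python
import torch
import torch.nn as nn

obs = torch.randn(256, 512)            # stand-in fused observation features
expert_actions = torch.randn(256, 7)   # stand-in labeled continuous actions

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7))
optim = torch.optim.AdamW(policy.parameters(), lr=1e-4)

for step in range(100):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, expert_actions)  # action-regression loss
    optim.zero_grad()
    loss.backward()
    optim.step()
print(f"final behavior-cloning loss: {loss.item():.4f}")
```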
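Inference-time expansion leaves the weights untouched and spends extra compute at decision time instead. One common pattern is best-of-N sampling with a scorer; the sampler and scorer below are stand-ins (assumptions), not the survey's method.

```python
import torch

@torch.no_grad()
def best_of_n(sample_action, score, n=16):
    """Sample n candidate actions from a frozen policy and keep the best."""
    candidates = [sample_action() for _ in range(n)]
    scores = torch.stack([score(a) for a in candidates])
    return candidates[int(scores.argmax())]

sample_action = lambda: torch.randn(7)  # stand-in stochastic policy head
score = lambda a: -a.abs().sum()        # stand-in value/feasibility scorer
print(best_of_n(sample_action, score))
```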
Evaluation of VLA Models

The evaluation framework for VLA models covers three settings:
1. Real-World Evaluation: Tests the model in real environments; results are reliable, but cost is high and repeatability is low [24].
2. Simulator Evaluation: Uses high-fidelity simulation platforms, allowing large-scale experiments but with potential gaps from real-world performance [24].
3. World-Model Evaluation: Uses a learned environment model for virtual assessment, reducing cost but depending on the world model's accuracy (a stand-in rollout sketch appears at the end of the article) [24].

Future Directions for VLA Models

Future research on VLA models will focus on:
1. Generalizable Reasoning: Improving adaptation to unknown tasks and environments by integrating logical reasoning with robotic manipulation [26].
2. Fine-Grained Manipulation: Handling complex tasks by fusing multi-modal sensory information for precise interaction modeling [26].
3. Real-Time Inference: Efficient architectures and model compression to meet high-frequency control demands (see the quantization sketch at the end of the article) [27].
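World-model evaluation replaces the real robot or simulator with a learned dynamics model and scores the policy on imagined rollouts. A minimal sketch, in which both the dynamics model and the policy are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    """Stand-in learned dynamics: next_state = f(state, action) + noise."""
    return 0.9 * state + 0.1 * action + rng.normal(0, 0.01, state.shape)

def policy(state):
    """Stand-in VLA policy head producing a continuous action."""
    return -0.5 * state

def evaluate(policy, horizon=50, goal=0.0):
    """Score the policy on an imagined rollout instead of a real robot."""
    state = rng.normal(0, 1, (7,))
    cost = 0.0
    for _ in range(horizon):
        action = policy(state)
        state = world_model(state, action)
        cost += np.abs(state - goal).mean()  # distance-to-goal as a proxy score
    return cost / horizon

print(f"imagined-rollout cost: {evaluate(policy):.4f}")
```

The obvious caveat, as the survey notes, is that the verdict is only as trustworthy as the learned dynamics model itself.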
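For the real-time inference direction, model compression is one available lever. The sketch below applies PyTorch dynamic quantization to a toy policy head and checks its latency against an assumed 50 Hz control budget; the network and the budget are illustrative, not numbers from the survey.

```python
import time
import torch
import torch.nn as nn

# Toy policy head; dynamic quantization converts its Linear layers to int8.
policy = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 7))
quantized = torch.quantization.quantize_dynamic(policy, {nn.Linear}, dtype=torch.qint8)

obs = torch.randn(1, 512)
start = time.perf_counter()
for _ in range(100):
    quantized(obs)  # inference with the compressed model
latency_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"mean latency: {latency_ms:.3f} ms; 50 Hz control budget = 20 ms")
```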