Latest from Mu Yao's team! Discrete diffusion meets VLA, supporting precise action modeling and consistency training
具身智能之心· 2025-09-02 00:03
Core Viewpoint - The article introduces the Discrete Diffusion VLA model, which brings discrete diffusion into the Vision-Language-Action (VLA) framework to improve the efficiency and accuracy of robotic action decoding [4][7].

Group 1: Background and Problem Statement
- VLA models enable robots to understand visual and language inputs and execute corresponding action sequences. Current VLA frameworks typically adapt large pre-trained vision-language models (VLMs) by adding an action generation head [4].
- Existing decoding methods fall into two categories: autoregressive (AR) methods, which generate actions sequentially, and continuous diffusion methods, which treat action trajectories as continuous signals [4][6].

Group 2: Proposed Solution
- Discrete Diffusion VLA incorporates discrete diffusion into action decoding, using a single Transformer to unify the visual, language, and action modalities without additional training modules [6][12].
- The model employs a "first easy, then difficult" adaptive decoding strategy that decodes actions in parallel and revises low-confidence tokens for error correction, significantly improving accuracy [12][18].

Group 3: Performance Metrics
- On the LIBERO benchmark with the Franka Panda robotic arm, the model achieved a 96.3% success rate, outperforming traditional AR and continuous diffusion models [2][12].
- The Google robot setup reached a 71.2% visual matching rate, while the WidowX robot achieved a 49.3% overall success rate in real-to-sim transfer scenarios, demonstrating the model's robustness [2][25].

Group 4: Experimental Results
- Discrete Diffusion VLA consistently outperformed benchmarks, with an average success rate of 96.3% across tasks, surpassing the closest baseline, OpenVLA-OFT, by 0.8% [21][22].
- Its performance on visual matching and variant aggregation was likewise superior, with an overall average success rate of 64.1% across diverse scenarios [23][24].

Group 5: Ablation Studies
- Ablation studies showed that the adaptive decoding strategy significantly improves performance, with the "max confidence" strategy reaching a 97.4% success rate and outperforming the alternatives [27].
- The model's temperature scheduling also proved effective, likewise reaching a 97.4% success rate and validating the synergy between temperature adjustment and adaptive decoding [28].
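To make the "first easy, then difficult" idea concrete, here is a minimal mask-predict-style decoding sketch in NumPy. This is not the paper's implementation: `predict_fn`, the round count, and the commit schedule are all illustrative assumptions; in the real model, the unified Transformer would score discrete action tokens.

```python
import numpy as np

def adaptive_decode(predict_fn, seq_len, n_rounds=4):
    """Sketch of confidence-ordered parallel decoding ("first easy,
    then difficult"). All action tokens start masked; each round the
    model predicts every position in parallel and the most confident
    ones are committed first. `predict_fn(tokens, masked)` is a
    hypothetical stand-in for the Transformer and must return logits
    of shape (seq_len, vocab_size)."""
    tokens = np.full(seq_len, -1)            # -1 marks a [MASK] slot
    masked = np.ones(seq_len, dtype=bool)
    per_round = max(1, int(np.ceil(seq_len / n_rounds)))
    while masked.any():
        logits = predict_fn(tokens, masked)
        # softmax over the vocabulary gives per-position confidence
        p = np.exp(logits - logits.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        conf, pred = p.max(axis=-1), p.argmax(axis=-1)
        # commit the highest-confidence masked positions this round
        n_commit = min(per_round, int(masked.sum()))
        order = np.argsort(-np.where(masked, conf, -np.inf))
        commit = order[:n_commit]
        tokens[commit] = pred[commit]
        masked[commit] = False
        # the paper's error correction would also re-mask committed
        # tokens whose confidence later drops; omitted in this sketch
    return tokens
```

In the full method, committed tokens can be revisited in later rounds, which is where the error-correction behavior described above comes from; the sketch only shows the confidence-ordered commit schedule.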
A step-by-step walkthrough! ALOHA: a classic work combining low-cost bimanual robots with imitation learning
具身智能之心· 2025-06-27 08:36
Core Viewpoint - The article discusses the ALOHA system, a low-cost open-source hardware system for bimanual teleoperation, emphasizing its ability to perform precise manipulation tasks with affordable components and advanced learning algorithms [4][5][8].

Group 1: ALOHA System Overview
- ALOHA costs less than $20,000 and is designed to enable precise manipulation using two low-cost robotic arms and 3D-printed components [7][8].
- The system applies end-to-end imitation learning, collecting real demonstrations through a custom teleoperation interface [8][10].

Group 2: Challenges in Imitation Learning
- Imitation learning suffers from compounding errors: small prediction errors accumulate, driving the robot far from expert behavior [9][12].
- The article highlights the difficulty of modeling complex physical interactions, arguing that learning policies directly from demonstrations is more effective than modeling the entire environment [9][12].

Group 3: Action Chunking with Transformers (ACT)
- The ACT algorithm mitigates compounding errors by predicting sequences of actions ("chunks") rather than single steps, improving performance on highly complex tasks [12][13].
- ACT achieved an 80-90% success rate on tasks with only 10 minutes of demonstration data [12].

Group 4: Hardware Specifications
- ALOHA is built on principles of low cost, versatility, user-friendliness, repairability, and ease of assembly, using ViperX 6-DoF robotic arms [17][18].
- The system is designed for a range of tasks, including precise, contact-rich, and dynamic operations [20][22].

Group 5: Data Collection and Training
- Human demonstrations are collected to train the policy, recording the leader robot's joint positions to capture the operator's intent and force feedback [23][25].
- Training uses a conditional variational autoencoder (CVAE) to model human data and learn more robustly from noisy demonstrations [33][55].

Group 6: Experimental Results
- Experiments show that action chunking and temporal ensembling significantly improve the performance of the ACT algorithm [52][54].
- High-frequency control proves necessary: a 50 Hz control frequency enables more precise and agile task execution [56].
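The temporal ensembling mentioned in the experimental results can be sketched briefly. The snippet below is an illustrative sketch, not the ACT codebase: the `chunk_buffer` structure and the value of `m` are assumptions, and the exponential weighting `w_i = exp(-m * i)` with `w_0` as the oldest prediction follows the scheme described for ACT.

```python
import numpy as np

def ensembled_action(chunk_buffer, t, m=0.01):
    """Temporal ensembling sketch: at timestep t, average every action
    chunk that covers t, weighting older predictions more heavily with
    w_i = exp(-m * i), where w_0 corresponds to the oldest prediction.
    `chunk_buffer` is a hypothetical list of (start_step, chunk) pairs,
    each chunk being a (k, action_dim) array predicted at start_step."""
    preds = [chunk[t - s] for s, chunk in chunk_buffer
             if s <= t < s + len(chunk)]       # oldest query first
    w = np.exp(-m * np.arange(len(preds)))     # w[0] = oldest
    w /= w.sum()                               # normalize weights
    return (w[:, None] * np.asarray(preds)).sum(axis=0)
```

With `m = 0` this reduces to a plain average over the overlapping chunks; a larger `m` puts more weight on older predictions, smoothing the trajectory executed under 50 Hz control.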