What Exactly Makes the "Strongest Embodied VLA Model" So Strong?
量子位·2025-11-20 00:30

Core Insights

- The article examines the robot foundation model π0.6, which performs complex real-world tasks with success rates above 90% [2][10].

Group 1: Model Overview

- π0.6 is the latest VLA (Vision-Language-Action) model, built on the earlier π0.5, and introduces a new training method called RECAP (RL with Experience and Corrections via Advantage-conditioned Policies) [8][10].
- RECAP lets robots learn from their own mistakes, moving beyond pure imitation learning toward learning from the robot's own execution experience [3][29].

Group 2: RECAP Methodology

- RECAP proceeds in three stages: guidance through human demonstration, correction through expert intervention, and practice through autonomous experience [7][12].
- The model uses a learned value function to evaluate actions, which helps it identify advantageous actions and improves learning efficiency [19][22] (a minimal value-function sketch follows this summary).

Group 3: Training Process

- Training uses offline reinforcement learning over diverse data sources, including human demonstrations, expert corrections, and the robot's autonomous attempts, to fit both the value function and the policy [20][22] (an advantage-conditioning sketch appears below).
- The architecture was also scaled up: the backbone grew from Gemma (2.6B) to Gemma3 (4B), and the Action Expert grew to 860M parameters [25] (see the config sketch at the end).

Group 4: Performance Evaluation

- On complex tasks such as folding clothes and making espresso, RECAP roughly doubled task throughput and cut failure rates by about half compared with models trained using supervised fine-tuning alone [27].
- The model also proved highly stable, performing tasks for extended periods without human intervention [28].

Group 5: Learning from Failures

- The model's ability to learn from failures is highlighted as the key advance: it extracts useful learning signal even from imperfect experience [29][56].
- This points future robotics research toward learning from real-world execution rather than relying solely on idealized demonstrations [56].
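To make the value-function step concrete, here is a minimal PyTorch sketch of one plausible way to train it: regress a small value head toward discounted Monte Carlo returns over episodes pooled from demonstrations, corrections, and autonomous rollouts. This is an illustration of the idea, not Physical Intelligence's code; the network shape, embedding size, and discount factor are all assumptions.

```python
# A minimal sketch of value-function training over mixed-source episodes.
# Architecture and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

GAMMA = 0.99  # assumed discount factor

class ValueHead(nn.Module):
    """Maps a VLM observation embedding to a scalar state value."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, obs_embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(obs_embedding).squeeze(-1)

def discounted_returns(rewards: torch.Tensor) -> torch.Tensor:
    """Monte Carlo returns G_t = r_t + gamma * G_{t+1} for one episode."""
    returns = torch.zeros_like(rewards)
    running = torch.zeros(())
    for t in reversed(range(len(rewards))):
        running = rewards[t] + GAMMA * running
        returns[t] = running
    return returns

def value_loss(value_head: ValueHead,
               obs_emb: torch.Tensor,
               rewards: torch.Tensor) -> torch.Tensor:
    """Regress predicted values toward returns. Episodes may come from
    human demos, expert corrections, or autonomous rollouts alike."""
    target = discounted_returns(rewards)
    return nn.functional.mse_loss(value_head(obs_emb), target)
```

Because the regression target is just the observed return, failed episodes contribute training signal exactly like successful ones, which is what lets the value function grade mixed-quality data.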
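The policy side can then be conditioned on how advantageous each recorded action was. The sketch below computes one-step advantages from the learned value function, discretizes them into a conditioning token, and trains with an imitation-style loss. The good/bad binarization, the `policy(obs, condition=...)` interface, and the MSE action loss (standing in for the model's actual action objective) are all assumptions, not details from the article.

```python
# A hedged sketch of advantage-conditioned imitation, not the paper's
# exact scheme: score each step with the value function, tag it with a
# discrete advantage token, and imitate conditioned on that token.
import torch

def advantages(values: torch.Tensor, rewards: torch.Tensor,
               gamma: float = 0.99) -> torch.Tensor:
    """One-step advantage A_t = r_t + gamma * V(s_{t+1}) - V(s_t).
    The final step bootstraps with V = 0 (episode end)."""
    next_values = torch.cat([values[1:], values.new_zeros(1)])
    return rewards + gamma * next_values - values

def advantage_tokens(adv: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """Discretize advantages: 1 = better than the policy's average, 0 = worse.
    A single-bit scheme is an assumption for simplicity."""
    return (adv > threshold).long()

def policy_loss(policy, obs, actions, adv_tok):
    """Imitation loss conditioned on the advantage token. `policy` is a
    hypothetical callable; MSE stands in for the real action objective."""
    pred = policy(obs, condition=adv_tok)
    return torch.nn.functional.mse_loss(pred, actions)
```

At deployment, pinning the condition token to 1 asks the policy to reproduce only behavior the value function rated as better than average, which is the mechanism by which imperfect experience still yields a useful learning signal.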
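Finally, the reported scale-up can be summarized in a small config. Only the sizes (Gemma 2.6B to Gemma3 4B backbone, 860M Action Expert) come from the article; the field names are hypothetical.

```python
# Illustrative config for the scale-up described above; field names are
# assumptions, only the sizes are from the article.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pi06Config:
    vlm_backbone: str = "Gemma3-4B"           # up from Gemma (2.6B) in pi-0.5
    action_expert_params: int = 860_000_000   # Action Expert grown to 860M
    conditions_on_advantage: bool = True      # RECAP advantage conditioning
```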