What exactly makes the "strongest embodied VLA model" so strong?
36Kr · 2025-11-20 07:38
Core Insights
- The core contribution of the π*0.6 model is RECAP, a more intuitive training method that lets robots learn from their mistakes rather than merely imitating correct actions [3][8][24]
- The model achieves success rates above 90% on tasks such as making espresso, folding clothes, and assembling packaging boxes, demonstrating its practical capabilities [1][20]

Group 1: RECAP Methodology
- RECAP consists of three main phases: offline reinforcement learning (RL) on diverse demonstration data, fine-tuning with human guidance, and online execution in which the robot learns from sparse rewards and expert corrections [10][20]
- The method uses a learned value function to evaluate actions and an advantage-conditioned policy update, so that both successful and unsuccessful experience contribute learning signal (a minimal sketch of this mechanism follows this summary) [13][16][42]

Group 2: Model Architecture and Performance
- π*0.6 builds on previous versions, expanding the backbone from Gemma (2.6 billion parameters) to Gemma3 (4 billion parameters) and increasing the Action Expert to 860 million parameters [20]
- On challenging tasks, RECAP roughly doubled throughput (successful task completions per hour) and cut failure rates by about 50% compared with models trained only with supervised fine-tuning [20]

Group 3: Learning from Mistakes
- RECAP emphasizes learning from errors, enabling robots to recover from mistakes through expert intervention and self-correction, which is crucial for real-world deployment [24][28]
- By using the value function to assess the quality of actions, the model can identify key steps and sources of error, improving its ability to adapt in complex environments [39][41]
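To make the advantage-conditioning idea concrete, below is a minimal, self-contained Python/PyTorch sketch of how a value function can gate which actions a policy imitates. The class names, the discrete-action setup, and the binary "good/bad" conditioning token are assumptions for illustration only; the article does not publish π*0.6's actual architecture or training code.

```python
# Illustrative sketch (not pi*0.6's implementation): a value function scores
# each transition, and the policy is conditioned on whether the action beat
# that estimate, so failures also carry training signal.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Predicts the expected return V(s) from an observation."""
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs):
        return self.net(obs).squeeze(-1)

class AdvantageConditionedPolicy(nn.Module):
    """Action logits conditioned on a binary 'was this a good action' token."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.adv_embed = nn.Embedding(2, 8)            # token 0 = bad, 1 = good
        self.net = nn.Sequential(nn.Linear(obs_dim + 8, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs, adv_token):
        x = torch.cat([obs, self.adv_embed(adv_token)], dim=-1)
        return self.net(x)                             # action logits

def recap_style_update(policy, value_net, optimizer, obs, actions, returns):
    # 1. Score each transition: advantage = observed return minus V(s).
    with torch.no_grad():
        advantages = returns - value_net(obs)
        adv_token = (advantages > 0).long()            # 1 if the action beat the baseline

    # 2. Maximize likelihood of the taken action, conditioned on the token.
    #    At deployment the policy is prompted with the "good" token, so it
    #    reproduces only behaviors that outperformed the value baseline.
    logits = policy(obs, adv_token)
    loss = nn.functional.cross_entropy(logits, actions)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Toy usage with random data, just to show the shapes involved.
obs_dim, n_actions, batch = 16, 4, 32
policy, value_net = AdvantageConditionedPolicy(obs_dim, n_actions), ValueNet(obs_dim)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss = recap_style_update(policy, value_net, opt,
                          torch.randn(batch, obs_dim),
                          torch.randint(0, n_actions, (batch,)),
                          torch.randn(batch))
print(f"policy loss: {loss:.3f}")
```

The point of the conditioning token is that unsuccessful trajectories are not discarded: they train the "bad" branch of the policy and sharpen the value estimate, while inference uses only the "good" branch.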
What exactly makes the "strongest embodied VLA model" so strong?
QbitAI (量子位) · 2025-11-20 00:30
Core Insights
- The article discusses the robot foundation model π*0.6, which performs complex tasks with a success rate exceeding 90% [2][10]

Group 1: Model Overview
- π*0.6 is the latest VLA (Vision-Language-Action) model, building on the previous π0.5, and introduces a new training method called RECAP [8][10]
- RECAP allows robots to learn from their mistakes, shifting from traditional imitation learning to a more intuitive learning approach [3][29]

Group 2: RECAP Methodology
- RECAP consists of three main stages: guidance through human demonstration, correction through expert intervention, and practice through autonomous experience [7][12]
- The model uses a value function to evaluate actions, which helps identify advantageous actions and improves learning efficiency [19][22]

Group 3: Training Process
- Training involves offline reinforcement learning on diverse data sources, including human demonstrations and autonomous attempts, to fit the value function and policy (a sketch of the value-fitting step follows this summary) [20][22]
- The model's architecture has been enhanced, with the backbone expanding from Gemma (2.6B) to Gemma3 (4B) and Action Expert parameters increasing to 860M [25]

Group 4: Performance Evaluation
- In tests on complex tasks such as folding clothes and making espresso, RECAP doubled throughput and reduced failure rates by roughly 50% compared with models trained only with supervised fine-tuning [27]
- The model showed high stability, performing tasks for extended periods without human intervention [28]

Group 5: Learning from Failures
- The model's ability to learn from failures is highlighted as a significant advance, allowing it to extract effective learning signal from imperfect experience [29][56]
- This approach opens new directions for robotics research, emphasizing learning from real-world execution rather than relying solely on idealized demonstrations [56]
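As a rough illustration of the offline value-fitting step mentioned under Group 3, the sketch below regresses a small value network toward discounted returns-to-go computed from sparse success/failure rewards, so that failed and expert-corrected episodes contribute training signal alongside successes. The helper names, the plain Monte-Carlo return target, and the toy reward scheme are assumptions for illustration; they are not taken from the π*0.6 release.

```python
# Illustrative value-function fitting from mixed experience (not pi*0.6's recipe):
# regress V(s) toward the discounted return-to-go under sparse episode rewards.
import torch
import torch.nn as nn

def returns_to_go(rewards, gamma=0.99):
    """Discounted return from each timestep to the end of the episode."""
    out, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))

def fit_value_function(value_net, episodes, epochs=5, lr=1e-3, gamma=0.99):
    """episodes: list of (obs_tensor [T, obs_dim], rewards list of length T).
    Successes, failures, and expert-corrected segments all enter the same
    regression -- this is how imperfect experience still yields signal."""
    obs = torch.cat([o for o, _ in episodes])
    targets = torch.tensor([g for _, r in episodes for g in returns_to_go(r, gamma)])
    opt = torch.optim.Adam(value_net.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(value_net(obs), targets)
        opt.zero_grad(); loss.backward(); opt.step()
    return value_net

# Toy usage: one "successful" and one "failed" 10-step episode.
value_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1), nn.Flatten(0))
success = (torch.randn(10, 16), [0.0] * 9 + [1.0])   # sparse reward only at the end
failure = (torch.randn(10, 16), [0.0] * 10)
fit_value_function(value_net, [success, failure])
```

Once fitted, such a value function can be reused as the baseline in the advantage-conditioned update sketched after the first summary above, closing the loop between autonomous practice and policy improvement.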