Latest from Tsinghua University! RoboScape: A Physics-Informed Embodied World Model with a 68.3% Gain in Action Controllability
具身智能之心·2025-07-02 07:44

Core Insights
- The article discusses the limitations of existing embodied intelligence models in physical perception, particularly in contact-rich robot scenarios, and highlights the need to integrate physical knowledge into these models more effectively [3][20].

Research Background and Core Issues
- Current models rely heavily on fitting visual tokens and lack awareness of physical knowledge, which leads to unrealistic object deformation and motion discontinuities in generated videos [3].
- Previous attempts to integrate physical knowledge have been confined to narrow domains or have required complex pipelines, pointing to the need for a unified and efficient framework [3].

Core Methodology
- The core idea is to learn an embodied world model as a dynamics function that predicts the next visual observation from past observations and robot actions (a minimal interface sketch appears at the end of this article) [4].
- A four-step processing pipeline is designed to build a multimodal dataset with physical priors on top of the AGIBOT-World dataset [5].

Data Processing Pipeline
- The pipeline consists of physical attribute annotation, video slicing, segment filtering, and segment classification, ensuring effective training data (sketched in code at the end) [5][8].

Temporal Depth Prediction
- A dual-branch cooperative autoregressive Transformer (DCT) is introduced to strengthen 3D geometric consistency, with temporal and spatial attention layers that keep generation causal (see the attention-block sketch at the end) [7].

Adaptive Keypoint Dynamic Learning
- The model tracks contact-driven keypoints in a self-supervised manner to implicitly encode material properties, improving the modeling of object deformation and motion patterns [8].

Joint Training Objectives
- The overall training objective combines the per-component losses with weights that balance their contributions (see the loss sketch at the end) [10].

Experimental Validation
- The model is evaluated on appearance fidelity, geometric consistency, and action controllability, and outperforms the baseline models on these measures [12][18].

Dataset and Implementation Details
- The study uses the AgiBotWorldBeta dataset, comprising 50,000 video segments across 147 tasks, and compares against strong recent models as baselines [13].

Downstream Application Validation
- The model is effective for training robot policies: policies trained on generated data come close to the performance of policies trained on real data, indicating that the generated data is useful for complex tasks (see the policy-training sketch at the end) [16].

Conclusion and Future Plans
- RoboScape integrates physical knowledge into video generation without relying on external physics engines; the authors plan to combine generative world models with real robots for further validation [20][21].
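
The sketches below illustrate several of the components summarized above; they are minimal reconstructions under stated assumptions, not RoboScape's released implementation. First, the "world model as a dynamics function" framing from Core Methodology: a plausible autoregressive interface that maps an observation history and robot actions to the next observation. The class name, token dimensions, and action dimension are illustrative assumptions.

```python
# Minimal sketch of a world model as a dynamics function:
# o_{t+1} ~ f(o_{<=t}, a_t). Names and shapes are assumptions,
# not RoboScape's actual architecture.
import torch
import torch.nn as nn

class EmbodiedWorldModel(nn.Module):
    def __init__(self, obs_dim=256, act_dim=7, hidden=512, n_layers=4, n_heads=8):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, hidden)   # encode visual observation tokens
        self.act_proj = nn.Linear(act_dim, hidden)   # encode robot actions
        layer = nn.TransformerEncoderLayer(hidden, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(hidden, obs_dim)       # predict next observation tokens

    def forward(self, obs_history, actions):
        # obs_history: (B, T, obs_dim), actions: (B, T, act_dim)
        x = self.obs_proj(obs_history) + self.act_proj(actions)
        # causal mask so frame t only attends to frames <= t
        T = x.shape[1]
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=mask)
        return self.head(h)  # predicted next observation at each timestep

# usage
model = EmbodiedWorldModel()
obs = torch.randn(2, 16, 256)
act = torch.randn(2, 16, 7)
next_obs_pred = model(obs, act)  # shape (2, 16, 256)
```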
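The four-step data pipeline (physical attribute annotation, video slicing, segment filtering, segment classification) could be organized as below. Only the step names come from the article; every function body, field name, and threshold is a simplified placeholder.

```python
# Sketch of the four-step data pipeline named in the article
# (annotation -> slicing -> filtering -> classification).
# All bodies are simplified placeholders, not the paper's actual rules.

def annotate_physical_attributes(episode):
    """Step 1: attach physical priors (here just an empty dict per frame)."""
    episode["attrs"] = [{} for _ in episode["frames"]]
    return episode

def slice_video(episode, clip_len=8):
    """Step 2: cut one long episode into fixed-length clips."""
    frames, actions, attrs = episode["frames"], episode["actions"], episode["attrs"]
    return [
        {"frames": frames[i:i + clip_len],
         "actions": actions[i:i + clip_len],
         "attrs": attrs[i:i + clip_len]}
        for i in range(0, len(frames) - clip_len + 1, clip_len)
    ]

def filter_segments(segments, min_len=8):
    """Step 3: drop clips that are too short (stand-in for quality filtering)."""
    return [s for s in segments if len(s["frames"]) >= min_len]

def classify_segments(segments):
    """Step 4: tag each clip (stand-in for task/interaction classification)."""
    for s in segments:
        s["category"] = "manipulation"
    return segments

def build_dataset(episodes):
    segments = []
    for ep in episodes:
        segments.extend(slice_video(annotate_physical_attributes(ep)))
    return classify_segments(filter_segments(segments))

# usage with toy data: one 20-frame episode yields 2 clips of 8 frames
toy = [{"frames": list(range(20)), "actions": list(range(20))}]
print(len(build_dataset(toy)))  # 2
```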
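One plausible reading of the dual-branch transformer described under Temporal Depth Prediction is that each branch (RGB video and depth) interleaves spatial attention within a frame with causally masked temporal attention across frames. The block below sketches that pattern; the exact layer layout and how the two branches exchange features are assumptions.

```python
# Sketch of an interleaved spatial/temporal attention block with a causal
# temporal mask, in the spirit of the dual-branch transformer described above.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        # x: (B, T, N, D) = batch, frames, patch tokens, channels
        B, T, N, D = x.shape

        # spatial attention: tokens within one frame attend to each other
        s = x.reshape(B * T, N, D)
        s = s + self.spatial_attn(self.norm1(s), self.norm1(s), self.norm1(s))[0]

        # temporal attention: each patch position attends causally over time
        t = s.reshape(B, T, N, D).permute(0, 2, 1, 3).reshape(B * N, T, D)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        t = t + self.temporal_attn(self.norm2(t), self.norm2(t), self.norm2(t),
                                   attn_mask=causal)[0]
        return t.reshape(B, N, T, D).permute(0, 2, 1, 3)

# usage: an RGB branch and a depth branch could each stack such blocks
x = torch.randn(2, 4, 16, 256)         # 2 clips, 4 frames, 16 patches, dim 256
print(SpatioTemporalBlock()(x).shape)  # torch.Size([2, 4, 16, 256])
```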
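For the joint training objective and the keypoint-dynamics term it includes, a common pattern is a weighted sum of per-component losses. The specific terms below (MSE on RGB, L1 on depth, MSE on tracked keypoint positions) and the weights are assumptions for illustration; the article does not report the exact formulation.

```python
# Sketch of a weighted joint objective combining an appearance loss, a
# depth-consistency loss, and a keypoint-dynamics loss. Terms and weights
# are illustrative assumptions, not the paper's exact objective.
import torch
import torch.nn.functional as F

def joint_loss(pred_rgb, gt_rgb, pred_depth, gt_depth,
               pred_kpts, tracked_kpts, w_depth=0.5, w_kpt=0.1):
    l_rgb = F.mse_loss(pred_rgb, gt_rgb)         # appearance fidelity
    l_depth = F.l1_loss(pred_depth, gt_depth)    # 3D geometric consistency
    l_kpt = F.mse_loss(pred_kpts, tracked_kpts)  # contact-driven keypoint dynamics
    return l_rgb + w_depth * l_depth + w_kpt * l_kpt

# usage with dummy tensors: batch 2, 4 frames, 32x32 resolution, 8 keypoints
B, T, H, W, K = 2, 4, 32, 32, 8
loss = joint_loss(torch.rand(B, T, 3, H, W), torch.rand(B, T, 3, H, W),
                  torch.rand(B, T, 1, H, W), torch.rand(B, T, 1, H, W),
                  torch.rand(B, T, K, 2), torch.rand(B, T, K, 2))
print(float(loss))
```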
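Finally, for the downstream validation (training robot policies on generated rather than real data), a minimal behavior-cloning loop over generated clips might look like this. The policy architecture, feature dimension, and action dimension are assumptions; the article only states that policies trained on generated data approach real-data performance.

```python
# Sketch of behavior cloning on world-model-generated clips.
# Policy architecture and data shapes are illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_on_clips(clips, epochs=1):
    """clips: list of (obs_features, actions) pairs, real or generated."""
    for _ in range(epochs):
        for obs, act in clips:
            loss = nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()

# usage with toy generated data: 10 clips of 16 steps each
generated = [(torch.randn(16, 256), torch.randn(16, 7)) for _ in range(10)]
train_on_clips(generated)
```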