Zero-Shot Imitation for Robots
Yann LeCun's first paper after leaving Meta? The research uses a Unitree robot
机器之心· 2025-12-06 04:08
Core Insights
- The article covers a research paper introducing GenMimic, a method that enables humanoid robots to perform actions generated by AI video models without any prior demonstrations [1][3][4].

Research Contributions
- The research presents a general framework for humanoid robots to execute actions generated by video models [4].
- GenMimic trains its policy with a reinforcement learning strategy that combines symmetric regularization with selectively weighted 3D keypoint rewards, allowing the policy to generalize to noisy synthetic videos [4] (a minimal illustrative sketch of this reward design appears after the summary).
- The team created a synthetic human action dataset, GenMimicBench, which serves as a scalable benchmark for evaluating zero-shot generalization and policy robustness [4][8].

GenMimicBench Dataset
- GenMimicBench consists of 428 generated videos produced with the video generation models Wan2.1 and Cosmos-Predict2 [9][11].
- The dataset covers a wide range of subjects, environments, and action types, from simple gestures to complex interactions with objects [11][13].
- It is designed to stress-test the robustness of humanoid control policies under varying visual and action distributions [13].

Methodology Overview
- The proposed method executes humanoid robot actions from generated videos in two stages (see the pipeline sketch after the summary) [15][17].
- The first stage reconstructs a 4D humanoid model from the input RGB video; the second stage translates this model into executable robot actions [17][18].
- The policy tracks 3D keypoints rather than joint angles, which makes it robust to variations and noise in the input data [19][20].

Experimental Results
- The team ran extensive experiments on both the GenMimicBench dataset and a real-world 23-DoF humanoid robot, showing clear improvements over strong baselines [29][30].
- In simulation, GenMimic achieved a success rate (SR) of 29.78% and outperformed existing models across multiple metrics [31].
- Real-world experiments showed that the policy reproduced a wide range of upper-body actions, although lower-body movements remained challenging [34][35].
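To make the two-stage design concrete, here is a minimal Python sketch of the data flow described under Methodology Overview. It is an illustration under assumptions, not the paper's implementation: the function names (reconstruct_4d_motion, retarget_to_robot, run_tracking_policy), keypoint counts, and array shapes are hypothetical placeholders, and a real system would plug a video-to-4D reconstruction model and a trained RL policy into the marked stages.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical sketch of the two-stage pipeline: stage one lifts an RGB video
# to a 4D motion representation (reduced here to per-frame 3D keypoints);
# stage two tracks those keypoints with a reinforcement-learning policy.
# All names, shapes, and constants are illustrative placeholders.

@dataclass
class HumanMotion4D:
    keypoints: np.ndarray  # (T, K, 3) keypoint trajectories over T frames
    fps: float


def reconstruct_4d_motion(video_frames: np.ndarray) -> HumanMotion4D:
    """Stage 1 placeholder: a video-to-4D human reconstruction model would run
    here; this stub returns a zero trajectory with one keypoint set per frame."""
    num_frames = len(video_frames)
    return HumanMotion4D(keypoints=np.zeros((num_frames, 8, 3)), fps=30.0)


def retarget_to_robot(motion: HumanMotion4D, scale: float = 0.9) -> np.ndarray:
    """Map human keypoints onto the robot's body scale (simplified here to a
    uniform scaling; real retargeting would account for limb lengths)."""
    return motion.keypoints * scale


def run_tracking_policy(target_keypoints: np.ndarray) -> list:
    """Stage 2 placeholder: a trained RL policy would output joint-level
    actions that track the target keypoints frame by frame."""
    actions = []
    for frame_targets in target_keypoints:
        # real system: action = policy(robot_observation, frame_targets)
        actions.append(np.zeros(23))  # 23-DoF action vector, per the article
    return actions


if __name__ == "__main__":
    dummy_video = np.zeros((120, 480, 640, 3), dtype=np.uint8)  # ~4 s at 30 fps
    motion = reconstruct_4d_motion(dummy_video)
    targets = retarget_to_robot(motion)
    actions = run_tracking_policy(targets)
    print(f"produced {len(actions)} action frames for a 23-DoF humanoid")
```

The point of the split is that only stage one touches pixels; the tracking policy in stage two only sees retargeted 3D keypoints, which is what allows it to tolerate noisy, synthetic input videos.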
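The reward design mentioned under Research Contributions can likewise be sketched in a few lines. The keypoint list, weights, exponential reward shape, and mirror handling below are assumptions for illustration only; they show the general idea of selectively weighting 3D keypoint tracking errors and regularizing left/right symmetry, not the exact terms used in GenMimic.

```python
import numpy as np

# Hypothetical sketch of a selectively weighted 3D keypoint reward plus a
# symmetry regularization penalty. All constants are illustrative assumptions.

# Per-keypoint weights: end effectors weighted more heavily than torso/feet,
# which is one way to realize "selectively weighted" keypoint rewards.
KEYPOINT_WEIGHTS = np.array([
    1.0,  # head
    2.0,  # left hand
    2.0,  # right hand
    1.5,  # left elbow
    1.5,  # right elbow
    0.5,  # pelvis
    0.5,  # left foot
    0.5,  # right foot
])


def keypoint_tracking_reward(robot_kp: np.ndarray,
                             target_kp: np.ndarray,
                             sigma: float = 0.1) -> float:
    """Weighted exponential reward on 3D keypoint position error.

    robot_kp, target_kp: (K, 3) keypoint positions in a shared frame.
    """
    errors = np.linalg.norm(robot_kp - target_kp, axis=-1)  # (K,)
    weights = KEYPOINT_WEIGHTS / KEYPOINT_WEIGHTS.sum()
    return float(np.exp(-np.sum(weights * errors ** 2) / sigma ** 2))


def symmetry_penalty(action_left: np.ndarray,
                     action_right: np.ndarray,
                     mirror_sign: np.ndarray) -> float:
    """Penalty encouraging mirrored joints to produce mirrored actions;
    mirror_sign flips axes whose sign convention differs between sides
    (an assumption about the joint layout)."""
    return float(np.mean((action_left - mirror_sign * action_right) ** 2))


def total_reward(robot_kp, target_kp, action_left, action_right,
                 mirror_sign, sym_coef: float = 0.1) -> float:
    # Combined objective: track the reference keypoints while penalizing
    # asymmetric left/right behavior.
    return (keypoint_tracking_reward(robot_kp, target_kp)
            - sym_coef * symmetry_penalty(action_left, action_right, mirror_sign))
```

One plausible rationale, stated here as an assumption rather than the paper's reasoning: weighting hands and arms more heavily emphasizes the upper-body motions the generated videos depict, while the symmetry term discourages lopsided compensation behaviors when the reconstructed motion is noisy.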