From the University of Pennsylvania! MAESTRO: A VLM-Based Zero-Shot Generalist Robot Framework
具身智能之心· 2025-11-05 00:02
Core Insights
- MAESTRO is a modular robotic framework centered on Vision Language Models (VLMs), achieving zero-shot manipulation performance without large-scale training data while remaining scalable and debuggable [2][5][22]

Group 1: Innovation and Design
- Mainstream robotics development relies on large-scale "observation-action" datasets, which are costly and limited in coverage, hindering progress [4]
- MAESTRO takes a differentiated approach: it uses a VLM to avoid dependence on robot-specific data and integrates mature specialized tools to strengthen low-level manipulation [5][6]
- The framework employs a closed-loop interaction mechanism that continuously monitors environmental feedback and adjusts actions in real time, forming an adaptive perception-action-learning cycle (a minimal sketch of this loop follows the summary) [5][6]

Group 2: Core Module Toolset
- The modular design follows six principles, covering diverse robotic manipulation needs including perception, control, and geometry [8]
- Key modules include:
  - Perception: improves the accuracy of visual information through a hierarchical approach [10]
  - Control: integrates Cartesian control and collision-free motion planning for safety [10]
  - Geometry & Linear Algebra: provides tools for spatial reasoning [10]
  - Image Editing: improves visual grounding capabilities [10]
  - Mobile Manipulation Extensions: adapts to mobile-robot scenarios with navigation and active-perception tools [10]

Group 3: Evolution Mechanism
- MAESTRO records the code and outcomes of past task executions and supplies them to the VLM as contextual examples, improving code generation after only a handful of real-world trials (see the second sketch below) [12]

Group 4: Experimental Results and Performance Analysis
- In tabletop manipulation, MAESTRO significantly outperformed existing VLA models on six of seven tasks, particularly those requiring semantic reasoning and long-term memory [17]
- In mobile manipulation, MAESTRO achieved high completion rates, scoring 96.0±8.9 and 93.3±14.9 on specific tasks [17]
- The evolution mechanism raised completion on a door-opening task from 35% to 85.0±7.4 after three iterations [17]

Group 5: Key Module Ablation Analysis
- Removing the advanced perception modules sharply reduced task completion rates, showing that precise perception is essential for complex manipulation [20]
- Removing the geometry modules also hurt performance, underscoring the necessity of spatial-reasoning tools [20]

Group 6: Future Directions
- MAESTRO is positioned as an effective alternative to the large-scale robot-training path; future work aims to speed up VLM inference, improve low-level control, and increase reasoning stability in complex scenarios [22]
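To make the closed-loop, tool-driven design in Groups 1-2 concrete, below is a minimal Python sketch of the perception-action-feedback cycle summarized above. It is an illustration only: `ToolRegistry`, `vlm_generate_code`, and the tool names are hypothetical stand-ins, not MAESTRO's actual API.

```python
# Minimal sketch of a VLM-centered closed-loop agent with modular tools.
# All class/function names here are hypothetical; MAESTRO's real API differs.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ToolRegistry:
    """Registry of specialized modules (perception, control, geometry, ...)."""
    tools: Dict[str, Callable] = field(default_factory=dict)

    def register(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def describe(self) -> str:
        # Tool descriptions become part of the VLM prompt.
        return "\n".join(f"{name}: {fn.__doc__}" for name, fn in self.tools.items())

def vlm_generate_code(task: str, tool_docs: str, feedback: str) -> str:
    """Placeholder for the VLM call that writes tool-using code."""
    return "result = tools['perceive'](); tools['move_cartesian'](result)"

def run_closed_loop(task: str, registry: ToolRegistry, max_steps: int = 5) -> bool:
    feedback = "no actions taken yet"
    for _ in range(max_steps):
        code = vlm_generate_code(task, registry.describe(), feedback)
        try:
            exec(code, {"tools": registry.tools})   # act in the environment
        except Exception as err:                    # observe failure, adapt
            feedback = f"step failed: {err}"
            continue
        return True                                 # observed success
    return False

registry = ToolRegistry()
registry.register("perceive", lambda: {"target": [0.4, 0.0, 0.2]})  # stub perception
registry.register("move_cartesian", lambda pose: None)              # stub control
run_closed_loop("pick up the red cup", registry)
```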
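The evolution mechanism in Group 3 can be read as an experience buffer whose past (task, code, outcome) records are replayed into the VLM prompt as in-context examples. Below is a minimal sketch under that assumption; `Episode`, `ExperienceBuffer`, and the naive retrieval rule are all hypothetical:

```python
# Hypothetical sketch of the evolution mechanism: past (task, code, outcome)
# records are retrieved as in-context examples for the next VLM query.

from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    task: str
    code: str      # code the VLM generated for this attempt
    success: bool  # observed real-world outcome

class ExperienceBuffer:
    def __init__(self) -> None:
        self.episodes: List[Episode] = []

    def record(self, ep: Episode) -> None:
        self.episodes.append(ep)

    def as_prompt_context(self, task: str, k: int = 3) -> str:
        # Naive retrieval: recent attempts at similar tasks, labeled so the
        # VLM can imitate successes and avoid repeating failures.
        relevant = [e for e in self.episodes if task.split()[0] in e.task]
        lines = []
        for e in relevant[-k:]:
            tag = "SUCCEEDED" if e.success else "FAILED"
            lines.append(f"# Previous attempt ({tag}) for '{e.task}':\n{e.code}")
        return "\n\n".join(lines)

buffer = ExperienceBuffer()
buffer.record(Episode("open the door", "grasp(handle); pull(0.3)", False))
buffer.record(Episode("open the door", "grasp(handle); rotate(45); pull(0.3)", True))
print(buffer.as_prompt_context("open the cabinet door"))
```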
Latest from NVIDIA! GraspGen: A Diffusion-Model-Based 6-DOF Grasp Generation Framework
具身智能之心· 2025-07-21 08:42
Core Viewpoint
- GraspGen addresses the generalization challenge in 6-DOF grasping by modeling grasp generation as an iterative diffusion process, pairing a Diffusion Transformer architecture with an efficient discriminator for sample evaluation [2][21]

Group 1: Core Methodology
- GraspGen models 6-DOF grasp generation as a diffusion process in SE(3) space, using a Denoising Diffusion Probabilistic Model (DDPM) for faster computation and simpler implementation than traditional energy-based models [4]
- The framework uses PointTransformerV3 (PTv3) to convert unstructured point clouds into structured representations, reducing translation error by 5.3 mm and improving recall by 4% relative to PointNet++ [4]
- The noise-prediction network generates grasps through a 10-step denoising process, far fewer than the hundreds of steps typical of image diffusion (a minimal sketch follows this summary) [5]

Group 2: Discriminator Innovations
- GraspGen's discriminator reuses the generator's object encoder rather than owning its own, cutting memory usage by 21x relative to traditional methods (see the second sketch below) [7]
- The discriminator is trained on data produced by the generator itself, so it learns to recognize the generator's failure modes such as collisions and distant grasps, reaching an AUC of 0.947 versus 0.886 when trained only on offline data [16][21]

Group 3: Experimental Results
- In single-object scenes, GraspGen's precision-recall AUC exceeds the baseline by 48% on the ACRONYM dataset, demonstrating the discriminator's importance [10]
- In cluttered scenes, GraspGen achieves the highest task and grasp success rates, outperforming Contact-GraspNet by 16.9% and M2T2 by 7.8% [13]
- Real-robot experiments on a UR10 arm show an overall success rate of 81.3% across scenarios, well above M2T2 (28%) and AnyGrasp (17.6%) [19]

Group 4: Limitations and Future Directions
- GraspGen shows weaker performance on cubic objects and depends heavily on the quality of depth sensing and instance segmentation; training requires roughly 3,000 GPU hours [21]
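As a rough illustration of the 10-step denoising loop described in Group 1, the sketch below runs reverse diffusion over a single grasp pose, treating rotation noise as an axis-angle perturbation on SO(3) and translation noise as Gaussian in R^3. This is a deliberate simplification with a toy linear schedule and a stubbed noise predictor; GraspGen's actual SE(3) parameterization, noise schedule, and Diffusion Transformer are not reproduced here.

```python
# Simplified sketch of 10-step denoising over an SE(3) grasp pose.
# Not GraspGen's actual formulation: toy schedule, stubbed predictor.

import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_noise(rot: R, trans: np.ndarray, t: int, obj_feat: np.ndarray):
    """Stub for the noise-prediction network (object features from PTv3 in)."""
    rot_noise = 0.01 * np.ones(3)    # predicted axis-angle noise component
    trans_noise = 0.01 * np.ones(3)  # predicted translation noise component
    return rot_noise, trans_noise

def denoise_grasp(obj_feat: np.ndarray, steps: int = 10, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Start from pure noise: a random rotation and a Gaussian translation.
    rot = R.random(random_state=seed)
    trans = rng.normal(scale=0.5, size=3)
    for t in reversed(range(steps)):
        scale = (t + 1) / steps                    # toy linear noise schedule
        rot_eps, trans_eps = predict_noise(rot, trans, t, obj_feat)
        # Remove predicted noise: compose the inverse rotation increment on
        # SO(3), subtract the translation increment in R^3.
        rot = R.from_rotvec(-scale * rot_eps) * rot
        trans = trans - scale * trans_eps
    return rot.as_matrix(), trans

obj_feat = np.zeros(128)                           # placeholder object embedding
grasp_R, grasp_t = denoise_grasp(obj_feat)
print(grasp_R.shape, grasp_t)                      # (3, 3) rotation + translation
```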
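The shared-encoder idea behind the 21x memory saving in Group 2 can be sketched as follows: the discriminator holds a reference to the generator's object encoder rather than instantiating a second copy. The PyTorch module names below are illustrative, not GraspGen's actual classes.

```python
# Hypothetical PyTorch sketch of the shared-encoder discriminator: it reuses
# the generator's object encoder instead of owning a duplicate.

import torch
import torch.nn as nn

class ObjectEncoder(nn.Module):
    """Stand-in for the point-cloud encoder shared by generator and discriminator."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> pooled per-object feature (B, feat_dim)
        return self.mlp(points).max(dim=1).values

class GraspDiscriminator(nn.Module):
    def __init__(self, shared_encoder: ObjectEncoder, feat_dim: int = 256):
        super().__init__()
        self.encoder = shared_encoder            # reused reference, not a copy
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 7, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, points: torch.Tensor, grasp: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                    # encoder weights stay frozen here
            feat = self.encoder(points)
        # grasp: (B, 7) pose as quaternion + translation
        return self.head(torch.cat([feat, grasp], dim=-1)).squeeze(-1)

encoder = ObjectEncoder()                        # owned by the generator
disc = GraspDiscriminator(encoder)               # shares the same weights
scores = disc(torch.randn(4, 1024, 3), torch.randn(4, 7))
print(scores.shape)                              # torch.Size([4])
```

Training the scoring head on grasps sampled from the generator itself, rather than only on offline data, is what lets it learn the generator's own failure modes, consistent with the AUC comparison reported above.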