GraspGen
University of Pennsylvania! MAESTRO: A Zero-Shot Generalist Robot Framework Built on VLMs
具身智能之心 · 2025-11-05 00:02
Author: Junyao Shi et al. Editor: 具身智能之心

MAESTRO is a modular robot framework centered on a vision-language model (VLM). By dynamically composing specialized modules for perception, planning, and control, it achieves zero-shot manipulation performance beyond existing vision-language-action (VLA) models without large-scale robot training data, while remaining extensible and debuggable.

Paper: https://arxiv.org/pdf/2511.00917

Core Architecture and Key Design

1. Overall framework

MAESTRO is built around a VLM coding agent. Given a language instruction and scene images, the agent dynamically writes code that composes tool modules into a programmatic policy. The framework runs closed-loop: during execution it continuously monitors environment feedback and adjusts its code and actions in real time, forming an adaptive "perceive-act-learn" cycle; a sketch of this loop follows the list below.

- Leverages the strong general-purpose capabilities of pretrained VLMs, avoiding dependence on robot-specific data;
- Integrates mature, specialized robotics tools through modular design, compensating for the VLM's weaknesses in low-level manipulation;
- Breaks through traditional ...
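To make the coding-agent loop concrete, here is a minimal Python sketch of how a VLM-written program could be composed, executed, and revised on environment feedback. All names here (vlm_generate_code, run_policy, the tools dict, the `feedback` variable convention) are hypothetical placeholders for illustration, not MAESTRO's actual API.

```python
# A minimal sketch (not MAESTRO's real API) of the closed "perceive-act-learn"
# loop described above: a VLM writes a program over tool modules, the program
# runs, and execution feedback drives the next code revision.

def vlm_generate_code(instruction, image, feedback):
    """Hypothetical call to a VLM that returns Python source composing tools."""
    raise NotImplementedError  # e.g., a prompt to a hosted VLM endpoint

def run_policy(code, tools):
    """Execute the VLM-written program against the tool library."""
    scope = dict(tools)                 # expose perception/planning/control tools
    try:
        exec(code, scope)               # the generated program drives the robot
        # Assumed convention: the generated policy reports status by setting
        # a `feedback` variable; runtime failures also become VLM feedback.
        return scope.get("feedback", "success")
    except Exception as err:
        return f"error: {err}"

def maestro_style_loop(instruction, camera, tools, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):         # closed loop: monitor, then revise code
        image = camera.capture()        # hypothetical camera interface
        code = vlm_generate_code(instruction, image, feedback)
        feedback = run_policy(code, tools)
        if feedback == "success":
            break
    return feedback
```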
NVIDIA's Latest! GraspGen: A Diffusion-Based Framework for 6-DOF Grasp Generation
具身智能之心 · 2025-07-21 08:42
Core Viewpoint

The GraspGen framework addresses the generalization challenge in 6-DOF grasping by modeling grasp generation as an iterative diffusion process, pairing a Diffusion Transformer architecture with an efficient discriminator for sample evaluation [2][21].

Group 1: Core Methodology

- GraspGen models 6-DOF grasp generation as a diffusion process in SE(3) space, using a Denoising Diffusion Probabilistic Model (DDPM), which is faster to compute and simpler to implement than traditional energy-based models [4].
- The framework employs PointTransformerV3 (PTv3) to convert unstructured point clouds into structured representations, reducing translation error by 5.3 mm and improving recall by 4% relative to PointNet++ [4].
- The noise-prediction network generates grasps through a 10-step denoising process, far fewer than the hundreds of steps typical of image diffusion [5]; a minimal sampling sketch appears after Group 4.

Group 2: Discriminator Innovations

- GraspGen's discriminator reuses the generator's object encoder, cutting memory usage 21-fold compared with traditional methods [7]; see the weight-sharing sketch below.
- The discriminator is trained on a dataset generated by the generator itself, which lets it better identify failure modes such as collisions and too-distant grasps, reaching an AUC of 0.947 versus 0.886 when trained on offline data alone [16][21].

Group 3: Experimental Results

- In single-object scenarios, GraspGen's precision-recall curve AUC exceeds the baseline by 48% on the ACRONYM dataset, underscoring the discriminator's importance [10]; a short PR-AUC snippet follows the sketches below.
- In cluttered scenes, GraspGen achieves the highest task success rate and grasp success rate, outperforming Contact-GraspNet by 16.9% and M2T2 by 7.8% [13].
- Real-robot experiments on a UR10 arm show an overall success rate of 81.3% across scenarios, far above M2T2 (28%) and AnyGrasp (17.6%) [19].

Group 4: Limitations and Future Directions

- GraspGen underperforms on cubical objects, depends heavily on the quality of depth sensing and instance segmentation, and requires roughly 3,000 GPU hours to train [21].
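To illustrate the sampling procedure from Group 1, below is a minimal sketch of 10-step DDPM denoising for grasp poses. The linear noise schedule, the 9-dimensional pose parameterization (a 6D rotation representation plus a 3D translation), and the eps_net interface are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of 10-step DDPM-style sampling for grasp poses, conditioned
# on point-cloud features from the object encoder. Schedule and pose
# parameterization are assumed, not taken from the GraspGen paper.
import torch

T = 10                                   # GraspGen denoises in ~10 steps
betas = torch.linspace(1e-4, 0.02, T)    # assumed linear DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_grasps(eps_net, obj_tokens, n_grasps=64, dim=9):
    """Draw grasp poses by iteratively denoising Gaussian noise.

    eps_net(x_t, t, obj_tokens) predicts the noise added at step t;
    dim=9 assumes a 6D rotation representation plus a 3D translation.
    """
    x = torch.randn(n_grasps, dim)                       # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_net(x, torch.full((n_grasps,), t), obj_tokens)
        # DDPM posterior mean: subtract the predicted noise, then rescale
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:                                        # add noise except at t=0
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x  # decode: orthonormalize the 6D rotation, read off translation
```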
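For the encoder reuse described in Group 2, here is a minimal PyTorch sketch of a discriminator that shares the generator's object encoder and scores each candidate grasp with a small head. The layer sizes, the 9-D pose input, and freezing the shared encoder are illustrative assumptions.

```python
# Minimal sketch of a discriminator sharing the generator's object encoder,
# so the point cloud is encoded once rather than duplicated per network.
import torch
import torch.nn as nn

class GraspDiscriminator(nn.Module):
    def __init__(self, shared_encoder: nn.Module, feat_dim=512, pose_dim=9):
        super().__init__()
        self.encoder = shared_encoder            # reused, not a second copy
        for p in self.encoder.parameters():      # assumed: keep generator
            p.requires_grad = False              # weights intact during training
        self.head = nn.Sequential(               # small per-grasp scoring head
            nn.Linear(feat_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, points, grasp_poses):
        obj_feat = self.encoder(points)                         # (B, feat_dim)
        obj_feat = obj_feat.unsqueeze(1).expand(-1, grasp_poses.shape[1], -1)
        x = torch.cat([obj_feat, grasp_poses], dim=-1)          # per-grasp input
        return self.head(x).squeeze(-1)                         # success logits
```

The memory saving comes from the shared backbone: one point-cloud encoding serves both grasp generation and grasp scoring, leaving only a lightweight MLP head as the discriminator's own footprint.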
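For the precision-recall AUC metric cited in Group 3, the following short scikit-learn snippet shows how such a curve is computed over discriminator-ranked grasps; the scores and labels here are made-up stand-ins, not GraspGen results.

```python
# Computing a precision-recall AUC over ranked grasp candidates.
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

scores = np.array([0.95, 0.80, 0.60, 0.40, 0.20])  # discriminator confidences
labels = np.array([1, 1, 0, 1, 0])                 # simulated grasp outcomes

precision, recall, _ = precision_recall_curve(labels, scores)
print(f"PR-AUC: {auc(recall, precision):.3f}")
```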