RoboTwin 2.0
The community is preparing a round of interviews on job hunting, pursuing a PhD, and switching research directions...
具身智能之心· 2025-11-01 05:40
Core Insights
- The article emphasizes the growing opportunities in the embodied intelligence sector, highlighting an increase in funding and job openings compared to the previous year [1][2]
- The community is preparing interviews with industry leaders to provide insights on job hunting and research advice for newcomers [1][2]

Group 1: Community Engagement
- The community is organizing interviews with experienced professionals to share their career paths and insights into the industry [1]
- There is a focus on creating a closed-loop system for sharing knowledge across industry, academia, and job opportunities [2][5]
- The community has established a referral mechanism for job placements with various companies in the embodied intelligence sector [11]

Group 2: Educational Resources
- A comprehensive technical roadmap has been developed for beginners, outlining essential skills and knowledge areas [7]
- The community has compiled numerous open-source projects and datasets relevant to embodied intelligence, giving newcomers quick access to them [12][26]
- Learning paths have been organized covering topics such as reinforcement learning, multimodal models, and robotic navigation [12][40]

Group 3: Industry Insights
- The community hosts roundtable discussions and live streams addressing ongoing challenges and developments in the embodied intelligence industry [5]
- A collection of industry reports and research papers has been compiled to keep members informed about the latest advancements and applications [19]
- The community includes members from renowned universities and leading companies in the field, fostering a rich environment for knowledge exchange [11][15]
Open-Source Hardware and Solutions for Mobile Manipulation and Dual-Arm Manipulation
具身智能之心· 2025-10-20 00:03
Core Viewpoint
- The article emphasizes the importance of open-source projects in advancing mobile and dual-arm robotic operations, highlighting their role in breaking down technical barriers and accelerating innovation in applications ranging from household robots to industrial automation [3]

Group 1: Open-Source Projects Overview
- XLeRobot, developed by Nanyang Technological University, focuses on flexible movement and precise operation in complex environments, providing a reference framework for mobile and dual-arm control [4]
- AhaRobot, from Tianjin University, emphasizes autonomy and environmental adaptability in dual-arm operations, integrating perception, planning, and control modules for service robots [6]
- ManiGaussian++, released by Tsinghua University, optimizes dual-arm operation accuracy using Gaussian models, particularly in 3D environment perception and motion planning [8]
- H-RDT, a collaboration between Tsinghua University and Horizon Robotics, targets efficient decision-making and real-time operation for mobile robots in various settings [11]
- RoboTwin 2.0, developed by Shanghai Jiao Tong University and the University of Hong Kong, integrates simulation and physical platforms for mobile and dual-arm operations [14]
- Open X-Embodiment, from Arizona State University, focuses on a generalized learning framework for robotic operations, supporting cross-scenario skill transfer [16]
- 3D FlowMatch Actor, a joint project by Carnegie Mellon University and NVIDIA, enhances dynamic adaptability in 3D space for mobile and dual-arm operations [19]
- OmniH2O, developed by Carnegie Mellon University, focuses on human-robot action mapping and humanoid operation, facilitating remote control and action teaching [24]
- TidyBot++, a collaboration between Princeton University and Stanford University, targets household organization tasks, integrating object recognition and dual-arm collaboration algorithms [27]
- robosuite, from the University of California, Berkeley, is a mature simulation platform for robotic operations, providing standardized tasks and evaluation tools (see the usage sketch after this list) [29]
- SO-ARM100, a standardized dual-arm hardware and software solution, aims to lower development barriers for educational and research use [32]
- GOAT, developed by UIUC and CMU, focuses on goal-directed movement and operation for robots, emphasizing robustness and versatility [34]
- Mobile ALOHA, from Stanford University, combines a mobile chassis with dual-arm operation for low-cost, easily deployable service robots [35]
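As a concrete usage example for one of the simulation platforms listed above, the sketch below drives a dual-arm robosuite task with random actions. It follows the pattern of robosuite's published quickstart, but environment names and arguments can differ across versions, so treat it as an illustrative starting point rather than a verified snippet for any specific release.

```python
import numpy as np
import robosuite as suite

# Build a two-arm lifting task; "TwoArmLift" and the Panda robot are standard
# robosuite options, but availability may vary by release.
env = suite.make(
    env_name="TwoArmLift",
    robots=["Panda", "Panda"],      # one robot per arm
    has_renderer=False,             # set True for an on-screen viewer
    has_offscreen_renderer=False,
    use_camera_obs=False,           # proprioceptive observations only
)

obs = env.reset()
low, high = env.action_spec        # per-dimension action bounds

for _ in range(200):
    action = np.random.uniform(low, high)        # random exploratory action
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

env.close()
```

In practice the random-action loop would be replaced by a trained policy or a scripted expert; the point here is only the standardized environment interface that benchmarks like this provide.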
New in the RoboTwin Series: An Open-Source, Large-Scale Domain-Randomized Data Synthesizer and Evaluation Benchmark for Dual-Arm Manipulation
机器之心· 2025-07-07 07:50
Core Viewpoint
- The article discusses the release of RoboTwin 2.0, a scalable data generator and benchmark for robust bimanual robotic manipulation, highlighting its advances over the previous version, RoboTwin 1.0, and its applications in dual-arm collaboration tasks [5][34]

Group 1: Introduction and Background
- RoboTwin 2.0 is developed by researchers from Shanghai Jiao Tong University and the University of Hong Kong, focusing on overcoming limitations in data collection and simulation for dual-arm robotic operations [6][8]
- The RoboTwin series has received recognition at major conferences, including CVPR and ECCV, and has been used in various competitions [3][9]

Group 2: Features of RoboTwin 2.0
- RoboTwin 2.0 introduces a large-scale domain randomization data synthesis framework, including a dataset of 731 object instances across 147 categories, which improves model robustness in unseen environments [8][12]
- The system provides a more user-friendly API for expert code generation, significantly lowering the barrier to using large multimodal models [10][34]

Group 3: Domain Randomization Strategies
- The article outlines five key dimensions of domain randomization implemented in RoboTwin 2.0: scene clutter, background textures, lighting conditions, tabletop heights, and diverse language instructions (see the configuration sketch after this list) [16][18][20][21][22]
- These strategies expose the model to a wide variety of training conditions, improving its adaptability and performance in real-world scenarios [16][34]

Group 4: Performance Metrics
- RoboTwin 2.0 shows significant improvements over RoboTwin 1.0, with the average success rate (ASR) on typical tasks rising from 47.4% to 62.1%, and further gains when structured feedback is used [26][27]
- Its adaptive grasping capability improves the average success rate by 8.3% across five robotic platforms [28]

Group 5: Real-World Application and Transferability
- The system exhibits strong zero-shot transfer, achieving notable success rates on unseen tasks and in complex environments, indicating its potential for real-world applications [31][33]
- The results highlight RoboTwin 2.0's overall advantages in code generation, grasp generalization, environmental robustness, and sim-to-real transfer, providing a solid foundation for future dual-arm manipulation research [34]
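To make the five randomization dimensions concrete, here is a minimal illustrative sketch of how a per-episode randomization configuration might be sampled. The class and field names are hypothetical and do not reflect RoboTwin 2.0's actual API; it only shows the kind of variation the article describes.

```python
import random
from dataclasses import dataclass

@dataclass
class EpisodeRandomization:
    """Hypothetical per-episode settings covering the five randomization axes."""
    n_clutter_objects: int       # scene clutter: distractor objects placed on the table
    background_texture: str      # background/table surface texture
    light_intensity: float       # lighting conditions (relative brightness)
    table_height_m: float        # tabletop height in meters
    instruction: str             # diverse language instruction for the same task

def sample_randomization(task: str, textures, templates) -> EpisodeRandomization:
    """Draw one randomized configuration for a single simulated episode."""
    return EpisodeRandomization(
        n_clutter_objects=random.randint(0, 8),
        background_texture=random.choice(textures),
        light_intensity=random.uniform(0.5, 1.5),
        table_height_m=random.uniform(0.70, 0.85),
        instruction=random.choice(templates).format(task=task),
    )

if __name__ == "__main__":
    cfg = sample_randomization(
        task="stack the red block on the blue block",
        textures=["wood", "marble", "fabric"],
        templates=["Please {task}.", "Can you {task}?", "{task} now."],
    )
    print(cfg)
```

Sampling a fresh configuration like this for every generated episode is what exposes a policy to the wide range of conditions credited with the robustness gains reported above.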
Latest from Yao Mu's Team: RoboTwin 2.0, a Scalable Data Benchmark for Robust Bimanual Manipulation
自动驾驶之心· 2025-06-24 12:41
Core Insights
- The article discusses the development of RoboTwin 2.0, a scalable data generation framework aimed at enhancing bimanual robotic manipulation through robust domain randomization and automated expert data generation [2][6][18]

Group 1: Motivation and Challenges
- Existing synthetic datasets for bimanual robotic manipulation are insufficient: they lack efficient data generation methods for new tasks and rely on overly simplified simulation environments [2][5]
- RoboTwin 2.0 addresses these challenges with a scalable simulation framework that supports automatic, large-scale generation of diverse and realistic data [2][6]

Group 2: Key Components of RoboTwin 2.0
- RoboTwin 2.0 integrates three key components: an automated expert data generation pipeline, comprehensive domain randomization, and embodiment-aware adaptation for diverse robotic platforms [6][18]
- The automated expert data generation pipeline uses multimodal large language models (MLLMs) and simulation feedback to iteratively refine task execution code (a conceptual sketch of this loop follows this list) [10][12]

Group 3: Domain Randomization
- Domain randomization is applied across five dimensions (clutter, background texture, lighting conditions, tabletop height, and diverse language instructions), improving policy robustness to environmental variability [12][13]
- The framework provides a large object library (RoboTwin-OD) with 731 instances across 147 categories, each annotated with semantic and manipulation-relevant labels [3][18]

Group 4: Data Collection and Benchmarking
- More than 100,000 dual-arm manipulation trajectories were collected across 50 tasks, supporting extensive benchmarking and evaluation of robotic policies [24][22]
- The framework allows flexible embodiment configurations, ensuring compatibility with diverse hardware setups and promoting scalability to future robotic platforms [20][22]

Group 5: Experimental Analysis
- Evaluations show that RoboTwin 2.0 significantly improves task success rates, particularly for low-degree-of-freedom platforms, with average gains of 8.3% [29][31]
- Data generated by the framework improves model generalization, yielding substantial performance gains when tested in unseen scenarios [32][34]
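To illustrate the automated expert data generation idea summarized above, here is a minimal conceptual sketch of an MLLM-in-the-loop code refinement cycle. The function names (generate_task_code, run_in_simulation) are hypothetical placeholders and do not correspond to RoboTwin 2.0's actual interface; the sketch only captures the generate-execute-feedback structure described in the article.

```python
from __future__ import annotations
from typing import Callable, Optional, Tuple

def refine_expert_code(
    task_description: str,
    generate_task_code: Callable[[str, str], str],        # hypothetical MLLM call: (task, feedback) -> code
    run_in_simulation: Callable[[str], Tuple[bool, str]],  # hypothetical simulator: code -> (success, feedback)
    max_iterations: int = 5,
) -> Optional[str]:
    """Iteratively refine task-execution code using simulation feedback.

    Mirrors, at a high level, a closed loop in which a multimodal LLM proposes
    expert code, the simulator executes it, and failure feedback is fed back
    into the next generation round.
    """
    feedback = ""  # no feedback available on the first attempt
    for _ in range(max_iterations):
        code = generate_task_code(task_description, feedback)
        success, feedback = run_in_simulation(code)
        if success:
            return code  # accepted expert program; trajectories can now be rolled out at scale
    return None  # task flagged for manual inspection after repeated failures
```

Once a task's expert program passes this loop, it can be replayed under many randomized configurations to produce the large trajectory sets used for benchmarking.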