Robot Training

VR Teachers Give Hands-On Lessons! A Hundred Robots Line Up for "Onboarding"
Yang Shi Xin Wen Ke Hu Duan· 2025-10-22 07:26
Not long ago, China's largest humanoid robot training ground officially opened in Beijing. Humanoid robots of every kind can be "trained" here in preparation for future large-scale deployment. What does this training ground look like, and how do the robots pick up their skills? Here is what the reporter found on a visit.

The nation's largest humanoid robot data training center, in Beijing's Shijingshan District, is effectively a "vocational school" for humanoid robots. Like human students, the robots must first pick a discipline and a major, choosing among 16 specialized fields such as intelligent industrial manufacturing, daily-life services, and smart elderly care.

The school covers a full 14,000 square meters. Why do robot classes need so much space? Because every part of the site is a 1:1 reproduction of real work scenes from production and daily life.

The daily-life services area is fitted with supermarket shelves, parcel lockers, and assorted furniture, where robots learn to fold clothes, take out trash, and pick items off shelves. The industrial manufacturing zone recreates electronics production lines and automotive equipment workshops.

The curriculum is rich, and so is the teaching staff. Each robot is assigned two teachers, who put on VR devices and teach it movements hand over hand; some teachers also wear motion-capture suits to better help the robots collect scene data (a minimal sketch of how such demonstrations are typically logged appears below).

Why make robots train over and over in so many realistic settings? Like a child learning to walk, a robot only gets smarter through a great deal of practice.

How are these "students" doing so far? A teacher told the reporter ...

(CCTV reporters Zhu Jiang and Zhang Congjing)
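The hands-on VR teaching described above is, in essence, teleoperated demonstration collection for imitation learning. Below is a minimal sketch of how such demonstrations are commonly logged as (observation, action) pairs; the `vr`, `robot`, and `camera` interfaces are hypothetical stand-ins invented for illustration, not the training center's actual software.

```python
# Minimal sketch: logging VR teleoperation demonstrations for imitation
# learning. The vr/robot/camera objects are hypothetical interfaces.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Frame:
    t: float                  # seconds since the demo started
    operator_pose: list       # 6-DoF hand pose streamed from the VR rig
    joint_targets: list       # robot joint angles commanded at this step
    image_path: str           # where this step's camera frame was saved

def record_demo(vr, robot, camera, seconds=30.0, hz=30):
    """Mirror the teacher's motion onto the robot and log every step."""
    frames, dt, start = [], 1.0 / hz, time.time()
    while time.time() - start < seconds:
        pose = vr.read_pose()            # teacher's motion from the VR devices
        q = robot.solve_ik(pose)         # map it to robot joint targets
        robot.command_joints(q)          # the robot imitates in real time
        frames.append(Frame(
            t=time.time() - start,
            operator_pose=list(pose),
            joint_targets=list(q),
            image_path=camera.save_frame(),
        ))
        time.sleep(dt)                   # hold a fixed control rate
    with open("demo.json", "w") as f:    # one demo = one trajectory file
        json.dump([asdict(fr) for fr in frames], f)
```

Logging at a fixed rate keeps the demonstrations uniformly sampled, which simplifies training a policy on them later.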
Copying Human Motions Just by Watching Video: Unitree G1 Masters 100+ Actions in Minutes, as UC Berkeley Proposes a New Way to Train Robots
QbitAI· 2025-05-08 04:04
Core Viewpoint - The article discusses VideoMimic, a new robot training system from a UC Berkeley team that lets robots learn human movements directly from video, with no need for motion-capture equipment [1][2].

Group 1: VideoMimic System Overview
- VideoMimic has enabled the Unitree G1 robot to mimic over 100 human actions [2].
- Its core principle is to extract pose and point-cloud data from videos, train in a simulated environment, and finally transfer the learned actions to a physical robot [3][17].
- The system has drawn significant attention online, with comparisons to characters like Jack Sparrow from "Pirates of the Caribbean" [4].

Group 2: Training Process
- The research team collected a dataset of 123 video clips filmed in everyday environments, covering a variety of human movement skills and scenarios [5][6].
- The Unitree G1 has been trained to adapt to different terrains and perform actions such as stepping over curbs and descending stairs, maintaining its balance even when it slips [7][14][16].

Group 3: Technical Workflow
- VideoMimic's workflow consists of three main steps: converting video into a simulation environment, training control strategies in simulation, and validating those strategies on real robots [18] (see the sketch after this summary).
- The first step reconstructs human motion and scene geometry from a single RGB video, optimizing for accurate alignment between the human's movements and the scene geometry [19].
- The second step processes the scene point cloud into a lightweight triangular mesh for efficient collision detection and rendering [21].

Group 4: Strategy Training and Deployment
- Training proceeds in four progressive stages, producing a robust control strategy that requires only the robot's proprioceptive information and local height maps as input [24].
- The Unitree G1, equipped with 12 degrees of freedom and various sensors, serves as the physical testing platform for the trained strategies [30][31].
- Deployment involves configuring the robot's PD controller to match the simulation environment and using real-time data from its depth camera and IMU for effective movement [35][39].

Group 5: Research Team
- The project has four co-authors, all PhD students at UC Berkeley, with research interests spanning robotics, computer vision, and machine learning [43][48][52].
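As a reading aid, here is a schematic sketch of the three-step workflow from Group 3 and the PD-based deployment from Group 4. Every function, interface, and gain value below is an illustrative assumption, not the released VideoMimic code.

```python
# Schematic sketch of the video -> simulation -> real-robot pipeline.
# All components are placeholder stand-ins for the paper's actual modules.
import numpy as np

def video_to_sim(frames):
    """Step 1 (stub): recover human motion and scene geometry from RGB video.
    The real system does pose estimation, point-cloud reconstruction, joint
    human/scene alignment, and meshing; here we return placeholder arrays."""
    human_poses = np.zeros((len(frames), 24, 3))   # per-frame reference joints
    scene_mesh = np.zeros((0, 3, 3))               # triangles for cheap collision tests
    return human_poses, scene_mesh

def train_policy(human_poses, scene_mesh):
    """Step 2 (stub): reinforcement learning in simulation to track the
    reference motion. The resulting policy takes only proprioception and a
    local height map, matching the inputs described above."""
    def policy(proprio, height_map):
        return np.zeros(12)                        # placeholder joint targets
    return policy

def deploy(policy, robot, hz=50):
    """Step 3: run the policy on hardware through a PD loop whose gains are
    configured to match the ones used in simulation. `robot` is a
    hypothetical hardware interface."""
    kp, kd = 40.0, 1.0                             # illustrative PD gains
    while robot.ok():
        q, qd = robot.joint_state()                # encoders: positions, velocities
        hmap = robot.local_height_map()            # built from depth camera + IMU
        q_des = policy(np.concatenate([q, qd]), hmap)
        tau = kp * (q_des - q) - kd * qd           # PD torque command
        robot.send_torques(tau)
```

Matching the PD gains between simulation and hardware is what lets a policy trained entirely in simulation behave similarly when transferred to the physical robot.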
Google DeepMind CEO Shows Off Genie 2: A New Era for Robot Training
Sou Hu Cai Jing· 2025-04-22 02:24
Core Insights
- Google DeepMind has made a significant breakthrough with its AI model Genie 2, showcasing its potential in robot training [1][3]
- Genie 2 can generate interactive 3D environments from a single static image, providing realistic simulation for AI agents and robots [1][3]

Group 1: Technology and Innovation
- DeepMind CEO Demis Hassabis highlighted Genie 2's ability to create dynamic environments that simulate real-world physical properties, making it suitable for both entertainment and efficient robot training [3][6]
- The model aims to build an understanding of the real world, offering a low-cost, high-efficiency solution for robot training that overcomes the limitations of traditional data collection methods [3][6]

Group 2: Applications and Future Prospects
- Genie 2 can generate nearly unlimited data in a simulated environment, allowing robots to learn initially in a virtual world before fine-tuning with minimal real-world data [3][6] (a conceptual sketch of this two-phase recipe follows below)
- Future versions of the Genie model are expected to create more diverse and complex virtual worlds, supporting robots in learning new skills and interacting with humans and objects [6]
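To make the two-phase recipe concrete, here is a conceptual sketch: pretrain an agent inside environments generated by a world model, then fine-tune on a small amount of real data. The `WorldModel` and `Agent` classes are toy stand-ins invented for illustration; Genie 2 exposes no public API like this.

```python
# Conceptual sketch: pretrain in model-generated worlds, fine-tune on real
# data. Both classes are toy placeholders, not Genie 2's actual interface.
import random

class WorldModel:
    """Stand-in for an image-to-environment generator in the spirit of Genie 2."""
    def generate_env(self, image):
        state = {"step": 0}
        def step(action):                  # toy env: step(action) -> (reward, done)
            state["step"] += 1
            return random.random(), state["step"] >= 100
        return step

class Agent:
    """Trivial placeholder policy; a real agent would be a learned network."""
    def act(self):
        return random.choice([0, 1, 2, 3])
    def update(self, reward):
        pass                               # a gradient step would go here

def pretrain_in_generated_worlds(model, agent, seed_images, episodes=1000):
    """Phase 1: near-unlimited practice inside generated environments."""
    for _ in range(episodes):
        step = model.generate_env(random.choice(seed_images))
        done = False
        while not done:
            reward, done = step(agent.act())
            agent.update(reward)

def finetune_on_real_experience(agent, real_transitions):
    """Phase 2: adapt with a small amount of real-world data."""
    for obs, action, reward in real_transitions:
        agent.update(reward)
```

The appeal of this split is economic: generated environments cost almost nothing per episode, so the expensive real-world data is reserved for a short final adaptation phase.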