LeRobot
HuggingFace and Oxford University Release a New Tutorial with an Open-Source SOTA Resource Library!
具身智能之心· 2025-10-27 00:02
Core Viewpoint
- The article emphasizes the significant advancements in robotics, particularly robot learning, driven by the development of large models and multi-modal AI technologies, which have shifted traditional robotics toward a learning-based paradigm [3][4].

Group 1: Introduction to Robot Learning
- The article introduces a comprehensive tutorial on modern robot learning, covering the foundational principles of reinforcement learning and imitation learning and leading up to general-purpose, language-conditioned models [4][12].
- Researchers from HuggingFace and Oxford University have created a valuable resource for newcomers to the field, providing an accessible guide to robot learning [3][4].

Group 2: Classic Robotics
- Classic robotics relies on explicit modeling through kinematics and control planning, while learning-based methods use deep reinforcement learning and expert demonstrations for implicit modeling [15].
- Traditional robotic systems follow a modular pipeline of perception, state estimation, planning, and control [16].

Group 3: Learning-Based Robotics
- Learning-based robotics integrates perception and control more tightly, adapts across tasks and embodiments, and reduces the need for expert modeling [26].
- The tutorial highlights the safety and efficiency challenges of real-world deployment, particularly during the initial training phases, and discusses techniques such as simulation training and domain randomization to mitigate these risks [34][35].

Group 4: Reinforcement Learning
- Reinforcement learning lets robots autonomously learn optimal behavior strategies through trial and error, showing significant potential across a range of scenarios [28].
- The tutorial discusses the complexity of integrating multiple system components and the limitations of traditional physics-based models, which often oversimplify real-world phenomena [30].
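The domain-randomization idea mentioned above can be sketched in a few lines: each training episode draws fresh physics parameters so a policy trained in simulation does not overfit to one simulator configuration. The parameter names and ranges below are illustrative assumptions, not values from the tutorial.

```python
import random

# Hypothetical physics parameters to randomize per episode.
# Ranges are illustrative, not taken from the tutorial.
RANDOMIZATION_RANGES = {
    "friction": (0.5, 1.5),         # scale factor on ground friction
    "mass_scale": (0.8, 1.2),       # scale factor on link masses
    "motor_delay_ms": (0.0, 20.0),  # simulated actuation latency
}

def sample_domain(rng: random.Random) -> dict:
    """Draw one randomized physics configuration for a training episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# A real training loop would pass each sampled config to the simulator
# before rolling out the episode.
rng = random.Random(0)
episode_cfg = sample_domain(rng)
```

Because the policy never sees the same physics twice, it is pushed toward behaviors that are robust to the real robot's (unknown) parameters.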
Group 5: Imitation Learning
- Imitation learning offers a more direct learning path for robots by replicating expert actions through behavior cloning, avoiding complex reward-function design [41].
- The tutorial addresses challenges such as compounding errors and handling multi-modal behaviors in expert demonstrations [41][42].

Group 6: Advanced Techniques in Imitation Learning
- The article introduces advanced imitation learning methods based on generative models, such as Action Chunking with Transformers (ACT) and Diffusion Policy, which effectively model multi-modal data [43][45].
- Diffusion Policy demonstrates strong performance across a range of tasks with minimal demonstration data, requiring only 50-150 demonstrations for training [45].

Group 7: General Robot Policies
- The tutorial envisions general robot policies capable of operating across tasks and devices, inspired by large-scale open robot datasets and powerful vision-language models [52][53].
- Two cutting-edge vision-language-action (VLA) models, π₀ and SmolVLA, are highlighted for their ability to understand visual and language instructions and generate precise control commands [53][56].

Group 8: Model Efficiency
- SmolVLA represents a trend toward model miniaturization and open-sourcing, achieving high performance with significantly reduced parameter counts and memory consumption compared to π₀ [56][58].
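One way ACT-style action chunking mitigates the compounding errors mentioned above is temporal ensembling: the policy predicts a chunk of the next k actions at every step, and the action actually executed averages all overlapping chunk predictions, down-weighting older ones. The sketch below uses scalar actions for clarity (real policies output action vectors), and the exponential weighting constant is illustrative.

```python
import math

def temporal_ensemble(chunk_predictions, t, k, m=0.1):
    """Combine overlapping action-chunk predictions for timestep t.

    chunk_predictions[i] is the list of k actions predicted at step i,
    covering steps i .. i+k-1. Every chunk that covers step t contributes
    its prediction for t, weighted by exp(-m * age) so stale predictions
    count less, in the spirit of ACT's temporal ensembling.
    """
    num, den = 0.0, 0.0
    for i, chunk in enumerate(chunk_predictions):
        if i <= t < i + k:          # this chunk covers timestep t
            age = t - i             # how long ago the chunk was predicted
            w = math.exp(-m * age)
            num += w * chunk[age]
            den += w
    return num / den
```

Averaging several independent predictions for the same timestep smooths out single-step mistakes before they can accumulate over a long rollout.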
A Hands-On Introduction to Robot Learning: HuggingFace and Oxford University Release a New Tutorial with an Open-Source SOTA Resource Library
机器之心· 2025-10-26 07:00
Core Viewpoint
- The article emphasizes the significant advancements in robotics, particularly robot learning, driven by AI technologies such as large models and multi-modal models. This shift has transformed traditional robotics into a learning-based paradigm, opening new potential for autonomous decision-making robots [2].

Group 1: Introduction to Robot Learning
- The article highlights the evolution of robotics from explicit to implicit modeling, a fundamental change in how motion is generated. Traditional robotics relied on explicit modeling, while learning-based methods use deep reinforcement learning and learning from expert demonstrations for implicit modeling [15].
- A comprehensive tutorial from HuggingFace and researchers at Oxford University serves as a valuable resource for newcomers to modern robot learning, covering the foundational principles of reinforcement learning and imitation learning [3][4].

Group 2: Learning-Based Robotics
- Learning-based robotics simplifies the perception-to-action pipeline by training a unified high-level controller that directly handles high-dimensional, unstructured perception-motion information without relying on a dynamics model [33].
- The tutorial addresses real-world challenges such as safety and efficiency during initial training and the high cost of trial and error in physical environments, introducing techniques such as simulator training and domain randomization to mitigate these risks [34][35].

Group 3: Reinforcement Learning
- Reinforcement learning lets robots autonomously learn optimal behavior strategies through trial and error, showing significant potential across various scenarios [28].
- The tutorial discusses the "offline-to-online" reinforcement learning framework, which improves sample efficiency and safety by exploiting pre-collected expert data. The HIL-SERL method exemplifies this approach, enabling robots to master complex real-world tasks with near-100% success rates after only 1-2 hours of training [36][39].

Group 4: Imitation Learning
- Imitation learning offers a more direct learning path for robots by replicating expert actions through behavior cloning, avoiding complex reward-function design and keeping training safe [41].
- The tutorial presents advanced imitation learning methods based on generative models, such as Action Chunking with Transformers (ACT) and Diffusion Policy, which effectively model multi-modal data by learning the latent distribution of expert behaviors [42][43].

Group 5: Universal Robot Policies
- The article envisions the future of robotics in universal robot policies capable of operating across tasks and devices, inspired by the emergence of large-scale open robot datasets and powerful vision-language models (VLMs) [52].
- Two cutting-edge VLA models, π₀ and SmolVLA, are highlighted for their ability to understand visual and language instructions and generate precise robot control commands; SmolVLA is a compact, open-source model that significantly lowers the barrier to application [53][56].
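The offline-to-online setup described above can be sketched as a replay buffer seeded with pre-collected expert transitions that the agent keeps sampling alongside its own online experience. This is a minimal sketch in the spirit of that framework, not HIL-SERL's actual implementation; the class name, 50/50 sampling split, and transition format are assumptions.

```python
import random

class MixedReplayBuffer:
    """Replay buffer mixing fixed offline expert transitions with online
    transitions collected during training. Each batch is drawn roughly
    half-and-half; the split ratio here is illustrative."""

    def __init__(self, expert_transitions, rng=None):
        self.expert = list(expert_transitions)  # fixed offline demonstrations
        self.online = []                        # grows as the robot acts
        self.rng = rng or random.Random()

    def add(self, transition):
        self.online.append(transition)

    def sample(self, batch_size):
        half = batch_size // 2
        batch = self.rng.choices(self.expert, k=half)
        # Fall back to expert data until any online experience exists.
        pool = self.online if self.online else self.expert
        batch += self.rng.choices(pool, k=batch_size - half)
        return batch
```

Keeping expert data in every batch is what buys the sample efficiency and safety: the policy never drifts far from demonstrated behavior while it explores.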
Starting at $250, and Open-Source: Hugging Face Releases Its Most Affordable Humanoid Robots Ever
机器之心· 2025-05-31 04:00
Core Viewpoint
- Hugging Face has officially open-sourced two humanoid robots, HopeJR and Reachy Mini, moving closer to Elon Musk's prediction of 10 billion humanoid robots by 2040 [1][31].

Group 1: Robot Specifications
- HopeJR is a full-sized humanoid robot with 66 degrees of freedom, capable of walking and moving its arms [3].
- Reachy Mini is a desktop robot that can move its head, speak, and listen, designed for testing AI applications [5][20].

Group 2: Pricing and Availability
- HopeJR is priced at approximately $3,000, while Reachy Mini costs between $250 and $300, depending on tariffs [7].
- The company plans to ship the first batch of robots by the end of the year, with a waiting list already open [7].

Group 3: Open Source and Community Impact
- Open-sourcing these robots allows anyone to assemble them and understand how they work, democratizing access to robotic technology [7][28].
- Hugging Face aims to build an open-source robotics ecosystem, breaking down barriers to knowledge and technology and making robotics accessible to a wider audience [28][30].

Group 4: Development and Features
- HopeJR requires developers to control it manually and record actions for training through imitation learning algorithms [10][12].
- Reachy Mini is designed to help develop AI applications, allowing testing before deployment in real-world scenarios [20].

Group 5: Previous Initiatives
- This is not Hugging Face's first venture into robotics; it previously launched the LeRobot project and the SO-100 robotic arm design [26][28].
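The "control it manually and record actions" workflow mentioned under Group 4 amounts to logging (observation, action) pairs at a fixed control rate while a human teleoperates the robot. The sketch below is a generic illustration of that recording loop, not HopeJR's actual software; the callables standing in for camera and teleoperation reads are hypothetical.

```python
import time

def record_episode(read_observation, read_teleop_action, num_steps, dt=0.05):
    """Record one teleoperated episode as a list of (obs, action) frames.

    read_observation / read_teleop_action are caller-supplied callables,
    hypothetical stand-ins for the camera and leader-device reads on a
    real rig. dt holds the control-loop rate (e.g. 0.05 s = 20 Hz).
    """
    frames = []
    for step in range(num_steps):
        frames.append({
            "step": step,
            "observation": read_observation(),
            "action": read_teleop_action(),
        })
        time.sleep(dt)
    return frames
```

Episodes recorded this way form the expert dataset that a behavior-cloning policy is later trained on.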
News Flash | Hugging Face Goes All-In on AI Robotics: Two Open-Source Humanoid Robots, Priced from Just $250
Z Potentials· 2025-05-30 03:23
Core Viewpoint
- Hugging Face has launched two new humanoid robots, HopeJR and Reachy Mini, as part of its expansion into the robotics sector, emphasizing open-source technology and affordability [1][3].

Group 1: Product Launch
- The company introduced HopeJR, a full-sized humanoid robot with 66 degrees of freedom capable of walking and arm movement, and Reachy Mini, a desktop robot that can rotate its head, speak, and listen [1].
- HopeJR is estimated at around $3,000, while Reachy Mini is priced between $250 and $300, depending on tariff policies [3].

Group 2: Open Source and Accessibility
- The open-source nature of these robots allows anyone to assemble, rebuild, and understand them, preventing monopolization by a few large companies [3].

Group 3: Strategic Acquisitions
- The launch of these robots is partly enabled by the acquisition of Pollen Robotics, which brought new capabilities for developing these humanoid robots [4].

Group 4: Future Developments
- Hugging Face has been actively entering the robotics industry, with plans to launch LeRobot in 2024, a resource collection of open-source AI models, datasets, and tools for building robotic systems [6].
- In 2025, the company released an upgraded version of its 3D-printable programmable robotic arm, the SO-101, developed in collaboration with The Robot Studio [6].