Robotics
The embodied-AI research platform many beginners have been asking for: built for embodied research, at a budget-friendly price
具身智能之心· 2025-10-27 00:02
A lightweight, cost-effective robotic arm built for embodied-AI research. Still struggling to pick hardware for embodied intelligence work? Capable arms are too expensive, while cheap ones are hard to use and hard to learn. Don't worry, Imeta-Y1 is here: a lightweight, cost-effective arm designed for newcomers and early-stage researchers. Whether you are a student, an educator, or a developer just entering robotics, Imeta-Y1 helps you complete algorithm validation and project development at low cost and high efficiency.

✅ A fully open-source toolchain with code examples covers the whole pipeline, from data collection to model deployment;
✅ Python / C++ dual-language interfaces, so you can get started quickly in whichever language you prefer;
✅ ROS1 / ROS2 compatibility and a URDF model, for seamless switching between simulation and the real arm;
✅ 24-hour after-sales response, so problems never leave you stuck while learning.

The arm combines high-precision motion control, a low-power design, and an open software/hardware architecture. It supports seamless simulation-to-real co-debugging and ships with a fully open-source SDK and toolchain, helping users move quickly through algorithm validation, data collection, model training, and deployment. Its compact structure and modular interfaces make it especially suitable for developing embedded-AI and robot-learning platforms.

Especially beginner-friendly specifications:

| Spec | Value |
| --- | --- |
| Body weight | 4.2 kg |
| Rated payload | 3 kg |
| Degrees of freedom | 6 |

...
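The Python interface mentioned above could be exercised with a minimal joint-space move like the following sketch. Everything here is illustrative: `interpolate_joints` and the joint values are invented for this example, and the actual Imeta-Y1 SDK calls are not documented in the article.

```python
# Hypothetical joint-space trajectory helper for a 6-DOF arm like the one
# described above. Function and variable names are illustrative only; the
# real Imeta-Y1 SDK API is not shown in the article.

def interpolate_joints(start, goal, steps):
    """Linearly interpolate between two joint configurations (radians)."""
    if len(start) != len(goal):
        raise ValueError("joint dimension mismatch")
    return [
        [s + (g - s) * t / steps for s, g in zip(start, goal)]
        for t in range(steps + 1)
    ]

home = [0.0] * 6                           # all six joints at zero
target = [0.5, -0.3, 0.8, 0.0, 0.4, -0.1]  # an arbitrary example pose
path = interpolate_joints(home, target, steps=10)
print(len(path))  # 11 waypoints, endpoints included
```

Each waypoint in `path` would then be streamed to the controller at a fixed rate; a real deployment would go through the vendor SDK or a ROS joint-trajectory interface instead.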
From BAAI, the University of Sydney, and partners: RoboGhost, text-to-motion control that drives humanoid robots as invisibly as a ghost
具身智能之心· 2025-10-27 00:02
Authors: Zhe Li et al.

The main authors are from the University of Sydney, Harbin Institute of Technology, the Hong Kong University of Science and Technology, Shanghai Jiao Tong University, and the Beijing Academy of Artificial Intelligence (BAAI). The first author, Zhe Li, is an incoming PhD student at the University of Sydney whose research focuses on embodied intelligence and 3D digital humans. The co-first author and project lead, Cheng Chi, is a researcher at BAAI. The corresponding authors are Shanghang Zhang, researcher and assistant professor in the School of Computer Science at Peking University, and Chang Xu, associate professor at the University of Sydney.

Research pain point: information loss from multi-stage pipelines. In virtual worlds, natural language can easily drive a 3D digital human to perform a described motion, so attention has shifted from 3D virtual avatars to real humanoid robots. Yet although natural language provides a natural interaction interface for humanoids, existing language-guided humanoid motion pipelines remain bloated and unreliable. By bypassing explicit motion decoding and retargeting, RoboGhost lets a diffusion-based policy resolve executable actions directly from noise, preserving semantic integrity while ...
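The idea of denoising executable actions directly from noise can be pictured with a toy reverse-diffusion loop. This is a generic sketch of diffusion-style action generation, not RoboGhost's actual architecture; the linear `denoise_step` stands in for a learned, language-conditioned network, and all dimensions and values are made up.

```python
import random

# Toy illustration of diffusion-style action generation: an executable
# action is recovered by iteratively denoising pure noise. The linear
# denoise_step is a stand-in for a learned, text-conditioned network;
# this is NOT the paper's actual model.

def denoise_step(x, guess, alpha=0.3):
    """One reverse step: nudge the noisy action toward the model's estimate."""
    return [xi + alpha * (gi - xi) for xi, gi in zip(x, guess)]

random.seed(0)
action_dim = 4
x = [random.gauss(0, 1) for _ in range(action_dim)]  # start from pure noise
guess = [0.2, -0.5, 0.1, 0.9]  # stand-in for the conditioned denoiser output

for _ in range(30):
    x = denoise_step(x, guess)

err = max(abs(a - b) for a, b in zip(x, guess))
print(err < 1e-3)  # after 30 steps the action sits near the estimate
```

In a real diffusion policy the "estimate" changes at every step because the network re-predicts it from the current noisy action and the conditioning signal; the fixed `guess` here only shows the convergence mechanic.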
HuggingFace and Oxford University release a new tutorial and open-source SOTA resource library!
具身智能之心· 2025-10-27 00:02
Core Viewpoint
- The article emphasizes the significant advancements in robotics, particularly in robot learning, driven by the development of large models and multi-modal AI technologies, which have transformed traditional robotics into a more learning-based paradigm [3][4].

Group 1: Introduction to Robot Learning
- The article introduces a comprehensive tutorial on modern robot learning, covering foundational principles of reinforcement learning and imitation learning and leading up to general-purpose, language-conditioned models [4][12].
- HuggingFace and Oxford University researchers have created a valuable resource for newcomers to the field, providing an accessible guide to robot learning [3][4].

Group 2: Classic Robotics
- Classic robotics relies on explicit modeling through kinematics and control planning, while learning-based methods use deep reinforcement learning and expert demonstrations for implicit modeling [15].
- Traditional robotic systems follow a modular pipeline of perception, state estimation, planning, and control [16].

Group 3: Learning-Based Robotics
- Learning-based robotics couples perception and control more tightly, adapts across tasks and embodiments, and reduces the need for expert modeling [26].
- The tutorial highlights the safety and efficiency challenges of real-world training, particularly in the initial phases, and discusses techniques such as simulation training and domain randomization to mitigate risks [34][35].

Group 4: Reinforcement Learning
- Reinforcement learning lets robots autonomously learn optimal behavior strategies through trial and error, showing significant potential across scenarios [28].
- The tutorial discusses the complexity of integrating multiple system components and the limitations of traditional physics-based models, which often oversimplify real-world phenomena [30].

Group 5: Imitation Learning
- Imitation learning offers a more direct learning path: the robot replicates expert actions through behavior cloning, avoiding complex reward-function design [41].
- The tutorial addresses challenges such as compounding errors and multi-modal behavior in expert demonstrations [41][42].

Group 6: Advanced Techniques in Imitation Learning
- The article introduces generative-model-based imitation methods such as Action Chunking with Transformers (ACT) and Diffusion Policy, which model multi-modal data effectively [43][45].
- Diffusion Policy performs strongly across tasks with minimal demonstration data, requiring only 50-150 demonstrations for training [45].

Group 7: General Robot Policies
- The tutorial envisions general robot policies that operate across tasks and devices, inspired by large-scale open robot datasets and powerful vision-language models [52][53].
- Two cutting-edge vision-language-action (VLA) models, π₀ and SmolVLA, are highlighted for their ability to understand visual and language instructions and generate precise control commands [53][56].

Group 8: Model Efficiency
- SmolVLA represents a trend toward model miniaturization and open-sourcing, achieving high performance with far fewer parameters and much less memory than π₀ [56][58].
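The behavior-cloning setup described in Group 5 can be sketched as plain supervised regression on expert (state, action) pairs. This is a generic illustration, not code from the tutorial; a linear policy trained with batch gradient descent stands in for the neural networks used in practice, and the expert data is synthetic.

```python
# Minimal behavior-cloning sketch: fit a policy to expert (state, action)
# pairs by supervised regression. A linear policy and batch gradient
# descent stand in for a neural network; the expert data is made up.

expert = [(s / 10.0, 2.0 * (s / 10.0) + 0.5) for s in range(10)]  # a = 2s + 0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    grad_w = grad_b = 0.0
    for s, a in expert:
        err = (w * s + b) - a          # prediction error vs. expert action
        grad_w += err * s
        grad_b += err
    w -= lr * grad_w / len(expert)
    b -= lr * grad_b / len(expert)

print(round(w, 2), round(b, 2))  # recovers roughly 2.0 and 0.5
```

The compounding-error problem the tutorial mentions arises precisely because such a policy is only trained on expert-visited states: once the robot drifts off the demonstration distribution, the regressor's predictions are unconstrained.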
Science and Technology Daily: New method improves robots' autonomous navigation over complex terrain
Xin Lang Cai Jing· 2025-10-26 23:56
On October 21, reporters learned from Harbin Institute of Technology (Shenzhen) that the team of Professor Haoyao Chen at the School of Intelligence Science and Engineering has made important progress in robot path planning. By introducing terrain analysis and configuration-stability estimation into a hierarchical path-planning framework, the team enables ground mobile robots to navigate rugged terrain safely, stably, and efficiently. The results were recently published in IEEE Transactions on Robotics. ...
3 Robotics Stocks to Buy Right Now
The Motley Fool· 2025-10-26 23:15
Industry Overview
- The robotics market is projected to reach $130 billion by 2035, with $38 billion in humanoid robots and $94 billion in industrial systems [1][2].
- The growth is driven by advancements in artificial intelligence, leading to a robotics revolution [1].

Company Insights
- Amazon operates over 1 million robots across more than 300 facilities, significantly enhancing its logistics capabilities [5][8].
- Tesla is developing the Optimus humanoid robot, targeting a price range of $20,000 to $30,000, which could disrupt the market if successful [9][12].
- Nvidia provides the AI platforms essential for robotics, with its technology utilized by various companies in the sector [13][16].

Competitive Landscape
- Amazon's robotics infrastructure is unmatched in scale, handling billions of packages annually, giving it a competitive edge [8].
- Tesla's success with Optimus hinges on achieving cost-effective production, which could transform humanoid robots into practical industrial tools [9][10].
- Nvidia's technology is integral to the robotics ecosystem, benefiting from widespread adoption across different companies [14][16].

Investment Considerations
- Investors are encouraged to consider these three companies as they represent distinct opportunities within the robotics sector [17].
- Each company offers a unique risk profile and value proposition, making them solid picks for investment [18].
What's New with the Figure 03 Humanoid?
CNET· 2025-10-26 12:01
Figure recently unveiled its new Figure 03 humanoid robot. We dig into all the latest updates and announcements to understand how Figure has optimized its newest creation for the home, the warehouse, and beyond. Before we get into the new demos, the obvious question: Figure CEO Brett Adcock has said publicly that nothing in this video is teleoperated. But that isn't quite the same as saying it's fully autonomous either. Autonomy is more of a spectrum, with complete human control at one end and pure robotic planning ...
IROS 2025: MagicGel, a visuo-tactile sensor combining vision with magnetic silicone gel
机器人大讲堂· 2025-10-26 10:03
In current vision-based tactile sensing (VBTS), force-estimation accuracy is limited by the camera's inability to capture the full 3D deformation of the contact surface. Some researchers have used double-layer marker points to increase tactile marker density, or color-superposition schemes to modify the sensing principle, but double-layer markers are difficult to produce and manufacture, and an inherent property of visual sensing, the nonlinear mapping between optical deformation features and contact force, is easily disturbed by the environment, so double-layer markers actually reduce contact stability. Other work has used binocular (stereo) cameras and structural improvements to capture richer information and raise force-estimation accuracy, but this hinders integration. Concretely, existing schemes face a double bind: (1) complex optical structures conflict fundamentally with the demand for sensor miniaturization; (2) the process complexity introduced by modified sensing principles limits sensor generalization.

An IROS 2025 paper proposes MagicGel, a sensor combining visual and magnetic tactile sensing. It introduces a magnetic sensing modality to build a vision-magnetic heterogeneous data-fusion framework: magnetic-field information compensates for what the visual image misses, and correlating vision with magnetism yields a more complete representation of contact mechanics.

MagicGel's main structure is shown in Figure 1. It consists of a coating, strong-magnet particle markers, an elastomer, an LED strip, a Hall sensor, and a camera, plus a module that receives the visual images and magnetic-field readings. The overall size of MagicGel is 31*31*2 ...
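One simple way to picture "magnetic information compensating for missing visual information" is classical inverse-variance fusion of two force estimates. This is an illustrative sketch only: MagicGel's actual framework learns the vision-magnetic fusion from heterogeneous data, and every number and variance below is hypothetical.

```python
# Illustrative sketch only: a classical way a second modality can
# compensate for a degraded one is inverse-variance fusion of two
# independent force estimates. MagicGel's real fusion is learned;
# every number here is hypothetical.

def fuse(visual_est, visual_var, mag_est, mag_var):
    """Inverse-variance weighted fusion of two scalar force estimates (N)."""
    w_v = 1.0 / visual_var  # weight grows as the visual estimate gets cleaner
    w_m = 1.0 / mag_var     # Hall-sensor channel, assumed less noisy here
    return (w_v * visual_est + w_m * mag_est) / (w_v + w_m)

# Vision degraded by lighting/occlusion (high variance); the magnetic
# channel pulls the fused estimate toward its lower-variance reading.
fused = fuse(visual_est=1.8, visual_var=0.4, mag_est=2.1, mag_var=0.1)
print(round(fused, 2))  # 2.04, closer to the magnetic estimate
```

The same intuition carries over to the learned setting: whichever modality carries more reliable information about the contact should dominate the fused force representation.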
Meet the "Infinite Money Glitch" That Could Send Tesla Stock Soaring, According to Elon Musk
The Motley Fool· 2025-10-26 09:05
Core Insights
- Tesla's CEO Elon Musk claims that the Optimus robot could revolutionize productivity for businesses, referring to it as an "infinite money glitch" due to its potential efficiency [2].
- Despite the excitement surrounding Optimus, Tesla's electric vehicle (EV) sales have been sluggish, with a 13% year-over-year decline in the first half of 2025 [9][10].
- The company is facing increasing competition in the EV market, particularly from cheaper brands like BYD, leading to a loss of market share [11].

Tesla's Future Product Platforms
- The Optimus robot is seen as Tesla's most complex manufacturing challenge, requiring in-house production of components due to the lack of an existing supply chain [5].
- Musk predicts that humanoid robots could outnumber humans by 2040, with Optimus potentially being five times more productive than a human worker [6].
- Long-term revenue from Optimus sales could reach $10 trillion, with plans to scale production from 1 million units annually to as many as 100 million units in the future [7].

Current Business Performance
- Over 75% of Tesla's revenue still comes from EV sales, which have seen a decline, with 720,803 cars delivered in the first half of 2025 [9].
- Although there was a 7% increase in deliveries in the recent third quarter, this may have been influenced by consumers purchasing vehicles ahead of the expiration of a tax credit [10].
- Tesla's recent launch of a low-cost Model Y aims to revive its passenger EV business, although Musk has previously resisted this strategy in favor of focusing on the Cybercab autonomous robotaxi [13].

Competitive Landscape
- Tesla's market share in China has decreased from 11.7% to 7.5% in the first half of 2025, highlighting the competitive pressures from other EV manufacturers [11].
- The company is testing a ride-hailing service using its EVs with full self-driving software, but current operations require a human supervisor, adding costs [14].

Valuation Considerations
- Tesla's market capitalization is currently $1.3 trillion, with a price-to-earnings (P/E) ratio of 254, making it significantly more expensive than the Nasdaq-100 index and Nvidia [15][16].
- Given the current state of the EV business and the timeline for Optimus production, there are questions about the wisdom of investing at such a high valuation [18].
The "WoW" embodied world model is here! Robots achieve "unity of knowing and doing," from imagined rehearsal to action execution
Yang Shi Wang· 2025-10-26 05:23
CCTV.com reports: Robots' motor abilities are evolving rapidly, but getting them to understand things the way people do remains difficult. A Chinese research team recently open-sourced an embodied world model called "WoW." What does it advance?

At the Beijing Humanoid Robot Innovation Center, robot bodies of various forms are collecting embodied-intelligence data and training action models. This "Tiangong" robot is autonomously reproducing, 1:1, the motion in a video, and that video is a rehearsal the robot "imagined" before acting, which can be used to guide its interaction with the real world. This capability of "unity of knowing and doing," from imagined rehearsal to action execution, rests on the embodied world model developed in-house by the research team.

Peidong Jia, algorithm lead of the WoW embodied world model project, says the team collected millions of real-interaction embodied-intelligence data samples so the world model can genuinely operate in highly varied real-world scenes.

The embodied world model is open to researchers and developers worldwide. It can be adapted to different robot embodiments, including humanoid, humanoid-like, and robotic arms, and covers home, retail, industrial, logistics, and other scenarios. It can also simulate extreme situations, such as water spilling onto a laptop, with high fidelity, providing an important supplement for data that is hard to collect in real-machine training.

Xiaowei Chi, lead of the WoW embodied world model project, explains that a world model is essentially the model an AI uses to imagine and predict when simulating human thinking and decision-making. It must generate physically plausible videos of predicted futures to help the robot ...
From world models to VLA to reinforcement learning: this is how embodied "brain and cerebellum" algorithms work!
具身智能之心· 2025-10-26 04:02
Core Insights
- The article discusses the evolution and current state of embodied intelligence, focusing on the roles of the brain and cerebellum in robotics, where the brain handles perception and planning, while the cerebellum is responsible for execution [3][10].

Technical Evolution
- The development of embodied intelligence has progressed through several stages, starting from grasp pose detection, moving to behavior cloning, and now advancing to diffusion policy and VLA models [7][10].
- The first stage focused on static object grasping with limited decision-making capabilities [7].
- The second stage introduced behavior cloning, allowing robots to learn from expert demonstrations but faced challenges in generalization and error accumulation [8].
- The third stage, marked by the introduction of diffusion policy, improved stability and generalization by modeling action sequences [8].
- The fourth stage, emerging in 2025, explores the integration of VLA models with reinforcement learning and world models to enhance robots' predictive and interactive capabilities [9][10].

Current Trends and Applications
- The integration of VLA with reinforcement learning enhances robots' trial-and-error learning and self-improvement abilities, while the combination with world models allows for future prediction and better planning [10].
- The article highlights the growing demand for embodied-intelligence applications across various sectors, including industrial, home, restaurant, and medical rehabilitation, leading to increased job opportunities and research interest in the field [10].

Educational Initiatives
- The article outlines a structured learning program aimed at equipping individuals with comprehensive knowledge of embodied-intelligence algorithms, including practical applications and real-world projects [11][14].
- The course targets individuals with a foundational understanding of embodied intelligence and aims to bridge the gap between theoretical knowledge and practical deployment [18][24].