The Essential Technology Stack for Getting Started with Embodied Intelligence: From Zero to Reinforcement Learning and Sim2Real
具身智能之心· 2025-06-30 03:47
Over nearly 20 years of AI development, we are standing at an unprecedented turning point. From early symbolic reasoning to the breakthroughs of deep learning, and on to today's striking performance of large language models, every leap in AI technology has redefined the relationship between humans and machines. Now, embodied intelligence is rising across the board.

Imagine this scenario: a robot that not only understands your verbal instructions but can also move nimbly through complex real-world environments, manipulate a wide variety of objects with precision, and even make intelligent decisions when faced with the unexpected. This is no longer science-fiction fantasy but a technological revolution that is rapidly becoming reality. From Tesla's Optimus humanoid robot to Boston Dynamics' Atlas, from OpenAI's robotic hand to Google's RT-X project, the world's top technology companies are racing to position themselves in this disruptive field.

The core idea of embodied intelligence is to give AI systems not just a "brain" but also a "body" that can perceive and act on the physical world. Such AI is no longer confined to virtual digital space: it can genuinely understand physical laws, master motor skills, and adapt to complex environments. These systems can perform precision assembly in factories, assist with surgery in hospitals, provide attentive service at home, and carry out rescue missions in hazardous environments. The potential impact of this technology is revolutionary: it will thoroughly transform manufacturing, services, healthcare, space exploration, and nearly every other industry. From top conferences such as ICRA and IROS to Neu ...
2025 Humanoid Robot Mid-Year Strategy
2025-06-30 01:02
What is the current state of the robotics industry, and what should we watch for going forward?

The robotics industry is currently transitioning from laboratory iteration to commercialization in vertical scenarios. Since the second half of 2024, robots have entered a rapid cycle of cost and technology iteration. On one hand, supply-chain design wins and the ramp to mass production have accelerated cost reduction, with clear price declines in core components such as lead screws, motors, and reducers, lead screws in particular. On the other hand, domestic robots are iterating on a cycle of only about two months, and companies such as Tesla have already iterated to Optimus Gen 3, bringing products ever closer to a state mature enough for large-scale mass production.

In the second half of the year, the industry is entering the stage of vertical-scenario commercialization, with humanoid robots beginning to be deployed in verticals such as inspection, elder care, interaction, logistics, and factories. From a product and technology standpoint, humanoid robot commercialization is mainly held back by the "large and small brain" (cognition and motion control) and by upper-limb collaboration. Upper-limb capability is the foundation of work efficiency, so upper-limb skills should be the fastest-iterating area for humanoid robots and are currently the most critical field. Upper-limb collaboration requires large-scale, high-quality data collection; models such as Figure Helix, 智源 JOY, and the industry-general Grasp VLA all target upper-limb collaboration with large models.

In the second half of the year, focus on seven tracks: dexterous hands, lead screws, pick-and-place, reducers, key assemblies, electronic skin, and six-axis force/torque sensors, and watch for supply-chain opportunities around companies such as Tesla and Huawei.

Q&A ...
Jiangsu's First Intelligent Robot Training Center Goes Into Operation
Xin Hua Ri Bao· 2025-06-30 00:14
Group 1
- The establishment of the Intelligent Robot Training Center in Suzhou marks Jiangsu Province's first facility dedicated to the training and application of embodied robots, providing essential support for their development and deployment [1]
- The training center spans approximately 1,500 square meters and is operated by a collaboration between Suzhou Wujiang District Big Data Company, Suzhou Bay Group, and Leju (Suzhou) Robot Technology Co., Ltd [1]
- The center features various functional areas simulating different industrial scenarios, including 3C and automotive factories, equipped with 30 data collection stations that can generate over 2 million data entries annually [2]

Group 2
- The training center is supported by a 5G-A intelligent application scenario incubation base, developed in partnership with China Mobile, Huawei, and Leju Robot, which enhances the training capabilities through high-speed, low-latency communication [2]
- The center collaborates with the National and Local Joint Humanoid Robot Innovation Center to establish a cooperative framework for standardization, data sharing, and platform co-creation, aiming to enhance the quality of humanoid robot data collection and research [2]
- Future developments in humanoid robots are expected to focus on companionship and care, with an initial emphasis on simpler scenarios to gradually evolve their capabilities [3]
[Early-Bird Tickets: 1 Day Left] CCRS2025 | Sneak Peek! Conference Agenda and Forums Revealed for the First Time!
机器人圈· 2025-06-29 13:04
Core Points
- The 6th China Robotics Academic Annual Conference (CCRS2025) will be held from August 1 to 3, 2025, in Changsha, Hunan Province, with the theme "Human-Machine Integration, Intelligent Future" [14][15]
- The conference aims to gather over 200 experts and academicians in the field of robotics and artificial intelligence to discuss trends and exchange technological achievements, expecting more than 3,000 attendees [14][15]

Conference Overview
- CCRS2025 is one of the largest and most influential academic events in China's robotics field, focusing on cutting-edge technologies, industry development, and innovative achievements [13][14]
- The conference is co-hosted by multiple professional committees and societies related to robotics and automation in China [14]

Agenda Highlights
- The conference will feature various forums, including the Main Forum, Youth Scholar Forums, and specialized forums on industrial robots, service robots, special robots, and more [8][9][11]
- Specific sessions will include keynote speeches, poster exhibitions, and discussions on embodied intelligence and large models [8][9]

Registration Information
- Registration fees are set at 2,300 RMB for non-students and 1,300 RMB for students if registered by June 30, 2025 [41]
- Payment methods include WeChat Pay and bank transfer, with specific instructions provided for registration [41][42]

Organizing Committee
- The conference is chaired by prominent professors from leading universities and research institutes, ensuring high academic standards and collaboration opportunities [15][18][20][22]
HKUST | End-to-End LiDAR-Based Omnidirectional Obstacle Avoidance for Quadruped Robots (Unitree G1/Go2 + PPO)
具身智能之心· 2025-06-29 09:51
Core Viewpoint
- The article discusses the Omni-Perception framework developed by a team from the Hong Kong University of Science and Technology, which enables quadruped robots to navigate complex dynamic environments by directly processing raw LiDAR point cloud data for omnidirectional obstacle avoidance [2][4]

Group 1: Omni-Perception Framework Overview
- The Omni-Perception framework consists of three main modules: the PD-RiskNet perception network, a high-fidelity LiDAR simulation tool, and a risk-aware reinforcement learning strategy [4]
- The system takes raw LiDAR point clouds as input, extracts environmental risk features using PD-RiskNet, and outputs joint control signals, forming a complete closed control loop [5]

Group 2: Advantages of the Framework
- Direct utilization of spatiotemporal information avoids the information loss incurred during point cloud to grid/map conversion, preserving precise geometric relationships from the original data [7]
- Dynamic adaptability is achieved through reinforcement learning, allowing the robot to optimize obstacle avoidance strategies for previously unseen obstacle shapes [7]
- Computational efficiency is improved by reducing intermediate processing steps compared to traditional SLAM and planning pipelines [7]

Group 3: PD-RiskNet Architecture
- PD-RiskNet employs a hierarchical risk perception network that processes near-field and far-field point clouds differently to capture local and global environmental features [8]
- Near-field processing uses farthest point sampling (FPS) to reduce data density while retaining key geometric features, and employs gated recurrent units (GRU) to capture local dynamic changes [8]
- Far-field processing uses average down-sampling to reduce noise and extract spatiotemporal features of the distant environment [8]

Group 4: Reinforcement Learning Strategy
- The obstacle avoidance task is modeled as an infinite-horizon discounted Markov decision process, with a state space that includes the robot's kinematic information and historical LiDAR point cloud sequences [10]
- The action space directly outputs target joint positions, allowing the policy to learn the mapping from raw sensor inputs to control signals without complex inverse kinematics [11]
- The reward function incorporates obstacle avoidance and distance-maximization rewards to encourage the robot to seek open paths, while penalizing deviations from target speeds [13][14]

Group 5: Simulation and Real-World Testing
- The framework was validated against real LiDAR data collected using the Unitree G1 robot, demonstrating high consistency in point cloud distribution and structural integrity between simulated and real data [21]
- The Omni-Perception simulation tool showed significant advantages in rendering efficiency, maintaining linear growth in rendering time as the number of environments increased, unlike traditional methods which exhibited exponential growth [22]
- In various tests, the framework achieved a 100% success rate in static obstacle scenarios and demonstrated superior performance in dynamic environments compared to traditional methods [26][27]
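To make the near-field/far-field split described above concrete, here is a minimal PyTorch sketch of a hierarchical LiDAR encoder in the spirit of PD-RiskNet, assuming only what the summary states (FPS downsampling for the near field, average pooling for the far field, GRUs over the frame history). All class and function names, layer sizes, and the 2 m near/far radius are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a PD-RiskNet-style hierarchical LiDAR encoder (illustrative only).
import torch
import torch.nn as nn


def farthest_point_sample(points: torch.Tensor, n_samples: int) -> torch.Tensor:
    """points: (N, 3). Greedily pick n_samples indices that spread over the cloud."""
    n = points.shape[0]
    idx = torch.zeros(n_samples, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    idx[0] = 0  # start from an arbitrary seed point
    for i in range(1, n_samples):
        d = torch.norm(points - points[idx[i - 1]], dim=1)
        dist = torch.minimum(dist, d)   # distance to the closest already-selected point
        idx[i] = torch.argmax(dist)     # pick the point farthest from the selected set
    return idx


class HierarchicalLidarEncoder(nn.Module):
    def __init__(self, n_near: int = 256, feat_dim: int = 128):
        super().__init__()
        self.n_near = n_near
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.near_gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.far_gru = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def encode_frame(self, cloud: torch.Tensor, near_radius: float = 2.0):
        """cloud: (N, 3) single LiDAR frame in the robot frame (assumes both subsets are non-empty)."""
        r = torch.norm(cloud[:, :2], dim=1)
        near, far = cloud[r < near_radius], cloud[r >= near_radius]
        # Near field: keep geometric detail with FPS before the shared point MLP.
        near = near[farthest_point_sample(near, min(self.n_near, len(near)))]
        near_feat = self.point_mlp(near).mean(dim=0)
        # Far field: cheap average pooling gives coarse global context.
        far_feat = self.point_mlp(far).mean(dim=0)
        return near_feat, far_feat

    def forward(self, clouds):
        """clouds: list of T point clouds -> (2 * feat_dim,) risk feature over the history."""
        near_seq, far_seq = zip(*(self.encode_frame(c) for c in clouds))
        _, h_near = self.near_gru(torch.stack(near_seq).unsqueeze(0))
        _, h_far = self.far_gru(torch.stack(far_seq).unsqueeze(0))
        return torch.cat([h_near[-1, 0], h_far[-1, 0]])


encoder = HierarchicalLidarEncoder()
frames = [torch.randn(4096, 3) for _ in range(5)]   # 5 frames of random stand-in "points"
risk_feature = encoder(frames)                      # shape: (256,)
```

In the full system described above, such a risk feature would be concatenated with the robot's proprioceptive state and consumed by the PPO policy that outputs target joint positions.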
X @Forbes
Forbes· 2025-06-29 05:30
Startup That Makes Robots For Cataract Surgery Raises $125 Million In Second-Largest Fundraise For A Surgical Robotics Firm https://t.co/PlCpFbwxhb ...
Published in IJRR! A Sun Yat-sen University Team Proposes the Koopman-ILC System for Data-Driven Modeling and Iterative Learning Control of Continuum Robots!
机器人大讲堂· 2025-06-29 03:53
Existing Koopman-operator-based methods struggle to compensate for uncertainty and disturbances, producing a gap between training and reality that leads to poor performance. Because continuum robots are inherently susceptible to disturbances, this gap can easily erode their task-space performance in real-world scenarios.

Robustness and generalization in continuum robot control

At the same time, the robot may need to operate in regions not covered during training, and the structural diversity of continuum robots further increases the complexity of control. Moreover, the convergence and robustness of existing Koopman-operator-based control methods have not been verified either theoretically or experimentally. Developing a data-driven control algorithm with markedly enhanced robustness, high computational efficiency, strong generalization, and rigorous theoretical analysis is therefore crucial for continuum robots.

Continuum robots have attracted growing attention in recent decades. Their compliance and flexibility give them important applications in medicine, industry, agriculture, aerospace, and many other fields. Realizing their full potential requires designing effective, efficient, and reliable control systems, a task that remains very challenging because of their structural complexity.

Traditional continuum robot control methods typically depend on an accurate physical model of the robot. However, owing to their flexible structure and irregular morphology, continuum robots are extremely difficult to model and are easily affected by the environment, so models struggle to capture the robot's actual dynamic behavior. Researchers have therefore turned to data-driven control methods, in particular ...
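For readers unfamiliar with how a Koopman operator is identified from data, the sketch below shows the standard EDMD-with-control recipe: lift the state with a fixed dictionary of observables, then fit a linear predictor z_{k+1} ≈ A z_k + B u_k by least squares. The dictionary, function names, and toy data are illustrative assumptions; this is generic background, not the paper's Koopman-ILC algorithm, which adds iterative learning control and robustness guarantees on top of such a model.

```python
# Minimal EDMD-with-control sketch (generic background, not the paper's method).
import numpy as np


def lift(x: np.ndarray) -> np.ndarray:
    """Illustrative lifting dictionary: state, elementwise squares, and sines."""
    return np.concatenate([x, x ** 2, np.sin(x)])


def fit_koopman(X: np.ndarray, U: np.ndarray, X_next: np.ndarray):
    """X, X_next: (T, n) state snapshots; U: (T, m) inputs. Returns (A, B)."""
    Z = np.array([lift(x) for x in X])            # (T, N) lifted states
    Z_next = np.array([lift(x) for x in X_next])  # (T, N)
    ZU = np.hstack([Z, U])                        # (T, N + m)
    # Least-squares fit of Z_next ≈ ZU @ G, so that [A B] = G^T.
    G, *_ = np.linalg.lstsq(ZU, Z_next, rcond=None)
    A, B = G[: Z.shape[1]].T, G[Z.shape[1]:].T
    return A, B


def predict(A, B, x0, inputs):
    """Roll the lifted linear model forward under a given input sequence."""
    z = lift(x0)
    traj = [z]
    for u in inputs:
        z = A @ z + B @ u
        traj.append(z)
    return np.array(traj)


# Example with made-up data: 500 random transitions of a 4-state, 2-input system.
rng = np.random.default_rng(0)
X, U = rng.normal(size=(500, 4)), rng.normal(size=(500, 2))
X_next = 0.9 * X + 0.1 * U @ rng.normal(size=(2, 4))   # toy "unknown" dynamics
A, B = fit_koopman(X, U, X_next)
print(predict(A, B, X[0], U[:10]).shape)                # (11, lifted_dim)
```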
Published in IJRR! Xiong Rong's Team at Zhejiang University's College of Control Science and Engineering Proposes an Actuator-Space Optimal Control Framework to Improve Continuum Robot Path-Tracking Accuracy!
机器人大讲堂· 2025-06-29 03:53
In recent years, continuum robots, with their high compliance, dexterous motion, and lightweight, miniaturized structures, have shown enormous application potential in fields such as medicine, industrial inspection, and human-robot interaction. However, achieving accurate path tracking remains a key technical challenge common to these applications.

At present, mainstream path-tracking methods rely on solving inverse kinematics: a mathematical model is used to compute the actuator trajectory corresponding to the desired end-effector path, and multiple solutions are sought to avoid collisions with environmental obstacles. Unlike rigid manipulators, however, continuum robots are usually described with piecewise-constant-curvature models, for which no closed-form inverse kinematics theory exists; traditional numerical methods depend on initial guesses and struggle to find multiple solutions. Some studies adopt model predictive control (MPC), adjusting the control strategy through online optimization, but this approach cannot guarantee global optimality.

Inspired by work on rigid manipulators, some researchers have proposed planning a globally optimal trajectory in actuator space and using it as a feedforward control signal to improve tracking accuracy. However, the strong nonlinearity of continuum robots makes the mappings among task space, configuration space, and actuator space far more complex. Numerical inverse kinematics algorithms usually provide only a single solution and are sensitive to initial values, which makes them ill-suited to global trajectory optimization.

Therefore, breaking through the limitations of inverse kinematics and developing efficient solution and planning methods that yield multiple solutions without depending on initial values is key to improving continuum robot path-tracking performance.

▍ Proposing an optimal path-tracking framework to achieve a breakthrough in motion control ...
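As background for the piecewise-constant-curvature (PCC) model mentioned above, the sketch below implements the standard single-segment PCC forward kinematics, where a segment of arc length L bends with curvature kappa in a plane at angle phi about the base z-axis. This is textbook material, not the paper's actuator-space optimal control framework, and the numeric segment parameters are made up.

```python
# Single-segment piecewise-constant-curvature forward kinematics (background sketch).
import numpy as np


def rot_z(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])


def rot_y(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])


def pcc_segment(kappa: float, phi: float, L: float) -> np.ndarray:
    """Base-to-tip homogeneous transform of one constant-curvature segment."""
    T = np.eye(4)
    if abs(kappa) < 1e-9:                    # (near-)straight segment: pure translation along z
        T[2, 3] = L
        return T
    theta = kappa * L                        # total bending angle of the circular arc
    p_plane = np.array([(1.0 - np.cos(theta)) / kappa, 0.0, np.sin(theta) / kappa])
    T[:3, :3] = rot_z(phi) @ rot_y(theta) @ rot_z(-phi)   # bend within the plane at angle phi
    T[:3, 3] = rot_z(phi) @ p_plane
    return T


# Chaining segments gives the whole arm; the tip pose is the product of the transforms.
segments = [(1.5, 0.3, 0.2), (2.0, -0.8, 0.2)]           # illustrative (kappa, phi, L) triples
tip = np.eye(4)
for kappa, phi, L in segments:
    tip = tip @ pcc_segment(kappa, phi, L)
print(np.round(tip[:3, 3], 4))                           # tip position in the base frame
```

The inverse problem, recovering segment parameters and actuator trajectories from a desired tip path, is exactly the step the summary reports as lacking closed-form, multi-solution theory.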
Latest Research Announced! How Will the 1X World Model Disrupt the Humanoid Robot Field This Time?
机器人大讲堂· 2025-06-29 03:53
In September 2024, 1X Technologies ("1X") released the world's first humanoid robot world model, the 1X World Model, which was also the first to demonstrate a scaling law for humanoid robots (capability scales up markedly with more humanoid-robot data). Recently, 1X announced several breakthroughs in the technical iteration and application scenarios of its world model, once again drawing the industry's attention.

As 具身智能大讲堂 understands it, the 1X World Model is a generative video model that simulates how the real world evolves under an agent's actions. Built on video-generation technology (Sora) and autonomous-driving world models (end-to-end autonomous driving, E2E AD), it takes an input image state and action commands and simulates the future scenes a robot would produce under different actions, predicting the effects of interaction between the robot and the objects it manipulates. This helps humanoid robots perform precise interactions and addresses the hard problem of evaluating embodied robots.

The latest 1X World Model breakthroughs center on three areas:

▍ Action controllability: from basic action response to precise simulation of complex physical scenes

In its first public showing of this ability, the 1X World Model can generate different outcomes from different action commands: by presenting generations conditioned on four different trajectories, each starting from the same initial frame, it clearly demonstrates the diversity of its generations.

On the core value of simulating interactions between objects ...
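As a purely illustrative aid (not 1X's model or code), the toy sketch below shows the interface such an action-conditioned world model exposes: predict the next frame from the current frame and an action command, then roll out different action trajectories from the same initial frame so their predicted outcomes can be compared. Every module, dimension, and name here is an assumption.

```python
# Toy stand-in for an action-conditioned world model; a real system would use a
# large video generation backbone rather than two small convolutions.
import torch
import torch.nn as nn


class TinyActionConditionedWorldModel(nn.Module):
    def __init__(self, action_dim: int = 8, channels: int = 3):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, 16)
        self.net = nn.Sequential(
            nn.Conv2d(channels + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        """frame: (B, C, H, W); action: (B, action_dim) -> predicted next frame (B, C, H, W)."""
        B, _, H, W = frame.shape
        a = self.action_proj(action).view(B, -1, 1, 1).expand(B, 16, H, W)
        return self.net(torch.cat([frame, a], dim=1))


def rollout(model: nn.Module, frame: torch.Tensor, actions) -> torch.Tensor:
    """Simulate a trajectory: feed each predicted frame back in with the next action."""
    frames = [frame]
    for a in actions:
        frames.append(model(frames[-1], a))
    return torch.stack(frames)


model = TinyActionConditionedWorldModel()
frame0 = torch.zeros(1, 3, 64, 64)                    # shared initial frame
actions = [torch.zeros(1, 8) for _ in range(4)]       # one illustrative action trajectory
video = rollout(model, frame0, actions)               # (5, 1, 3, 64, 64)
```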
Another Star Robotics Company With Series D Funding Is Heading for an IPO
投中网· 2025-06-29 03:07
Core Viewpoint
- The article discusses the surge of robotics companies, particularly focusing on Stand Robot's IPO ambitions and the broader trend of robotics firms seeking to go public in Hong Kong's specialized technology sector.

Group 1: Stand Robot's IPO Journey
- Stand Robot submitted its prospectus to the Hong Kong Stock Exchange on June 23, 2025, aiming to become the "first industrial embodied intelligence stock" [4]
- The company is currently the fifth-largest provider of industrial intelligent mobile robot solutions globally and the fourth-largest in industrial embodied intelligence robots by sales volume as of December 31, 2024 [4]
- Stand Robot's founder, Wang Yongkun, has a background in robotics and aims to use the company's SLAM technology to enhance production efficiency and reduce costs for enterprises [15][14]

Group 2: Industry Trends and Other IPOs
- Multiple robotics companies, including Woan Robot, XianGong Intelligent, and Yunji Technology, have also initiated IPO processes, indicating a growing trend in the industry [5][26]
- The market for humanoid robots and automation equipment is expected to reach a scale of 100,000 to 200,000 units, supporting the growth of domestic manufacturers [28]
- Stand Robot's revenue grew from 96.3 million yuan in 2022 to 162.2 million yuan in 2023, with a projected increase to 250.5 million yuan in 2024, reflecting a compound annual growth rate of 61.3% [30]

Group 3: Investment and Financing
- Stand Robot has completed four rounds of financing, reaching a valuation of 2.1 billion yuan [16]
- The company has attracted significant investment from notable firms, including Xiaomi and Bohua Capital, which is essential for meeting the requirements of the specialized technology listing [24][22]
- The robotics sector has seen a surge in investment, with companies like Yushun Technology completing substantial financing rounds, indicating robust interest in the industry [31]