机器人大讲堂
A Ten-Thousand-Yuan Robot on the Spring Festival Gala: How 松延动力 Turned the "Humanoid" into a Household Consumer Product
机器人大讲堂· 2026-02-18 04:01
This was the first time in Spring Festival Gala history that a humanoid robot appeared in a spoken-comedy sketch, and the first time a consumer-grade humanoid robot took the Gala stage. On New Year's Eve 2026, a third of the way into the sketch "Grandma's Favorite" (《奶奶的最爱》), the grandmother played by celebrated actress Cai Ming was bickering with her grandson played by Wang Tianfang, and standing beside her was the adorable "robot grandson" Xiaobumi (小布米), developed by 松延动力, along with its "older brothers". These robot grandsons could act cute, tell jokes, even do magic tricks and backflips: skill points off the charts, emotional value maxed out! Who could have imagined that the "seventh-dan judo" robot of 1996 would, thirty years later, become a grandmother tenderly fussing over her grandchildren, while robots themselves crossed from science fiction into reality, even reaching the ten-thousand-yuan price tier. Unlike most robot acts, this time the humanoid robots did not dance; they were part of the plot, with lines, interaction, and emotional reactions. What the audience saw was not a staged demo but four types of robots, including Xiaobumi, the N2 "Little Rascal", the E1 long-jump champion, and custom bionic humanoids, carrying off a continuous performance alongside professional actors in a cramped 12-square-meter space. In this vision of the future, robots entering homes and everyday life no longer feels distant, and the 9,998-yuan price tag turns that vision from "the future" into "a future within reach". 01. Why did the Gala choose Xiaobumi? In 2026, the humanoid robot industry stands at a delicate technological fork. For the past five years the field has been immersed in a "spec race", with everyone ...
Medtronic Strikes on Two Fronts: The Surgical-Robot Battlefield Enters "System-Level" Competition
机器人大讲堂· 2026-02-18 04:01
In February, global medtech giant Medtronic dropped two "bombshells" in surgical robotics. Within a single week, the company announced FDA clearance for its Stealth AXiS spine robotics platform and the first commercial cases in the United States for its Hugo RAS soft-tissue robotic system. This concentrated rollout not only demonstrates Medtronic's capacity for technology integration; it also signals a shift in the competitive dimension of the surgical-robot industry, from the early contest over single-device feature specs toward a broader contest of system integration and closed-loop data capability. 01. The soft-tissue front: Hugo's commercial breakthrough and differentiated attack. On one front, Stealth AXiS digs in technically in the orthopedic red ocean; on the other, Hugo mounts a frontal assault on the incumbent leader, Intuitive Surgical. With this "two-front advance", Medtronic is redrawing the competitive map of surgical robotics. On February 17, Cleveland Clinic completed the first Hugo robot-assisted prostatectomy in the United States, with the patient discharged the next day. The benchmark significance of this case: it proves that Hugo can be deployed at top-tier medical institutions, and that it carries Medtronic's interpretation of personalized medicine. The spine front: evolving from "static positioning" to "dynamic sensing". The Stealth AXiS system cleared on February 13 is no tentative first step for Medtronic in spine robotics. As early as 2018, through its acquisition of Mazor Robotics and the launch of the Mazor X Stealth E ...
The Raybot Smartworks RB-A850-10 Robotic Arm: Powering Embodied Intelligence, Refining a Precision Future
机器人大讲堂· 2026-02-17 15:00
In an era of flourishing embodied-intelligence research, we are witnessing a robotics revolution driven by large AI models: intelligence requirements are "pushing" robotic arms, to unprecedented standards, toward higher sensing precision, stronger motion-control capability, and more human-like manipulation. At the research frontier, multimodal perception and physical interaction place stringent demands on an arm's repeatability and environmental adaptability; in agent behavior learning, dexterous and precise motion, real-time interactive response, and safe, reliable operation have become decisive for the commercial deployment of embodied intelligence. Together, these rising requirements form the technical threshold that robot modules must clear, and they point clearly to the next generation of robotic arms: more precise, more intelligent, more open. Against this backdrop, Suzhou 睿柏智悦 Technology Co., Ltd. (Raybot Smartworks) has launched the RB-A850-10, a high-payload harmonic-drive robotic arm. The product is built for humanoid and wheeled robots preparing for industrial deployment, and is positioned as an open robotic-arm platform offering high precision, high payload, and fast response. Behind its performance stand five core strengths. Extreme precision, a new benchmark for physical interaction: the RB-A850-10 unlocks performance through precision engineering. Its 850 mm working radius covers standard human-scale workspaces, keeping complex workflows running smoothly; its end-effector repeatability reaches ±0.05 mm, roughly half the diameter of a human hair, ensuring that physical interaction ...
After the Gala Robots Went Viral, a Sober Reflection: Getting Real Work Done Is What Counts
机器人大讲堂· 2026-02-17 14:02
On the stage of the 2026 Year-of-the-Horse Spring Festival Gala, a special "performer" unexpectedly broke out and became a talking point for viewers nationwide. That performer was 银河通用's humanoid robot "Xiao Gai" (Galbot G1). In the New Year short film "My Most Unforgettable Tonight" (《我最难忘的今宵》), it offered no flashy stunt routines; instead, with finely detailed, everyday movements, it played authentic, natural scenes opposite Shen Teng and Ma Li: rolling walnuts in its hand, picking up glass shards, folding clothes, and even deftly skewering sausages as the plot demanded. This was the first time embodied-intelligence technology appeared on the national stage in the form of live, in-scene interaction. When the millennia-old tradition of staying up for New Year's Eve met a "model steel worker" stepping from the industrial line into the spotlight, it was more than a crossover of technology and art; it was a national-level preview of China's embodied-intelligence industry shifting profoundly from "technical showboating" to "getting real work done". 01. An actor that refuses rote memorization. In the past, robots at the Gala mostly relied on synchronized, pre-programmed dances. This time, "Xiao Gai's" script was entirely different. The seemingly simple walnut-rolling tests a dexterous hand's real-time torque control on irregular objects; picking up glass shards pushes an embodied large model's perception of transparent, low-resolution objects to its limits; and folding clothes, a deformable-object manipulation task, verifies the robot's generalization to unstructured objects. Short film "My Most Unforgettable Tonight", image source: CMG "2026 Spring Festival Gala" (likewise below). "Compared with traditional dancing robots ...
When Shen Teng Meets a "Robot": Inside the CCTV Gala Debut of 银河通用's Embodied-Intelligence Technology
机器人大讲堂· 2026-02-17 09:22
At the 2026 Year-of-the-Horse CCTV Spring Festival Gala, with hundreds of millions of viewers worldwide watching the CCTV studio, amid rounds of exuberant song and dance and lilting opera, a convention-breaking New Year short film, "My Most Unforgettable Tonight", stunned the audience and became one of the night's most talked-about works. Led by the beloved comedy duo Shen Teng and Ma Li, the piece stepped outside the traditional sketch format the pair have honed for years, fusing comic tension with a warm emotional core through fine-grained camera language and a playful show-within-a-show narrative. On screen, Shen Teng, wearing his trademark easygoing air, seeks help far and wide to realize a performance dream of his own and find a partner, stumbling into a string of gags along the way. The film has no lavish stage sets; instead, with everyday settings and down-to-earth dialogue, it quietly weaves in the themes of "chasing dreams" and "technology", letting the audience feel the distinct charm of this new program format amid the laughter, while planting the seeds for the technological surprise to come. More milestone-worthy than the format itself, the film featured a special "guest performer": 银河通用's humanoid robot "Xiao Gai". This was the first time embodied-intelligence technology appeared on the CCTV Spring Festival Gala stage in live, in-scene interaction, performing in real settings alongside Shen Teng and Ma Li, a key step for Chinese embodied intelligence moving from the laboratory into public view. Unlike the largely pre-programmed performances of traditional robots, "Xiao Gai's" series of operations in the film dazzled the audience: from delicately rolling walnuts, carefully picking up glass shards, and nimbly retrieving items from a shelf, to ...
Martial Arts Meets Spring: Unitree's Gala Robots Show Off "Real Cyber Kung Fu" in the Year of the Horse
机器人大讲堂· 2026-02-17 09:21
A new spring begins and all things renew. On the stage of CMG's 2026 Spring Festival Gala, Unitree (宇树科技), appearing for the third time as the Gala's robot partner, presented the world's first fully autonomous humanoid-robot swarm martial-arts performance (with rapid swarm repositioning), featuring its G1 and H2 humanoid robots. 01. Pushing the limits of athletic performance, setting multiple world firsts. In this program, Unitree's humanoid robots displayed unprecedented athletic capability, achieving several global firsts: the first continuous freestyle table-vault parkour; the first catapult-assisted aerial flip, with a maximum flip height above 3 meters; the first consecutive single-leg flips and two-step wall-kick backflips; and the first Airflare with seven-and-a-half rotations, among other high-difficulty moves. Unitree, together with Henan's Tagou Martial Arts School, presented the martial-arts program "Wu BOT" (《武BOT》); image source: CMG "2026 Spring Festival Gala". At the swarm-coordination level, the robots achieved the first rapid swarm repositioning (peak arbitrary-repositioning speed of 4 m/s), and carried newly self-developed dexterous hands supporting quick swapping and stable gripping of martial-arts props, providing reliable support for the demanding choreography. This string of breakthroughs not only pushes the technical boundary of humanoid-robot athletic performance; it also marks a step change in explosiveness, agility, coordination, and reliability, laying a solid foundation for future applications in complex scenarios. 02. A high-concurrency swarm control system with a self-developed AI-fused localization algorithm. The performance used a newly upgraded high-concurrency swarm control ...
Goodbye to Robot "Blackouts"! KAIST and UC Berkeley Team Gives VLA Models Memory, Doubling Success Rates in Tests!
机器人大讲堂· 2026-02-16 15:31
Core Insights
- The article discusses the limitations of existing Vision-Language-Action (VLA) models in robotics, particularly their lack of "historical memory," which hampers their ability to perform complex tasks that require context [1][4]
- A new framework called HAMLET has been introduced, which enhances VLA models by integrating a lightweight memory system, resulting in a significant increase in task success rates [3][17]

Group 1: Current Limitations of VLA Models
- Current VLA models, such as GR00T N1.5 and CogACT, rely solely on the current visual frame and text instructions, leading to poor performance in tasks requiring context [4]
- For example, in a task where a robot must cover a block with a cup, the lack of historical memory leaves GR00T N1.5 with a success rate of only 37.5%, causing the robot to repeat actions unnecessarily [4][14]
- Simply stacking historical frames is not an effective fix, as it slows inference by 35% and increases peak memory usage by 3.6 times [4]

Group 2: The HAMLET Framework
- HAMLET closes the historical-memory gap by adding two core components: moment tokens and a lightweight memory module [5][9]
- Moment tokens compress and store scene information at each time step, allowing the model to focus on the dynamic changes relevant to the task [6][8]
- The memory module uses a two-layer Transformer architecture to filter and integrate these moment tokens, enabling the model to make decisions informed by historical context [9][11]

Group 3: Performance Improvements
- Extensive experiments show that HAMLET significantly improves success rates on long-horizon tasks, with an average success-rate gain of 47.2% over baseline models [12][14]
- On specific tasks, HAMLET raised the success rate from 12.5% to 66.7% on "Pick-and-Place Twice" and from 37.5% to 83.3% on "Swap Cubes" [14]
- HAMLET remains efficient, adding only about 7% inference-time overhead and leaving peak memory usage essentially unchanged (about 1x), in contrast to frame stacking, which drastically degrades performance [15]

Group 4: Cross-Task Transferability
- HAMLET's memory module transfers across tasks, improving success rates even when applied to different datasets, indicating a generalizable capability for processing historical information [16]

Conclusion
- HAMLET resolves the core issue of historical memory in VLA models without extensive retraining or architectural overhaul, marking a significant step toward more capable and versatile robotic systems [17]
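As a rough illustration of the moment-token idea described above, here is a minimal NumPy sketch of the data flow. This is not the paper's implementation: the real components are learned Transformer layers, while the weights, dimensions, and single-attention read-out below (standing in for the two-layer memory module) are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8       # feature dimension of one frame's visual features (illustrative)
N_TOK = 2   # moment tokens stored per time step (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_frame(frame_feats, queries):
    """Compress a (patches, D) frame into N_TOK moment tokens:
    fixed query vectors attend over the frame's patch features."""
    attn = softmax(queries @ frame_feats.T / np.sqrt(D))  # (N_TOK, patches)
    return attn @ frame_feats                             # (N_TOK, D)

def memory_readout(moment_tokens, query):
    """Attend over the stored history of moment tokens and return a
    context vector for the policy to condition on."""
    hist = np.concatenate(moment_tokens, axis=0)          # (T*N_TOK, D)
    attn = softmax(query @ hist.T / np.sqrt(D))           # (1, T*N_TOK)
    return attn @ hist                                    # (1, D)

queries = rng.normal(size=(N_TOK, D))
memory = []
for t in range(5):                        # five observed time steps
    frame = rng.normal(size=(16, D))      # 16 patch features per frame
    memory.append(compress_frame(frame, queries))

# The policy would condition on the current frame plus this context.
current_query = rng.normal(size=(1, D))
context = memory_readout(memory, current_query)
print(context.shape)  # (1, 8)
```

Note how the memory grows by only N_TOK small vectors per step, which is the intuition behind HAMLET's reported low overhead compared with stacking full frames.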
Harvard Team Publishes a "72-Transformations" Soft Robotic Hand in a Top Journal, Proposing a New Rotational Multi-Material 3D Printing Method
机器人大讲堂· 2026-02-15 09:09
Core Viewpoint
- The article discusses a breakthrough in soft-robotics manufacturing: a new technique called Rotational Multi-Material 3D Printing (RM-3DP), which allows the creation of complex, programmable soft robots with capabilities akin to "72 transformations" [1][2][5]

Group 1: Manufacturing Technique
- RM-3DP enables the direct printing of intricate pneumatic networks within soft robots, akin to implanting programmable "blood vessels" and "muscles" [2]
- The technique utilizes a specially designed 3D-printing nozzle and two types of inks, including a temperature-sensitive gel that supports structures during printing and can be washed away post-manufacturing to create hollow pneumatic channels [6][7][8]
- The internal channels are designed asymmetrically, allowing for controlled bending and movement when inflated, enhancing the robot's functionality [10][11]

Group 2: Programming Capabilities
- RM-3DP allows dynamic programming of the internal channel's direction, shape, and size during the printing process, enabling unprecedented complexity in soft-robot designs [12]
- Various complex deformation patterns can be achieved, such as periodic bending and spiral twisting, demonstrating the method's predictability and reliability [13][15]
- The ability to create localized hinges and patterned actuators further showcases the versatility of RM-3DP in developing soft structures [16][17]

Group 3: Application Demonstrations
- The research team successfully printed a soft robotic hand capable of independent finger movements, showcasing the practical application of RM-3DP [23]
- The hand was demonstrated performing a grasping action, highlighting its potential for real-world applications in robotics [25]
- The integration of algorithmic path planning with RM-3DP allows complex biomimetic designs to be rapidly turned into functional robots, paving the way for future advances in soft robotics [25]
Xiaopeng IRON vs. Tesla Optimus: Where Exactly Is the Gap?
机器人大讲堂· 2026-02-15 09:09
Core Viewpoint
- The competition between Xiaopeng and Tesla in the humanoid robot industry represents a clash of two technological philosophies and business logics, with Xiaopeng focusing on technology reuse and Tesla emphasizing pure self-research and heavy investment [1]

Hardware Architecture
- The core competitiveness of humanoid robots lies in how well the hardware architecture fits real-world scenarios; Xiaopeng's IRON offers 82 degrees of freedom versus 50 for Tesla's Optimus Gen2, excelling particularly in high-precision industrial tasks [3][5]
- Xiaopeng's design breaks away from traditional robotic aesthetics, employing a General-Purpose Humanoid Design Framework for a harmonious, efficient, human-like structure [5][7]
- The IRON robot features a solid-state battery with energy density above 500 Wh/kg, enabling all-day operation, while Optimus relies on a 2.3 kWh lithium battery with limited runtime [10]

Technical Route
- Xiaopeng's "same source as the car" strategy delivers notable R&D efficiency, reusing existing automotive technologies for its robot business and shortening the path from R&D to mass production [12][14]
- Tesla's approach involves developing everything from scratch, which lengthens the R&D cycle and lacks the cross-domain synergy seen in Xiaopeng's model [14][16]

Ecological Synergy
- Xiaopeng has built a physical-AI ecosystem that integrates smart cars, humanoid robots, and flying cars, creating a cost advantage and enhancing R&D efficiency through shared data and resources [16][17]
- Tesla's ecosystem remains confined to the automotive sector, limiting its ability to leverage cross-category technological synergies [17][19]

Commercial Implementation
- Xiaopeng aims for large-scale production by the end of 2026, having completed the necessary preparatory work, while Tesla's production timeline has slipped repeatedly, with no clear date for large-scale delivery [20][22]
- Xiaopeng's pricing strategy targets a range of 200,000 to 300,000 yuan, facilitating rapid penetration of industrial and commercial markets, whereas Tesla's pricing may exceed expectations given its historical pricing strategy [22][24]

Long-Term Competition
- Competition in the humanoid robot sector will ultimately hinge on building ecosystem moats and on the compounding effects of technological iteration, where Xiaopeng's integrated approach provides a significant advantage [24][25]
- Xiaopeng's partnerships and open SDK for industrial applications contrast with Tesla's more insular approach, which may limit its commercial reach [25][27]
Nature Reviews Bioengineering | Hongliang Ren's Team at The Chinese University of Hong Kong Proposes an Artificial Kinaesthesia Framework to Break Reliance on Vision
机器人大讲堂· 2026-02-14 09:25
Core Viewpoint
- The article discusses an artificial kinaesthesia framework developed by a research team at The Chinese University of Hong Kong, aimed at overcoming the limitations of current surgical robots that rely heavily on visual data, thereby enhancing their tactile perception and adaptability in complex surgical environments [4][12]

Group 1: Artificial Kinaesthesia Framework
- The proposed framework consists of three layers: physical perception, algorithm interpretation, and collaborative control, enabling robots not only to "see" but also to "feel" and "understand" the physical interactions of surgery [5][11]
- The physical perception layer focuses on equipping surgical instruments with sensory capabilities, integrating proprioception and exteroception to replicate human-like tactile feedback [8][9]
- The algorithm interpretation layer gives semantic meaning to the sensory data, allowing robots to process feedback in two tiers much as human surgeons do, distinguishing reflexive adjustment from cognitive decision-making [10]

Group 2: Challenges and Solutions
- The collaborative control layer seeks to create a closed-loop system that integrates physical perception and algorithm interpretation, enabling robots to execute tasks with greater flexibility and precision [11]
- The article emphasizes the need for a multi-modal model that combines visual, tactile, and linguistic information to enhance the robot's situational awareness and operational adaptability [11]

Group 3: Future Implications
- The artificial kinaesthesia framework signals a shift from vision-dependent surgical robots toward intelligent partners capable of multi-sensory collaboration, ultimately enabling safer and more precise minimally invasive treatment for patients [12][13]
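As a toy illustration of the two-tier feedback processing described above (reflexive adjustment vs. cognitive decision-making), the sketch below runs a fast reflex check on every control tick while a slower deliberative layer interprets the force trend. This is not the paper's method; the force values, threshold, and gains are entirely hypothetical.

```python
import numpy as np

FORCE_LIMIT = 2.0  # N, reflex threshold (hypothetical)

def reflex_layer(force, velocity):
    """Fast tier: back off immediately if contact force spikes,
    analogous to a surgeon's reflexive hand adjustment."""
    if force > FORCE_LIMIT:
        return -0.5 * velocity  # retract against the current motion
    return velocity

def deliberative_layer(force_history):
    """Slow tier: interpret the recent force trend semantically,
    e.g. decide whether sustained resistance means 'hold position'."""
    trend = np.mean(force_history[-5:])
    return "hold" if trend > 0.7 * FORCE_LIMIT else "advance"

# Simulated contact forces rising toward a spike (hypothetical data).
forces = [0.2, 0.5, 1.1, 1.7, 1.9, 2.4]
velocity, plan = 1.0, "advance"
for i, f in enumerate(forces):
    velocity = reflex_layer(f, 1.0)             # every control tick
    plan = deliberative_layer(forces[:i + 1])   # slower, trend-based
print(plan, velocity)  # hold -0.5
```

The point of the split is that the reflex tier reacts within one tick to the 2.4 N spike, while the deliberative tier needs several samples of sustained resistance before changing the plan, mirroring the reflexive/cognitive distinction the framework draws.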