机器人大讲堂
Leading the Deployment of "Embodied Industry" Scenarios with Multi-Form Robots: Why Does Dobot (越疆) Keep Evolving?
机器人大讲堂· 2025-09-23 13:24
Core Viewpoint
- The article emphasizes the evolution of robotics toward "embodied intelligence," highlighting the transition from single-function robots to multi-functional, high-precision collaborative robots that can adapt to various scenarios and collaborate across devices [1][3][5].

Group 1: Product Development and Innovation
- At the 2025 China International Industry Fair, the company showcased a diverse range of robots, including humanoid robots and multi-legged robots, demonstrating a comprehensive product matrix that supports efficient collaboration and autonomous operations [3][5].
- The "embodied intelligence" concept is central to the company's strategy, leading to the development of a multi-modal robot platform that integrates various robotic forms for enhanced operational capabilities [5][7].
- The robots operate under a "distributed perception - centralized decision - dynamic execution" model, allowing real-time task planning and execution across different robot types [7][9].

Group 2: Technological Advancements
- The company claims a global first in autonomous collaborative operations among multi-form robots, covering essential factory processes such as material sorting and precision assembly [9][11].
- A consistent underlying technology architecture across all products enables significant technology reuse and capability extension, particularly in force control, motion planning, and visual perception [13][15].
- The integration of 2.5D/3D vision with tactile sensing has enhanced the robots' precision and adaptability, allowing them to perform complex tasks in varied environments [17][19].

Group 3: Market Position and Future Outlook
- The company has established itself as a leader in the collaborative robot sector, with over 90,000 units deployed globally, serving more than 80 Fortune 500 companies [30].
- With the continuous evolution of embodied intelligence technology, the company is positioned to become a significant player in the global robotics field, driving the transition from "Made in China" to "Intelligent Manufacturing in China" [30][28].
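The "distributed perception - centralized decision - dynamic execution" model described above can be sketched as a minimal control loop: each robot reports local observations, a central planner fuses them into task assignments, and each robot executes its task. All class names and task labels are hypothetical illustrations; the article does not disclose the company's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    robot_id: str
    detected_items: list  # what this robot's local sensors currently see

class CentralPlanner:
    """Centralized decision: fuse distributed observations into tasks."""
    def plan(self, observations):
        tasks = {}
        for obs in observations:
            if "unsorted_part" in obs.detected_items:
                tasks[obs.robot_id] = "sort_material"
            elif "assembly_slot" in obs.detected_items:
                tasks[obs.robot_id] = "precision_assembly"
            else:
                tasks[obs.robot_id] = "idle"
        return tasks

def control_cycle(planner, observations):
    """One perception -> decision -> execution cycle (execution stubbed)."""
    assignments = planner.plan(observations)
    return {rid: f"executing:{task}" for rid, task in assignments.items()}

obs = [Observation("arm_1", ["unsorted_part"]),
       Observation("humanoid_1", ["assembly_slot"])]
print(control_cycle(CentralPlanner(), obs))
```

In a real deployment the planner would replan every cycle as observations change, which is what enables the dynamic cross-device task allocation the article describes.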
News Flash | Unitree Robot Shows Its Strength in a "Mobbing" Test; Key Figures Behind Optimus AI Depart in Succession; Zhiyuan Robotics Fully Open-Sources Its GO-1 General Embodied Foundation Model
机器人大讲堂· 2025-09-23 13:24
Group 1
- Unitree (Yushu Technology)'s humanoid robot G1 demonstrated strong resilience and recovery capabilities during a recent test, withstanding various attacks and performing complex movements such as flips, highlighting the company's advanced robotics technology [3][5]

Group 2
- Key members of Tesla's Optimus AI team, including Ashish Kumar and Milan Kovac, have left the company, raising concerns about the timeline for the planned 2025 small-scale production and weighing on Tesla's market valuation [4][6]

Group 3
- Zhiyuan Robotics has officially open-sourced its GO-1 model, the first universal embodied intelligence model built on the ViLLA architecture, which aims to lower entry barriers for developers in the embodied intelligence field and stimulate innovation [8][9]

Group 4
- Shenzhen Sailian Robotics Co., Ltd. was established with a registered capital of 10 million yuan, focusing on the development and sales of industrial robots and signaling a strategic partnership between Haichen Co. and Leju Robotics [11][13]

Group 5
- Shenzhen Shengqiao Robotics Co., Ltd. was founded with a registered capital of 30 million yuan, aiming to integrate robotics and AI, reflecting Shengshi Technology's intent to explore new opportunities in the robotics and AI sectors [14][17]
Controlling a Humanoid Robot with a Massage Chair? The Muscle-Sensor Route
机器人大讲堂· 2025-09-23 13:24
Core Insights
- H2L has launched a groundbreaking remote-operation device, a capsule interface that allows users to control humanoid robots through muscle movements, marking a significant milestone for industries requiring remote presence without sacrificing physical precision [1][3]
- The capsule interface is priced at 30 million yen (approximately 1.4 million yuan), targeting specialized markets rather than the mass market [1]

Redefining Remote Interaction
- Unlike traditional remote-control devices that rely on motion sensors, H2L's technology captures subtle changes in muscle tension, enabling a more immersive experience without complex training or equipment [3][5]
- The system maps muscle activity onto humanoid robots in real time, enhancing the realism of remote collaboration and interaction [5][6]

Practical Applications Across Various Fields
- H2L envisions a wide range of applications, including remote handling of goods by delivery personnel, performing dangerous tasks in disaster zones, and assisting elderly family members from home [8]
- The technology can also benefit agriculture by allowing farmers to operate agricultural robots remotely, addressing labor shortages in rural areas [8]
- Future plans include integrating proprioceptive feedback to enhance the realism of the user experience and expand the scope of human-robot interaction [8]
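The real-time mapping from muscle tension to robot motion described above can be sketched in its simplest standard form: rectify the raw muscle signal, low-pass filter it into a tension envelope, and map the envelope linearly onto a joint angle. The filter constant and the linear mapping below are illustrative assumptions, not H2L's actual design.

```python
import numpy as np

def muscle_envelope(signal, alpha=0.1):
    """Rectify the raw signal and smooth it with a first-order low-pass
    filter to obtain a muscle-tension envelope."""
    env = np.zeros(len(signal))
    level = 0.0
    for i, sample in enumerate(np.abs(signal)):
        level = (1 - alpha) * level + alpha * sample
        env[i] = level
    return env

def envelope_to_joint_angle(env, min_deg=0.0, max_deg=90.0):
    """Map normalized tension linearly onto a joint-angle range (degrees)."""
    e = np.clip(env / (env.max() + 1e-9), 0.0, 1.0)
    return min_deg + e * (max_deg - min_deg)

raw = np.sin(np.linspace(0, 6 * np.pi, 200))  # stand-in for a muscle trace
angles = envelope_to_joint_angle(muscle_envelope(raw))
print(angles.min(), angles.max())
```

Streaming this computation sample-by-sample, rather than over a recorded trace, is what gives the operator the immediate feedback the article emphasizes.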
Cross-Embodiment Learning Is Here! How Can a Wheeled Robot's "Experience" Be Easily Transferred to a Bipedal Robot?
机器人大讲堂· 2025-09-23 13:24
Core Insights
- The article discusses rapid advances in humanoid robot technology, focusing on Vision-Language-Action (VLA) model systems that can perform various household tasks with high reliability and generalization. A significant bottleneck remains, however: the lack of high-quality, comprehensive demonstration data for bipedal robots [1][20].

Group 1: TrajBooster Framework
- The TrajBooster framework, proposed by research teams from Zhejiang University and Westlake University, addresses data scarcity by combining rich operational data from wheeled robots with trajectory-retargeting technology to improve the action-learning efficiency of bipedal humanoid robots [1][20].
- Its core idea is to use the 6D end-effector trajectory (3D position + 3D rotation) as a universal interface, allowing "cross-embodiment" teaching regardless of robot morphology [2][4].

Group 2: Process Overview
- The pipeline involves three main stages:
  1. Source-data extraction from large wheeled-robot datasets, including language instructions, multi-view visual observations, and the corresponding 6D end-effector trajectories [4].
  2. Trajectory retargeting in a simulated environment, teaching the target bipedal robot how to coordinate its joints to follow these trajectories [4][5].
  3. Model training and fine-tuning with minimal real data from the target robot, so the model can be deployed effectively in real-world scenarios [4][9].

Group 3: Model Architecture
- The architecture is a hierarchical control model that decomposes the complex problem into manageable sub-problems: an upper layer uses inverse kinematics (IK) to control the arms, while a lower layer runs a hierarchical reinforcement learning (RL) policy to manage the legs and balance [5][8].
- The manager policy acts as a "decision brain," determining how the robot should move to reach the target position, while the worker policy translates these commands into specific joint actions [8].

Group 4: Training Phases
- Training has two phases: Post-Pre-Training (PPT) and Post-Training (PT). PPT combines retargeted action data with source data into a new dataset for further pre-training the VLA model, allowing it to learn the action space of the target robot [9][10].
- The PT phase collects only 10 minutes of real teleoperation data to fine-tune the model, bridging the gap between simulation and reality and significantly reducing data-collection costs [11].

Group 5: Experimental Results
- Experiments on the Unitree G1 bipedal robot showed that the PPT-trained model outperformed models trained solely on real data, achieving significant performance gains on tasks such as "grabbing Mickey Mouse" and "organizing toys" [12][15].
- The model demonstrated zero-shot skill transfer, successfully completing tasks not seen during training, indicating effective skill inheritance through trajectory transfer [15][16].
- The model also showed enhanced trajectory generalization, achieving an 80% success rate on novel object placements versus 0% for models not using PPT, demonstrating a deeper understanding of the action space [16].
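The retargeting stage above treats the end-effector path as a morphology-agnostic interface: the target robot only has to solve inverse kinematics to follow it. The toy below illustrates that idea with a 2-link planar arm and position-only, damped-least-squares IK; this is a standard textbook method standing in for the paper's actual whole-body controller, and all dimensions are illustrative.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of the hypothetical target arm

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def retarget(ee_traj, q0, damping=1e-2, iters=50):
    """Track each waypoint of a source end-effector trajectory with
    damped-least-squares IK, yielding a joint trajectory for the target."""
    q, joint_traj = np.array(q0, dtype=float), []
    for target in ee_traj:
        for _ in range(iters):
            err = target - fk(q)
            J = jacobian(q)
            # Damped least squares: dq = J^T (J J^T + lambda I)^-1 err
            q = q + J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
        joint_traj.append(q.copy())
    return np.array(joint_traj)

# Source trajectory (e.g. recorded from a wheeled manipulator): a short arc.
ts = np.linspace(0, 1, 10)
ee_traj = np.stack([1.2 + 0.3 * ts, 0.5 + 0.4 * ts], axis=1)
joints = retarget(ee_traj, q0=[0.3, 0.8])
print(np.linalg.norm(fk(joints[-1]) - ee_traj[-1]))  # final tracking error
```

The joint trajectories produced this way, paired with the source language instructions and observations, are what the PPT stage mixes into the pre-training dataset.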
Fourier's Exhibits at the 2025 Shanghai Industry Fair: Understanding the Key to Industrial Deployment
机器人大讲堂· 2025-09-23 13:01
On September 23, the 25th China International Industry Fair officially opened at the National Exhibition and Convention Center in Shanghai. Under the theme "New Industrial Quality, Boundless Intelligent Manufacturing," the fair covers 300,000 square meters of exhibition space and has attracted 3,000 exhibitors from 28 countries and regions around the world. As one of the representative companies in intelligent robotics, Fourier unveiled the GR-3C "Astronaut" from its third-generation humanoid robot GR-3 series for the first time and presented industrial application solutions built on its GRx-series humanoid robots, showcasing a new operating model for the factory of the future and drawing keen attention from industry visitors and partners.

▍ Debut of the GR-3C "Astronaut": The Third-Generation Humanoid Product Matrix Takes Shape

According to 机器人大讲堂, the GR-3C "Astronaut" making its debut at the fair stands 165 cm tall, weighs 71 kg, and offers up to 55 degrees of freedom across the body. In appearance, the model features a sci-fi white finish and a minimalist round head design, giving it an astronaut-like overall look. Its shell combines aluminum alloy treated with reinforcement processes and engineering plastics, achieving light weight while preserving structural strength, with good pressure resistance and durability and easy maintenance. For perception and interaction, the GR-3C "Astronaut" carries multiple features: its head houses a four-microphone array supporting omnidirectional sound pickup and echo cancellation, enabling directional source enhancement and sound-source localization during interaction; ...
Driving Industrial Ecosystem Development with Embodied Foundation Models: Self-Variable Robotics (自变量机器人) Unlocks Real Deployment Demand
机器人大讲堂· 2025-09-23 01:22
Core Viewpoint
- Hefei is strategically developing a comprehensive ecosystem for embodied intelligence, transitioning from a fragmented innovation model to a collaborative cluster approach, focusing on full-chain layout, element synergy, and scenario implementation [1][11].

Group 1: Industry Development
- Hefei's approach emphasizes "scenario innovation" to bridge the gap between technology and application, integrating embodied intelligence into real-world production and daily-life scenarios [1][11].
- The city aims to cultivate embodied robots as effective productivity tools, highlighting the need for a "smart brain" that understands the physical world and "chain-master enterprises" that can integrate the industry chain [1][11].

Group 2: Strategic Collaborations
- On September 20, a strategic cooperation agreement was signed between the International Advanced Technology Application Promotion Center (Hefei), the Hefei High-tech Zone Management Committee, and other entities to promote the development of the drone industry and embodied intelligence [2].
- Self-variable Robotics, a pioneer in end-to-end general embodied intelligence models, is collaborating with Hefei's industrial chain to create impactful robot products and meet local application demand worth hundreds of millions of yuan [5][12].

Group 3: Technological Innovations
- The "WALL-A" model developed by Self-variable Robotics demonstrates strong generalization capabilities, enabling robots to understand and solve physical-world problems autonomously [5][8].
- The newly released open-source model "WALL-OSS" aims to lower industry entry barriers and accelerate the large-scale application of embodied intelligence by attracting more enterprises and developers [6][10].

Group 4: Application and Commercialization
- Hefei's established industrial ecosystem provides a proving ground for embodied-intelligence robots, facilitating the transition from technology concepts to market products through real-world scenario validation [11][12].
- The collaboration between Self-variable Robotics and Hefei will focus on various application scenarios, including public services, manufacturing, and logistics, to create replicable and valuable robot application demonstration systems [13][14].

Group 5: Education and Research
- The partnership will also involve building industry-academia-research platforms with universities to foster cutting-edge embodied-intelligence technology development and promote deep integration of research and industry [14].
Kaixuan Intelligent's P0-Grade Planetary Roller Screw and Smart Linear Actuator Make a Stunning Debut
机器人大讲堂· 2025-09-22 10:59
With the countdown to the opening of the 25th China International Industry Fair (the "CIIF") underway, 机器人大讲堂 traveled to Wuzhong District, Suzhou, to visit Jiangsu Kaixuan Intelligent Technology Co., Ltd. (江苏开璇智能科技有限公司), a technology company that has quietly built formidable strength in high-end transmission and intelligent drive. 机器人大讲堂 learned that, as a hidden champion in its niche, Kaixuan Intelligent's P0-grade planetary roller screw and smart linear actuator, to be launched at the CIIF, are viewed by many in the industry as a key breakthrough for the core-hardware challenges of humanoid and other intelligent robots as well as high-end equipment. Planetary roller screws are graded P0, P1, P2, P3, and so on, with P0 the highest standard, imposing extremely strict requirements on metrics such as travel variation and travel error. Previously, only a handful of foreign companies worldwide could produce high-precision-grade products, and those came with high technical barriers, long lead times, and high prices. We went deep into Kaixuan Intelligent's laboratories, production workshops, and R&D facilities and spoke with the core technical team, seeking to uncover the innovation behind these two "hardcore" products.

▍ Building an R&D and Test Platform, Completing a Full Evaluation System

Stepping into Kaixuan Intelligent's planetary roller screw laboratory, one cannot help noting that the company's R&D model has leveled up again compared with two years ago, establishing a complete closed-loop "design - manufacture - evaluate - apply" R&D system. In the lab, more than ten professional test rigs line the walls ...
Founded Just Five Months Ago, Yet Releasing Transformative Six-Axis Force-Sensing Technology? Six-Dimensional Calibration Rigs May Become an Industry Standard
机器人大讲堂· 2025-09-22 06:40
Core Viewpoint
- The six-dimensional force sensor is becoming crucial for dynamic control in robotics: China's shipments are expected to exceed one million units by 2030, with the market reaching 22.071 billion yuan, reflecting a compound annual growth rate of 108.07% from 2024 to 2030 [1][3].

Group 1: Market Dynamics
- The high-end market for six-dimensional force sensors is predominantly occupied by foreign companies, owing to their early entry and advantages in precision and stability [1].
- Domestic companies face challenges due to a lack of unified standards and evaluation systems, which hinders their competitiveness [1][3].

Group 2: Company Overview
- Starry Sensor Technology Co., Ltd. ("Starry Sensor"), established in April 2025, has a team with strong backgrounds from leading global and domestic sensor companies and focuses on technological breakthroughs in the field [3][4].
- The company aims to drive the development of the robotics force-sensing industry with a mission of "technological breakthrough" [3].

Group 3: Technological Innovations
- Starry Sensor's recent product launch introduced a comprehensive technology framework for force sensors, addressing customer pain points and enabling lower-cost development and validation of high-performance sensors [4][5].
- The "Nibiru 1.0" calibration system is a highlight, designed to lower the entry barrier for high-end six-dimensional force-sensor development and to strengthen industry calibration capability and trust systems [5][7].
- A new ring-beam structure for six-dimensional force sensors and a C-shaped beam structure for torque sensors significantly improve overload resistance and measurement accuracy, enhancing the reliability of robots in dynamic environments [11][13].
Group 4: Manufacturing and Quality Assurance
- Starry Sensor leverages the supply-chain advantages of its investor, Zhongding Group, to ensure stable product quality through a flexible production line and a global supply chain [15].
- The company is building a high-efficiency production system that includes lifecycle quality management and specialized production-management teams, targeting an annual production capacity of over 200,000 units [15].

Group 5: Future Outlook
- The technological advances presented by Starry Sensor mark a significant milestone for the robotics force-sensing industry, with the potential to elevate industry standards and quality [17].
- The company plans to continue focusing on customer-driven product value, aiming to provide more precise, reliable, and universal solutions for the global market [17].
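The calibration step that a rig like the one described above automates can be sketched as a standard least-squares fit: apply a set of known reference wrenches, record the six raw channel readings for each, and solve for a 6x6 calibration matrix C such that F ≈ C·V. This illustrates the generic textbook method only; the details of Starry Sensor's Nibiru 1.0 system are not public, and all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, well-conditioned "true" sensor characteristic (unknown in practice).
C_true = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
wrenches = rng.normal(size=(100, 6))             # known applied loads (Fx..Mz)
voltages = wrenches @ np.linalg.inv(C_true).T    # raw channel readings
voltages += 1e-6 * rng.normal(size=voltages.shape)  # measurement noise

# Solve voltages @ C.T ~= wrenches for C in the least-squares sense.
CT, *_ = np.linalg.lstsq(voltages, wrenches, rcond=None)
C_est = CT.T

# A new load is then recovered from its raw voltages via F = C @ V.
test_wrench = np.array([1.0, -2.0, 0.5, 0.1, 0.0, -0.3])
v = np.linalg.inv(C_true) @ test_wrench
print(np.max(np.abs(C_est @ v - test_wrench)))   # recovery error
```

The quality of the reference loads and the coverage of the load space are what distinguish a production calibration rig from this sketch, which is why a shared high-grade calibration rig can raise accuracy across the whole industry.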
500,000 RMB First Prize, 20,000 RMB for Reaching the Finals! Preliminary Registration for the 2nd Zhuhai International Dexterous Manipulation Challenge Closes Soon
机器人大讲堂· 2025-09-22 06:40
Group 1
- The second Zhuhai International Dexterous Manipulation Challenge is scheduled for October 29-30, 2025, at the Zhuhai International Convention and Exhibition Center, attracting significant industry attention [1][4]
- The event focuses on dexterous manipulation technology in robotics, aiming to promote innovation and application in embodied intelligence [2][4]
- The theme of the competition is "Dexterous New Forces, Intelligent Future," emphasizing a user-oriented approach in its design [4][6]

Group 2
- The competition features a total prize pool of 1.98 million RMB, with a first prize of 500,000 RMB and additional investment opportunities totaling 6 million RMB for participating teams [6][49]
- There are two main competition tracks, "Production" and "Life," with a total of six tasks designed to test various robotic capabilities [7][12]
- For the preliminary round, teams must submit a registration form and a demonstration video by September 28, 2025; 20 teams will advance to the finals [10][11]

Group 3
- The final competition will take place on October 29-30, 2025, where teams must bring their own robotic systems to complete tasks in a live setting [10][12]
- The production track includes limited-space grasping, rigid-component assembly, and flexible wiring connection; the life track includes human-robot handover, tool recognition and grasping, and tool usage [12][36]
- Scoring in both tracks is based on total points, with additional multipliers for innovative robotic systems [28][45]

Group 4
- The competition encourages participation from domestic and international teams, including universities and research institutions, and allows teams to register for both tracks [48]
- Each team must consist of 4-10 members, with a maximum of 3 members allowed in the competition area during the finals [48]
- The organizing committee will provide accommodation and transportation for participating teams during the finals [48]
News Flash | Zhou Hongyi Live-Streams Tasting Robot-Cooked Dishes; UBTECH and Foxconn's Cloud Intelligence Reach Strategic Cooperation; OpenMind Releases the World's First AI-Native Open-Source Robot System
机器人大讲堂· 2025-09-22 06:40
Group 1
- Zhou Hongyi, founder of 360 Group, joined Yang Jiancheng, founder of Xianglu Robotics, in a live cooking taste test, highlighting the capabilities of AI cooking robots in achieving "wok hei" [3][6]
- Xianglu Robotics has shipped nearly 10,000 commercial cooking robots, which ensure consistent taste and reduce the difficulty of standardization for chain restaurants [3][6]
- Zhou suggested that cooking robots should expand from standalone equipment to an ecosystem, while Xianglu plans to launch a new monthly-rental business model in Q4 [3][6]

Group 2
- UBTECH, known as the "first humanoid-robot stock," signed a global strategic cooperation agreement with Cloud Intelligence Technology (a Foxconn subsidiary) to advance humanoid robot manufacturing and delivery [6][9]
- The agreement outlines the division of responsibilities: Cloud Intelligence focuses on global sales, UBTECH on R&D and production [6][9]
- Both companies aim to create a "humanoid robot + smart factory" demonstration scenario to promote smart-manufacturing upgrades and global expansion [6][9]

Group 3
- Harbin Engineering University's agile underwater robot "Turtle" was unveiled in the South China Sea, achieving centimeter-level precision in near-seabed environmental observation [9][11]
- The robot's design reduces sediment disturbance by 90% during navigation, and it employs a new inertial measurement method that decreases data noise by approximately 76.2% [9][11]
- The "Turtle" features 360-degree maneuverability, making it suitable for coral observation, underwater fishing, and search and rescue [9][11]

Group 4
- OpenMind released the world's first "AI-native" open-source robot system, aimed at creating a unified development platform for various types of robots [11][13]
- The system is hardware-neutral and supports multiple robot forms, allowing quick deployment and compatibility with various AI models [11][13]
- It integrates SLAM technology for stable movement in complex environments and includes a front-end interface for real-time monitoring of robot status [11][13]

Group 5
- NVIDIA made its first investment in a Taiwanese startup, MetAI, which specializes in creating digital-twin environments for AI training [17]
- MetAI's platform can generate virtual defective products to assist AI defect detection, significantly reducing design time for manufacturers [17]
- NVIDIA's interest in MetAI stems from its alignment with the demand for "physical AI," facilitating a bidirectional cycle between the real and virtual worlds [17]