Tactile Perception Makes Robots "Smarter"

Core Insights
- Current advances in Vision-Language-Action (VLA) models enhance robots' perception, but human understanding of the physical world extends beyond visual perception alone [1]
- Tactile sensing captures dimensions that vision and language cannot, such as surface texture and material properties, which are crucial for physical intelligence [1]

Company Highlights
- Yimu Technology showcased its self-developed bionic visual-tactile sensor at the 2025 International Conference on Intelligent Robots and Systems (IROS) [1]
- The sensor enables robotic arms to perform complex tasks, such as grasping fragile items, demonstrating its practical applications [1]
- The sensor's thickness is significantly reduced while it retains a bionic fingertip shape, making it compatible with a variety of dexterous hands and robotic platforms [1]

Technical Innovations
- A unique optical design addresses challenges such as light-field baseline drift and spatial-calculation blind spots [1]
- An optimized ultra-soft elastomer improves wear resistance while maintaining high sensitivity [1]
- An end-to-end AI algorithm processes multimodal tactile information, including 3D morphology, texture, and force fields [1]

Market Outlook
- The bionic visual-tactile sensor is set to be commercialized soon, signaling a shift toward more embodied intelligence in robotics [1]