Science Robotics | Yale University open-sources a new visuotactile paradigm: reading force perception from a robot's compliant hand
机器人圈· 2025-07-08 10:36
Core Viewpoint
- Yale University's research introduces a new paradigm called "Forces for Free" (F3) in the field of robotic tactile sensing, enabling robots to estimate contact forces using only standard RGB cameras, thus achieving force perception at almost zero additional hardware cost [1][2][22].

Group 1: Challenges in Force Perception
- Traditional high-precision force/torque (F/T) sensors are expensive, bulky, and prone to damage, while integrated fingertip tactile sensors face issues like complex wiring and limited information [2].
- Recent advancements in visual-tactile sensing offer new solutions by using visual signals to infer tactile information, but many existing methods require embedded markers or custom sensor skins [2].

Group 2: F3 Gripper Design
- The F3 Gripper is optimized from Yale's open-source T42 gripper to enhance the signal-to-noise ratio of visual force estimation [6].
- Key optimizations include maximizing kinematic manipulability to avoid singular configurations, and minimizing friction and hysteresis by replacing metal pins with miniature ball bearings, reducing internal friction from approximately 4.0N to 0.6N [6][7][9].

Group 3: Estimation Algorithm
- A deep learning estimator decodes force information from image sequences, using a CNN-Transformer architecture to combine spatial features with temporal memory [10][11].
- The model processes a sequence of images to mitigate the "same shape, different force" ambiguity and employs a visual foundation model (SAM) to enhance robustness against visual interference [11][13].

Group 4: Experimental Validation
- The system demonstrates effective performance in static force prediction tasks, with estimation errors between 0.2N and 0.4N, significantly outperforming previous works [14].
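The "maximizing kinematic manipulability" objective from the gripper design above is commonly computed as Yoshikawa's measure w = sqrt(det(J J^T)) of the finger Jacobian. A minimal sketch for a planar two-link finger follows; the link lengths are illustrative assumptions, not the actual F3 dimensions:

```python
import numpy as np

def manipulability(theta1, theta2, l1=0.05, l2=0.04):
    """Yoshikawa manipulability w = sqrt(det(J @ J.T)) for a planar
    two-link finger. Link lengths (meters) are illustrative only."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    # Jacobian of the fingertip position w.r.t. the two joint angles.
    J = np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])
    return np.sqrt(np.linalg.det(J @ J.T))

# w vanishes at the singular (fully extended) configuration and grows
# as the joint bends, so a linkage designed to keep its grasp range
# away from theta2 ≈ 0 avoids the singular, low-signal configurations.
print(manipulability(0.3, 0.0))   # near-singular: close to 0
print(manipulability(0.3, 1.57))  # well-conditioned: much larger
```

For a 2x2 Jacobian this reduces to |det(J)| = l1*l2*|sin(theta2)|, which makes the design intuition explicit: manipulability depends only on how bent the finger is.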
- In dynamic closed-loop control experiments, the estimator successfully guided robots through complex tasks such as pin insertion, surface wiping, and calligraphy writing, achieving average force errors as low as 0.15N [21][22].

Group 5: Future Implications
- This research presents a practical solution for low-cost robotic force perception, redefining the cost-effectiveness of visual-tactile sensing [22].
- Future expansions may include three-dimensional force/torque estimation and multi-finger dexterous hands, potentially broadening the application of advanced force control technologies across robotic platforms [22].
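The dynamic closed-loop control described in Group 4 can be sketched as a simple proportional force servo wrapped around the visual force estimate. Everything below is an illustrative assumption, not the paper's controller: the gain, the linear-spring contact model standing in for the real environment, and the `force_servo_step` helper are all hypothetical.

```python
def force_servo_step(f_desired, f_estimated, z, kp=5e-5):
    """One step of a proportional force servo: adjust the tool's
    vertical position from the error between the desired contact force
    and the camera-based estimate. kp (m/N) is an illustrative gain."""
    error = f_desired - f_estimated
    return z - kp * error  # press down (lower z) when force is too low

# Toy environment: contact force grows linearly with penetration depth.
stiffness = 5000.0          # N/m, hypothetical surface stiffness
z, f_desired = 0.0, 2.0     # start at the surface, target 2.0 N
for _ in range(200):
    f_est = max(0.0, -z * stiffness)  # stand-in for the visual estimator
    z = force_servo_step(f_desired, f_est, z)

print(round(max(0.0, -z * stiffness), 2))  # settles near the 2.0 N target
```

The same loop structure applies to the wiping and calligraphy tasks: the visual estimate replaces a dedicated F/T sensor in the feedback path, which is the "forces for free" idea.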