Beyond VLA: A Roundup of Embodied + VA Work
具身智能之心 · 2025-07-14 02:21
Core Insights
- The article surveys recent work in embodied intelligence and robotic manipulation, highlighting research projects and methodologies aimed at improving robotic capabilities in real-world applications [2][3][4].

Group 1: 2025 Research Initiatives
- Numerous projects are outlined for 2025, including "Steering Your Diffusion Policy with Latent Space Reinforcement Learning" and "Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation," which aim to improve robotic manipulation through reinforcement learning and autoregressive trajectory modeling (an illustrative sketch of the latter idea appears after these summaries) [2][3].
- The "BEHAVIOR Robot Suite" is designed to streamline real-world whole-body manipulation for everyday household activities, pointing to a focus on practical household robotics [2].
- "You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations" shows bimanual skills being learned from a single video demonstration, underscoring the push toward data-efficient robot training [2][3].

Group 2: Methodologies and Techniques
- The article discusses methodologies such as "Adaptive 3D Scene Representation for Domain Transfer in Imitation Learning" and "Learning the RoPEs: Better 2D and 3D Position Encodings with STRING," which aim to improve the adaptability and efficiency of robotic systems (a sketch of the rotary position-encoding idea the latter builds on follows these summaries) [2][3][4].
- "RoboGrasp: A Universal Grasping Policy for Robust Robotic Control" develops a versatile grasping policy intended to transfer across different robotic platforms [2][3].
- "Learning Dexterous In-Hand Manipulation with Multifingered Hands via Visuomotor Diffusion" showcases advances in fine motor control for multifingered hands, which is crucial for complex manipulation tasks [4].

Group 3: Future Directions
- The research emphasizes integrating visual and tactile feedback in robotic systems, as in "Adaptive Visuo-Tactile Fusion with Predictive Force Attention for Dexterous Manipulation" (a minimal fusion sketch is given after these summaries) [7].
- "Zero-Shot Visual Generalization in Robot Manipulation" points to a trend toward robots that generalize learned skills to new, unseen scenarios without additional training [7].
- The focus on "Human-to-Robot Data Augmentation for Robot Pre-training from Videos" suggests a shift toward leveraging human demonstrations to scale up robot pre-training [7].
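As referenced in Group 1, here is a minimal sketch of trajectory-autoregressive action prediction in the spirit of "Chain-of-Action": a causal transformer over past action tokens, conditioned on a visual feature, regressing the next action. The module layout, dimensions, and training objective are illustrative assumptions, not the paper's actual architecture, which the article does not describe.

```python
import torch
import torch.nn as nn

class AutoregressiveActionModel(nn.Module):
    """Predicts action step t+1 from an observation feature and actions 1..t (hypothetical layout)."""

    def __init__(self, action_dim=7, obs_dim=512, d_model=256, n_layers=4, n_heads=4, max_len=64):
        super().__init__()
        self.action_in = nn.Linear(action_dim, d_model)   # embed past actions as tokens
        self.obs_in = nn.Linear(obs_dim, d_model)         # embed a precomputed visual feature
        self.pos = nn.Embedding(max_len, d_model)         # learned positions along the trajectory
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, action_dim)        # regress the next action

    def forward(self, obs_feat, past_actions):
        # obs_feat: (B, obs_dim); past_actions: (B, T, action_dim)
        B, T, _ = past_actions.shape
        pos_ids = torch.arange(T, device=past_actions.device)
        tokens = self.action_in(past_actions) + self.pos(pos_ids)
        tokens = tokens + self.obs_in(obs_feat).unsqueeze(1)   # condition every step on the observation
        causal = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        h = self.backbone(tokens, mask=causal)                 # causal mask keeps step t from seeing t+1..T
        return self.head(h)                                    # (B, T, action_dim): prediction for each next step

# Training would minimise e.g. an L2 loss against the trajectory shifted by one step;
# at test time, actions are decoded one step at a time and appended to past_actions.
```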
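For the position-encoding entry in Group 2, the sketch below shows the standard rotary position embedding (RoPE) idea extended to 2D patch coordinates by rotating one half of the channels with the x coordinate and the other half with the y coordinate. This illustrates only the rotary baseline the title alludes to; STRING's actual encoding is not described in the article, and the function name and channel split here are assumptions.

```python
import torch

def rope_2d(q, coords, base=10000.0):
    """Rotate features by 2D position. q: (B, N, D) queries/keys; coords: (B, N, 2) integer (x, y) positions."""
    B, N, D = q.shape
    assert D % 4 == 0, "need D divisible by 4: half the channels per axis, paired for rotation"
    d_axis = D // 2                                   # channels assigned to each spatial axis
    # Geometric frequency schedule, as in standard RoPE.
    freqs = base ** (-torch.arange(0, d_axis, 2, dtype=q.dtype, device=q.device) / d_axis)

    def rotate(x, pos):
        # x: (B, N, d_axis); pos: (B, N) -> rotate each channel pair by angle pos * freq
        angle = pos[..., None] * freqs                # (B, N, d_axis / 2)
        cos, sin = angle.cos(), angle.sin()
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = torch.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    qx, qy = q[..., :d_axis], q[..., d_axis:]
    return torch.cat([rotate(qx, coords[..., 0].to(q.dtype)),
                      rotate(qy, coords[..., 1].to(q.dtype))], dim=-1)
```

Because the rotation is applied to queries and keys before attention, relative 2D offsets appear directly in the dot products, which is what makes rotary-style encodings attractive for image-patch tokens.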
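For the visuo-tactile entry in Group 3, here is a minimal sketch of fusing visual and tactile tokens with cross-attention and a learned gate. It is an assumption-laden illustration of multimodal fusion, not the paper's predictive force attention mechanism, which the article does not detail.

```python
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    """Hypothetical fusion block: tactile tokens query the visual scene, then a gate blends the two."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, visual_tokens, tactile_tokens):
        # visual_tokens: (B, Nv, D); tactile_tokens: (B, Nt, D)
        # Tactile tokens attend to the visual regions relevant to the current contact.
        attended, _ = self.cross_attn(query=tactile_tokens, key=visual_tokens, value=visual_tokens)
        # A learned gate decides, per channel, how much to trust vision versus touch.
        g = self.gate(torch.cat([attended, tactile_tokens], dim=-1))
        fused = g * attended + (1.0 - g) * tactile_tokens
        return self.norm(fused)                       # (B, Nt, D) fused tokens fed to a policy head

# Example: fused = VisuoTactileFusion()(torch.randn(2, 196, 256), torch.randn(2, 16, 256))
```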