Core Insights

- Google DeepMind has launched a new AI model named Gemini Robotics aimed at enabling robots to interact with objects and navigate environments [1]
- The model has demonstrated the ability to execute tasks from voice commands, such as folding paper and placing glasses in a case [1]
- Gemini Robotics is designed to work across various robotic hardware and connects what robots "see" with possible actions [1]
- In testing, the model performed well in environments not covered by its training data [1]
- A streamlined version, Gemini Robotics-ER, has been released so researchers can train their own robot control models [1]
- DeepMind has also introduced a benchmark named Asimov to assess the risks associated with AI-driven robots [1]
News Flash | Google launches new AI model: Gemini Robotics enables voice control of robots across multiple hardware platforms