Workflow
Just in: the first embodied Gemini that can run locally on a robot is here
机器之心·2025-06-25 00:46

Core Viewpoint
- The article covers the launch of Gemini Robotics On-Device, a new vision-language-action (VLA) model from Google DeepMind designed to run efficiently on the robot itself, without continuous internet connectivity [1][2].

Group 1: Product Overview
- Gemini Robotics On-Device is the first VLA model that can be deployed directly on robots, improving their ability to adapt to new tasks and environments [2][4].
- The model is optimized for efficient operation on robotic hardware and shows strong general-purpose dexterity and task generalization [4][12].
- Because it runs without any data network, it suits latency-sensitive applications [5].

Group 2: Developer Tools
- Google will release a Gemini Robotics SDK that lets developers evaluate the model's performance on their own tasks and environments [7].
- Developers can test the model in DeepMind's MuJoCo physics simulator, and adapting it to a new task requires only 50 to 100 demonstrations [7][21]; a minimal simulation sketch follows this summary.

Group 3: Performance and Adaptability
- Gemini Robotics On-Device performs strongly on dexterous tasks such as unzipping bags and folding clothes, all executed directly on the robot [12][16].
- It shows clear advantages over previous on-device robot models, especially on challenging out-of-distribution tasks and complex multi-step instructions [15][16].
- It can be fine-tuned for better performance and adapted to other robotic platforms, including the Franka FR3 and the Apollo humanoid robot [25][26].

Group 4: Updates and Changes
- Alongside the new model, Google DeepMind has reduced the free usage limits for its Gemini 2.5 Flash and Gemini 2.0 Flash models, a change unlikely to be well received by free-tier users [30][32].
- The company also announced new image generation models, Imagen 4 and Imagen 4 Ultra, in AI Studio and the Gemini API [33].
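To make the MuJoCo evaluation workflow mentioned above concrete, here is a minimal sketch of a simulation rollout: load a scene, query a policy for controls, and step the physics. The scene XML, the placeholder `policy` function, and the observation layout are illustrative assumptions, not part of the Gemini Robotics SDK, whose API the article does not describe; only the core calls from the public `mujoco` Python package (`MjModel.from_xml_string`, `MjData`, `mj_step`) are real.

```python
# Minimal sketch (assumptions noted above): roll out a placeholder policy
# in a toy MuJoCo scene of the kind a developer might use before collecting
# the 50-100 demonstrations the article mentions for task adaptation.
import mujoco
import numpy as np

XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 0.2">
      <joint type="free"/>
      <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

def policy(obs: np.ndarray) -> np.ndarray:
    """Placeholder controller; a real setup would query the on-device model here."""
    return np.zeros(model.nu)

for _ in range(500):
    obs = np.concatenate([data.qpos, data.qvel])  # simple proprioceptive observation
    data.ctrl[:] = policy(obs)                    # this toy scene has no actuators, so this is a no-op
    mujoco.mj_step(model, data)                   # advance the physics by one timestep

print("box height after rollout:", float(data.qpos[2]))
```

In practice the placeholder policy would be replaced by calls into the deployed model, and success would be measured over many such rollouts; the specifics of how the Gemini Robotics SDK wires that up are not given in the article.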