Robotics Context Protocol open-sourced for the first time: Alibaba's Damo Academy releases its embodied-intelligence "big three" in one go
具身智能之心 (Embodied Intelligence Heart) · 2025-08-12 00:03
Core Viewpoint
- Alibaba's Damo Academy announced the open-sourcing of several models and protocols aimed at advancing embodied intelligence, addressing compatibility challenges across data, models, and robots, and streamlining the development process [1][2].

Group 1: Open-Source Models and Protocols
- The RynnRCP protocol was introduced to integrate diverse data, models, and robotic systems, creating a seamless workflow from data collection to action execution [2][5].
- RynnVLA-001 is a vision-language-action model that learns human operational skills from first-person videos, enabling smoother robotic arm control [7].
- The RynnEC model incorporates multi-modal large language capabilities, allowing comprehensive scene analysis across 11 dimensions and improving object recognition and interaction in complex environments [7].

Group 2: Technical Framework and Features
- The RCP framework connects robot bodies with sensors, providing standardized interfaces and compatibility across different transport layers and model services [5].
- RobotMotion serves as a bridge between large models and robot control, converting low-frequency commands into high-frequency control signals for smoother robot movements (see the interpolation sketch after this summary) [5][6].
- The framework includes integrated simulation and real-machine control tools, enabling quick developer onboarding and supporting functions such as task regulation and trajectory visualization [5].

Group 3: Industry Engagement and Community Building
- Damo Academy is actively investing in embodied intelligence, focusing on system and model development and collaborating with various stakeholders to build industry infrastructure [7].
- The earlier release of the WorldVLA model, which merges world models with action models, drew significant attention for its enhanced understanding and generation capabilities [8].
- The "Embodied Intelligence Heart" community aims to foster collaboration among developers and researchers in the field, providing resources and support across technical directions [11][12].
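The low-to-high-frequency conversion described for RobotMotion can be illustrated with a minimal sketch, assuming simple linear interpolation between model waypoints; the function name, rates, and the 7-DoF example are illustrative assumptions, not Damo Academy's implementation.

```python
import numpy as np

def upsample_commands(waypoints: np.ndarray, model_hz: float, control_hz: float) -> np.ndarray:
    """Linearly interpolate low-frequency model outputs (waypoints) into a
    high-frequency command stream suitable for a robot controller.

    waypoints: (N, D) array of joint targets emitted by the model at model_hz.
    Returns an (M, D) array sampled at control_hz.
    """
    n, dim = waypoints.shape
    # Timestamps of the model outputs and of the desired control ticks.
    t_model = np.arange(n) / model_hz
    t_ctrl = np.arange(0.0, t_model[-1], 1.0 / control_hz)
    # Interpolate each joint dimension independently.
    dense = np.stack(
        [np.interp(t_ctrl, t_model, waypoints[:, d]) for d in range(dim)],
        axis=1,
    )
    return dense

# Example: a 7-DoF arm, model outputs at 5 Hz, controller expects 200 Hz.
rng = np.random.default_rng(0)
sparse_plan = rng.uniform(-1.0, 1.0, size=(10, 7))  # ~2 seconds of model output
dense_plan = upsample_commands(sparse_plan, model_hz=5.0, control_hz=200.0)
print(sparse_plan.shape, "->", dense_plan.shape)    # (10, 7) -> (360, 7)
```

A production planner would additionally enforce velocity and acceleration limits; the point here is only that sparse inference outputs become a dense, evenly timed command stream.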
Tencent Research Institute AI Express 20250812
腾讯研究院 (Tencent Research Institute) · 2025-08-11 16:01
Group 1
- xAI announced the free global availability of Grok 4, limiting usage to 5 prompts every 12 hours, leaving paying subscribers dissatisfied and feeling the value of their subscription has been undercut [1].
- Inspur released the "Yuan Nao SD200" super-node AI server, integrating 64 cards into a unified memory system capable of running multiple domestic open-source models simultaneously [2].
- Zhipu AI published the GLM-4.5 technical report, detailing pre-training and post-training and achieving native integration of reasoning, coding, and agent capabilities in a single model [3].

Group 2
- Kunlun Wanwei launched the SkyReels-A3 model, capable of generating high-quality digital-human videos up to one minute long, with optimizations for hand-motion interaction and camera control [4].
- Chuangxiang Sanwei partnered with Tencent Cloud to enhance the 3D generation capabilities of its AI modeling platform MakeNow, using Tencent's Hunyuan model [5][6].
- Alibaba's Damo Academy open-sourced three core components for embodied intelligence, including a vision-language-action model and a robotics context protocol [7].

Group 3
- Baichuan Intelligence released the 32B-parameter medical enhancement model Baichuan-M2, outperforming all open-source models in the OpenAI HealthBench evaluation and second only to GPT-5 [8].
- Lingqiao Intelligent showcased the DexHand021 Pro, a highly dexterous robotic hand with 22 degrees of freedom designed to closely reproduce human hand function [9].
- A report indicated that 45% of enterprises have deployed large models in production, with users averaging 4.7 different products, highlighting low brand loyalty in a competitive landscape [10][12].
Damo Academy open-sources the "big three" of embodied intelligence; Robotics Context Protocol open-sourced for the first time
Huan Qiu Wang · 2025-08-11 04:17
Core Insights
- Alibaba's Damo Academy announced the open-source release of several models and protocols aimed at improving the compatibility and adaptability of data, models, and robots in the field of embodied intelligence [1][3].
- The Robotics Context Protocol (RynnRCP) was introduced to address challenges such as fragmented development workflows and the difficulty of adapting data and models to robotic systems [1][2].

Group 1: Open-source Models and Protocols
- The RynnVLA-001 model is a vision-language-action model that learns human operational skills from first-person videos, enabling smoother robotic arm control [3].
- The RynnEC model integrates multi-modal large language capabilities, allowing comprehensive scene analysis across 11 dimensions and improving object localization and segmentation in complex environments [3].
- RynnRCP serves as a complete robot service protocol and framework, covering the workflow from sensor data collection through model inference to robotic action execution [1][2].

Group 2: Technical Framework and Features
- The RCP framework within RynnRCP establishes connections between robot bodies and sensors, providing standardized capability interfaces and compatibility across different transport layers and model services (see the interface sketch after this summary) [2].
- The RobotMotion module acts as a bridge between large models and robot control, converting low-frequency inference commands into high-frequency continuous control signals for smoother robotic movements [2].
- The integrated simulation-to-real control tool within RobotMotion helps developers adapt to tasks quickly, supporting simulation synchronization, data collection, playback, and trajectory visualization [2].

Group 3: Industry Engagement and Development
- Damo Academy is actively investing in embodied intelligence, focusing on system and model development while collaborating with various stakeholders to build industry infrastructure [3].
- The recent open-sourcing of the WorldVLA model, which merges world models with action models, drew significant attention for its enhanced understanding and generation capabilities for images and actions [3].
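The "standardized capability interface over interchangeable transport layers" role described for the RCP framework can be illustrated with a small sketch. This is not the RynnRCP API: the Transport abstraction, the LoopbackTransport stand-in, and the message fields are assumptions made up for the example.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
import json

@dataclass
class CapabilityRequest:
    """A transport-agnostic request, e.g. 'read the wrist camera' or 'move a joint'."""
    capability: str          # e.g. "camera.read", "arm.move_joint"
    params: dict

class Transport(ABC):
    """Anything that can carry serialized messages to/from a robot or model service."""
    @abstractmethod
    def send(self, payload: bytes) -> bytes: ...

class LoopbackTransport(Transport):
    """Stand-in transport that echoes a canned response; a real deployment might
    use MQTT, WebSocket, or shared memory behind the same interface."""
    def send(self, payload: bytes) -> bytes:
        request = json.loads(payload)
        return json.dumps({"ok": True, "echo": request["capability"]}).encode()

class RobotEndpoint:
    """Exposes robot capabilities through a standardized interface, independent of
    which transport actually moves the bytes."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def call(self, request: CapabilityRequest) -> dict:
        payload = json.dumps(
            {"capability": request.capability, "params": request.params}
        ).encode()
        return json.loads(self.transport.send(payload))

# Swapping the transport does not change how callers invoke capabilities.
robot = RobotEndpoint(LoopbackTransport())
print(robot.call(CapabilityRequest("camera.read", {"stream": "rgb"})))
```

The design point is that exchanging the transport behind the same interface leaves model services and callers untouched, which is the kind of decoupling the summaries attribute to the RCP framework.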
Robotics Context Protocol open-sourced for the first time: Alibaba's Damo Academy releases its embodied-intelligence "big three" in one go
机器之心 (Machine Heart) · 2025-08-11 03:19
Released by the 机器之心 (Machine Heart) editorial team

Open-source links:
- Robotics Context Protocol RynnRCP: https://github.com/alibaba-damo-academy/RynnRCP
- Vision-language-action model RynnVLA-001: https://github.com/alibaba-damo-academy/RynnVLA-001
- World understanding model RynnEC: https://github.com/alibaba-damo-academy/RynnEC

Embodied intelligence is advancing rapidly, yet the field still faces major challenges such as fragmented development workflows and the difficulty of adapting data and models to robot hardware.

On August 11, at the World Robot Conference, Alibaba's Damo Academy announced the open-sourcing of its self-developed VLA model RynnVLA-001-7B, the world understanding model RynnEC, and the Robotics Context Protocol RynnRCP, promoting compatibility among data, models, and robots and linking up the entire embodied-intelligence development workflow.

Specifically, RynnRCP comprises two main modules: the RCP framework and RobotMotion. The RCP framework establishes the connection between the robot body and its sensors, provides standardized capability interfaces, and enables compatibility across different transport layers and model services. RobotMotion, in turn, sits between the embodied large model and the robot body ...