Interactive Intelligence
Zhiyuan Robotics Announces Live Broadcast of the First Robot Gala
Cai Jing Wang· 2026-02-03 11:44
Core Viewpoint
- Zhiyuan Robotics announced the global live broadcast of the world's first large-scale robot gala, "Robot Wonderful Night," on February 8 at 20:00, with over 200 robots leading the entire event [1]

Group 1: Event Details
- "Robot Wonderful Night" will be broadcast simultaneously on Mango TV, Zhiyuan AGIBOT's official platform, and the "Zhihui Jun" online account [1]
- The event aims to set a series of industry "firsts," demonstrating cutting-edge robotic technologies such as interactive intelligence, motion intelligence, operational intelligence, and multi-robot collaboration [1]

Group 2: Industry Impact
- The gala represents a significant exploration in advancing robots from mere "functional carriers" to "cultural participants" and "emotional expressers" [1]
- The event is positioned as a showcase of China's robotic capabilities, aiming to provide an innovative stage that is "the most intelligent and warm" [1]

Group 3: Audience Engagement
- Viewers can participate in live interactions during the broadcast for a chance to win surprise gifts [1]
In Depth | Dissecting Digital Huaxia: Interactive and Scene Intelligence Are Becoming the Key to the Second Half of Humanoid Robot Competition
机器人大讲堂· 2026-01-26 10:17
Core Viewpoint
- Digital Huaxia, a Chinese company established just over a year ago, has made significant breakthroughs in the commercialization of humanoid robots, focusing on interactive intelligence and scene intelligence as key competitive advantages in the industry [2][3]

Group 1: Bionic Head
- Digital Huaxia has prioritized the development and mass production of bionic heads, focusing on B-end interaction and companionship scenarios such as elderly care and educational guidance [4][7]
- The bionic head of the humanoid robot "Xialan" features nearly 30 active degrees of freedom and can reproduce over seven categories of high-precision expressions and dozens of micro-expressions [7]
- The company has developed a digital twin model to achieve high-precision restoration of human micro-expressions, using Bayesian optimization algorithms to automatically iterate micro-level servo displacement parameters [7][11]

Group 2: Interactive Intelligence
- Digital Huaxia's interactive intelligence system is a comprehensive architecture designed for multi-layer processing, enabling robots to understand and respond to human emotions effectively [13][15]
- The system includes a multi-modal perception layer and a semantic understanding model, which analyzes user input to determine the necessary support from the various perception modules [15]
- An emotional computing engine, trained on over 500,000 real interaction data points, allows the robot to understand both explicit and implicit emotions, achieving a recognition accuracy of 91.2% in real scenarios [16]

Group 3: Scene Intelligence
- Digital Huaxia's scene intelligence serves as the commercial engine of its technology system, built on two main platforms: "Juhua®" and "ROBOEASE" [19][23]
- The "Juhua®" platform acts as a modular framework that integrates various capabilities, enhancing development efficiency and product-line scalability [19][21]
- The "ROBOEASE" platform addresses challenges in the commercial robot industry, providing standardized solutions for various applications and significantly lowering the barriers for enterprises to adopt intelligent solutions [24][26]

Group 4: Conclusion
- Digital Huaxia's approach combines advanced bionic heads, empathetic interactive intelligence, and practical scene intelligence, creating a complete commercialization path from core technology to platform empowerment [28]
- The company's focus on interactive and scene intelligence is positioned to amplify the commercial value of humanoid robots, moving beyond mere task execution to meaningful communication and emotional engagement [28]
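The servo-tuning loop described for the bionic head (automatically iterating micro-level servo displacement parameters against a digital twin) can be sketched as follows. This is a toy, dependency-free stand-in: the article says Digital Huaxia uses Bayesian optimization, while this sketch substitutes plain random search and a hypothetical error metric, purely to show the propose-score-keep-best shape of such a loop.

```python
import random

def expression_error(params, target):
    """Hypothetical fitness metric: mean absolute deviation between a
    proposed servo-displacement vector and the displacements that would
    reproduce a target micro-expression. A real system would score the
    rendered expression against reference footage instead."""
    return sum(abs(p - t) for p, t in zip(params, target)) / len(params)

def tune_servos(target, n_servos=30, iters=500, seed=0):
    """Iteratively propose displacement vectors for ~30 servos (matching
    the ~30 degrees of freedom cited in the article) and keep the best.
    Random search stands in here for the Bayesian optimizer."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        candidate = [rng.uniform(0.0, 1.0) for _ in range(n_servos)]
        err = expression_error(candidate, target)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err

# Usage: tune toward a hypothetical target displacement profile.
best, err = tune_servos([0.5] * 30)
print(len(best), round(err, 3))
```

A real Bayesian optimizer would replace the uniform sampler with a surrogate model (typically a Gaussian process) that proposes the next candidate by maximizing expected improvement, which is what makes iteration over expensive digital-twin evaluations tractable.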
Chen Tianqiao's Shanda AI Research Tokyo Makes Its Official Debut at SIGGRAPH Asia, Unveiling Digital Human and World Model Results
机器之心· 2025-12-22 04:23
Core Insights
- Shanda Group's Shanda AI Research Tokyo made its debut at SIGGRAPH Asia 2025, focusing on "Interactive Intelligence" and "Spatiotemporal Intelligence" in digital human research, reflecting the long-term vision of founder Chen Tianqiao [1][10]
- The article discusses the systemic challenges behind the "soul" deficiency in current digital human interactions, a significant barrier to user engagement despite substantial investment in visual effects [2][3]

Systemic Challenges
- **Long-term Memory and Personality Consistency**: Current large language models (LLMs) struggle to maintain a stable personality over extended conversations, leading to "persona drift" and inconsistent narrative logic [3]
- **Lack of Multimodal Emotional Expression**: Digital humans often exhibit a "zombie-face" phenomenon, lacking natural micro-expressions and emotional responses, which diminishes immersion [3]
- **Absence of Self-evolution Capability**: Most digital humans operate as passive systems, unable to learn from interactions or adapt to user preferences, hindering their evolution into truly intelligent entities [3]

Industry Consensus
- Experts at SIGGRAPH Asia reached a consensus that the bottleneck in digital human development has shifted from visual fidelity to cognitive and interaction logic, emphasizing long-term memory, multimodal emotional expression, and self-evolution as core competencies [13][10]

Introduction of Mio
- Shanda AI Research Tokyo introduced Mio (Multimodal Interactive Omni-Avatar), a framework designed to transform digital humans from passive entities into intelligent partners capable of autonomous thought and interaction [16][22]
- Mio's architecture comprises five core modules: Thinker (cognitive core), Talker (voice engine), Facial Animator, Body Animator, and Renderer, which work together to form a seamless interaction loop [20][21]

Performance Metrics
- Mio achieved an overall Interactive Intelligence Score (IIS) of 76.0, an 8.4-point improvement over previous approaches, setting a new performance benchmark in the industry [25][22]

Future Outlook
- The development of Mio signifies a paradigm shift in digital human technology, moving the focus from static visual realism to dynamic, meaningful interactive intelligence, with potential applications in virtual companionship, interactive storytelling, and immersive gaming [22][25]
- Shanda AI Research Tokyo has made the complete technical report, pre-trained models, and evaluation benchmarks of the Mio project publicly available to foster collaboration in advancing this field [28]
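Mio's five-module loop (Thinker, Talker, Facial Animator, Body Animator, Renderer) can be sketched as a simple pipeline. The module names follow the article; every internal detail below (the emotion rule, the placeholder audio/blendshape/gesture strings) is an illustrative assumption, not Mio's actual implementation.

```python
def thinker(user_utterance: str) -> dict:
    """Cognitive core: decides what to say and with which emotion.
    Toy rule: an exclamation mark signals an excited user."""
    mood = "happy" if "!" in user_utterance else "neutral"
    return {"reply": f"Echo: {user_utterance}", "emotion": mood}

def talker(plan: dict) -> dict:
    """Voice engine: turns the reply text into placeholder audio."""
    return {**plan, "audio": f"<speech:{plan['reply']}>"}

def facial_animator(plan: dict) -> dict:
    """Maps the emotion label to a placeholder facial blendshape set."""
    return {**plan, "face": f"blendshapes({plan['emotion']})"}

def body_animator(plan: dict) -> dict:
    """Adds a placeholder body gesture matching the emotion."""
    return {**plan, "gesture": "nod" if plan["emotion"] == "happy" else "idle"}

def renderer(plan: dict) -> str:
    """Composites audio, face, and gesture into one frame description."""
    return f"{plan['audio']} | {plan['face']} | {plan['gesture']}"

def mio_step(user_utterance: str) -> str:
    """One pass of the Thinker -> Talker -> Animators -> Renderer loop."""
    return renderer(body_animator(facial_animator(talker(thinker(user_utterance)))))

print(mio_step("Hello!"))
```

The point of the pipeline shape is that the cognitive decision (what to say, how to feel) is made once by the Thinker, and every downstream module consumes that shared plan, which is what keeps voice, face, and body expression mutually consistent within a single interaction turn.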