Interactive Intelligence
Zhiyuan Robotics Announces Livestream of the First-Ever Robot Gala
Cai Jing Wang· 2026-02-03 11:44
Zhiyuan Robotics officially announced today that it will globally livestream the first large-scale robot gala, "Robot Wonder Night" (《机器人奇妙夜》), on February 8 at 20:00. As the industry's first gala dedicated to robots, its biggest highlight is being "fully robot-led": more than 200 robots will carry the entire show, handling all performances and audience interaction. On February 8 at 20:00, viewers can watch the simulcast through Zhiyuan's official platform channels, and those who join the live interaction will have a chance to win prizes. According to the announcement, "Robot Wonder Night" will be simulcast on Mango TV, the Zhiyuan AGIBOT official platforms, and the accounts of "稚晖君" across the web. The program aims to challenge and set a series of industry "firsts," showcasing frontier robotics technologies such as interactive intelligence, motion intelligence, manipulation intelligence, and multi-robot collaboration, delivering an audiovisual showcase of Chinese robotics capability on a stage billed as "the most intelligent, with warmth." This is not only a demonstration of Zhiyuan's systems-integration capability, but also a notable step in advancing robots from "functional carriers" toward "cultural participants" and even "emotional expressers." ...
In Depth | Dissecting 数字华夏 (Digital Huaxia): Interaction and Scene Intelligence Are Becoming the Key to the Second Half of the Humanoid-Robot Race
机器人大讲堂· 2026-01-26 10:17
Introduction: While the humanoid-robot field has largely focused on gait and dynamics, a Chinese company founded just over a year ago, 数字华夏 (Digital Huaxia), has already achieved rapid breakthroughs in commercialization. Its robots have repeatedly appeared on CCTV, and in a nationwide customer-service competition held by a leading domestic bank, one competed alongside top human service managers and placed fifth overall. The company has secured orders in the hundred-million-yuan range from leading banks, telecom operators, and power-grid customers. Behind these results lies a distinct and deliberate technical route. Based on first-hand materials and exclusive interviews, this article deconstructs Digital Huaxia's full-stack technology across three pillars: bionic heads, interactive intelligence, and scene intelligence, and examines how the company builds "warm" interaction and uses it as the lever for scaling commercial embodied intelligence. While much of the industry was still debating why a robot needs a human face, Digital Huaxia was already racing ahead on the R&D and mass production of bionic heads. And while many players focus on high-end collectibles and art pieces, Digital Huaxia has from the outset targeted B-end interaction and companionship scenarios, such as elder-care companionship, guided tours and reception, and education. The company's judgment is that 10% of future robots will have human faces and serve commercial roles. This "face" is the physical starting point of its technical route. The highly bionic head of Digital Huaxia's humanoid robot "夏澜" (Xia Lan) is the physical foundation for immersive emotional connection. Through an innovative paradigm of integrated hardware and software, it ...
Chen Tianqiao's Shanda AI Research Tokyo Makes Official Debut at SIGGRAPH Asia, Unveiling Digital Human and World Model Results
机器之心· 2025-12-22 04:23
Core Insights
- Shanda Group's Shanda AI Research Tokyo made its debut at SIGGRAPH Asia 2025, focusing on "Interactive Intelligence" and "Spatiotemporal Intelligence" in digital human research, reflecting the long-term vision of founder Chen Tianqiao [1][10]
- The article discusses the systemic challenges behind the "soul" deficiency in current digital human interactions, a significant barrier to user engagement despite substantial investments in visual effects [2][3]

Systemic Challenges
- **Long-term Memory and Personality Consistency**: Current large language models (LLMs) struggle to maintain a stable personality over extended conversations, leading to "persona drift" and inconsistent narrative logic [3]
- **Lack of Multimodal Emotional Expression**: Digital humans often exhibit a "zombie-face" phenomenon, lacking natural micro-expressions and emotional responses, which diminishes immersion [3]
- **Absence of Self-evolution Capability**: Most digital humans operate as passive systems, unable to learn from interactions or adapt to user preferences, hindering their evolution into truly intelligent entities [3]

Industry Consensus
- Experts at SIGGRAPH Asia reached a consensus that the bottleneck in digital human development has shifted from visual fidelity to cognitive and interaction logic, emphasizing long-term memory, multimodal emotional expression, and self-evolution as core competencies [13][10]

Introduction of Mio
- Shanda AI Research Tokyo introduced Mio (Multimodal Interactive Omni-Avatar), a framework designed to transform digital humans from passive entities into intelligent partners capable of autonomous thought and interaction [16][22]
- Mio's architecture comprises five core modules: Thinker (cognitive core), Talker (voice engine), Facial Animator, Body Animator, and Renderer, which work together to form a seamless interaction loop [20][21]

Performance Metrics
- Mio achieved an overall Interactive Intelligence Score (IIS) of 76.0, an 8.4-point improvement over previous technologies, setting a new performance benchmark in the industry [25][22]

Future Outlook
- The development of Mio signifies a paradigm shift in digital human technology, moving the focus from static visual realism to dynamic, meaningful interactive intelligence, with potential applications in virtual companionship, interactive storytelling, and immersive gaming [22][25]
- Shanda AI Research Tokyo has made the complete technical report, pre-trained models, and evaluation benchmarks of the Mio project publicly available to foster collaboration in advancing the field [28]