Affective Computing
Can "Digital Family Members" Warm "Silver Loneliness"?
Xin Lang Cai Jing· 2025-12-23 23:14
The First Cross-Strait (Xiamen) Silver-Hair Expo was recently held at the Xiamen International Conference and Exhibition Center. At the event, an "elderly-care robot" with a customizable appearance and voice and basic interaction capabilities became one of the focal points. From a technical standpoint, the product integrates currently mature technology modules: its core capabilities, such as voice cloning, appearance simulation, and mobile obstacle avoidance, largely derive from the cross-domain fusion and scenario-specific adaptation of existing commercial applications (e.g., navigation voice assistants, wax figures, robot vacuums). While affirming its innovation and practical value, however, we must also be clear-eyed about the current capability boundaries of elderly-care robots. Reportedly, this robot's core functions remain limited to companionship dialogue and security linkage; it cannot handle complex housework or replace human caregivers. Looking ahead, elderly-care robots will inevitably evolve alongside advances in artificial intelligence, materials science, and affective computing. However the technology evolves, the fundamental measure of its worth should always return to whether it genuinely improves older adults' quality of life and dignity. Only when technology responds, with humility and warmth, to the emotional and caregiving needs that grow more acute with age does it truly shine with a human-centered light. The strengthening of emotional functions speaks directly to one of the long-neglected needs in today's elderly-care system: spiritual loneliness. Beyond material provision and physical care, what many older adults face is the desolation of having no one to talk to, day after day. Through daily conversation, memory prompts, and even interaction via customized likenesses of family members, the robot aims to provide sustained emotional response and psychological comfort ...
Precisely Identifying 87 Complex Emotional States
Xin Lang Cai Jing· 2025-12-22 18:17
(Source: Xin'an Evening News) Photo: the embodied emotional intelligence platform R&D team conducting research in the lab. As artificial intelligence advances, "human-machine symbiosis" has become inevitable, yet most of today's AI systems severely lack "emotional intelligence," which has become one of the core bottlenecks of human-machine symbiosis. On the afternoon of December 21, the "Anhui Science and Technology Marketplace AI Application Scenario Matchmaking Conference" was held at the Anhui Innovation Center, where Anhui Evolution Theory Technology Co., Ltd., founded by a research team from Hefei University of Technology, released its self-developed embodied emotional intelligence platform, injecting fresh momentum into AI's move from "tool empowerment" toward "emotional resonance." The platform is built around an embodied emotional-interaction large model trained on tens of millions of real emotional interactions. It adopts a flexible "1 multimodal large model + N specialized small models" architecture, providing four modular capabilities that cover the full pipeline of emotion perception, emotion understanding, emotion expression, and emotion interaction. Among its components, the fine-grained sentiment analysis platform can precisely identify 87 complex emotional states and map an "emotional capability atlas" spanning more than 200 dimensions; the multidimensional emotion generation platform gives digital humans vivid, expressive delivery; the embodied emotional-interaction hardware platform provides a general-purpose "emotional brain" for all kinds of robots; and the emotional edge computing platform delivers localized, low-latency, high-security interaction. "We have built three signature scenarios. The digital life companion PalPet offers a low-burden, high-engagement companionship experience; the smart elderly-care companion uses emotion perception and real-time response to provide older adults with attentive ...
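The "1 multimodal large model + N specialized small models" architecture described above can be sketched as a routing pipeline. This is a minimal illustrative sketch, not the vendor's actual API: all class names, the feature dictionary, and the stage outputs are assumptions invented for illustration.

```python
# Illustrative sketch of a "1 multimodal large model + N specialized
# small models" pipeline covering the four stages named in the article:
# emotion perception -> understanding -> expression -> interaction.
# Every name here is hypothetical; the real platform's interfaces are unknown.

class SpecialistModel:
    """A small model handling one stage of the emotion pipeline."""

    def __init__(self, stage: str):
        self.stage = stage

    def run(self, features: dict) -> dict:
        # Placeholder: a real specialist would run stage-specific inference.
        features[self.stage] = f"{self.stage}-output"
        return features


class EmotionPipeline:
    """Routes multimodal input through the four modular stages in order."""

    STAGES = ["perception", "understanding", "expression", "interaction"]

    def __init__(self):
        self.specialists = {s: SpecialistModel(s) for s in self.STAGES}

    def process(self, text: str, audio=None, video=None) -> dict:
        # The single large model would fuse modalities into shared features;
        # here we simply package the raw inputs as a stand-in.
        features = {"text": text, "audio": audio, "video": video}
        for stage in self.STAGES:
            features = self.specialists[stage].run(features)
        return features


result = EmotionPipeline().process("I miss my grandchildren.")
print(sorted(k for k in result if k in EmotionPipeline.STAGES))
```

The design point the sketch captures is separation of concerns: one shared multimodal representation feeds N independently replaceable stage specialists.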
Economic Daily: Moving Smart Elderly Care from "Usable" to "User-Friendly"
Xin Lang Cai Jing· 2025-12-19 23:22
Economic Daily writes that moving smart elderly care from "usable" to "user-friendly" and "routinely used" will require coordinated efforts across technology, institutions, application scenarios, and data. The standards system for smart elderly care should be strengthened: scientific, complete, and unified industry standards should be established, with explicit requirements for algorithmic transparency, data security, and privacy protection, drawing ethical boundaries for emotional-interaction products so the technology develops steadily within a safety framework. Iteration of key core technologies should also be accelerated, advancing capabilities such as embodied intelligence, affective computing, dialect recognition, and large-model integration, so as to improve product stability and adaptability in complex scenarios and make intelligent robots more human-like. Leveraging big-data technology, a nationwide database of older adults' health records and needs should be built step by step, with relevant data made public, to better serve the smart elderly-care industry. ...
AAAI 2026 | Reinventing the Industrial Film-Dubbing Pipeline: AI Learns the "Director-Actor" Dubbing Collaboration Model for the First Time
机器之心· 2025-12-15 01:44
Core Viewpoint - The article discusses the limitations of AI voice dubbing, particularly its lack of emotional depth, and introduces a new framework called Authentic-Dubber that incorporates director-actor interaction to enhance emotional expression in AI-generated voiceovers [2][3][19]. Group 1: AI Dubbing Limitations - AI voice dubbing often lacks the "human touch," as it skips the crucial director-actor interaction that brings emotional depth to performances [2][3]. - The current AI models simplify the dubbing process by having AI "actors" read scripts without the guidance of a director, resulting in a lack of emotional resonance [2][3]. Group 2: Authentic-Dubber Framework - The Authentic-Dubber framework, developed by a team led by Professor Liu Rui, introduces a director role into AI dubbing, simulating the emotional transmission mechanisms found in traditional dubbing processes [4]. - This system aims to teach AI to "understand first, then express," moving beyond mere imitation of sounds to a more nuanced emotional delivery [4]. Group 3: Mechanisms of Authentic-Dubber - The framework includes a multi-modal reference material library that serves as an emotional guide for AI, integrating various emotional cues such as scene atmosphere and facial expressions [7]. - A retrieval-augmented strategy allows the AI to quickly access emotionally relevant reference clips, mimicking how actors internalize emotional cues under a director's guidance [11]. - The system employs a progressive graph-structured speech generation method to ensure that the final output is rich in emotional layers, enhancing the overall quality of the dubbing [13]. Group 4: Experimental Validation - In tests on the V2C-Animation dataset, Authentic-Dubber significantly outperformed all mainstream baseline models in emotional accuracy (EMO-ACC) [14]. 
- Subjective evaluations by human listeners showed that Authentic-Dubber achieved the highest scores in emotional matching (MOS-DE) and emotional authenticity (MOS-SE) [15]. - The system demonstrated quantifiable advantages in emotional expression, as evidenced by spectral analysis showing distinct acoustic features for different emotions [16]. Group 5: Significance of the Research - The research elevates the competitive dimension of AI dubbing from mere synchronization to emotional resonance, indicating a deeper understanding of complex emotions by AI [19]. - By simulating key interactions in human collaboration, the framework represents a significant step towards creating AI voiceovers that can truly "inject soul" into characters [19].
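The retrieval-augmented strategy summarized in Group 3 above can be illustrated with a minimal sketch: given an emotion embedding for the current scene, retrieve the most emotionally similar reference clips, much as a director points an actor at relevant prior takes. The toy embeddings, the clip library, and the cosine-similarity choice are all assumptions for illustration, not the paper's actual implementation.

```python
import math

# Minimal sketch of retrieval-augmented emotional reference selection,
# loosely modeled on the strategy described for Authentic-Dubber.
# The embeddings and clip names below are invented for illustration.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical library: each reference clip carries a precomputed emotion
# embedding (e.g. fused from scene atmosphere and facial expression cues).
CLIP_LIBRARY = {
    "joyful_reunion": [0.9, 0.1, 0.0],
    "quiet_grief":    [0.0, 0.2, 0.9],
    "tense_standoff": [0.1, 0.9, 0.2],
}

def retrieve_references(scene_embedding, k=2):
    """Return the k clips whose emotion embeddings best match the scene."""
    ranked = sorted(CLIP_LIBRARY.items(),
                    key=lambda item: cosine(scene_embedding, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(retrieve_references([0.0, 0.3, 0.8], k=1))  # → ['quiet_grief']
```

In the full system, the retrieved clips would then condition the speech generator rather than merely being listed, which is where the progressive graph-structured generation step would come in.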
The First Humans Ruled by AI
投资界· 2025-11-30 08:23
The following article comes from Coollabs (酷玩实验室), which covers China's technological development and industrial upgrading. The new generation of childcare gadgets. Source: Coollabs (ID: coollabs). Decades ago, Orwell depicted a scene in *1984*: a society surrounded by telescreens, where people's every move is watched. Everyone watches their words and deeds and shows no joy or sorrow, because "Big Brother is watching you." Readers treated it as an extreme fantasy, a plot confined to dystopian fiction; after all, the technology wasn't there yet, and wiring every household with electronic surveillance would be expensive. Today, AI has turned that science fiction into a parenting gadget: prop up a phone, launch Doubao, and a parent instantly becomes Big Brother. The first unlucky humans to be ruled by AI have arrived! AI childcare, no parental burnout. Some genius parent came up with this unorthodox childcare hack: start a video call with Doubao and have it supervise the child's homework. The instruction is simple: "Doubao, please watch my kid and remind him when he isn't focused or his posture is off." The AI tutor takes its post: "Kid, stop playing with your pen and focus on your homework; you can play once you're done. Kid, your posture is a bit ...
Post-2000s Daters Are Using AI as a "Wingman"
36Kr · 2025-11-24 10:58
Core Insights - The rise of AI love assistants is transforming how Generation Z approaches dating, with products generating significant revenue in a short time [1][2] - Despite initial success, more comprehensive AI dating assistants face commercialization challenges due to high model costs and limited user willingness to pay [1][5] - The fundamental question arises about whether optimizing relationships through AI leads to more efficient connections or a retreat from genuine emotional experiences [1][7] Group 1: Product Overview - AI love assistants function as input method plugins rather than standalone apps, allowing users to generate emotionally intelligent responses with ease [2] - These products challenge the "difficult to monetize" issue in AI applications by implementing high-priced subscription models, demonstrating market willingness to pay [2][6] - The market has seen rapid growth, with products like Lovekey generating 31 million yuan in revenue and over 2 million monthly active users within a year [1][2] Group 2: Market Challenges - Comprehensive AI dating assistants, despite their advanced features, are struggling to gain traction, with the first app, Lumi, being taken down due to high operational costs [5][6] - The difficulty in monetization stems from the need for a subscription model that may not align with user behavior, as most users only seek assistance during the early stages of relationships [6][7] - The reliance on user input for data collection and the complexity of operations hinder user engagement compared to simpler AI keyboard products [6][8] Group 3: Future Prospects - Industry experts suggest that AI love assistants could thrive if integrated with other platforms, focusing on niche markets and specific user needs [7][8] - The potential for multi-modal emotional understanding through various data collection methods could enhance the effectiveness of AI in recognizing and responding to human emotions [8][9] - The essence of love, 
characterized by unpredictability and emotional depth, may require AI to incorporate randomness to better simulate real-life dating experiences [9][10]
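The article's closing point, that AI may need to incorporate randomness to simulate the unpredictability of real dating, can be sketched as temperature-controlled sampling over candidate replies. This is an illustrative sketch only: the candidate replies, scores, and the softmax-sampling approach are assumptions, not how any of the products mentioned actually work.

```python
import math
import random

# Illustrative sketch: inject controlled randomness into an AI reply
# picker via temperature-scaled softmax sampling. Candidate texts and
# scores are invented; higher temperature -> more unpredictable picks.

def pick_reply(candidates, temperature=1.0, rng=random):
    """Sample one reply with probability proportional to
    exp(score / temperature), i.e. a softmax over scores."""
    scores = [c["score"] / temperature for c in candidates]
    m = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    return rng.choices([c["text"] for c in candidates],
                       weights=weights, k=1)[0]

candidates = [
    {"text": "Safe compliment", "score": 2.0},
    {"text": "Playful tease", "score": 1.5},
    {"text": "Unexpected question", "score": 1.0},
]

# Near-zero temperature behaves almost greedily; larger values
# flatten the distribution and let lower-scored replies through.
print(pick_reply(candidates, temperature=0.1))
```

The design trade-off mirrors the article's tension: low temperature optimizes for the "correct" reply, while higher temperature trades score for the spontaneity real conversations have.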
Early-Bird Countdown: 6 Days | The China Large Model Conference Invites You to Explore the Intelligence Frontier of Large Models!
量子位· 2025-10-17 11:30
Core Viewpoint - The article discusses the upcoming "China Large Language Model Conference" (CLM) scheduled for October 28-29, 2025, in Beijing, focusing on advancements in natural language processing and large models in AI, aiming to foster dialogue among top scholars and industry experts [2][3]. Group 1: Conference Overview - The first "China Large Language Model Conference" took place in June 2024, gathering over a thousand participants and featuring discussions on the path of large models in China [2]. - The 2025 conference will continue the spirit of the first, emphasizing theoretical breakthroughs, technological advancements, and industry applications of large models [2][3]. Group 2: Keynote Speakers and Topics - Notable speakers include Academicians Guan Xiaohong and Fang Binhang, who will present cutting-edge perspectives on AI and large-model development [3]. - The conference will feature 13 high-level forums covering topics such as generative AI, knowledge graphs, embodied intelligence, affective computing, and social media processing [3]. Group 3: Detailed Agenda - The agenda includes a series of invited reports and thematic discussions, with sessions on topics such as the implications of reward functions in AI, ethics- and safety-driven key technologies for large models, and the role of computational power in enhancing human intelligence [5][30][25]. - Specific sessions will address the collaboration between large models and AI-generated content, embodied intelligence, and the implications of large models in various sectors including healthcare and multilingual processing [8][10][12][16]. Group 4: Registration and Participation - Registration for the conference is now open, with further details available on the conference website [3][24]. - Participants are encouraged to join in exploring the boundaries of large models and advancing AI technology in China [3].
Companion Robots: The Premium Track of AI Companionship
2025-08-25 14:36
Summary of the Conference Call on AI Companion Robots Industry Overview - The AI companion robot industry is poised for significant growth, driven by the increasing demand for social interaction solutions in society. The market potential is substantial, particularly in the elderly and youth demographics [1][2]. Key Insights and Arguments - **Market Potential**: - The potential demand for companion robots in the elderly market in China is estimated at approximately 420 billion yuan, based on aging trends and consumer penetration rates [5]. - The youth market (ages 15-34) has a potential demand of around 500 billion yuan, reflecting a high acceptance of interactive AI hardware products [6][8]. - The emotional companionship demand among the youth demographic is estimated to encompass about 50 million potential customers, derived from data on toy enthusiasts and pet owners [7]. - **Market Segmentation**: - Companion robots are categorized into three types: - **Desktop Companion Robots**: Small-sized, basic interaction capabilities. - **Pet Companion Robots**: Focused on emotional attachment, featuring realistic designs. - **Facial Expression Companion Robots**: The largest market segment, capable of displaying hundreds of facial expressions for realistic interaction [3]. - **Technological Barriers**: - The core technological barriers for facial expression companion robots include: - Product design that requires a deep understanding of human anatomy. - Advanced perception and interaction technologies, including language models and emotional recognition. - High-precision facial expression control using micro motors, with some advanced products utilizing up to 60 control units for facial features [4][10]. Additional Important Insights - **AI Toy Market**: - The AI toy market, which includes educational applications, has a potential global demand of 3.6 to 3.9 billion USD, particularly targeting children with autism [9]. 
- **Servo Motor Market**: - The servo motor market in China is projected to reach 69 billion USD, with a current market size of 10.5 billion USD. The market is highly concentrated, dominated by Japanese and German companies, with domestic players also emerging [11]. - **Development Status of Manufacturers**: - Domestic and international manufacturers are at a similar development stage, focusing on facial expression control and emotional perception algorithms. Notable examples include the UK's Engineered Arts and China's EX Robots [12].
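The ~420 billion yuan elderly-market estimate cited above can be illustrated with a back-of-envelope composition. The call transcript gives only the final figure; the population, penetration, and per-household spend below are assumptions chosen purely so the arithmetic is visible, not figures from the source.

```python
# Back-of-envelope illustration of how a ~420 billion yuan elderly-market
# estimate could be composed from penetration-style assumptions.
# All three input figures are hypothetical; only the product matters here.

elderly_population = 300_000_000   # assumed addressable elderly population
penetration_rate = 0.07            # assumed consumer penetration rate
avg_spend_yuan = 20_000            # assumed spend per adopting household

market_yuan = elderly_population * penetration_rate * avg_spend_yuan
print(f"{market_yuan / 1e9:.0f} billion yuan")  # → 420 billion yuan
```

Sizing estimates like this are sensitive to each factor: halving the assumed penetration rate halves the market, which is worth keeping in mind when comparing the 420-billion and 500-billion figures in the summary.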
Seven Major Trends We Saw at the 2025 World Artificial Intelligence Conference | Hundun In-Depth Observation
混沌学园· 2025-08-20 12:05
Core Viewpoint - The 2025 World Artificial Intelligence Conference (WAIC) showcased significant advancements in AI, highlighting a shift from theoretical applications to practical implementations across various industries, indicating the emergence of a new era of human-machine collaboration [1][2]. Group 1: Trends in AI Development - Trend 1: Humanoid robots and embodied intelligence are transitioning from demonstrations to real-world applications, showcasing capabilities such as playing Mahjong and performing tasks in factories [5][6][11]. - Trend 2: AI agents are now integrated into workplace workflows, enhancing productivity by autonomously executing tasks across various sectors [12][13][17]. - Trend 3: AI-enabled devices like smart glasses and AI headphones are becoming prevalent in daily life, merging AI capabilities with personal devices to solve everyday problems [19][20][21]. Group 2: Innovations and Market Opportunities - Trend 4: AI foundational model technology is advancing, with a notable increase in open-source initiatives and the development of multi-modal models that understand both language and visual inputs [27][30][31]. - Trend 5: Multi-modal interaction and human-AI collaboration are evolving, with AI systems becoming more proactive and emotionally aware, creating new market opportunities in emotional computing [32][35][38]. - Trend 6: The cost of computing power is decreasing, driven by advancements in domestic chip technology, which will enable broader access to AI capabilities across various sectors [39][42][43]. Group 3: Industry Applications and Future Outlook - Trend 7: AI is empowering a diverse range of industries, including manufacturing, finance, and healthcare, with a growing number of practical applications being developed [44][45][47]. - The conference underscored the potential for AI to drive innovation and create "native innovation enterprises," similar to the transformative impact of the internet in the late 1990s [48][50].
From Tech Showcase to Real Breakthrough: Decoding the Core Value of WAIC 2025
36Kr · 2025-08-01 03:49
Core Insights - The World Artificial Intelligence Conference (WAIC) 2025 showcases the transition of AI from laboratory experiments to practical applications in various industries and daily life, emphasizing its potential to change societal dynamics rather than just demonstrating capabilities [1][3][21] - The event highlights the importance of understanding how these technologies can integrate into everyday life, serving as a driving force for progress [3][19] Technological Breakthroughs - AI technologies are evolving from simple mechanical responses to more complex interactions, with robots now capable of understanding human emotions and actions, as demonstrated by the GR-3 humanoid robot designed for companionship and care [4][7] - The introduction of advanced AI systems, such as Baidu's NOVA digital human technology, allows for rapid cloning and collaborative content creation, breaking traditional boundaries in content production [6][10] Industry Empowerment - AI is moving beyond experimental stages to become integral in sectors like entertainment, education, and healthcare, enhancing user experiences and creating new business models [10][11] - In the entertainment industry, AI-driven virtual characters are revolutionizing content creation, significantly reducing production costs and time [11][13] - The education sector is witnessing a shift where AI acts as a personalized learning partner, adapting to student needs and enhancing engagement through interactive methods [14][17] - In healthcare, AI innovations are optimizing drug development and improving diagnostic processes, showcasing a transformative impact on medical services [16][19] Emotional AI and Market Growth - The emotional computing and human-like interaction market is projected to grow at an annual rate of 35%, with significant potential in healthcare, education, and customer service sectors [17] - The integration of emotional AI into daily life is expected to redefine human-machine interactions, 
making AI a more relatable and supportive presence [9][19] Social Impact and Future Directions - The AI Empowerment for Sustainable Development Initiative emphasizes the role of AI in addressing global challenges such as green transformation and equitable healthcare and education [19][22] - The advancements in AI are not just about efficiency but also about fostering social equity and enhancing the quality of life, positioning AI as a true collaborator in human civilization [21][22]
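The 35% annual growth rate projected above for the emotional computing market implies some quick compound-growth arithmetic. The growth rate comes from the article; the five-year horizon is an assumption added for illustration.

```python
import math

# Quick arithmetic on the article's projected 35% annual growth rate
# for the emotional-computing market: doubling time and a 5-year multiple.

growth_rate = 0.35
doubling_years = math.log(2) / math.log(1 + growth_rate)
five_year_multiple = (1 + growth_rate) ** 5

print(f"doubling time: {doubling_years:.1f} years")           # → 2.3 years
print(f"5-year growth multiple: {five_year_multiple:.1f}x")   # → 4.5x
```

In other words, a sustained 35% rate more than quadruples the market in five years, which is what makes the healthcare, education, and customer-service projections above so aggressive.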