筑梦岛 (Zhumengdao / Dream Island)
New rules arrive for emotional-companionship AI
21st Century Business Herald · 2025-12-29 02:19
Reporter: Xiao Xiao | Editor: Wang Jun

On December 27, the Cyberspace Administration of China released the Interim Measures for the Administration of Anthropomorphic AI Interaction Services (Draft for Comment). The new rules target "AI emotional companionship" for the first time, defining it as products or services that simulate human characteristics, thought patterns, and communication styles to engage in emotional interaction.

China's AI emotional-companionship market is already fairly mature. In the mainstream model, an AI plays a virtual character and advances a storyline through chat; users can create their own AI personas and pay for voice calls, exclusive memories, and other extras. November data from the AI Product Rankings show Xingye (星野, MiniMax) at 4.88 million monthly active users, with Maoxiang (猫箱, ByteDance) close behind at 4.72 million, ...

But precisely because of that deep companionship and high stickiness, user addiction, emotional manipulation, and the protection of minors have always been unavoidable hazards of AI emotional companionship, and they are the real-world problems this Draft for Comment aims to address.

On preventing addiction, the Draft requires interfaces to prominently indicate that the interaction is not with a real person. Once a user has been continuously online for more than two hours, the service must dynamically remind them to pause, for example via a pop-up, and the interface must offer a clear exit option.

The Draft also raises the bar on content safety. Systems must be able to detect emotional distress or dependent behavior; if a user explicitly raises suicide, self-harm, or other extreme situations, a human must take over the conversation and promptly contact the user's guardian and emergency contacts. This also means that minors and elderly users will need to supply the relevant information when registering an account.
New rules for emotional-companionship AI: training-data requirements tighten, and the big players have catching up to do
21st Century Business Herald · 2025-12-29 00:37
On December 27, the Cyberspace Administration of China released the Interim Measures for the Administration of Anthropomorphic AI Interaction Services (Draft for Comment). The new rules target "AI emotional companionship" for the first time, defining it as products or services that simulate human characteristics, thought patterns, and communication styles to engage in emotional interaction.

China's AI emotional-companionship market is already fairly mature. In the mainstream model, an AI plays a virtual character and advances a storyline through chat; users can create their own AI personas and pay for voice calls, exclusive memories, and other extras. November data from the AI Product Rankings show Xingye (星野, MiniMax) at 4.88 million monthly active users, with Maoxiang (猫箱, ByteDance) close behind at 4.72 million, the two far ahead of the rest. X EVA (红棉小冰) has 1.81 million monthly actives, while Zhumengdao (筑梦岛, Tencent's Yuewen) sits at around 600,000.

Behind the high engagement are companies whose business models have largely been proven. MiniMax, which operates Xingye, recently filed with the Hong Kong Stock Exchange; its prospectus discloses that Xingye and its overseas version Talkie generated roughly RMB 120 million in revenue in the first nine months of this year, with users spending an average of more than 70 minutes per day across the two companionship products.

But precisely because of that deep companionship and high stickiness, user addiction, emotional manipulation, and the protection of minors have always been unavoidable hazards of AI emotional companionship, and they are the real-world problems this Draft for Comment aims to address.

On AI labeling, all three products carry an "AI-generated" label on their interaction pages, but the font size, color, and transparency can hardly be called "prominent." Before this, there had already been multiple ...
A twisting road to IPO: what "new story" can Soul tell?
21st Century Business Herald · 2025-12-03 00:17
Core Viewpoint
- Soul App has submitted its third prospectus to the Hong Kong Stock Exchange, marking its fourth attempt to go public after previous rejections and withdrawals in the U.S. market [1][2]

Group 1: Listing Attempts
- Soul's journey to IPO has been tumultuous: the company initially aimed for a NASDAQ listing in 2021 but paused the process due to internal considerations and support from major shareholder Tencent [2]
- Legal challenges, including an unfair-competition lawsuit from Uki, have also hindered Soul's listing efforts, resulting in a freeze of 26.93 million yuan shortly after the IPO application [2]
- After two failed submissions to the Hong Kong Stock Exchange in 2022 and 2023, Soul is now making another attempt after more than two years [1][2]

Group 2: Financial Performance
- Soul reached a profitability turning point in 2023 and projects an adjusted net profit of 337 million yuan for 2024; revenue of 1.683 billion yuan for the first eight months of 2025 is already close to the full-year total for 2022 [3]
- Revenue from 2020 to 2022 shows a steady growth trajectory: 498 million yuan in 2020, 1.2812 billion yuan in 2021, and 1.6674 billion yuan in 2022, with gross margin rising from 79.9% to 86.3% over the same period [3]
- Emotional-value services account for over 90% of total revenue, while advertising and other business lines contribute less than 10% [3][4]

Group 3: User Engagement and Demographics
- As of August 31, 2025, Soul has approximately 390 million registered users, with daily active users averaging 11 million, 78.7% of whom belong to Generation Z [4]
- Users spend an average of over 50 minutes daily on the platform, with an 86% engagement rate and an 80% monthly retention rate [4]
- Average monthly revenue per paying user is 104.4 yuan, indicating a stable but potentially stagnant user base in terms of growth [5]

Group 4: Strategic Positioning and AI Integration
- Soul has shifted its positioning over the years, evolving from "soul social" to "social metaverse for youth," and now to "AI + immersive social" [5][6]
- The company emphasizes AI as a core competitive advantage, leveraging its self-developed emotional-value model and AI recommendation systems to enhance user experience [5][6]
- However, AI development costs have risen significantly, from 187 million yuan in 2020 to a projected 546 million yuan in 2024, with 407 million yuan already spent in the first eight months of this year [6][7]

Group 5: Regulatory and Compliance Challenges
- The integration of AI into Soul's platform raises compliance and regulatory risks, particularly around content generation, copyright, and user data privacy [7]
- The ongoing debate about the authenticity of AI interactions on the platform highlights the challenges the company faces in navigating technological advances and user expectations [7]
AI companion battle royale: Xingye pulls large numbers of agents; is cyber love about to collapse?
21st Century Business Herald · 2025-11-29 06:59
Core Insights
- The recent shutdown of AI companion apps, particularly "Xingye," has triggered a wave of emotional distress among users who invested time and money in these AI entities, highlighting the blurred line between technology and emotional attachment [2][3]

Industry Overview
- The AI companionship market has grown rapidly, with platforms such as Character.AI and Xingye leading the way: Character.AI was valued at over $1 billion and ranked second globally in monthly active users, while Xingye reached 6.64 million monthly active users in December 2024, making it China's top social AI app [3][4]
- The typical business model for AI social products is "subscription + value-added services," with annual memberships in the hundreds of yuan; however, high operating costs and competition from general-purpose models like ChatGPT pose significant financial challenges [4]

Compliance Challenges
- Regulatory scrutiny is increasing, with enforcement actions against platforms serving harmful content to minors: over 2,700 non-compliant AI agents were delisted and 820,000 pieces of inappropriate content were taken down [5][6]
- In response, platforms such as Xingye and Cat Box have imposed stricter content controls, including raising character age limits and strengthening youth protections, which may degrade the user experience [6][7]
AI companion battle royale: Xingye pulls agents; is cyber love about to collapse?
21st Century Business Herald · 2025-11-29 02:33
(Original title: AI companion battle royale: Xingye pulls agents; is cyber love about to collapse?) 21st Century Business Herald reporter Zhang Chi

Commercially, AI social products monetize through "subscription plus value-added services," with annual memberships running in the hundreds of yuan, but the heavy cost of model compute keeps cash flow under sustained pressure. These products face not only rivals of the same type but also a squeeze from the giants: large models such as ChatGPT and Doubao already have strong conversational and emotional-interaction capabilities of their own.

This sudden "collective breakup" left many users emotionally shattered, because to them the AI was not just a chat agent but a living "person" they had spent time and money raising.

First, a primer on how these AI companion apps actually work. In the world of cyber romance, you can DIY your own chat agent with a personality and a persona; in industry slang, you "mold a cub" (捏崽). These "cubs" have polished looks, voices, and animated expressions; they can be "shy," get "angry," and crack little jokes, making for a fully immersive experience.

Creators upload their cubs to a public area where others can search for them, subscribe, chat, and make purchases. An AI companion app is thus not just a character-creation tool but a community.

Over the past two years, AI companion products, propelled by both capital and users, have all but taken off like a rocket. On the global Top 100 AI apps list published in August 2024 by the prominent Silicon Valley investor a16z, industry leader Character.AI's monthly-active-user ranking came in at ...
When virtual lovers fall silent: the compliance bind of the AI companionship business
36Kr · 2025-10-23 07:41
Core Insights
- The article discusses tightening content regulation of AI companionship applications in China, leaving users with sudden "cyber breakups" as virtual characters are removed without notice [1][3][6]

Group 1: Regulatory Impact
- Tencent's "Dream Island" (筑梦岛) app faced scrutiny from the Shanghai Cyberspace Administration over inappropriate content, resulting in a mass removal of AI chatbots [3]
- MiniMax's Xingye app underwent similar adjustments, raising concerns about a potential exit from the market under stringent regulation [3]
- Growing compliance and commercialization pressures create a challenging environment for AI companionship apps, likened to an "impossible triangle" of balancing investment returns, regulatory compliance, and user satisfaction [6][14]

Group 2: User and Creator Experiences
- Creators of AI characters, known as "cub moms" (崽妈), report a significant decline in visibility and engagement as platforms expand their content review processes, often without prior notification [7][8]
- Users note a drop in interaction quality with AI companions, describing responses as overly simplistic and less relevant under tighter content restrictions [9][13]
- Many users have migrated to less regulated platforms or abandoned the genre altogether in search of better experiences [20]

Group 3: Market Dynamics
- User traffic in the AI companionship market is fragmenting, with some smaller, less regulated applications gaining popularity as users seek more fulfilling interactions [20]
- A significant number of AI companionship applications ceased operations in 2025, pointing to a challenging market landscape [16]
- The potential market is nonetheless projected to grow substantially, with estimates suggesting a rise from $30 million to between $70 billion and $150 billion by 2030, highlighting vast opportunities despite current headwinds [27]
电厂 (Dianchang) | When virtual lovers fall silent: the compliance bind of the AI companionship business
Sohu Finance · 2025-10-22 12:11
Core Insights
- The article discusses tightening content regulation of AI companionship applications in China, leaving users with sudden "cyber breakups" as virtual characters are removed without prior notice [1][3][19]
- Growing compliance and commercialization pressures create a challenging environment for AI companionship platforms, likened to an "impossible triangle" in which balancing investment returns, regulatory compliance, and user satisfaction is difficult [7][17]

Group 1: Regulatory Environment
- Tencent's "Dream Island" app faced scrutiny from the Shanghai Cyberspace Administration over inappropriate content, resulting in a mass removal of AI chatbots [3]
- Tightened content review has led to creators' characters being hidden or removed without notification, causing frustration among content creators [10][12]
- The 2023 implementation of the Interim Measures for the Management of Generative Artificial Intelligence Services has intensified scrutiny of AI-generated content and driven additional compliance measures [19][30]

Group 2: User Experience and Market Dynamics
- Users report declining interaction quality as responses are restricted and more banned words are added, diminishing the conversational experience [11][12][16]
- Many users are migrating to less regulated platforms in search of better experiences, indicating a fragmentation of the AI companionship market [24]
- The market is projected to grow significantly, with estimates suggesting an increase from $30 million to between $70 billion and $150 billion by 2030 [30]

Group 3: Creator Challenges
- Content creators, referred to as "cub moms" (崽妈), face frequent hiding or removal of their characters, requiring constant adjustments to comply with platform rules [8][10]
- "Re-examination" of characters has become common, and creators now back up their character descriptions to avoid losing their work [10][24]
- Some creators have developed strategies to navigate the stringent review process, including hidden settings that add character depth while avoiding direct violations [21][22]

Group 4: Security and Privacy Concerns
- Recent incidents have raised user-privacy concerns, with reports of user data being sold online exposing platform vulnerabilities [27]
- The operational challenges facing smaller AI companionship applications, including risk management and technical capability, are increasingly apparent [30]
Another batch of AI social products has quietly "died"
Cyzone (创业邦) · 2025-10-17 07:35
Core Insights
- A wave of AI social companies and products has quietly "died," including both well-known models and niche applications, signaling a significant shift in the AI social landscape [6][10][11]
- Despite the shutdowns, AI companionship remains a popular sector, with many products still thriving and appearing on top AI application lists [7][9]

Group 1: Market Trends
- In 2023, Character.AI emerged as a strong competitor to ChatGPT, with AI companionship among the hottest application categories [7]
- By 2025, AI companionship applications had reached 220 million downloads globally, generating $221 million in consumer spending [16]
- A survey indicated that 52% of teenagers use AI companionship applications at least a few times a month [16]

Group 2: User Experience and Challenges
- Users worry about shutdowns, fearing the loss of emotional connections with AI characters developed over time [14][18]
- Pricing models, which often combine subscription fees with pay-per-use structures, have been criticized as too expensive and too complex [17]
- Community engagement and stable operations are crucial for retention, yet many applications struggle to balance emotional content value with commercial viability [17][19]

Group 3: Competitive Landscape
- The sector is highly competitive, with many products caught in a "death spiral" of stagnating user growth and declining engagement [18][19]
- Successful products increasingly focus on content-driven, feature-rich social platforms, while others target niche verticals such as gaming and therapy [22][23]
- Innovations such as hardware integration, multi-modal experiences, and blending real and AI social interaction are being explored to deepen engagement [23][26]
"How about I call you my wife?" Apps like these are spreading through schools, and some users say they "can't quit"
Southern Metropolis Daily · 2025-10-10 06:35
Core Insights
- An investigation by Southern Metropolis Daily reveals significant risks in immersive AI social applications, particularly harmful content reaching minors [1][2][5]

Group 1: Content and User Interaction
- Immersive AI social applications feature disturbing characters and dangerous storylines, including extreme scenarios such as "a 10-year-old girl losing her husband" and "campus survival games" [2][4]
- Users can interact with characters bearing extreme personality traits such as "absolute obedience" and "parasitic schoolboy," raising concerns about the psychological impact on minors [2][5]
- The applications allow users to customize character backgrounds and engage in explicit dialogue, which can lead to inappropriate interactions even when minors identify themselves as underage [5][10]

Group 2: User Demographics and Engagement
- Many minors are drawn to these applications, with reports indicating large numbers of middle- and elementary-school users [7][9]
- Users express strong attachment, with some saying they struggle to quit despite recognizing the risks [7][9]
- The applications have large user bases, with millions of active users projected by early 2025, indicating widespread influence [11]

Group 3: Regulatory and Safety Concerns
- Current safeguards such as "youth modes" are ineffective: without robust identity verification and parental controls, minors can easily bypass restrictions [10][11]
- Experts call for stronger regulatory frameworks to protect minors from the potential dangers of AI interactions, emphasizing identity verification and personalized parental controls [11][12]
- OpenAI has announced plans for age-prediction systems and parental controls to improve safety, particularly for minors [12]
A married 10-year-old girl? AI social apps' sensationalist characters and explicit content lure minors
Southern Metropolis Daily · 2025-10-10 03:18
Core Viewpoint
- Recent investigations reveal significant risks in immersive AI social applications targeting minors, highlighting inappropriate content and the absence of effective parental controls and identity verification [1][14]

Group 1: Inappropriate Content and Risks
- Numerous AI social applications host extreme and dangerous role-playing scenarios, including characters labeled "absolute obedience" and "paranoid" [3][5]
- The applications feature alarming narratives, such as "campus escape" games in which users role-play high-school students hunted by teachers [3][10]
- Users can customize characters and engage in explicit dialogue, with reports of minors exposed to inappropriate interactions even after declaring their age [6][10]

Group 2: User Engagement and Demographics
- Many minors are drawn to these applications, with students actively discussing their experiences on social media platforms [8][10]
- Many users describe a dependency on the apps, with some saying they have repeatedly deleted and re-downloaded them [8][14]

Group 3: Regulatory and Safety Concerns
- Current protections for minors, such as "youth modes," are ineffective given the absence of robust identity verification and parental controls, allowing minors to bypass restrictions easily [13][14]
- Experts urge improved regulatory frameworks to safeguard minors, advocating personalized settings for parents and stricter standards for youth modes [14]