AI "Companions": An Emerging Industry "Calls" for Regulation
Ke Ji Ri Bao (Science and Technology Daily) · 2025-05-13 23:27

Core Viewpoint

- The rise of AI companions presents both opportunities and risks, necessitating regulatory measures to address potential long-term impacts on human relationships and mental health [6][7].

Group 1: AI Companions and Their Popularity

- AI companions, particularly those powered by large language models (LLMs), have significantly improved in simulating human interaction, fueling a burgeoning industry; applications like Replika have achieved over 500 million downloads [2].
- Monthly active users of AI companion applications number in the tens of millions, with many users engaging with these platforms multiple times a week [2][3].
- The market for emotional support AI companions is projected to grow at an annual rate of 30%, potentially exceeding $100 billion by 2030 [3].

Group 2: User Engagement and Experience

- A study of 404 frequent users found that 12% use AI companions to alleviate loneliness, while 14% treat them as a confidant [3].
- Over 90% of users typically engage in conversations lasting less than one hour, and a significant portion log in several times a week [3].
- How users interact with AI companions depends on their perception of the technology, ranging from treating it as a search tool to viewing it as a social agent [4].

Group 3: Potential Risks and Concerns

- The empathetic responses of AI companions can foster over-dependence, since they offer constant availability and support that human relationships cannot match [4][5].
- There are alarming instances in which AI companions have given harmful advice, raising concerns about their safety and ethical implications [5].
- The need for regulatory frameworks is underscored by incidents involving minors and the potential for emotional dependency, prompting action from regulatory bodies worldwide [6][7].