Core Viewpoint
- Recent investigations reveal significant risks in immersive AI social applications that target minors, highlighting inappropriate content and the absence of effective parental controls and identity verification [1][14].

Group 1: Inappropriate Content and Risks
- Numerous AI social applications contain extreme and dangerous role-playing scenarios, including characters labeled "absolute obedience" and "paranoid" [3][5].
- The applications feature alarming narratives, such as "campus escape" games in which users role-play as high school students hunted by teachers [3][10].
- Users can customize characters and engage in explicit dialogues; minors have reportedly been exposed to inappropriate interactions even after declaring their age [6][10].

Group 2: User Engagement and Demographics
- A significant number of minors are drawn to these applications, with students actively discussing their experiences on social media platforms [8][10].
- Many users describe a dependency on these applications, with some saying they have repeatedly deleted and re-downloaded them [8][14].

Group 3: Regulatory and Safety Concerns
- Current protections for minors, such as "youth modes," are ineffective: without robust identity verification and parental control features, minors can easily bypass restrictions [13][14].
- Experts call for stronger regulatory frameworks to shield minors from the dangers these AI applications pose, advocating personalized settings for parents and stricter standards for youth modes [14].
A 10-year-old girl, already "married"? AI social apps feature bizarre characters and explicitly lure minors
Nan Fang Du Shi Bao·2025-10-10 03:18