In the AI Era, the Protection of Minors Must Also Become "Smarter"
Huan Qiu Wang Zi Xun · 2025-06-23 00:39

Core Viewpoint
- The rapid development of artificial intelligence (AI) technology presents both opportunities and challenges for the protection of minors in the digital space, necessitating new governance models to safeguard their mental and physical well-being [1][3].

Group 1: Challenges in Protecting Minors
- Interactions between minors and AI tools occur in private contexts, making it difficult for parents and regulators to detect harmful content or assess the risks of AI use [1][2].
- Minors are not only consumers of content but also active creators, sometimes producing low-quality or inappropriate material, which underscores the need for positive guidance and digital literacy education [2][3].
- Harmful information online evolves quickly and easily changes form to evade detection, complicating regulatory efforts and requiring ongoing monitoring and targeted enforcement [3][4].

Group 2: Proposed Solutions
- The concept of minor protection should be embedded throughout the development of AI models and applications, ensuring compliance at every stage from data training to service operation [4].
- The protection model should shift from isolation and control toward active cultivation and the creation of a safe digital ecosystem, promoting positive influences while preventing harm [4].