Liu Xiaochun of the University of Chinese Academy of Social Sciences: Protection of Minors in the AI Era Must Shift to a Regulatory Model
Nan Fang Du Shi Bao · 2025-09-13 13:09

Core Viewpoint
The forum highlighted the need for a paradigm shift in the protection of minors in the AI era, moving from a restrictive model to a regulatory one that leverages AI's capabilities for personalized risk management and educational content delivery [1][4].

Group 1: Current Challenges in Minor Protection
- China's current legal framework for the online protection of minors is relatively solid, but practical problems persist, including harmful content, addiction, personal information leakage, and insufficient digital literacy [3].
- Minors' use of AI tools shows four distinct characteristics: high-intensity personalized interaction, one-on-one private communication, a shift from content consumers to content producers, and AI acting as a gateway to the world, all of which challenge traditional protection boundaries [3][4].

Group 2: Limitations of Traditional Protection Models
- The traditional model attempts to create a "safe space" for minors, but its effectiveness is undermined by three layers of information dilemmas: difficulty identifying minors without guardian consent, the lack of a legal basis for processing minors' data, and insufficient data for accurate user profiling [3][4].

Group 3: Proposed Regulatory Model
- A more open regulatory model is advocated, using AI's personalized information-processing capabilities to identify minors accurately and tailor risk-prevention strategies, while also promoting growth through personalized educational content [4].
- The regulatory framework should be layered: a foundation layer for identity verification, a focus on specific groups and scenarios for risk prevention, and a support layer for growth needs, alongside empowering parents through non-sensitive usage reports [4].
Group 4: Institutional Support and Collaboration
- Three areas of institutional support are emphasized: establishing legal legitimacy for personal information processing under protective purposes, creating risk-prevention guidelines in collaboration with platforms, and incentivizing parental involvement in governance [5].
- A gradual evaluation mechanism is necessary to give AI companies reasonable leeway, emphasizing the need for collaborative efforts in both advancing and regulating AI development [5].