Core Viewpoint
- The draft regulation aims to promote the healthy development and standardized application of AI human-like interactive services while safeguarding national security and the public interest, and protecting the lawful rights of citizens and organizations [3].

Group 1: General Principles
- The regulation is grounded in laws including the Civil Code, the Cybersecurity Law, and the Data Security Law, among others [3].
- It applies to services that simulate human personality traits and communication styles through various media [3].
- The government encourages innovation in human-like interactive services while applying prudent, category-based regulation to prevent abuse [3][4].

Group 2: Service Standards
- Providers are encouraged to expand application scenarios safely, particularly in cultural dissemination and elderly companionship, in line with socialist core values [6].
- Providers must adhere to laws and ethical standards; activities that harm national security, spread false information, or promote self-harm are prohibited [7].
- Providers must establish safety responsibility and management systems, including algorithm audits and emergency response measures [8][9].

Group 3: User Protection
- Providers must identify user states and intervene when extreme emotions or dependency are detected, including emergency contact protocols for at-risk users [10][11].
- A minors' mode must be established, requiring parental consent for emotional companionship services and allowing guardians to monitor usage [12][13].
- Providers must ensure data security and allow users to delete interaction data, with additional consent required for minors [13][15].

Group 4: Compliance and Oversight
- Providers must conduct safety assessments under specific conditions, such as reaching a user base of over 1 million or making significant changes to the service [21][22].
- Internet application stores are responsible for verifying the safety assessments of AI interactive service applications [24].
- Violations of the regulation may result in penalties, including service suspension [20].
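The user-protection requirement above (intervening when at-risk states are detected, with human takeover for users who explicitly express suicidal or self-harm intent) can be illustrated with a minimal sketch. All names and the phrase list here are hypothetical illustrations, not part of the regulation; a real provider would use trained risk classifiers and clinically reviewed escalation protocols rather than keyword matching.

```python
# Hypothetical sketch of the escalation rule: messages that explicitly
# mention suicide or self-harm are routed to a human operator instead of
# the AI. The phrase list and function names are illustrative only.

RISK_PHRASES = ("kill myself", "end my life", "hurt myself", "self-harm")

def needs_human_takeover(message: str) -> bool:
    """Return True if the message should be escalated to a human operator."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def route(message: str) -> str:
    """Route a user message: human takeover for at-risk users, else AI reply."""
    return "HUMAN_OPERATOR" if needs_human_takeover(message) else "AI_ASSISTANT"
```

In practice the takeover path would also trigger the emergency-contact protocol the draft requires for at-risk users; this sketch only shows the routing decision itself.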
When a user is found to explicitly express intent to commit suicide, self-harm, or similar acts, a human operator will take over the AI conversation
第一财经·2025-12-29 06:47