Economic Information Daily Investigation: "De-watermarking" Bypasses Regulation and the "Anti-Identification" Business Booms; Why Is Mandatory Labeling of AI-Generated Content So Hard to Implement?
Xinhua News Agency · 2026-01-13 06:58

Core Viewpoint
- The implementation of the "Artificial Intelligence Generated Synthetic Content Identification Measures" on September 1, 2025 marks the beginning of a regulated era for AI-generated content in China, requiring explicit identification of such content [1][3].

Group 1: Implementation and Impact of the AI Identification Policy
- The policy has been in effect for over 100 days, and many platforms have launched identification features and management measures, yet enforcement remains uneven: a large share of AI-generated content still carries no clear label [1][3].
- China's generative AI user base reached 515 million by June 2025, up 266 million from December 2024, indicating rapid growth in AI content consumption [3].
- Platforms including Douyin and Kuaishou have built their own AI identification mechanisms that let users declare AI-generated content, which is then marked accordingly [4].

Group 2: Challenges and Issues in AI Content Identification
- Despite the identification policy, AI forgery remains widespread and its techniques are growing more sophisticated, creating a difficult environment for identification governance [8].
- A survey by a university AI governance team found a 40% increase in users' skepticism toward content of unknown origin after the policy took effect, and the time needed to trace AI-generated false news has fallen from 72 hours to 12 hours thanks to implicit identification (see the sketch after this summary) [7].
- A black market for "anti-identification" services has emerged, including tools that strip AI watermarks, with prices ranging from tens to thousands of yuan [9][10].

Group 3: Recommendations for Strengthening AI Governance
- Experts recommend raising the technical standards for AI identification to prevent tampering and ensure traceability, since current regulatory technologies have weaknesses [11].
- They also call for a clearer division of responsibility among content generators, platforms, and distributors, along with stricter penalties for identification-related violations [11][12].
- A collaborative governance model involving government, the public, and platforms is recommended to improve reporting and oversight mechanisms and to encourage public participation in AI content governance [12][13].
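The Measures distinguish explicit labels (marks visible to users) from implicit identifiers (machine-readable provenance information embedded in the file or its metadata), and it is the implicit layer that enables the faster tracing cited above. The sketch below illustrates the general idea only; it assumes the Pillow library and uses hypothetical field names ("AIGC_Label", "AIGC_Producer") rather than the fields actually defined in the national standard.

```python
# Minimal sketch of an implicit identifier: machine-readable provenance
# metadata embedded in a PNG, separate from any visible watermark or label.
# Field names below are hypothetical, not those of the Chinese standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_implicit_label(src_path: str, dst_path: str, producer: str) -> None:
    """Write AI-provenance metadata into a PNG's text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC_Label", "AI-generated")   # hypothetical field name
    meta.add_text("AIGC_Producer", producer)      # e.g. the generating service
    img.save(dst_path, pnginfo=meta)

def read_implicit_label(path: str) -> dict:
    """Return any embedded provenance fields, or an empty dict."""
    img = Image.open(path)
    return {k: v for k, v in getattr(img, "text", {}).items()
            if k.startswith("AIGC_")}

if __name__ == "__main__":
    embed_implicit_label("generated.png", "generated_labeled.png", "example-model-v1")
    print(read_implicit_label("generated_labeled.png"))
```

Plain metadata of this kind is trivially stripped by re-encoding or "de-watermarking" tools, which is precisely the weakness the article's "anti-identification" black market exploits and why experts call for tamper-resistant, traceable identification standards.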
