Economic Information Daily Investigation: "Watermark Removal" Evades Regulation and the "Anti-Identification" Business Booms — Why Mandatory Labeling of AI-Generated Content Is Hard to Implement
Xinhua News Agency · 2026-01-13 06:58
Core Viewpoint
- The "Measures for Identifying Artificial Intelligence-Generated Synthetic Content," which took effect on September 1, 2025, ushered in a regulated era for AI-generated content in China by requiring explicit identification of such content [1][3].

Group 1: Implementation and Impact of the AI Identification Policy
- The policy has been in effect for over 100 days. Many platforms have launched identification features and management measures, yet enforcement gaps remain: much AI-generated content still carries no clear identification [1][3].
- China's generative AI user base reached 515 million by June 2025, up 266 million from December 2024, reflecting rapid growth in AI content consumption [3].
- Platforms such as Douyin and Kuaishou have built their own AI identification mechanisms that let users declare AI-generated content, which is then marked accordingly [4].

Group 2: Challenges and Issues in AI Content Identification
- Despite the policy, AI forgery remains prevalent and its techniques are growing more sophisticated, complicating identification governance [8].
- A survey by a university AI governance team found a roughly 40% rise in users' skepticism toward content of unknown origin after the policy took effect, while implicit identification has cut the average time to trace AI-generated false news from 72 hours to 12 hours [7].
- A black market for "anti-identification" services has emerged, including tools that strip AI watermarks, priced from tens to thousands of yuan [9][10].

Group 3: Recommendations for Strengthening AI Governance
- Experts recommend raising the technical standards for AI identification to resist tampering and guarantee traceability, since current regulatory technologies have weaknesses [11].
- Responsibilities should be delineated more clearly among content generators, platforms, and distributors, with stricter penalties for identification-related violations [11][12].
- A collaborative governance model involving government, the public, and platforms is recommended to improve reporting and oversight mechanisms and encourage public participation in AI content governance [12][13].
"Watermark Removal" Evades Regulation and the "Anti-Identification" Business Booms — Why Mandatory Labeling of AI-Generated Content Is Hard to Implement
Xinhuanet · 2026-01-13 01:49
Core Viewpoint
- The "Measures for Identifying Artificial Intelligence-Generated Synthetic Content," which took effect on September 1, 2025, ushered in a regulated era for AI-generated content in China by requiring explicit identification of such content [3][5].

Group 1: Implementation and Impact of the AI Identification Policy
- The policy has been in effect for over 100 days. Various platforms have launched identification features and management measures, yet much AI-generated content remains unmarked, pointing to enforcement challenges [3][5].
- China's generative AI user base reached 515 million by June 2025, up 266 million from December 2024, reflecting rapid growth in AI content consumption [5].
- Major platforms such as Douyin, Toutiao, and Kuaishou have implemented their own AI identification mechanisms that let users declare AI-generated content, which is then marked visibly [5][6].

Group 2: User Awareness and Content Traceability
- A survey by an AI governance team showed a nearly 40% increase in users' skepticism toward content of unknown origin after the policy took effect [6].
- Implicit identification has cut the average time to trace AI-generated false news from 72 hours to just 12 hours [6].

Group 3: Challenges and Evolving Risks
- Despite this progress, AI forgery remains prevalent, with increasingly sophisticated techniques feeding a complex black and gray market for evading identification [7][9].
- A market for "AI watermark removal" tools has developed, with services priced from tens to thousands of yuan, indicating a thriving business in circumventing identification [7][8].
- Some violators exploit differences in identification rules across platforms to bypass regulation, underscoring the need for more standardized enforcement [8][9].

Group 4: Recommendations for Enhanced Governance
- Experts suggest building an identification framework that resists tampering and ensures traceability [10][11].
- Responsibilities should be delineated more clearly among content creators, platforms, and distributors to strengthen accountability [10][11].
- Penalties for black and gray market activities should be toughened, including a blacklist for repeat offenders, to deter such practices [11].