Core Viewpoint
- The growing use of AI in advertising has created significant risks of violations, including false advertising and infringement, making an effective governance system for AI-generated content necessary [1][2].

Group 1: AI Content Identification and Regulation
- AI-generated content, spanning text, images, audio, and video, is frequently used in misleading advertisements on short-video and e-commerce platforms [2].
- The "Identification Measures for AI-Generated Content", in effect since September 1, require both explicit and implicit labeling of AI-generated content, yet much unlabeled AI content remains in circulation [2] (a sketch of the two label types follows this summary).
- As of June 2025, generative AI in China had reached 515 million users, an increase of 266 million from December 2024, underscoring the need for improved governance [2].

Group 2: Technological Solutions for Governance
- Traditional manual review cannot keep pace with the volume of AI content, so governance must rely on technology [3].
- Platforms have begun using large-model technology to accelerate content review, improving review efficiency by 75%, with 90% of materials reviewed within 10 minutes [3] (see the review-pipeline sketch below).
- In the third quarter of this year, over 840,000 AI-related violations were preemptively intercepted under a commercial safety governance standard [3].

Group 3: Collaborative Governance Approach
- Effective AI governance requires cross-sector collaboration: stronger regulatory oversight, platform accountability, and active participation by internet users [3].
- A "co-governance alliance" of regulators, platforms, and ethics committees is essential to address emerging risks such as deepfake advertising and infringement [3].
- The scarcity of high-quality AI training data remains a challenge, requiring measures that ensure the authenticity and compliance of AI-generated content at the source [4].
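The Measures cited above distinguish explicit labels (a notice visible to the viewer) from implicit labels (machine-readable marks carried in the file). The regulation defines the requirement rather than a specific format, so the following Python sketch is illustrative only: the metadata keys `ai_generated` and `provider` are hypothetical, and a real generator would follow the applicable national labeling standard instead.

```python
# A minimal sketch of explicit + implicit labeling for an AI-generated
# image, using Pillow. The metadata keys ("ai_generated", "provider")
# are hypothetical, not a format prescribed by the Measures.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(img: Image.Image, provider: str, out_path: str) -> None:
    # Explicit label: a notice rendered visibly onto the image itself.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated content", fill="white")

    # Implicit label: machine-readable metadata embedded in the file.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("provider", provider)
    img.save(out_path, "PNG", pnginfo=meta)

def is_labeled(path: str) -> bool:
    # A downstream platform could check the implicit label like this.
    return Image.open(path).text.get("ai_generated") == "true"

if __name__ == "__main__":
    canvas = Image.new("RGB", (640, 360), "gray")  # stand-in for model output
    label_ai_image(canvas, provider="example-model", out_path="labeled.png")
    print(is_labeled("labeled.png"))  # True
```

The two label types serve different audiences: the visible notice informs viewers directly, while the embedded metadata lets platforms detect and flag AI content automatically even after it is re-uploaded.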
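The article reports the efficiency gains but not how the platforms' review systems work internally. As a hedged illustration of the general pattern such figures suggest (cheap rules first, a large model for the ambiguous middle, humans for low-confidence cases), here is a sketch; `call_llm` is a placeholder stub, not any platform's real API:

```python
# A hypothetical staged ad-review pipeline: rules handle clear cases,
# a large model scores the rest, and low-confidence items escalate to
# human reviewers. All names and thresholds here are illustrative.
import re

BANNED_PATTERNS = [r"100%\s*cure", r"guaranteed\s+returns"]  # toy rules

def call_llm(prompt: str) -> float:
    # Placeholder: a real system would call the platform's model service.
    # Returning a fixed mid score keeps the sketch runnable end to end.
    return 0.5

def review_ad(text: str) -> str:
    # Stage 1: a cheap rule screen rejects obvious violations instantly.
    for pattern in BANNED_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            return "reject"

    # Stage 2: a large model scores ambiguous copy for false claims,
    # unlabeled AI content, impersonation, and similar risks.
    score = call_llm(
        "Rate 0-1 how likely this ad violates advertising rules "
        f"(false claims, unlabeled AI content, impersonation): {text}"
    )
    if score >= 0.9:
        return "reject"
    if score <= 0.2:
        return "approve"

    # Stage 3: uncertain cases go to human reviewers, so machines
    # absorb volume while people handle the judgment calls.
    return "escalate_to_human"

if __name__ == "__main__":
    print(review_ad("100% cure for insomnia!"))   # reject (stage 1)
    print(review_ad("Comfortable running shoes")) # escalate_to_human (stub score)
```

The staged design is what buys throughput: most material is settled by stages 1 and 2 within minutes, consistent with the reported figure of 90% of materials reviewed within 10 minutes, while human reviewers see only the residue.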
With AI content labels, large models, and other tools introduced, how should non-compliant AI advertising be governed?
Zhong Guo Xin Wen Wang·2025-10-30 06:34