With AI Labeling, Large Models, and More: How Should Non-Compliant AI Advertising Be Governed?
Zhong Guo Xin Wen Wang·2025-10-30 06:39

Core Viewpoint - The widespread application of AI has led to increasing risks of violations in AI advertising, including issues of public morality, infringement, and false advertising, necessitating the establishment of an effective governance system for AI advertising [1][2].

Group 1: AI Content Identification and Regulation
- AI-generated content, which includes text, images, audio, video, and virtual scenes, is often used by unscrupulous advertisers to create realistic but false consumer scenarios on platforms such as short-video and e-commerce sites [2].
- The "Identification Measures for AI-Generated Content," implemented on September 1, require both explicit and implicit labeling of AI-generated content, yet unmarked AI content still appears, particularly in pre-existing promotional materials [2][3].
- As of June 2025, the user base for generative AI in China had reached 515 million, an increase of 266 million from December 2024, highlighting the growing demand for effective AI content governance [2].

Group 2: Governance Strategies and Technological Solutions
- Traditional manual review methods are inadequate for the vast volume of AI content, necessitating technology-driven governance [3].
- Platforms are using large-model technologies to improve the efficiency of AI content review, achieving a 75% improvement in review speed, with 90% of materials reviewed within 10 minutes [3].
- A multi-tiered response system has been established to address violations, including pre-warnings, account restrictions, and content removal; over 840,000 AI-related violations were intercepted in the third quarter of this year [3].

Group 3: Collaborative Governance and Ethical Considerations
- Experts emphasize the need for cross-sector collaboration in AI governance, involving regulatory bodies, platforms, and public participation, to address new risks such as deepfake advertising and infringement [3][4].
- The scarcity of high-quality training data for AI remains a challenge, necessitating measures to ensure the authenticity and compliance of AI training materials from the outset [4].
- The Ministry of Public Security has advised observing typical telltale signs in images and videos, such as unnatural shadows or lip-sync discrepancies, to help identify potential AI-generated falsehoods [4].