Rein in AI Fakery, Preserve Social Trust
Ke Ji Ri Bao·2025-10-17 01:09

Core Points
- A notable case of using artificial intelligence (AI) for false advertising has been reported in Beijing, where a company falsely claimed during a live broadcast that its product could treat various diseases, even though it was merely an ordinary food product [1]
- The incident involved an AI-generated likeness of a well-known CCTV host, highlighting the growing misuse of AI technology to create realistic fake videos [1]
- The rise of AI deepfake technology poses significant challenges to content safety and erodes the foundation of social trust, as it enables the creation of deceptive representations of public figures [1]

Industry Response
- In September, China implemented the "Artificial Intelligence Generated Synthetic Content Identification Measures," requiring all AI-generated content to carry explicit identification and encouraging the use of digital watermarks for implicit identification [1]
- Regulatory bodies are urged to strengthen oversight and enforcement against platforms and individuals that violate these regulations, as demonstrated by the recent actions of Beijing's market supervision department [1]
- Content dissemination platforms and AI service providers are expected to fulfill their responsibilities by improving AI recognition technology and strengthening their ability to trace and verify content authenticity [2]

Public Awareness
- The public is encouraged to remain vigilant and improve its ability to discern the authenticity of information, so as to avoid being misled by false content [2]
- The rapid development of AI technology in China necessitates the continuous improvement of safety standards and legal guidelines for various application scenarios [2]
- A collaborative effort is required from all stakeholders to restore the integrity of the online space and safeguard the foundation of social trust [2]