Core Viewpoint
- The rapid development and widespread adoption of AI technology have lowered the barriers to generating synthetic false content, leading to increased infringement and criminal activity, particularly "AI-generated rumors" [1]

Group 1: Joint Declaration and Principles
- 61 privacy regulatory agencies from more than 40 countries and regions signed a Joint Declaration on AI-generated images and privacy protection, emphasizing that unauthorized use of AI to generate private images may constitute a criminal offense [1][4]
- The Joint Declaration outlines four fundamental principles for the development and use of AI content generation systems: strong safeguards against misuse of personal information, transparency in information disclosure, effective data deletion mechanisms, and enhanced protections for vulnerable groups, particularly children [4][5]

Group 2: Regulatory Background and Global Response
- The Joint Declaration was prompted by incidents involving the misuse of AI tools, such as the Grok chatbot, which was used to generate inappropriate images, drawing international regulatory scrutiny and criticism [6]
- Countries such as Indonesia and Malaysia temporarily banned Grok, while officials from the UK, France, Brazil, and the EU condemned its misuse and initiated investigations [6]
- Various jurisdictions have enacted laws to curb the misuse of AI generation technology, such as Singapore's Criminal Law Amendment recognizing unauthorized generation of false private images as a crime, and Australia's legislation imposing severe penalties for deepfake pornography [7]
61 privacy regulators worldwide issue joint statement: resist AI-generated false information, protect privacy
Nan Fang Du Shi Bao·2026-02-26 05:01