AI-Generated Content Must "Declare Its Identity": New Constraints on Misinformation
36Kr · 2025-09-02 11:35
Core Viewpoint
- The rapid advancement of generative artificial intelligence (AIGC) has made it increasingly difficult for internet users to distinguish between true and false content, leading to a proliferation of AI-generated misinformation [1][3].

Regulation and Responsibility
- The National Internet Information Office and three other departments have introduced the "Artificial Intelligence Generated Synthetic Content Identification Measures," effective from September 1, requiring explicit and implicit labeling for all AI-generated content [3][10].
- The new regulation places the primary responsibility for AI-generated content on the content creators, marking a significant shift from previous content management systems established by platforms like WeChat and Douyin [3][14].

AI Misuse and Challenges
- AI has become a major tool for misinformation, with scams and fraudulent activities making use of AI-generated content [5][6].
- The emergence of user-friendly AI technologies has made it easier for malicious actors to create deceptive content, as seen with the rise of deepfake technology [6][7].

Safety Measures and Limitations
- Major tech companies are developing "AI guardrails" to prevent harmful content generation, but these measures face inherent limitations because AI models must retain a degree of autonomy to remain useful [9][10].
- Balancing safety against functionality is difficult, as overly strict safety measures could render AI models ineffective [10].

Watermarking and Content Authenticity
- Companies including Microsoft, Adobe, and OpenAI have formed the C2PA alliance to use watermarking techniques to distinguish AI-generated content from human-created works, but these watermarks can be easily removed [12].
- Internet platforms' current strategy of requiring creators to disclose AI-generated content has not been effective, as many creators fear that such disclosures will limit their content's reach [12][14].
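The "implicit labeling" the Measures describe means machine-readable provenance information carried in the file itself rather than shown on screen. A minimal sketch of the idea is writing a provenance tag into a PNG's text metadata with Pillow; the field names (`AIGC-Label`, `AIGC-Provider`) are illustrative placeholders, not the regulation's official metadata schema.

```python
from typing import Optional

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def add_aigc_label(src_path: str, dst_path: str) -> None:
    """Write an implicit AIGC label into PNG text metadata.

    The key/value pairs below are hypothetical examples; the actual
    Measures define their own required fields.
    """
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC-Label", "generated")         # hypothetical field
    meta.add_text("AIGC-Provider", "example-model")  # hypothetical field
    img.save(dst_path, pnginfo=meta)


def read_aigc_label(path: str) -> Optional[str]:
    """Return the label if present, else None."""
    return Image.open(path).text.get("AIGC-Label")
```

A label embedded this way survives ordinary copying but, like the C2PA watermarks mentioned above, is trivially stripped by re-encoding the image, which is why the regulation pairs it with visible explicit labels.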
A New AI Watermarking SOTA at 1/15 the Cost | Nanyang Technological University & A*STAR
量子位 (QbitAI) · 2025-05-31 03:34
Core Viewpoint
- The article discusses the emerging industry consensus that AI-generated content needs watermarking to ensure traceability, highlights the limitations of traditional watermarking methods, and introduces MaskMark, a new approach developed by researchers from Nanyang Technological University and A*STAR [1][3].

Group 1: Limitations of Traditional Watermarking
- Traditional watermarking methods treat the image as a whole, so watermark extraction fails when parts of the image are altered, and they cannot protect specific areas such as faces or logos [2].
- This inability to protect specific regions poses a significant challenge for content verification and copyright protection [2].

Group 2: Introduction of MaskMark
- MaskMark is a novel locally robust image watermarking method that significantly outperforms WAM, the state-of-the-art model from Meta, at only 1/15 of WAM's training cost [4][5].
- The core idea of MaskMark is to tell the model where the watermark is embedded, allowing precise insertion and extraction [5].

Group 3: Technical Features of MaskMark
- MaskMark has two versions: MaskMark-D (decoding mask) and MaskMark-ED (encoding and decoding mask), with dual optimization during training and inference [6].
- It supports multiple watermark embeddings, precise localization of tampered areas, and flexible extraction of local watermarks, and adapts to various bit lengths (32/64/128 bits) [7][8].

Group 4: Performance Metrics
- MaskMark demonstrates high extraction accuracy, maintaining nearly 100% bit accuracy even under high visual fidelity constraints (PSNR > 39.5, SSIM > 0.98) [13].
- In local watermarking tasks, MaskMark outperforms existing global methods and the leading local watermark model WAM, especially when embedding in small areas [14][18].
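MaskMark itself is a learned encoder/decoder, but the mask-guided idea can be sketched with a toy LSB scheme: payload bits are written only inside a binary mask, and extraction reads only masked pixels, so edits outside the mask cannot corrupt the watermark. Everything below (the LSB embedding, the helper names, the majority-vote decoder) is an illustrative stand-in under that assumption, not MaskMark's actual method.

```python
import numpy as np


def embed_bits(img: np.ndarray, mask: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write `bits` into the LSBs of pixels where mask == 1
    (toy stand-in for MaskMark's learned encoder)."""
    out = img.copy()
    idx = np.flatnonzero(mask)
    reps = np.resize(bits.astype(np.uint8), idx.size)  # tile payload across region
    flat = out.ravel()                                 # view into `out`
    flat[idx] = (flat[idx] & 0xFE) | reps              # overwrite least-significant bit
    return out


def extract_bits(img: np.ndarray, mask: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover each payload bit by majority vote over the masked LSBs."""
    idx = np.flatnonzero(mask)
    lsbs = img.ravel()[idx] & 1
    pos = np.arange(idx.size) % n_bits       # which payload bit each pixel carries
    votes = np.zeros(n_bits)
    counts = np.zeros(n_bits)
    np.add.at(votes, pos, lsbs)
    np.add.at(counts, pos, 1)
    return (votes / counts >= 0.5).astype(np.uint8)
```

Because the decoder only ever looks inside the mask, arbitrarily heavy tampering outside it leaves bit accuracy at 100%, which is the property the local-robustness benchmarks above measure (MaskMark achieves it with a learned network rather than fragile LSBs).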
Group 5: Efficiency and Scalability
- MaskMark's training is efficient, requiring only about 20 hours on a single A6000 GPU, with roughly 1/15 the training compute (TFLOPs) of WAM [22].
- The method scales easily to different bit lengths while maintaining high performance, unlike WAM, which is limited to 32 bits [20].