Deepfake (深度伪造)
ACM MM 2025 Oral | National University of Singapore proposes FractalForensics: proactive deepfake detection and localization based on fractal watermarking
机器之心· 2025-11-04 03:45
Core Viewpoint
- The article presents FractalForensics, a novel method for proactive deepfake detection and localization using fractal watermarking, addressing open challenges in deepfake detection and localization [4][5][12]

Group 1: Introduction and Motivation
- Recent years have seen growing interest in proactive defenses against deepfakes, but existing approaches such as robust and semi-fragile watermarks have shown limited effectiveness [4]
- The paper targets the weaknesses of existing watermarking techniques, which struggle with robustness and cannot simultaneously detect and localize forgeries [8]

Group 2: Methodology
- FractalForensics adopts a watermark in matrix form instead of the traditional watermark vector, enabling forgery localization [5]
- Watermark generation and encryption are parameterized, allowing users to select values for various parameters and yielding 144 distinct fractal variants [6][9]
- A chaotic encryption scheme built on fractal geometry enhances the security and variability of the watermark [7]

Group 3: Watermark Embedding and Extraction
- The watermark embedding model is based on convolutional neural networks and employs an entry-to-patch strategy, embedding watermark matrix entries into image patches without degrading image quality [10][11]
- Regions modified by deepfake manipulation lose their watermark, so the extracted watermark supports both detection and localization of forgeries [11][18]

Group 4: Experimental Results
- The proposed watermark demonstrates the strongest robustness against common image-processing operations among compared methods, maintaining high detection rates [13][14]
- Against a range of deepfake methods, the watermark exhibits the intended semi-fragile behavior (it breaks precisely where manipulation occurs), enabling effective detection and localization [15][16]
- The article presents comparative results indicating that FractalForensics achieves superior detection performance compared to state-of-the-art passive detection methods [17][18].
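The semi-fragile, matrix-style watermarking idea summarized above can be illustrated with a toy sketch. This is not the paper's method — the actual system uses a CNN encoder/decoder and chaotic encryption — but a minimal stand-in under stated assumptions: a binary watermark matrix is derived from a Julia-set escape-time map (the constant `c` playing the role of a user-chosen fractal parameter), one bit is embedded per image patch by forcing the parity of the patch's integer mean intensity, and patches whose extracted bit no longer matches the watermark are flagged as tampered. All function names and the parity trick are illustrative assumptions.

```python
import numpy as np

def fractal_watermark(n=8, c=-0.8 + 0.156j, iters=20):
    """Binary n-by-n watermark matrix from a Julia-set escape-time map.
    c stands in for the user-selected fractal parameters."""
    ys, xs = np.meshgrid(np.linspace(-1.5, 1.5, n),
                         np.linspace(-1.5, 1.5, n), indexing="ij")
    z = xs + 1j * ys
    escaped = np.zeros((n, n), dtype=bool)
    for _ in range(iters):
        z = np.where(escaped, z, z * z + c)   # freeze points once they escape
        escaped |= np.abs(z) > 2
    return (~escaped).astype(np.uint8)        # 1 = point stayed bounded

def embed(image, wm, patch=8):
    """Embed one watermark bit per image patch by forcing the parity of the
    patch's integer mean intensity (a toy stand-in for the CNN encoder)."""
    out = image.astype(np.int32).copy()
    for i in range(wm.shape[0]):
        for j in range(wm.shape[1]):
            blk = out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            if int(blk.mean()) % 2 != wm[i, j]:
                blk += 1                      # +1 everywhere flips the parity
    return np.clip(out, 0, 255).astype(np.uint8)

def extract(image, shape, patch=8):
    """Read the per-patch parity bits back out of the image."""
    bits = np.empty(shape, dtype=np.uint8)
    for i in range(shape[0]):
        for j in range(shape[1]):
            blk = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            bits[i, j] = int(blk.mean()) % 2
    return bits
```

Comparing `extract(forged, wm.shape)` against the original watermark matrix yields a per-patch mismatch map: untouched patches still carry their bits, while locally edited patches lose them — a crude analogue of the detection-plus-localization behavior the paper attributes to its matrix watermark.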
Musk: Grok will launch an AI video detection tool; Accelerated Evolution releases a robot that can autonomously do household chores | AIGC Daily
创业邦· 2025-10-14 00:08
Group 1
- The article highlights advances in AI technology, particularly in visual models and robotics, showcasing the launch of the "Juzhou" model and the Booster T1 robot [2][3]

Group 2
- The "Juzhou" model, developed by Hunan Huishiwei Intelligent Technology Co., is the first domestically produced visual model built on purely domestic computing power; the V1.5 version, released on October 11, offers enhanced performance and cross-platform support from iOS to Android [2]
- On iOS devices, the "Juzhou" model can generate 1024×1024 images in seconds without an internet connection, combining low cost, high quality, fast speed, and a lightweight footprint [2]
- The model's parameter count has been cut to 1/50, with training speed increased 5× and generation speed 7×, allowing it to serve as a specialized model for various industries [2]
- The Booster T1 robot, launched by Accelerated Evolution, is an upgraded version that can understand vague natural-language commands and perform household chores autonomously [2]
- Perplexity CEO Aravind Srinivas has moved from traditional investor presentations to AI-driven investor roadshows, signaling a shift in how funding discussions are conducted [3]
- Elon Musk announced that Grok will soon be able to detect AI-generated videos and trace their online origins, addressing concerns over deepfake content [3]
Ministry of State Security: Beware the "dark side" of AI assistants!
Huan Qiu Wang Zi Xun· 2025-07-15 22:45
Group 1
- The core viewpoint emphasizes that while AI technology drives high-quality economic and social development, it poses significant risks to national security if misused by malicious actors [1]
- Deepfake technology, a combination of deep learning and forgery, can create highly realistic simulations but becomes a security risk when exploited to spread misinformation and panic [2]
- Generative AI's ability to process vast amounts of data raises concerns about user privacy and leaks of sensitive information, which foreign espionage could exploit [4]

Group 2
- AI algorithms, if manipulated, can propagate biased ideologies and serve as tools for hostile foreign forces to disrupt social stability through misinformation [6]
- The government has implemented regulations and frameworks to strengthen AI governance, urging citizens to understand legal standards and promote healthy AI development [8]
- Public awareness and critical thinking about AI-generated content are essential to mitigating risks, including safeguarding personal information shared with AI platforms [8]