SynthID
AI Image Watermarks Compromised: Open-Source Tool Wipes Out Every Watermark Within 5 Minutes
36Kr · 2025-08-14 09:02
Watermarking for AI images is about to be upended. UnMarker is now freely available as open source on GitHub, and users can deploy it locally with nothing more than a consumer-grade graphics card. Its arrival means that watermarking, once regarded as an effective defense against AI fakery, can no longer be considered reliable.

UnMarker is a brand-new de-watermarking technique that can strip nearly every AI image watermark on the market within 5 minutes. The HiDDeN watermarking scheme has been broken completely, and Google's SynthID has been defeated in 79% of cases. UnMarker removes watermarks efficiently while still preserving relatively high image quality. Its creator, Andre Kassis, puts it plainly: "I just wanted to know whether these watermarking techniques are really as strong as they claim."

AI image watermarking technology. To understand how UnMarker removes AI image watermarks, it helps to first understand how those watermarks work. Unlike ordinary visible watermarks, which stamp a brand name directly onto the picture, AI image watermarks are mostly invisible watermarks hidden in deeper image information such as spectral features. Spectral features describe how pixel values in an image vary relative to one another, and they consist of two components: spectral magnitude and spectral phase. Current watermarking techniques embed invisible watermarks mainly by modifying the spectral magnitude. ...
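The magnitude-modification idea described above can be made concrete with a small sketch. The following is a hypothetical, minimal scheme written for intuition only, not HiDDeN, SynthID, or any production watermark: it adds a key-derived pseudo-random pattern to the mid-frequency magnitude of the image's 2-D FFT while leaving the phase untouched. The function name, key, strength, and frequency band are all illustrative assumptions.

```python
# Toy illustration of an invisible spectral watermark (NOT SynthID or HiDDeN;
# a hypothetical scheme for intuition only).
import numpy as np

def embed_spectral_watermark(image: np.ndarray, key: int = 42,
                             strength: float = 5.0) -> np.ndarray:
    """image: 2-D grayscale array of floats in [0, 255]. Returns a watermarked copy."""
    # Move to the frequency domain and split into magnitude and phase.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Key-derived pseudo-random +/-1 pattern: the "secret" a detector can check for.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=magnitude.shape)

    # Confine the perturbation to a mid-frequency ring, where it is hard to
    # see but tends to survive mild resizing and compression.
    h, w = magnitude.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    ring = (radius > min(h, w) * 0.15) & (radius < min(h, w) * 0.35)

    # Modify only the magnitude; the phase is left untouched.
    magnitude = magnitude + strength * pattern * ring

    watermarked = np.fft.ifft2(np.fft.ifftshift(magnitude * np.exp(1j * phase)))
    return np.clip(watermarked.real, 0.0, 255.0)
```

Because the change is spread across many frequency coefficients rather than individual pixels, it is invisible to the eye yet tends to survive cropping, resizing, and mild compression, which is the kind of robustness the article describes.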
AI Image Watermarks Compromised! Open-Source Tool Wipes Out Every Watermark Within 5 Minutes
量子位 · 2025-08-14 04:08
Core Viewpoint
- A new watermark removal technology called UnMarker can effectively remove almost all AI image watermarks within 5 minutes, challenging the reliability of existing watermark technologies [1][2][6].

Group 1: Watermark Technology Overview
- AI image watermarks differ from visible watermarks; they are embedded in the image's spectral features as invisible watermarks [8].
- Current watermark technologies primarily modify the spectral magnitude to embed invisible watermarks, which makes them robust against common image manipulations [10][13].
- UnMarker's approach targets the spectral information directly, disrupting the watermark without needing to locate its specific encoding (a toy illustration of such a spectral attack follows this summary) [22][24].

Group 2: Performance and Capabilities
- UnMarker can remove between 57% and 100% of detectable watermarks, with complete removal of HiDDeN and Yu2 watermarks and 79% removal of Google SynthID [26][27].
- The technology also performs well against newer watermark techniques such as StegaStamp and Tree-Ring Watermarks, achieving around 60% removal [28].
- While effective, UnMarker may slightly alter the image during the watermark removal process [29].

Group 3: Accessibility and Deployment
- UnMarker is available as open source on GitHub, allowing users to deploy it locally on consumer-grade graphics cards [5][31].
- The technology was initially tested on high-end GPUs but can be adjusted to run on more accessible consumer hardware [30][31].

Group 4: Industry Implications
- The emergence of UnMarker raises concerns about the effectiveness of watermarking as a defense for establishing the authenticity of AI-generated images [6][36].
- As AI image generation tools increasingly implement watermarking, robust removal technologies like UnMarker could undermine these efforts [35][36].
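To see why a removal attack can work without knowing the specific encoding, here is a deliberately crude sketch, not UnMarker's actual optimization-based method, of a "blind" spectral perturbation: it smooths and slightly randomizes the FFT magnitude while preserving the phase, which disturbs anything hidden in the magnitude at the cost of a small loss in image quality, consistent with the slight alterations noted above. All names and parameters are illustrative assumptions.

```python
# Crude illustration of a "blind" spectral attack (NOT UnMarker's algorithm):
# lightly blur the FFT magnitude and add small noise to it while preserving
# the phase, disturbing any watermark that lives in the magnitude spectrum.
import numpy as np
from scipy.ndimage import uniform_filter

def perturb_spectral_magnitude(image: np.ndarray, blur: int = 3,
                               noise_scale: float = 0.02,
                               seed: int = 0) -> np.ndarray:
    """Blindly disturb the FFT magnitude of a 2-D grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Local averaging smears any fine-grained pattern hidden in the magnitude...
    smoothed = uniform_filter(magnitude, size=blur)
    # ...and mild multiplicative noise masks whatever structure remains.
    rng = np.random.default_rng(seed)
    noisy = smoothed * (1.0 + noise_scale * rng.standard_normal(magnitude.shape))

    # Recombine with the untouched phase and return to pixel space.
    attacked = np.fft.ifft2(np.fft.ifftshift(noisy * np.exp(1j * phase)))
    return np.clip(attacked.real, 0.0, 255.0)
```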
E-commerce Stages a "Magic Duel": Sellers Use Fake AI Images to Lure Orders, Buyers Use AI-Faked Rotten Fruit to Scam Refunds
36Kr · 2025-08-05 08:54
Because fruit is hard to verify through returns (even if it was fine on delivery, it will have spoiled by the time it is shipped back, and return shipping is expensive), sellers often have no choice but to grit their teeth and refund the money.

AI image generation is being used not only by sellers but by buyers as well. Recently, many netizens have shared a scheme that is as absurd as it is amusing: to squeeze a little advantage out of sellers, some buyers deliberately claim a product is defective and demand a refund. In reality, the "defect" photos were made by the buyers themselves with AI, for example turning a perfectly good durian into a rotten one.

Some goods can in principle be verified by return, but when the order value is low and the return process is cumbersome, simply refunding ends up cheaper. So when these sellers receive a defect complaint, they generally refund or pay partial compensation and move on.

Sellers do keep one safeguard: they ask buyers to cut up the defective item so it loses all use value. But even that safeguard has now been defeated by AI.

E-commerce sellers tell us this kind of scam is nothing new. Around a decade ago, buyers were already using Photoshop and similar tools to paste defects onto photos of normal products. But at an ordinary user's editing skill, a seller who zoomed in and looked carefully could spot the fake most of the time. Images made with today's AI are far harder to identify.

It looks a bit like a farce of "fighting magic with magic," because on today's shopping platforms sellers also abuse AI constantly, leaving large numbers of buyers to discover on delivery that the goods do not match the listing. Sellers use AI to ...
E-commerce Stages a "Magic Duel": Sellers Use Fake AI Images to Lure Orders, Buyers Use AI-Faked Rotten Fruit to Scam Refunds
机器之心 · 2025-08-05 08:41
Core Viewpoint
- The article discusses the increasing misuse of AI technology by both buyers and sellers in e-commerce, leading to a trust crisis and the need for better verification methods to combat fraud [2][10][21].

Group 1: Buyer Misuse of AI
- Some buyers are using AI-generated images to falsely claim product defects in order to obtain refunds, exploiting the difficulty of verifying the condition of perishable goods like fruit [2][6].
- This practice has evolved from earlier methods where buyers used basic photo-editing tools; the sophistication of AI-generated images now makes the fraud much harder for sellers to detect [8][10].
- The phenomenon reflects a "tit-for-tat" mentality among buyers who have previously been deceived by sellers using AI-enhanced product images [10][21].

Group 2: Seller Misuse of AI
- Sellers are also misusing AI to create misleading product images, over-enhance ordinary items, and generate fake reviews, which contributes to the problem of "goods not matching the description" [10][24].
- Sellers may use virtual models and AI-generated content to cut costs, further complicating the authenticity of product representations [10][24].

Group 3: Proposed Solutions
- Proposed countermeasures include requiring buyers to submit videos of defective products, taking multiple photos from different angles, and using in-app cameras to prevent the upload of AI-generated images [11][15][24].
- These solutions have limitations, as advanced AI tools can still generate convincing content, making foolproof verification difficult to establish [11][15][23].

Group 4: Technological Innovations
- Implementing digital watermarking and content-provenance technologies could help identify and trace AI-generated content, enhancing trust in e-commerce [19][21].
- Standards such as C2PA and tools such as Google's SynthID aim to embed invisible watermarks in AI-generated media, which could serve as a digital identity for content [19][21][26].

Group 5: Ongoing Challenges
- The ongoing "cat-and-mouse" game between AI generation and detection technologies poses a continuous challenge, as both sides evolve rapidly [23][24].
- E-commerce platforms are exploring strategies including strengthening evidence chains and using big-data analytics to monitor user behavior and detect anomalies [24][26].
"Kangaroo Watches Dumbfounded as Humans Argue over a Flight" Floods the Internet: 70 Million People Fooled by AI
机器之心 · 2025-06-16 09:10
Core Viewpoint
- The article discusses the increasing sophistication of AI-generated content, highlighting how realistic AI videos can mislead viewers into believing they are real, as exemplified by a viral video featuring a kangaroo at an airport [2][12][18].

Group 1: AI Video Generation
- The video in question was created using advanced AI technology, making it difficult for viewers to discern its authenticity [18].
- The account that posted the video, InfiniteUnreality, features various surreal AI-generated animal videos, contributing to the confusion surrounding the content's legitimacy [13][16].
- Despite the account labeling its content as AI-generated, the indication was subtle, leading many viewers to overlook it [19].

Group 2: Viewer Misinterpretation
- The viral spread of the video was amplified by its engaging content, with many users commenting positively and reinforcing the belief that it was real [24].
- Other social media accounts, such as DramaAlert, shared the video without clarifying its AI origins, further perpetuating the misunderstanding [21].
- The phenomenon illustrates a broader trend in which viewers struggle to identify AI-generated content, as traditional visual cues for authenticity become less reliable [34].

Group 3: AI Detection Tools
- Google DeepMind and Google AI Labs have developed SynthID, a tool designed to identify content generated or edited by Google's AI models through digital watermarking [35].
- SynthID embeds a subtle digital fingerprint in the content, which can be detected even after editing, but it is limited to outputs from Google's own AI models [36].
- The tool is still in early testing and requires users to join a waitlist for access [39].
Google's SynthID is the latest tool for catching AI-made content. What is AI "watermarking," and does it work?
TechXplore · 2025-06-03 13:43
Core Viewpoint
- Google has introduced SynthID Detector, a tool designed to identify AI-generated content across various media formats, but it is currently limited to early testers and specific Google AI services [1][2].

Group 1: Tool Functionality
- SynthID primarily detects content generated by Google AI services such as Gemini, Veo, Imagen, and Lyria, and does not work with outputs from other AI models like ChatGPT [2][3].
- The tool identifies a "watermark" embedded in the content by Google's AI products, rather than detecting AI-generated content directly (a toy sketch of the idea follows this summary) [3][5].
- Watermarks are machine-readable elements that help trace the origin and authorship of content, addressing misinformation challenges [4][5].

Group 2: Industry Landscape
- Multiple AI companies, including Meta, have developed their own watermarking and detection tools, leading to a fragmented landscape where users must manage various tools for verification [5][6].
- There is no unified AI detection system, despite calls from researchers for a more cohesive approach [6].

Group 3: Effectiveness of Detection Tools
- The effectiveness of AI detection tools varies significantly; they perform better on entirely AI-generated content than on content that has been edited or transformed by AI [10].
- Many detection tools do not provide clear explanations for their decisions, which can lead to confusion and ethical concerns, especially in academic settings [11].

Group 4: Use Cases
- AI detection tools have various applications, including verifying insurance claims, assisting journalists and fact-checkers, and ensuring authenticity in recruitment and online dating scenarios [12][13].
- The need for real-time detection tools is increasing, as static watermarking may not suffice for addressing authenticity challenges [14].

Group 5: Future Directions
- Understanding the limitations of AI detection tools is crucial, and combining these tools with contextual knowledge will remain essential for accurate assessments [15].
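As a rough illustration of what a watermark detector checks, and emphatically not Google's SynthID mechanism, the toy sketch below tests for a key-derived pseudo-random pattern in an image's mid-frequency FFT magnitude: if the correlation with the keyed pattern is clearly above zero, the watermark is presumed present. Every name, threshold, and frequency band here is a hypothetical assumption.

```python
# Toy detector for a key-based spectral watermark (NOT SynthID; a hypothetical
# counterpart to a magnitude-pattern scheme, for intuition only).
import numpy as np

def detect_spectral_watermark(image: np.ndarray, key: int = 42,
                              threshold: float = 0.01) -> bool:
    """Check a 2-D grayscale image for a key-derived magnitude pattern."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.abs(spectrum)

    # Re-derive the same pseudo-random +/-1 pattern from the secret key.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=magnitude.shape)

    # Look only in the mid-frequency ring where the mark would have been placed.
    h, w = magnitude.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    ring = (radius > min(h, w) * 0.15) & (radius < min(h, w) * 0.35)

    # Normalized correlation between the keyed pattern and the mean-removed
    # magnitude inside the ring; unwatermarked images should score near zero.
    residual = magnitude[ring] - magnitude[ring].mean()
    score = float(np.dot(residual, pattern[ring]) /
                  (np.linalg.norm(residual) * np.linalg.norm(pattern[ring]) + 1e-9))
    return score > threshold
```

This also hints at why such detectors say nothing about images from other generators: without the right key and embedding scheme there is simply no pattern to look for, which matches the article's point that SynthID Detector only covers outputs from Google's own models.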