AI Fakery (AI造假)

Cyberspace and Public Security Authorities Target AI Fakery, Incitement of Negative Sentiment, and Other Abuses
Zhong Guo Xin Wen Wang· 2025-10-10 05:58
China News Service, October 10: According to the "网信中国" (Cyberspace Administration of China) WeChat account, online rumors in September were concentrated in public policy, disaster and flood conditions, and people's livelihoods. Forged policy documents, fabricated sob stories, and the abuse of AI tools to concoct false disaster reports infringed on the public's rights and disrupted public order. Cyberspace and public security authorities are cracking down hard on rumor-mongering and continuing to clean up the online environment. Scammers fabricated a "2025 National Salary Subsidy Application Certification Notice," using "state subsidies" as a front to lure people into clicking links and handing over real-name information for fraud. A so-called "中国三农" investment app impersonated the Ministry of Agriculture and Rural Affairs, forging ministry documents and luring victims into illegal fundraising with promises of regular dividends and guaranteed returns at maturity. Others used "expanding domestic demand" and "national projects" as hooks, publishing false claims such as "participate in a national project and receive an honor certificate and performance subsidy," seriously endangering people's financial safety. Flood- and disaster-related rumors were frequent in September. One self-media account claimed that "Guangdong may face the largest typhoon disaster in human history," exaggerating the facts and stoking panic; according to National Meteorological Center data, typhoon "桦加沙" (Ragasa) fell far short of "strongest typhoon in history." As the typhoon approached Guangdong's coast, images circulated on social platforms purportedly showing the bull statue outside the Shenzhen Stock Exchange tied down and vehicles trussed up with ropes; all were confirmed to be AI-generated. During regional rainstorms in Henan, multiple accounts claimed that "Zhengzhou was hit by torrential rain on September 16, worse than the 7.2 ...
Forging Official Projects, Exaggerating Disaster Reports, Staging Sob Stories: Cyberspace and Public Security Authorities Target AI Fakery and Incitement of Negative Sentiment
Yang Shi Wang· 2025-10-10 05:28
Sob-story rumors also surfaced periodically. One self-media operator, chasing eyeballs and traffic, staged and filmed a video titled "Daliangshan girl reunited with her family 24 years after being abducted." Another fabricated a storyline about "a Chinese woman who married into a foreign slum and is begging to return home," using melodrama to attract netizens and harvest attention. Such behavior toys with netizens' emotions and stokes negative sentiment. The Cyberspace Administration of China recently launched a "清朗" (Qinglang) special campaign against the malicious incitement of negative sentiment, focusing on content that stirs extreme group antagonism, spreads panic and anxiety, incites online violence, or wallows in pessimism. Cyberspace regulators have also opened cases against Weibo, Kuaishou, Toutiao, UC, and other platforms for failing to fulfill their content-management responsibilities and damaging the online ecosystem. Public security organs are punishing rumor-mongering according to law; those who impersonated official bodies to fabricate fake projects or spread the "Daliangshan girl abducted for 24 years" and "Chinese woman married into a foreign slum" rumors have already been penalized. CCTV.com: According to the "网信中国" public account, online rumors in September were concentrated in public policy, disaster and flood conditions, and people's livelihoods; forged policy documents, fabricated sob stories, and AI-generated false disaster reports infringed on the public's rights and disrupted public order. Cyberspace and public security authorities are cracking down hard on rumor-mongering and continuing to clean up the online environment. Scammers fabricated a "2025 National Salary Subsidy Application Certification Notice," using "state subsidies" as a front to lure people into clicking links and harvest real-name information ...
Using AI to Fake Storefront Photos: A "Fake Facade" Won't Bring Real Traffic
Xin Jing Bao· 2025-09-15 09:44
An eye-catching sign, polished decor, a dining room buzzing with customers... Some seemingly wildly popular "influencer storefronts" may be AI-generated. According to Beijing Evening News, many users have recently reported that merchant photos on food-delivery platforms look exquisite but are in fact AI-generated images designed to conjure the bustle of a queue out the door. These "popular restaurants" carrying a "dine-in" label are in reality small workshops that bear little resemblance to the pictures, raising consumer concerns about food safety. Since generative AI became widespread, everything from voices to images to video can be manufactured with AI tools, and doing so is both simple and extremely cheap. Some merchants on delivery platforms now use AI to manufacture "influencer storefronts," faking popularity to pull in traffic. For example, when a reporter visited the address of one merchant that used an AI storefront image and claimed to offer dine-in service, they found dozens of small takeout workshops operating along a cramped corridor, with not a single dine-in restaurant among them; the merchant was simply using AI fakery to deceive consumers. Reportedly, many merchants on delivery platforms use AI to generate storefront images, dish photos, and cover designs; the striking signs, fine decor, and packed dining rooms in these pictures are all fake, a world away from reality. Merchants resort to AI-faked "influencer storefronts" both to mislead consumers into clicking and ordering, and because this kind of fakery costs almost nothing. Consumers who do not know the truth and are misled by AI-fabricated images not only have their right to know and right to choose ...
How Do We Keep AI from Becoming a Counterfeiter's Weapon?
Zhong Guo Jing Ji Wang· 2025-08-29 09:47
In March this year, the Cyberspace Administration of China and three other departments jointly issued the Measures for Labeling AI-Generated and Synthesized Content, which take effect on September 1. The most notable requirement is that all AI-generated or synthesized content must carry a label. To be fair, platforms have not been idle in the face of rampant AI fakery. Social media platforms have broadly upgraded their AI-content detection systems and require AI-generated works to "show their identity." But some people go to great lengths to dodge mandatory labeling, and some operators who grow AI-driven accounts even openly sell tips online for bypassing platforms' "AI tagging." The steady appearance of unlabeled AI content is a constant reminder that platforms still have plenty of work to do in keeping their monitoring up to date. Using technology to govern technology is a viable path. Platforms must genuinely shoulder their responsibilities, for example by building more efficient content-detection tools combined with human review to keep AI fakery from slipping through, improving rumor-debunking mechanisms, and promptly taking down false, hype-driven content. They should also respond promptly when victims demand removal of forged content or compensation for losses. Recently, the "voice" of an Olympic champion has suddenly started hawking farm produce online, and the "voice" of a famous actor has interacted with fans in livestream rooms around the clock... These uncannily convincing AI-cloned voices are becoming a "traffic cheat code" for some self-media bloggers. Let it be clear: using technical tools to imitate someone else's voice to sell goods goes beyond normal marketing and advertising creativity and is illegal. China's Civil Code, Article one thousand ...
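As a concrete illustration of the labeling idea, here is a minimal Python sketch that attaches an explicit machine-readable "AI-generated" marker to an image and reads it back, the kind of primitive a platform-side checker could build on. This is not the official metadata schema prescribed by the Measures; the field names ("AIGC", "generator") are assumptions for demonstration only.

```python
# Minimal sketch of explicit provenance labeling for an AI-generated image.
# Field names are illustrative assumptions, not the regulation's schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_aigc_label(img: Image.Image, path: str, generator: str) -> None:
    """Write a PNG with text chunks marking it as AI-generated content."""
    meta = PngInfo()
    meta.add_text("AIGC", "true")          # explicit machine-readable flag
    meta.add_text("generator", generator)  # which model/tool produced it
    img.save(path, pnginfo=meta)

def read_aigc_label(path: str) -> dict:
    """Read the label back so a checker can verify it."""
    with Image.open(path) as img:
        return {k: v for k, v in img.info.items() if k in ("AIGC", "generator")}

if __name__ == "__main__":
    fake = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
    save_with_aigc_label(fake, "labeled.png", generator="example-model")
    print(read_aigc_label("labeled.png"))       # {'AIGC': 'true', 'generator': 'example-model'}
```

A metadata flag like this is trivially strippable, which is exactly why the article pairs labeling with platform-side detection and human review.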
The "Perfect Candidate" May Know Nothing: AI Fakery Storms Remote Interviews
36Ke· 2025-08-15 12:10
Group 1
- Gartner predicts that by 2028, one in four job applicant profiles will be fake, based on a survey of 3,000 job seekers, where 6% admitted to manipulating their interviews [2][5]
- The rise of AI-generated deepfake images, voice synthesis technology, and chatbots is making cheating more covert and efficient, targeting remote, technical, and high-paying positions [3][5]
- AI is being used as a "new engine" for fraud, allowing impersonators to present themselves as highly skilled candidates, using voice cloning and deepfake video technology to deceive interviewers [5][6]

Group 2
- Companies like Google, Cisco, and McKinsey are reverting to in-person interviews to verify candidates' authenticity and skills, as remote interviews have been exploited by fraudsters [6]
- The shift back to face-to-face interviews is a reluctant response to the challenges posed by AI's ability to create convincing impersonations, leading to a crisis of trust in the hiring process [6]
- Gartner emphasizes the need for enhanced verification processes in recruitment, as the potential for fake candidate profiles increases significantly [6]
AI Image Watermarks Breached: Open-Source Tool Wipes Out All Watermarks Within 5 Minutes
量子位· 2025-08-14 04:08
Core Viewpoint
- A new watermark removal technology called UnMarker can effectively remove almost all AI image watermarks within 5 minutes, challenging the reliability of existing watermark technologies [1][2][6]

Group 1: Watermark Technology Overview
- AI image watermarks differ from visible watermarks; they are embedded in the image's spectral features as invisible watermarks [8]
- Current watermark technologies primarily modify the spectral magnitude to embed invisible watermarks, which are robust against common image manipulations [10][13]
- UnMarker's approach targets the spectral information directly, disrupting the watermark without needing to locate its specific encoding [22][24]

Group 2: Performance and Capabilities
- UnMarker can remove between 57% and 100% of detectable watermarks, with complete removal of HiDDeN and Yu2 watermarks, and 79% removal of Google SynthID [26][27]
- The technology also performs well against newer watermark techniques like StegaStamp and Tree-Ring Watermarks, achieving around 60% removal [28]
- While effective, UnMarker may cause slight alterations to the image during the watermark removal process [29]

Group 3: Accessibility and Deployment
- UnMarker is available as open source on GitHub, allowing users to deploy it locally on consumer-grade graphics cards [5][31]
- The technology was initially tested on high-end GPUs but can be adjusted to run on more accessible consumer hardware [30][31]

Group 4: Industry Implications
- The emergence of UnMarker raises concerns about watermarking's effectiveness as a way to establish the authenticity of AI-generated images [6][36]
- As AI image generation tools increasingly implement watermarking, the development of robust removal technologies like UnMarker could undermine these efforts [35][36]
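The following Python sketch illustrates the general idea described above, not UnMarker's actual algorithm: a watermark is embedded by nudging the magnitude of a few key-selected frequency components, and a blind attack perturbs spectral magnitudes across the board without knowing which components carry the code. All function names and parameters are assumptions for demonstration.

```python
# Toy spectral-magnitude watermark and a blind magnitude-perturbation attack.
# Illustrative only; real schemes (and UnMarker) are far more sophisticated.
import numpy as np

def embed_spectral_watermark(img: np.ndarray, bits: np.ndarray,
                             strength: float = 2.0, seed: int = 0) -> np.ndarray:
    """Embed watermark bits by scaling the magnitude of pseudo-random frequencies."""
    f = np.fft.fft2(img.astype(np.float64))
    mag, phase = np.abs(f), np.angle(f)
    rng = np.random.default_rng(seed)
    # One frequency coordinate per bit; the seed acts as a (hypothetical) shared key.
    coords = rng.choice(img.size, size=bits.size, replace=False)
    rows, cols = np.unravel_index(coords, img.shape)
    mag[rows, cols] *= np.where(bits == 1, 1 + strength, 1 / (1 + strength))
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

def blind_magnitude_attack(img: np.ndarray, noise_scale: float = 0.05,
                           seed: int = 1) -> np.ndarray:
    """Disturb all spectral magnitudes slightly, without knowing the watermark key."""
    f = np.fft.fft2(img.astype(np.float64))
    mag, phase = np.abs(f), np.angle(f)
    rng = np.random.default_rng(seed)
    mag *= 1 + noise_scale * rng.standard_normal(mag.shape)  # small multiplicative noise
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

if __name__ == "__main__":
    image = np.random.default_rng(42).random((64, 64))  # stand-in for a real image
    marked = blind_magnitude_attack(embed_spectral_watermark(image, np.array([1, 0, 1, 1, 0])))
    print("mean pixel change after attack:", np.abs(marked - image).mean())
```

The point of the sketch is the asymmetry the article describes: the embedder needs a key to place the watermark, but an attacker who only reshapes the spectrum can still degrade it, at the cost of slightly altering the image.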
"Trump Falls in Love with the Cleaner" and the "$150 Million Short-Drama Myth": Who Is Overdrawing Society's Trust Capital?
36Ke· 2025-08-08 02:20
Core Viewpoint
- The article discusses the emergence of a fabricated short drama titled "Trump Falls in Love with the White House Cleaner," which falsely claimed to have generated $150 million in revenue, highlighting the failure of media verification processes and the rise of AI-generated misinformation [1][2][4]

Group 1: Media and Misinformation
- The short drama was initially reported by a self-media account, which misled readers with a sensational title that implied the existence of the drama without confirming it [4][5]
- Major platforms like ReelShort, YouTube, and Netflix showed no evidence of the drama's existence, revealing a significant gap in media fact-checking [2][4]
- The spread of this false narrative reflects a broader issue of media's responsibility in verifying facts, as some outlets failed to uphold their duty, leading to a loss of public trust [8][19]

Group 2: AI and Content Creation
- The article emphasizes the role of AI in generating fake content, which lowers the cost of misinformation production while increasing its appeal [13][20]
- The ease of creating convincing fake narratives using AI raises concerns about the integrity of information in the digital age [20]
- The phenomenon of AI-generated content highlights the need for a robust mechanism to ensure the value of truthful information exceeds that of falsehoods [20]

Group 3: Economic Implications
- The article outlines how the false narrative attracted significant attention, leading to a surge in traffic for fake news websites, which often outperformed reputable media in terms of engagement [14][19]
- Self-media operators benefit financially from sensational headlines and misleading content through advertising revenue and paid subscriptions [15][19]
- The article warns of a "grey industry" that profits from misinformation, where the allure of quick financial gain overshadows ethical considerations [15][19]

Group 4: Cultural and Political Context
- The absurdity of the narrative raises questions about cultural perceptions and the potential manipulation of political figures for entertainment purposes [18][19]
- The blending of entertainment with political discourse can dilute the seriousness of political issues, leading to a trivialization of important topics [18][19]
- The article suggests that the propagation of such narratives may reflect deeper anxieties about cultural differences and the portrayal of political figures [18][19]
The "Refund-Only" Controversy Flares Up Again: AI-Forged Evidence Becomes a Cheating Tool
Qi Lu Wan Bao· 2025-08-05 02:16
Core Viewpoint
- The rise of AI technology has led to an increase in fraudulent refund claims in the e-commerce sector, with some consumers exploiting the "refund without return" policy to gain products without payment [1][2][3]

Group 1: E-commerce Refund Mechanism
- The "refund without return" policy was initially designed to protect consumers in specific scenarios, but it has been misused by some buyers, leading to significant losses for merchants [2][3]
- Major e-commerce platforms have recently adjusted their "refund without return" policies, allowing merchants to handle refund requests autonomously [2][5]
- A report indicated that 50.36% of complaints from merchants on e-commerce platforms were related to "refund without return" issues, highlighting the prevalence of this problem [2]

Group 2: AI Technology and Fraud
- Some consumers are using AI tools to create fake images of products to claim refunds, which has resulted in losses of 5% to 8% of revenue for affected merchants [1][2]
- Experts suggest that the misuse of AI for fraudulent activities could hinder public acceptance of new technologies and disrupt market rules [3][5]
- Recommendations include implementing AI image recognition technology and a tiered evidence submission system for refund claims to mitigate fraud, as sketched below [3][5]

Group 3: Legal Implications
- The use of AI-generated fake content for refund claims can lead to legal consequences, including potential fraud charges if the amount involved is significant [4][5]
- The Civil Code allows merchants to demand returns or compensation for breaches of the "refund without return" agreement [4]
- New regulations regarding the identification of AI-generated content are set to take effect in September 2025, aiming to curb misuse [4][5]

Group 4: Recommendations for Improvement
- A multi-faceted approach involving rule enhancement, technological countermeasures, and legal deterrents is necessary to address the issues surrounding "refund without return" fraud [5]
- E-commerce platforms are urged to establish a rapid response mechanism for AI fraud cases and to collaborate on data sharing to combat fraudulent activities effectively [5]
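To make the "tiered evidence submission" recommendation concrete, here is a hypothetical Python sketch: claims above a threshold must include more evidence, and photos pass through a pluggable AI-image check before approval. The thresholds, field names, and the detect_ai_image() stub are assumptions for illustration, not any platform's real policy.

```python
# Hypothetical tiered-evidence review for "refund without return" claims.
from dataclasses import dataclass, field

@dataclass
class RefundClaim:
    amount_yuan: float
    photos: list[str] = field(default_factory=list)  # paths of evidence photos
    video: str | None = None                         # optional unboxing/defect video

def detect_ai_image(path: str) -> bool:
    """Placeholder for a real AI-image classifier or watermark/label check."""
    return False  # assume genuine unless a real detector says otherwise

def review_claim(claim: RefundClaim) -> str:
    # Tier 1: small amounts need at least one photo.
    if claim.amount_yuan <= 30 and not claim.photos:
        return "rejected: photo evidence required"
    # Tier 2: larger amounts also require video evidence.
    if claim.amount_yuan > 30 and (not claim.photos or claim.video is None):
        return "rejected: photo and video evidence required"
    # All tiers: flag suspected AI-generated evidence for manual review.
    if any(detect_ai_image(p) for p in claim.photos):
        return "escalated: suspected AI-generated evidence"
    return "approved"

if __name__ == "__main__":
    print(review_claim(RefundClaim(amount_yuan=25, photos=["defect.jpg"])))   # approved
    print(review_claim(RefundClaim(amount_yuan=120, photos=["defect.jpg"])))  # rejected (no video)
```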
DeepSeek Causing Trouble Again? Scenes You'd Rather Not Imagine
Xin Lang Cai Jing· 2025-07-06 04:24
Core Viewpoint
- The article discusses the increasing prevalence of misinformation generated by AI, highlighting the challenges posed by AI hallucinations and the ease of feeding false information into AI systems [3][10][21]

Group 1: AI Misinformation
- AI hallucination issues lead to the generation of fabricated facts that cater to user preferences, which can be exploited to create bizarre rumors [3][10]
- Recent examples of widely circulated AI-generated rumors include absurd claims about officials and illegal activities, indicating a trend towards sensationalism over truth [5][6][7][8]

Group 2: Impact of Social Media
- The combination of AI's inherent hallucination problems and the rapid dissemination of information through social media creates a concerning information environment [13][14]
- The article suggests that the current state of information is deteriorating, likening it to a "cesspool" [15]

Group 3: Recommendations for Improvement
- AI companies need to enhance their technology to address hallucination issues, as some foreign models exhibit less severe problems [17]
- Regulatory bodies should improve their efforts to combat the spread of false information, although the balance between regulation and innovation remains delicate [18]
- Individuals are encouraged to be cautious with real-time information while relying on established knowledge sources [20]