Deepfake Technology
Two US food-delivery platforms deny a "rider kill line" exists, calling it an AI-generated rumor
Nan Fang Du Shi Bao· 2026-01-13 05:05
近期,"斩杀线"一词在网络语境中热度飙升,该词原指游戏玩家角色生命值过低时,可能会被一击终 结,如今被引申以形容美国社会保障体系的冷漠与脆弱。而在海外社交网站Reddit上,一则声称揭 露"美国外卖平台算法内幕"的匿名网帖引发关注,帖文称,"外卖平台通过名为'绝望评分'的算法剥削骑 手,成为平台对骑手实施'斩杀线式'压榨的证据。" 南都N视频记者注意到,随着帖文内容不断发酵,美国两大外卖平台Uber Eats及DoorDash紧急辟谣。最 终经证实,这篇"内部爆料"文章实为AI生成的谣言,原帖已被删除。事实上,这起事件并非个案,过去 一段时间,由AI生成的所谓"行业黑幕"网帖在海外社交平台上频繁出现。 外卖员在美国纽约街头冒雪送餐。新华社发 海外外卖平台辟谣"骑手斩杀线" 年初,一位自称"Uber Eats软件工程师"的用户在Reddit上发布长文,声称"自己签了保密协议,正在冒险 揭露外卖平台通过名为'绝望评分'的算法剥削骑手"。 南都记者注意到,事件发生后,Uber总裁兼首席运营官安德鲁·麦克唐纳在社交平台辟谣称,"我本人负 责Uber Eats业务,这条帖子绝对不是在说我们。我怀疑这些信息完全是捏造的, ...
History of Science and Technology and Cultural Studies: Two Digests
Xin Lang Cai Jing· 2026-01-08 16:57
(Source: Qianlong.com) Every Friday this section selects two papers. Building on its book reviews, The Beijing News Book Review Weekly (《新京报·书评周刊》) has expanded into a new form of knowledge dissemination, "scholarly review and digests," and is preparing a "Beijing News Chinese Academic Digest Service" to serve China's humanities and social sciences together with the journal community. Each issue is preselected by the Service, with experts in the relevant fields acting as reviewers in the final selection. We hope to bring readers recent papers that are both rigorous and at the research frontier, with a clear sense of local or global problems and a character distinctive to Chinese-language writing.

This is issue No. 1 of 2026 (No. 16 overall). The first paper, by Peng Bisheng, examines the texts and technology of automata in ancient China. Automata may seem purely a product of the modern world, and if defined strictly by power source, as automatic devices driven by electric motors or internal-combustion engines, they are indeed uniquely modern. Yet imaginings of automata have a long lineage and form part of our conceptual world, and some were even realized as working devices. The author traces the ingenuity of ancient Chinese automata and their evolution toward rationality, along with the accompanying controversies over textual versions and technical feasibility, up to the upheavals of the modern era, when that path of self-directed evolution turned toward reform. The second paper, by Shi Chang, examines the risks surrounding the face in the digital age. Our faces belong to us, yet they travel through media and are received and interpreted by others. As AI advances and face-swapping and similar technologies spread, while people still ...
How should the misuse of generative AI be governed? Scholars suggest making good use of existing rules and regulating as the technology develops
Nan Fang Du Shi Bao· 2025-12-18 10:55
Wang Liming argues that dedicated legislation on AI-related torts is premature. Instead, under a "regulate as it develops" approach, the Civil Code and the Personal Information Protection Law should serve as the foundation: existing rules should be fully interpreted and put to use, and tort problems raised by AI should then be handled through accumulated case law and judicial interpretations.

On regulating deepfake technology, Wang suggests relying on Article 1019 of the Civil Code, which prohibits the use of deepfakes to infringe on others' rights; the key is to interpret that provision fully so that it can address deepfake infringement in the AI context.

Dedicated legislation on AI torts is premature; make good use of existing rules

The AI era has arrived. AI will profoundly change how we produce, live, and govern society, and "AI+" has become an important driver of economic growth now and in the years ahead. At the same time, Wang notes, generative AI may also give rise to infringement, so responding well to generative AI torts is a "question of our times" that must be answered.

In comparative law, Wang explains, there are two core models for handling AI torts. One is the EU's "strong regulation" model, which emphasizes protecting personal information and privacy and imposes numerous compliance obligations whose breach readily triggers tort liability. The other is the US "light regulation, heavy utilization" model, in which model developers and service providers bear relatively light compliance obligations and harms to personal information and privacy are addressed mainly through tort law.

Previously, many scholars had suggested that China move quickly on artificial intelligence ...
To curb AI misuse, South Korea requires "prominent labeling" of AI-generated ads
Huan Qiu Shi Bao· 2025-12-11 22:48
[Global Times correspondent in South Korea, Li Zhiyin] The South Korean government will require advertisers to "prominently label" advertisements produced with artificial intelligence (AI). The relevant law, once amended, will formally take effect in early 2026. Korean media analysis says deceptive ads that use deepfakes to fabricate "fake experts" or "celebrity endorsements" keep increasing on social media, and the new rules are intended to curb this trend.

According to the Associated Press, South Korean Prime Minister Kim Min-seok convened a policy meeting on the 10th, at which officials said screening and removal of AI-generated ads would be comprehensively strengthened and violations would be fined. Lee Dong-hun, fiscal and financial policy officer at the Office for Government Policy Coordination, said such ads are "disrupting market order" and that swift action is required. Going forward, any image or video generated, edited, or uploaded using AI must be labeled "Made with AI"; users may not delete or tamper with the labels, and platform operators must ensure that advertisers comply.

The report said that as deepfakes spread, ads featuring fabricated experts and celebrity audio and video promoting diet drugs, cosmetics, and even illegal gambling sites have appeared frequently on Korean social media, posing a direct risk to consumers, especially elderly consumers who cannot recognize AI. South Korea's Ministry of Food and Drug Safety took action against 96,700 illegal online ads for food and drugs in 2024, and nearly 70,000 in the first nine months of this year, far above the roughly 59,000 recorded in 2023.

Meanwhile, the impact of AI's spread on the advertising ecosystem has sparked discussion within the industry. According to Yonhap, many South Korean ...
The Economist: AI is upending the porn industry
美股IPO· 2025-11-30 02:07
AI is upending the porn industry

Synthetic sexual content is about to flood the internet, bringing new opportunities and new risks. Illustration: Getty Images / The Economist

The adult-entertainment industry has long been a proving ground for new technology. After Johannes Gutenberg invented the printing press in the 15th century, it was quickly put to work printing bawdy pamphlets. Adult films appeared on videotape in 1977, a year ahead of mainstream Hollywood, and for a time dominated sales. When Minitel, France's precursor to the internet, launched in the early 1980s, erotic services initially accounted for between a third and a half of its traffic. The 8mm film camera, cable television, and now artificial intelligence (AI) have all followed a similar path.

Although many companies still hesitate to deploy AI, the technology is already being used to produce pornography: porn sites are awash with AI-generated videos and images. Large AI companies, eager to profit from their superintelligent models and justify their lofty valuations, are joining in. xAI's Grok already offers a "spicy" mode that can generate explicit images and videos. OpenAI will offer adult content in ChatGPT from December (for verified adults only). According to the research firm Global Commerce Media, the market for AI-driven adult content will this year be worth 25 ...
A new "three-piece scam kit" is flooding into livestream rooms in bulk
36Kr · 2025-11-19 01:47
Core Viewpoint - The article discusses the alarming rise of AI-generated digital avatars that impersonate real individuals, particularly celebrities, for fraudulent activities such as live streaming sales, raising concerns about identity theft and the implications of deepfake technology [4][10][25]. Group 1: AI Impersonation Incidents - Actress Wen Zhengrong was found to be impersonated by AI in multiple live streams, leading to confusion and concern among her fans [6][10]. - The phenomenon is not isolated to Wen Zhengrong; many celebrities, including Liu Tao and Zhang Bicheng, have also been victims of AI impersonation in promotional activities [10][12]. - The technology has advanced to a point where AI-generated avatars can convincingly mimic the appearance and voice of real people, making it difficult for the public to discern authenticity [4][35]. Group 2: Impact on Individuals and Society - The misuse of AI technology has raised significant concerns about personal identity and privacy, as anyone's likeness can potentially be exploited [5][25]. - Ordinary individuals are also at risk, with reports of deepfake technology being used to create harmful content, such as fake adult videos, affecting their personal and professional lives [30][31]. - Victims of AI impersonation often face severe psychological distress and social repercussions, highlighting the urgent need for regulatory measures [31][43]. Group 3: Regulatory Challenges - The rapid advancement of AI technology has outpaced existing legal frameworks, making it difficult to effectively regulate and combat deepfake-related crimes [39][41]. - There is a growing call for legislative action to address the challenges posed by AI impersonation, as seen in responses from various stakeholders, including celebrities and government officials [25][39]. - The article emphasizes the need for a comprehensive approach to tackle the ethical and legal implications of AI technology, as the current state of regulation is inadequate [43][44].
An AI-generated Jensen Huang speech fooled 100,000 viewers abroad, impersonating a GTC livestream and drawing eight times the views of the real one
36Kr · 2025-10-30 12:37
Core Points - Deepfake technology has become pervasive, with a recent incident involving a deepfake of NVIDIA's CEO Jensen Huang garnering 96,000 views on YouTube, significantly surpassing the 12,000 views of the actual live stream [2][8] - The deepfake live stream was misleadingly promoted as an official NVIDIA event, ranking first in search results for "Nvidia gtc dc" on YouTube [5] - The deepfake impersonated Huang, promoting a cryptocurrency distribution scheme linked to NVIDIA, which included requests for viewers to scan a QR code for transfers [8] Industry Implications - This incident highlights the growing ease and realism of generating deepfake content, raising concerns about the potential for misinformation and scams targeting unsuspecting audiences [8][9] - The recurrence of deepfake scams, including previous instances involving figures like Elon Musk, underscores the urgent need for regulatory frameworks to govern the use of deepfake technology [9] - The entertainment industry has also been affected, with deepfake technology being used to create non-consensual adult content featuring celebrities, indicating a broader societal issue that requires legal intervention [9]
Behind the AI-fabricated explicit video targeting Porsche's top salesperson: some face-swapping tools sell for just a few yuan
Nan Fang Du Shi Bao· 2025-10-11 10:53
Group 1 - A woman in Qingdao reported being defamed by AI-generated fake explicit videos, leading to a police report [1] - The misuse of deepfake and similar technologies has given rise to a black market for such services [1][2] - A previous incident involved a woman in Guizhou whose live-streamed image was altered to create a nude photo, resulting in the arrest of the perpetrator [2] Group 2 - AI technologies for face and clothing alteration have become highly advanced, enabling the creation of realistic images and videos [3] - Detection methods for identifying fake images include analyzing inconsistencies in lighting and unnatural deformations, though these methods are not foolproof [3] - Regulatory bodies and platforms are working to enhance public awareness and detection capabilities regarding AI-generated content [3]
From the competition arena to the market: deepfake image detection builds a line of defense for financial security
Guo Ji Jin Rong Bao· 2025-09-24 13:02
Core Insights - The rapid development of deepfake technology poses significant threats to personal privacy and financial security, with incidents of identity theft and fraud becoming increasingly common globally [1] Group 1: Event Overview - The 10th Xinyi Technology Cup Global Artificial Intelligence Algorithm Competition was held in Shanghai on September 24, focusing on developing algorithms capable of accurately identifying genuine and fake images to combat deepfake attacks across various scenarios [1] - The competition featured experts from Fudan University, Zhejiang University, and the Chinese Academy of Sciences as judges, who provided in-depth evaluations of the contestants' results based on technical approaches, training methods, and application value [1] Group 2: Competition Highlights - The champion team excelled in cross-domain recognition, maintaining high accuracy across diverse scenarios [1] - The competition encouraged participants to utilize deep learning algorithms, training and validating models on both public and exclusive private datasets, which included 100,000 facial authentication images with various regional, ethnic, lighting, and quality differences [1] - The exclusive private dataset also introduced samples generated by the latest face-swapping technology, emphasizing the algorithms' ability to handle unknown forgery methods, thereby increasing the challenge [1] Group 3: Technical Insights - Contestants demonstrated solid technical foundations and proposed innovative technical ideas, with some teams identifying that face-swapping forgeries often leave traces on high-frequency features of images [2] - By employing frequency domain analysis to capture forgery traces on high-frequency features and designing targeted processing based on statistical feature differences, teams significantly improved recognition effectiveness [2] Group 4: Industry Implications - Xinyi Technology's Vice President Chen Lei noted that the contestants' explorations have practical applications in real-world scenarios, providing new reference paths for deepfake detection [4] - He emphasized that while deepfake technology poses challenges to financial security and social trust, technological advancements also offer new protective possibilities [4] - The company aims to continuously promote innovative exploration through the competition, integrating outstanding results with business scenarios to build a future-oriented financial security defense line [4]
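The frequency-domain idea described above can be illustrated with a short sketch: compute an image's 2D Fourier spectrum, discard the low-frequency band, and summarize what remains as a few statistics that a lightweight classifier can use. This is a minimal illustration, not the contestants' actual pipeline; the cutoff ratio, the choice of statistics, and the logistic-regression classifier are all assumptions made for the example.

```python
# Minimal sketch of frequency-domain feature extraction for deepfake detection.
# The cutoff ratio, feature set, and classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def high_frequency_features(gray_image: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Return simple statistics of the high-frequency band of an image's spectrum.

    gray_image: 2D float array in [0, 1]; cutoff_ratio: fraction of the spectrum
    radius below which frequencies are treated as "low" and excluded.
    """
    h, w = gray_image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    magnitude = np.log1p(np.abs(spectrum))

    # Circular mask centered on the DC component; keep only the outer (high-frequency) band.
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    high_band = magnitude[radius > cutoff_ratio * min(h, w) / 2]

    total_energy = magnitude.sum() + 1e-8
    return np.array([
        high_band.mean(),                # average high-frequency magnitude
        high_band.std(),                 # spread of high-frequency magnitude
        high_band.sum() / total_energy,  # share of spectral energy in the high band
    ])

# Hypothetical usage: real_faces and fake_faces would be lists of grayscale face crops.
# features = np.stack([high_frequency_features(img) for img in real_faces + fake_faces])
# labels = np.array([0] * len(real_faces) + [1] * len(fake_faces))
# clf = LogisticRegression().fit(features, labels)
```

In practice, competition entries typically feed such frequency-domain cues into deep networks alongside spatial features rather than using hand-picked statistics alone; the sketch only shows where face-swapping artifacts in high frequencies would enter the feature pipeline.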
Building a pattern of pluralistic, collaborative governance to promote the safe and orderly development of artificial intelligence
Ke Ji Ri Bao· 2025-08-29 06:37
Group 1 - The core viewpoint of the article emphasizes the strategic importance of AI as a key driver for high-quality development in China, as outlined in the recent government opinion document [1][3][10] - The document identifies six key actions and eight foundational supports to promote the dual empowerment of technology and application, aiming for deep integration of AI into various sectors including scientific research, industry, and public welfare [1][3] Group 2 - AI is positioned as a "key increment" for high-quality development, with its core value reflected in four dimensions: empowerment, burden reduction, quality improvement, and efficiency enhancement [3][10] - AI is expanding the cognitive boundaries of scientific research, acting as an accelerator for foundational studies, such as AlphaFold solving the protein folding problem [3] - The document highlights AI's role in reducing workload through automation, thereby creating better job opportunities and enhancing consumer satisfaction [3] - In manufacturing, AI has been shown to reduce equipment failure rates by 20%, while in education and healthcare, AI systems are customizing learning paths and assisting doctors, respectively [3] Group 3 - The document addresses the need for a "safety and controllability" principle, emphasizing the importance of preventing security risks associated with AI [6][10] - It outlines inherent risks of AI models, including their "black box" nature, which leads to challenges in understanding decision-making processes and vulnerabilities to adversarial attacks [6] - Ethical challenges are also highlighted, where biases in training data can amplify societal issues, potentially leading to the spread of negative sentiments [6] Group 4 - The document proposes a new governance framework that emphasizes multi-dimensional collaboration to ensure the safe development of AI [8][9] - It suggests a "four-in-one" collaborative governance system that includes improving legal frameworks, establishing a multi-faceted public safety system, creating a network governance system, and developing an intelligent emergency response system [8] - The document also emphasizes enhancing safety governance capabilities across four key areas: technical safety, ethical safety, application safety, and national security [9]