Deepfakes
Ministry of State Security: A "Smart-Living Security Manual" for You
Yang Shi Wang· 2025-12-25 23:00
CCTV.com report: According to the public account of the Ministry of State Security, middle-school teacher Xiao Li, since adopting AI for lesson preparation, can generate a lively, engaging lesson plan in five minutes, complete with images, video, and interactive Q&A. "Preparing a lesson used to take two hours; now I can spend the time saved on individual students, and even tailor one-on-one practice exercises." Grandpa Chen, an elderly man living alone, has found new joy in a smart speaker given to him by his children. "Xiaozhi not only keeps me company listening to opera and chatting, it reminds me when I forget to take my medicine, and it has even memorized all my grandchildren's birthdays."

— Algorithmic bias and black-box decisions. An AI's judgments derive from the material it learned from; if the training data itself carries social bias or is unrepresentative, a large model may amplify discrimination. Tests show that some AI systems systematically favor a Western perspective. When researchers posed the same historical questions to one AI in Chinese and in English, they found that the English replies deliberately avoided or downplayed certain historical facts, and even returned content containing erroneous historical information, producing serious misreadings, while the Chinese replies were comparatively objective.

Safety code: three rules for your "digital companion"

— Rule 1: Draw an "activity boundary." Practice permission minimization: networked AI should not handle classified data, voice AI should not collect ambient audio, and smart assistants should not store payment passwords; switch off unnecessary access such as "data sharing" and "cloud storage."

— Rule 2: Check your "digital footprint." Make a habit of regularly clearing AI chat histories, changing AI-tool passwords, updating antivirus software, and reviewing which devices are logged into your accounts. At the same time ...
People's Daily Online Releases "Ten Major Advances in China's Content Technology (2021-2025)"
Ren Min Wang· 2025-12-25 13:16
People's Daily Online, Shanghai, December 25 — The 7th People's Daily Online Content Technology Forum was held on December 25 at Mosu Space in Shanghai. At the forum, the report "Ten Major Advances in China's Content Technology (2021-2025)" was officially released. The report systematically reviews the breakthrough achievements in China's content-technology field during the 14th Five-Year Plan period, tracing a clear trajectory from "tool empowerment" toward "ecosystem restructuring" and showing a distinct trend toward intelligent, vertical, and systematized development.

The report notes that content technology, as a key force driving the intelligent upgrade of the full chain of content production, distribution, moderation, and governance, is profoundly reshaping the media ecosystem and the digital society. The ten advances released here exemplify China's innovative vitality and governance wisdom in this field.

Advance 1: The rise of domestic general-purpose large models and their application in media convergence. Since 2023, domestic large models represented by ERNIE Bot ("Wenxin Yiyan") and Tongyi Qianwen have launched in quick succession, while open-source models such as DeepSeek have drawn wide attention for their high performance and low cost. Large models are penetrating deeply into vertical sectors such as media, giving rise to industry-specific models including People's Daily Online's "Xieyi," the People's Intelligent Media large model, Xinhua News Agency's MediaGPT, and the CCTV Audio-Visual Large Model. These have become new infrastructure for the intelligent upgrade of content production, interaction, and services, significantly extending the depth and breadth of media convergence.

Advance 6: Intelligent restoration and digital revitalization support the preservation and innovation of cultural heritage. Since 2020, image-restoration technology based on GANs and diffusion models has steadily matured, enabling ...
In Japan, Nearly Half of Deepfake Pornography Involving Minors Is Made by Classmates
Xin Hua She· 2025-12-18 04:41
Deepfakes are highly realistic-looking videos, images, or audio clips produced with artificial-intelligence (AI) technology; the technique can replace the person in pornographic footage with someone else's likeness. Such crimes mostly target women and minors.

Japanese police said on the 17th that of the deepfake-pornography cases involving minors under 18 that they handled in the first nine months of this year, about half were the work of the victims' classmates.

Kyodo News reported on the 18th, citing National Police Agency data, that of the 79 such cases Japanese police handled in the first nine months of the year, more than 80% of victims were junior-high or senior-high students, and in about half of the cases the perpetrator attended the same school as the victim. The photos used to generate the deepfake pornography were typically taken from graduation yearbooks.

This is the first time Japan's National Police Agency has published detailed data on such cases. Among the victims, junior-high students accounted for about 52%, senior-high students about 32%, and elementary-school students about 5%. In about 53% of cases the deepfake images were made or circulated by the victim's classmates; in about 6% the perpetrator was someone the victim had met through social media; and in another 6% the offense was committed by staff at the victim's school or by students from other schools.

In Japan, producing and distributing a person's pornographic images without consent constitutes an infringement of rights, and those involved may also face lawsuits. Of the 79 cases, Japanese police opened criminal investigations into 4 on suspicion of defamation and other offenses, and gave corrective behavioral guidance to the perpetrators in another 6.

The National Police Agency said that about 18% of the cases involved photos produced with generative AI. Although ...
AI and Humanity | As "AI Slop" Floods the Internet, the Last Line of Defense Is Ourselves
Ke Ji Ri Bao· 2025-12-16 05:26
Core Viewpoint - The rise of "AI Slop" content, characterized by low-quality, repetitive, and meaningless material generated by AI tools, is increasingly prevalent on the internet, particularly on social media platforms [1][2][4]. Group 1: Definition and Characteristics of "AI Slop" - "AI Slop" refers to low-quality content produced by AI tools, including text, images, and videos, often found on social media and content farms [2][3]. - The term "Slop" originally described cheap and low-nutrition items, and its modern usage highlights the poor quality of AI-generated content [2]. - Unlike "deepfakes" or "AI hallucinations," which have specific deceptive intents or technical errors, "AI Slop" is produced without regard for accuracy or logic, leading to a flood of meaningless content [3]. Group 2: Causes of Proliferation - The proliferation of "AI Slop" is driven by the increasing power and low cost of AI technology, enabling rapid content generation that prioritizes clicks and ad revenue over quality [4]. - New AI tools like ChatGPT, Gemini, and Sora allow for quick production of readable text, images, and videos, leading to the rise of content farms that prioritize quantity over quality [4]. - Algorithms on social media platforms often favor engagement metrics over content quality, further encouraging the spread of "AI Slop" [4]. Group 3: Consequences of "AI Slop" - The overwhelming presence of "AI Slop" can obscure credible sources in search results, blurring the line between truth and fiction [5][6]. - As misinformation spreads more rapidly in an environment where distinguishing fact from fiction becomes challenging, the trust crisis in information sources intensifies [6]. Group 4: Potential Solutions - Some companies, like Spotify, are beginning to label AI-generated content and adjust algorithms to reduce the visibility of low-quality material [7]. 
- The C2PA (Coalition for Content Provenance and Authenticity) standard aims to embed metadata in digital files to trace their origins, helping to differentiate between human-created and AI-generated content [7]. - The most effective defense against "AI Slop" lies in individual responsibility, encouraging users to verify sources and support genuine creators [7][8].
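The provenance idea behind C2PA mentioned above can be illustrated with a toy sketch: bind a claim about who (or what tool) produced a piece of content to a cryptographic hash of that content, so any later alteration invalidates the record. This is only a simplified assumption-laden illustration in Python; the real C2PA standard embeds signed manifests inside media files, which this sketch does not attempt.

```python
import hashlib

# Simplified illustration of content provenance (NOT the actual C2PA
# format): a claim records the generator and a SHA-256 digest of the
# content, so editing the content breaks the claim.

def make_claim(content: bytes, generator: str) -> dict:
    """Build a provenance claim binding the generator name to a content hash."""
    return {"generator": generator,
            "sha256": hashlib.sha256(content).hexdigest()}

def claim_matches(content: bytes, claim: dict) -> bool:
    """Check whether the content still matches the hash recorded in the claim."""
    return hashlib.sha256(content).hexdigest() == claim["sha256"]

article = b"Human-written report ..."
claim = make_claim(article, "human/newsroom-cms")  # hypothetical generator label
print(claim_matches(article, claim))          # True for untouched content
print(claim_matches(article + b"!", claim))   # False once content is altered
```

In the real standard, the claim itself is cryptographically signed by the capture device or editing tool, so a consumer can trust not just that content is unmodified but who attested to it.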
As "AI Slop" Floods the Internet, the Last Line of Defense Is Ourselves
Ke Ji Ri Bao· 2025-12-16 02:20
◎ Science and Technology Daily reporter Liu Xia

"Slop" originally referred to cheap, coarse, nutrition-poor stuff such as pig swill. Today, riding the wave of AI technology, slop-like junk content is spreading unchecked across the internet.

"AI slop" refers specifically to the mass of low-quality, repetitive, or meaningless text, images, and video generated by AI tools, commonly found on social media and in automated content farms.

US tech site CNET noted in an October 28 report that AI slop on social platforms now comes in every variety: OpenAI's Sora lets anyone generate absurd comic videos in seconds; LinkedIn is awash with AI-packaged "expert wisdom" such as "sometimes leadership is the ability to stay silent"; and Google search results resemble an AI bazaar, surfacing nonsense like "turmeric can cure heartbreak."

"AI slop" overlaps with "deepfakes" and "AI hallucinations" but is not the same; the difference lies in intent and quality.

"Deepfake" means using AI to forge or tamper with audio and video with the intent to deceive, from fake political speeches to scam voice calls; its hallmark is passing fakery off as real. An "AI hallucination" is a technical error: a chatbot may cite nonexistent studies or fabricate legal cases, essentially the model going astray while predicting words. "AI slop" is broader and more careless: it arises when people mass-produce content with AI without checking its accuracy or logic. It clogs information channels, drives up ad revenue, and with repetitive, mean ...
As "AI Slop" Floods the Internet, the Last Line of Defense Is Ourselves
Ke Ji Ri Bao· 2025-12-16 00:23
"假冒伪劣"信息充斥网络 如今的互联网上,看似信息海量,但也充斥着大量单调、重复且缺乏质量的内容。 美国《纽约时报》网站在12月8日的报道中指出,当前网络,尤其社交平台正泛滥一种被称为"AI垃 圾"(AI Slop)的内容。英国《新科学家》网站10日也发表文章称,今年,许多人感觉仿佛置身于一堆 华而不实的"AI垃圾"中。英国《经济学人》杂志更是将"Slop"一词选为2025年度词汇。这类错漏百出、 古怪甚至令人尴尬的内容遍布各平台,也在悄然侵蚀着人们的思想。 "Slop"原指"猪食""泔水"等廉价、粗糙、缺乏营养之物。如今,借由AI技术的浪潮,一些如同"Slop"的 垃圾内容正在互联网上肆意蔓延。 "AI垃圾"特指由AI工具生成的大量劣质、重复或无意义的文字、图像或视频,常见于社交媒体和自动化 内容农场。 美国科技网站CNET在10月28日的报道中提到,如今社交平台上"AI垃圾"五花八门:OpenAI的Sora可让 人几秒内生成滑稽荒诞的视频;LinkedIn上满是由AI包装的"专家箴言",例如"有时领导力就是保持沉 默的能力";谷歌搜索结果也宛如AI杂货铺,竟会出现"姜黄能治愈心碎"这类无稽之谈。 "AI垃 ...
Shanxi Consumer Association Issues Warning: Beware of New "AI Face-Swap" and "AI Voice" Scams
Zhong Guo Xin Wen Wang· 2025-12-12 03:08
Typical scam scenarios include: synthesizing the likeness of a child, parent, or friend and demanding a transfer during a video call on the pretext of "urgently needing money"; impersonating company executives on video conferences to issue urgent payment orders to finance staff or employees; and even forging videos of law-enforcement officers who, alleging criminal involvement, demand that consumers move funds to a so-called "safe account" for verification.

Facing this new type of crime that exploits biometric deception, the Shanxi Consumer Association advises consumers that the key is a "multi-factor verification" mindset: do not simply trust what you see and hear, and keep in mind the "three don'ts and two dos."

The "three don'ts": don't trust readily, don't transfer money, don't leak information. Treat any request for a money transfer made without meeting in person with high vigilance, even if you see a live video or hear a familiar voice, and be especially wary when the other party manufactures a sense of urgency to block verification. Do not transfer anything until you have confirmed the other party's identity with complete certainty through an independent, reliable channel. Also protect personal information: be cautious about sharing clear frontal video or voice recordings on social media, and avoid overexposing sensitive details.

The "two dos": do verify, do stay alert. If a "relative or friend" asks to borrow money over video, hang up and call back using a number you stored yourself, or cross-check through mutual contacts; families and close friends can also agree in advance on a code phrase for money matters. Stay observant as well: AI-synthesized content may betray subtle flaws such as stiff facial expressions, unnatural blinking, mismatched lip movements, an electronic timbre, or oddly placed pauses. Consumers should remember that public-security, procuratorial, and judicial authorities will never ...
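One of the tells listed above, unnatural blinking, is also used by automated detectors. A minimal sketch, under stated assumptions: real systems extract an eye-aspect-ratio (EAR) per video frame from facial landmarks; here we take an assumed EAR time series directly, and all thresholds are illustrative, not calibrated values from any real product.

```python
# Toy heuristic: flag a clip whose blink rate falls far outside the
# typical human range. EAR values and thresholds are illustrative.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks: each run of frames with EAR below threshold is one blink."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_thresh:
            closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, lo=6, hi=30):
    """Humans blink roughly 10-20 times/min; rates far outside [lo, hi] are a red flag."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return not (lo <= rate <= hi), rate

# Synthetic 60-second clip at 30 fps with only one blink -> suspiciously low rate
series = [0.3] * 1800
series[900:903] = [0.1, 0.1, 0.1]
print(blink_rate_suspicious(series))  # -> (True, 1.0)
```

A single heuristic like this is easy to fool, which is why the advisory pairs technical observation with out-of-band verification such as calling back on a stored number.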
Deepfakes Are Reshaping the Boundaries of Business Security: An Enterprise Survival Guide Is Ready
36Kr· 2025-11-30 23:11
Core Insights - Deepfake technology has evolved from a conceptual threat to a tangible business risk, as evidenced by incidents like the AI-generated image of an explosion at the Pentagon that caused a significant drop in the S&P 500 index, erasing billions in market value [1] - The deepfake market is projected to grow from $75 billion in 2023 to $385 billion by 2032, highlighting the urgent need for businesses to upgrade their defense systems against misinformation [1] Group 1: Threats of Deepfake Technology - Deepfake attacks pose a direct threat to corporate assets, as demonstrated by the case of engineering giant Arup, which lost 200 million HKD due to a sophisticated scam involving AI-generated executive voices [2] - The advertising giant WPP also faced a similar attack, where scammers attempted to replicate the CEO's voice and appearance to deceive employees into transferring funds [2] Group 2: Operational Challenges - Companies that rely on facial recognition technology will need to replace their security systems by 2026, as existing solutions are ineffective against deepfake threats [3] - The addition of verification measures, such as digital watermarks and live detection, increases operational costs and complicates decision-making processes [3] Group 3: Trust Crisis - The rise of deepfake technology is leading to a "trust tax," which includes both direct security investments and indirect costs arising from widespread skepticism in digital communications [4] - The erosion of trust in digital interactions complicates business collaborations, as every communication may require additional verification [4] Group 4: Strategic Solutions - Companies should focus on establishing credible verification methods, such as digital signatures and watermarks on sensitive documents, to mitigate the risks posed by deepfakes [5][6] - Creating a public verification center can provide stakeholders with authoritative information and enhance trust in corporate communications [7] 
- Training employees to recognize deepfakes and incorporating simulation exercises into onboarding processes can improve organizational resilience [8] - Investing in real-time media tampering detection systems is essential for embedding verification capabilities into core business processes [9] Group 5: Navigating the Crisis - The rise of AI-generated content is accelerating the collapse of shared realities, making it crucial for companies to anchor their communications in verifiable facts [10] - Organizations that prioritize building trustworthy systems will not only withstand the challenges posed by deepfakes but also emerge as leaders in their industries during chaotic times [10]
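The "credible verification" recommendation above can be made concrete with a small sketch: the sender attaches an authentication tag, computed with a pre-shared secret, to a payment instruction, and the finance team recomputes the tag before acting. This is a minimal HMAC illustration using Python's standard library, not any specific company's system; a production deployment would more likely use asymmetric signatures so the verifying side never holds the signing secret, but the verification flow is the same.

```python
import hashlib
import hmac

# Hypothetical pre-shared key, exchanged out of band (e.g. in person).
SECRET = b"pre-shared-out-of-band-key"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag; compare in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(message), tag)

order = b"Transfer 200000 HKD to account 123-456"
tag = sign(order)
print(verify(order, tag))                                       # True: genuine
print(verify(b"Transfer 200000 HKD to account 999-999", tag))   # False: tampered
```

The point is that a convincing face or voice on a call carries no cryptographic weight: only an instruction whose tag verifies against a secret the fraudster cannot have should trigger a transfer.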
Online Sex Crimes Surge in South Korea; Nearly Half of Suspects Are Teenagers
Yang Shi Xin Wen· 2025-11-16 18:03
Core Insights - The South Korean National Police Agency reported a significant increase in arrests related to online sexual crimes, with over 3,000 suspects apprehended in the past year, marking a 47.8% increase compared to the previous year [1] Summary by Categories Crime Statistics - From November 2023 to October 2024, the police solved 3,411 cases of online sexual crimes and arrested 3,557 suspects, with 221 formally detained [1] - Cases involving deepfake technology accounted for the highest proportion at 35.2%, followed by child or adolescent pornography at 34.3%, and illegal filming at 19.4% [1] Demographics of Suspects - The majority of suspects, totaling 1,761, were teenagers aged 10 to 19 [1] - The increase in the number of apprehended suspects is attributed to both the rise in deepfake-related cases and intensified law enforcement efforts [1] Public Awareness and Education - A significant number of South Korean students lack awareness of the dangers associated with deepfake technology in sexual crimes, with 62.2% of middle school students and 47.7% of high school students considering deepfakes as mere "pranks" [1]
FT Chinese Selection: AI Fake Invoices Are a Big Problem
Nikkei Chinese Web· 2025-11-13 02:46
Core Viewpoint - The emergence of AI-generated fake invoices poses a significant threat to financial trust, as these sophisticated tools make it easier for individuals to commit fraud [5][6]. Group 1 - Traditional methods of fraud involved basic tools like photocopiers and correction fluid, but advancements in technology have led to more sophisticated techniques [6]. - AI-generated fake invoices are now equipped with realistic trademarks, addresses, and details, even simulating wear and tear like creases and coffee stains [6]. - The potential for "deepfakes"—manipulated videos or audio that can make public figures appear to say things they never said—raises concerns about their use in political and financial fraud [6].