Deepfakes

Turning a Video "Defect" into a Security Advantage: Ant Digital's New Breakthrough, the Proactive Video Verification System RollingEvidence
Ji Qi Zhi Xin· 2025-08-26 04:11
Recently, the paper "RollingEvidence: Autoregressive Video Evidence via Rolling Shutter Effect", completed independently by Ant Digital's AIoT technology team, was accepted at USENIX Security 2025, a top academic conference in network security. The paper proposes an innovative proactive trusted-video forensics system: it exploits the camera's rolling shutter effect to embed a high-dimensional physical watermark into video, then combines AI techniques with a probabilistic model for precise verification, effectively defending against deepfake and video-tampering attacks. Compared with traditional passive detection techniques, the system achieves significant gains in both detection accuracy and security protection.

About the conference: first held in 1990, USENIX Security has a history of more than thirty years and, together with IEEE S&P, ACM CCS, and NDSS, is regarded as one of the four top academic conferences in information security; it is also a Class A conference recommended by the China Computer Federation (CCF). This year's acceptance rate was 17.1%, and the accepted papers reflect the international state of the art in network security research.

Today, as deepfakes and video tampering proliferate, the boundary of authenticity is constantly being challenged. In response, Ant Digital's AIoT technology team proposed a breakthrough innovation: RollingEvidence ...
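The paper's actual watermarking and verification pipeline is not reproduced here. As a loose illustration of the rolling-shutter principle it builds on, the sketch below simulates how a temporally modulated light source leaves row-wise banding in a rolling-shutter frame (each sensor row is read out at a slightly different instant), and how a verifier can correlate the observed row brightness against the expected carrier. All parameters (`ROW_DELAY`, `CARRIER_HZ`, the 5% modulation depth) and the correlation test itself are illustrative assumptions, not values or methods from the paper.

```python
import math

ROW_DELAY = 1e-4        # seconds between successive row readouts (assumption)
FRAME_PERIOD = 1 / 30   # 30 fps capture (assumption)
ROWS = 480
CARRIER_HZ = 997        # modulation frequency of the active light source (assumption)

def row_brightness(frame_idx, row):
    """Mean brightness of one sensor row in a genuine capture.

    A rolling shutter samples the modulated light at the instant each row is
    read out, so the temporal carrier shows up as spatial banding in the frame.
    """
    t = frame_idx * FRAME_PERIOD + row * ROW_DELAY
    return 1.0 + 0.05 * math.sin(2 * math.pi * CARRIER_HZ * t)

def correlate(frame, frame_idx):
    """Correlate observed row brightness against the expected carrier.

    A genuine capture yields a clearly positive score; a re-rendered
    (deepfaked) frame without the banding yields a score near zero.
    """
    score = 0.0
    for row, value in enumerate(frame):
        t = frame_idx * FRAME_PERIOD + row * ROW_DELAY
        score += (value - 1.0) * math.sin(2 * math.pi * CARRIER_HZ * t)
    return score / len(frame)

genuine = [row_brightness(0, r) for r in range(ROWS)]
flat = [1.0] * ROWS  # a synthetic frame with no rolling-shutter banding

print(correlate(genuine, 0) > 10 * abs(correlate(flat, 0)))  # prints True
```

The design intuition is that the watermark is a physical artifact of the capture process itself: a forger who regenerates frames must reproduce row-accurate temporal sampling, which is far harder than fooling a passive classifier.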
Has Musk Gone Mad? In AI, Grok Competes on Undressing, Not Technology
Hu Xiu· 2025-08-09 13:06
A year ago, X was mired in the scandal of deepfaked explicit images of Taylor Swift. A year later, its owner, Elon Musk, personally handed users a tool for producing exactly that kind of content, and promoted it loudly. xAI's new feature, Grok Imagine, offers a "Spicy" mode that media outlets have confirmed can generate partially nude videos of celebrities with a single click.

This seemingly self-defeating move is hard to understand. The only plausible explanation is that, since technology and creativity have not won the AI race for Grok, Musk is simply continuing to convert users' desires into traffic. Recently, a reporter at The Verge, while testing xAI's latest image-and-video generation feature Grok Imagine, described the experience as a rather shocking "education". She entered the prompt "Taylor Swift celebrating with friends at Coachella"; Grok Imagine generated the images, after which she selected one and clicked Spicy mode to convert it into a video. Within moments the video was ready: Taylor Swift pulls off her top and starts dancing in the crowd. You might assume this was some kind of prompt-boundary accident, but in fact this is the behavior Grok Imagine offers by default. ...
Frequent Violent Incidents: U.S. Political Polarization Tears at Democracy's Veneer
Zhong Guo Qing Nian Bao· 2025-07-16 00:02
Group 1
- The article highlights the increasing political violence in the U.S., with recent incidents raising concerns about a disturbing "new normal" [1][2]
- Political polarization between the Democratic and Republican parties is intensifying, eroding the foundations of American democracy [1][2]
- Key areas of contention include immigration policy, energy policy, and social welfare, with significant differences in approaches between the two parties [1][2][3]

Group 2
- The article discusses the impact of Trump's policies, which have exacerbated class divisions and led to a decline in social mobility and trust in government [2][3]
- A significant increase in threats against members of Congress has been reported, with over 9,400 threats in 2024, more than double the number from a decade ago [2][3]
- The federal government has increased the budget for the Capitol Police to $833 million in response to rising violence, nearly double the $464 million budget from 2020 [2][3]

Group 3
- The rise of generative artificial intelligence is noted as a factor that could further polarize society and influence election outcomes [3][4]
- The spread of misinformation and the creation of "information silos" are contributing to the escalation of violence and political extremism [3][4]
- A survey of political scientists indicates a belief that the U.S. is moving towards a form of authoritarianism, with concerns about the erosion of democratic norms [4][5]

Group 4
- The article emphasizes the need for bipartisan cooperation to address economic inequality and political violence, which are seen as root causes of societal division [5][6]
- Restoring public trust in institutions and bridging social divides are identified as critical challenges for the U.S. government [6]
Summer Scammers Are Targeting Children's Phone Watches: Guard Against These "Invisible Threats"
Yang Shi Xin Wen Ke Hu Duan· 2025-07-05 12:58
Group 1
- The article highlights the increasing risk of telecom network fraud targeting minors during the summer vacation, as students spend more time online and alone [1][3]
- Many parents are equipping their children with smartwatches for safety, but there are concerns about the potential risks associated with these devices, including the possibility of fraud [3][5]
- Schools are integrating anti-fraud education into their curriculum, using real-life scenarios and role-playing to enhance students' awareness of and response to potential scams [5][7]

Group 2
- Teachers are advising parents to avoid linking bank cards to their children's smartwatches, as this could expose them to various fraud risks [5][9]
- Innovative teaching methods are being employed to address new types of scams, such as AI voice imitation, by encouraging students to establish secret codes or common phrases with their parents for identity verification [7][9]
- Schools recommend that parents set daily spending limits on smartwatches, enable transaction alerts, and regularly check for unfamiliar apps to ensure their children's safety [11]
A Matter of Key Technology! China and Europe Reach Consensus
Xin Lang Cai Jing· 2025-06-28 19:24
Core Viewpoint
- The rapid development of artificial intelligence (AI) technology has led to significant negative issues, including the misuse of deepfake technology, which poses serious threats to human rights and privacy [1][3][6]

Group 1: AI Misuse and Human Rights Violations
- Deepfake technology has been widely abused, leading to harassment and extortion, particularly affecting teachers and women, with a significant percentage of victims being minors [3][6]
- In South Korea, the prevalence of deepfake videos has prompted the government to enact strict laws against child pornography, categorizing the distribution and possession of such content as criminal acts [3][6]
- Experts at the 2025 China-Europe Human Rights Seminar emphasized that existing legislation against deepfakes is insufficient, as the technology's accessibility has lowered the barriers to misuse [7][10]

Group 2: International Cooperation and Legislative Challenges
- The challenge of combating deepfake technology is exacerbated by the fact that many of these videos are hosted on foreign servers, complicating evidence collection and enforcement [7][10]
- The need for international cooperation is highlighted, as many perpetrators exploit anonymity on foreign platforms, making it difficult for law enforcement to take action [7][10]
- The discussion at the seminar underscored the importance of collaborative efforts to address the human rights violations stemming from AI misuse [7][10]

Group 3: Broader Implications of AI Technology
- The misuse of AI extends beyond deepfakes, with concerns about privacy violations due to unauthorized data collection and the impact of algorithms on social behavior, particularly among minors [8][10]
- Experts pointed out that AI-driven applications can lead to addiction and mental health issues among young people, raising alarms about the societal implications of unchecked AI technology [8][10]
- The monopolization of AI technology by large Western corporations poses risks to individual rights and national sovereignty, as well as potential manipulation of public perception and electoral processes [10][12]

Group 4: China's Role in AI Governance
- China is actively addressing the challenges posed by AI misuse and has been recognized for its efforts in establishing regulations to ensure the ethical use of AI technology [12][13]
- Chinese experts presented case studies demonstrating how AI can benefit society, particularly in healthcare, education, and disaster response, while also emphasizing the importance of regulatory frameworks [13][15]
- The seminar concluded with a consensus on the need for cooperation between China and Europe in AI governance, highlighting the complementary nature of their approaches [21][23]
Right After Legislating Against Deepfakes, the First Lady Personally Promotes an AI Audiobook
Jin Shi Shu Ju· 2025-05-23 07:43
Group 1
- Melania Trump has released an AI-generated audiobook narrated in her voice, despite previously warning about the dangers of deepfakes [1]
- The "Take It Down Act," which criminalizes deepfakes and revenge porn, was signed by President Trump and promoted by Melania, aiming to combat online sexual exploitation [1][2]
- The audiobook is priced at $25 and has a runtime of seven hours, with versions in multiple languages planned for release in 2025 [1]

Group 2
- Melania previously launched a physical version of her memoir, priced at $150 and printed on high-quality art paper [2]
- She has kept a relatively low profile since her husband took office, but has been involved in initiatives such as supporting the "Take It Down Act" [2]
- Currently, Melania is collaborating with Amazon on a documentary series, reportedly worth tens of millions of dollars [3]
HKMA and Cyberport Launch Second Phase of the GenA.I. Sandbox Program to Accelerate AI Innovation in the Financial Industry
Zhi Tong Cai Jing Wang· 2025-04-28 10:54
Core Insights
- The Hong Kong Monetary Authority (HKMA) and Hong Kong Cyberport Management Company announced the launch of the second phase of the Generative Artificial Intelligence (GenA.I.) sandbox program, aimed at providing banks with a controlled environment to develop and test AI-driven innovative solutions [1][2]
- The second phase will focus on use cases in risk management, anti-fraud measures, and customer experience, building on the positive response to the first phase launched in January [1]
- A key optimization in the second phase is the introduction of the "GenA.I. Sandbox Co-Creation Lab," which will facilitate early engagement between banks and technology providers through practical workshops [1]
- The HKMA plans to hold workshops in the coming weeks to discuss how to leverage AI to combat the growing threat of deepfake fraud [1]

Industry Implications
- The initiative reflects the HKMA's commitment to promoting responsible GenA.I. innovation within the banking sector, encouraging banks to integrate AI technology into their risk management frameworks [2]
- The fifth FiNETech event, where the second phase was announced, gathered over 150 professionals from the banking and technology sectors involved in AI-related fields, indicating strong industry interest and collaboration [2]