Deepfake Technology
Du Xiaoman's "Anti-Deepfake" Technology Revealed: How We Catch the Telltale Traces of AI Forgery in a Video
Yang Guang Wang· 2026-02-12 10:00
Core Viewpoint - The rise of AI-driven deepfake technology has made it increasingly easy for criminals to create high-quality counterfeit content, posing significant challenges for security in the digital world. Financial technology is leveraging advanced methods to combat these threats and protect users' safety.

Group 1: Deepfake Technology and Its Implications
- Deepfake technology has become readily accessible, enabling criminals to produce convincing fake content; in one case, 4.3 million yuan was stolen in just 10 minutes using AI face-swapping technology [1]
- The key to identifying AI-generated fakes often lies in subtle flaws overlooked by the naked eye, such as abnormal blinking rates, irregular pupil shapes, and defects in teeth rendering [1]

Group 2: Anti-Fraud Initiatives
- Chongqing's anti-fraud center and Du Xiaoman have launched a series of AI anti-fraud promotional activities for 2026, including short dramas, a digital anti-fraud song, and an H5 mini-game designed to educate users on recognizing scams through engaging scenarios [1]
- The "Jian Zhen" mini-game simulates real scam scenarios, helping users learn identification techniques in a fun way [1]

Group 3: Advanced Detection Technologies
- Du Xiaoman's deep detection technology can uncover hidden "digital fingerprints" in images, allowing identification of high-risk behaviors such as impersonation or unauthorized actions [2]
- The technology analyzes the unique noise patterns left by cameras and AI generators, enabling precise detection of fraudulent activity [2]

Group 4: Micro-Expression Monitoring
- The latest advancements in Du Xiaoman's technology include dynamic capture of micro-expressions, which last only about 0.1 seconds and are difficult for the human eye to detect [3]
- The micro-expression risk control model quantifies subtle facial movements to gauge the risk level during critical interactions, achieving over 90% recall and more than 99% accuracy with a false positive rate of one in a thousand [3]

Group 5: Impact and Future Directions
- Du Xiaoman has issued precise fraud warnings to over 450,000 customers, intercepting fraudulent amounts totaling 217 million yuan by leveraging its continuously upgraded anti-deepfake models [3]
- The evolution of technology has introduced new scam methods but has also driven the development of robust digital defenses, underscoring the importance of technology-based identification in fraud prevention [3]
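The recall, accuracy, and false-positive figures quoted for the micro-expression risk model can be related through simple confusion-matrix arithmetic. The counts below are hypothetical, chosen only to illustrate how all three figures can hold simultaneously on a heavily imbalanced screening workload; they are not Du Xiaoman's data.

```python
# Toy confusion-matrix arithmetic: how recall, accuracy, and false positive
# rate (FPR) are computed from raw counts. All counts are hypothetical.

def risk_model_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return (recall, accuracy, false_positive_rate) from confusion-matrix counts."""
    recall = tp / (tp + fn)                     # share of real frauds caught
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # share of all decisions correct
    fpr = fp / (fp + tn)                        # share of legitimate users flagged
    return recall, accuracy, fpr

# Hypothetical run: 1,000 fraud attempts hidden among 1,000,000 checks.
recall, accuracy, fpr = risk_model_metrics(tp=920, fp=999, tn=998_001, fn=80)
print(f"recall={recall:.1%}, accuracy={accuracy:.2%}, fpr={fpr:.4%}")
# prints recall=92.0%, accuracy=99.89%, fpr=0.1000%
```

Note how the extreme class imbalance lets accuracy exceed 99% even though roughly half of all flags in this toy run are false alarms; this is why recall and FPR are reported alongside accuracy.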
Two U.S. Food Delivery Platforms Deny Having a "Rider Kill Line", Calling It an AI-Generated Rumor
Nan Fang Du Shi Bao· 2026-01-13 05:05
Group 1
- The term "kill line" has gained popularity online, originally referring to a player's low health in games, now used to describe the indifference and fragility of the U.S. social security system [2]
- An anonymous post on Reddit claimed to expose the "algorithmic secrets" of U.S. food delivery platforms, alleging that they exploit riders through an algorithm called "desperation scoring" [2][4]
- The post suggested that delivery platforms intentionally underpay full-time riders while offering higher tips to part-time riders, creating a system of exploitation [4][5]

Group 2
- The post sparked widespread discussion, with "desperation scoring" being cited as evidence of the platforms' exploitative practices [7]
- However, the post was quickly debunked as an AI-generated rumor, with the original poster using a fake Uber Eats badge to support their claims [7]
- Uber's COO publicly refuted the claims, stating that the information was fabricated and urging caution against believing everything online [7][8]

Group 3
- DoorDash's founder also condemned the post, emphasizing that the described corporate culture was unacceptable and denying the existence of a "rider welfare fee" [7]
- The incident highlights the growing issue of AI-generated misinformation on social media platforms, with similar posts frequently appearing [8][9]
- Experts warn that the combination of deepfake technology and social engineering could enable automated attacks, creating market panic or targeted scams [10]
Studies in the History of Technology and Culture: Two Digests
Xin Lang Cai Jing· 2026-01-08 16:57
Group 1
- The core idea of the articles revolves around the exploration of ancient Chinese automata and the risks associated with digital identities in the modern era [2][4][8]
- The first paper discusses the historical context and technological evolution of automata in ancient China, highlighting notable examples such as the wooden bird and the wooden man [4][5][6]
- The second paper examines the implications of AI technologies, particularly deepfakes, for personal identity and the concept of "facelessness," in which individuals lose control over their digital representations [9][10][12]

Group 2
- The discussion of ancient automata reflects a broader narrative of technological innovation and cultural perception in China, indicating a shift from mythological interpretation to rational engineering consideration [6][7]
- The concept of "facelessness" is articulated through several dimensions: the theft of identity, the transformation of appearance, and the systematic erasure of visibility in digital spaces [9][10][11]
- The emergence of digital personas such as "momo" represents a form of resistance against identity theft and the pressures of digital visibility, allowing individuals to express themselves without revealing their true identities [12][13]
How Should the Misuse of Generative AI Be Governed? Scholars Recommend Applying Existing Rules and Regulating Amid Development
Nan Fang Du Shi Bao· 2025-12-18 10:55
Core Viewpoint - The emergence of generative artificial intelligence (AI) raises significant legal challenges, particularly concerning copyright and infringement, necessitating a careful regulatory approach that utilizes existing laws rather than rushing into new legislation [2][3][4]

Group 1: Current Legal Framework and Recommendations
- Specialized legislation for AI infringement is considered premature; instead, existing laws such as the Civil Code and the Personal Information Protection Law should be leveraged to address AI-related infringement issues [3][4]
- The approach should focus on "regulating amid development," using existing legal frameworks to interpret and apply rules effectively while accumulating case law and judicial interpretations [4][10]

Group 2: Infringement Liability and Standards
- The core legal issue in AI infringement is the choice of liability principle, with a preference for "fault liability" over "strict liability" to avoid stifling AI development [5][6]
- The standard for determining fault in AI infringement should center on "breach of duty of care," considering specific scenarios to balance risk-management costs against reasonable obligations [6][7]

Group 3: Deepfake Technology and Personal Information
- The use of deepfake technology to infringe on others' rights should be prohibited, with recommendations to interpret existing laws so as to address deepfake-related infringements effectively [8][9]
- The treatment of publicly available personal information should distinguish between the input and output stages, allowing such data to be used in model training without individual consent while ensuring that output results do not infringe copyright [8][9]

Group 4: Regulatory Philosophy
- Regulation should embrace a cautious, inclusive, and open attitude toward the development of generative AI, ensuring that innovation occurs within a safe and controlled environment [10]
To Curb AI Misuse, South Korea Requires "Prominent Labeling" of AI-Generated Ads
Huan Qiu Shi Bao· 2025-12-11 22:48
Group 1
- The South Korean government mandates that advertisers prominently label ads created using artificial intelligence (AI), with new regulations set to take effect in early 2026 [1]
- The rise of deceptive ads using deepfake technology, featuring fake experts or celebrity endorsements, prompted the government to act to curb this trend [1]
- The government plans to impose fines for violations and will require that all AI-generated, edited, or uploaded images and videos be marked as "AI produced," with strict enforcement measures for compliance [1]

Group 2
- The proliferation of AI technology has sparked discussion about its impact on the advertising industry, with several major advertising firms in South Korea facing closure or restructuring [2]
- Experts note that while AI has improved advertising production efficiency, the sector's current difficulties stem primarily from economic downturn, reduced consumer spending, and a strong dollar, rather than from AI itself [2]
- The government aims to tighten regulation of AI misuse, including stricter penalties for exploitative crimes using deepfakes and a rapid review mechanism for harmful ads that requires regulatory bodies to complete reviews within 24 hours [2]
The Economist: Artificial Intelligence Is Upending the Adult Entertainment Industry
Mei Gu IPO· 2025-11-30 02:07
Core Viewpoint - The adult entertainment industry is undergoing a significant transformation due to the rise of artificial intelligence (AI), which is creating both opportunities and risks as AI-generated content becomes prevalent [1][3].

Group 1: Historical Context and Current Trends
- The adult entertainment industry has historically been a testing ground for new technologies, from the printing press to video cassettes and now AI [3].
- AI-generated adult content is projected to reach a market value of $2.5 billion this year, with an expected annual growth rate of 27% through 2028 [3].
- Major AI companies are entering the adult content space to monetize their advanced models, with platforms such as xAI's Grok and OpenAI's ChatGPT planning to offer explicit content [3][4].

Group 2: Impact on Industry Dynamics
- The proliferation of AI-generated content raises questions about the future of human performers and traditional studios, as audiences may prefer cheaper synthetic alternatives [4][5].
- The adult industry generates nearly $100 billion annually, significantly outpacing AI's revenue and indicating a lucrative market for AI applications [5].
- Subscription-based platforms like OnlyFans are emerging, with projected revenues exceeding $1.4 billion in fiscal year 2024, highlighting a shift toward personalized, paid content [5].

Group 3: Technological Advancements and User Engagement
- AI allows on-demand generation of customized adult content, which could revolutionize user experience and engagement [8].
- The rise of AI tools has driven a significant increase in searches for AI-generated adult content, with notable traffic to "de-nude" websites [9].
- AI chatbots are increasingly used for sexual interactions, demonstrating strong user demand and profitability in this niche [9].

Group 4: Challenges and Ethical Concerns
- The rapid advancement of AI poses challenges for regulation, as it can be used to create illegal content, including child sexual abuse images [14][16].
- The normalization of violent behavior in mainstream adult content raises concerns about societal impacts, particularly on younger audiences [16][18].
- Deepfake technology is being exploited for malicious purposes, including extortion and identity theft, highlighting the need for regulatory measures [18][19].

Group 5: Future Outlook and Industry Response
- The adult industry is at a crossroads, with platforms needing to decide whether to embrace AI-generated content or focus on authentic human performances [11][12].
- Companies are exploring self-regulation and transparency in AI content creation, as seen with OnlyFans and the upcoming Vylit platform [12][19].
- AI has the potential to enhance productivity in the industry, but it may also lead to job losses and ethical dilemmas around content creation and distribution [12][20].
The Emerging "Scam Three-Piece Kit" Is Flooding Livestream Rooms in Bulk
36Kr· 2025-11-19 01:47
Core Viewpoint - The article discusses the alarming rise of AI-generated digital avatars that impersonate real individuals, particularly celebrities, for fraudulent activities such as livestream sales, raising concerns about identity theft and the implications of deepfake technology [4][10][25].

Group 1: AI Impersonation Incidents
- Actress Wen Zhengrong was found to be impersonated by AI in multiple live streams, causing confusion and concern among her fans [6][10].
- The phenomenon is not isolated to Wen Zhengrong; many celebrities, including Liu Tao and Zhang Bicheng, have also been victims of AI impersonation in promotional activities [10][12].
- The technology has advanced to the point where AI-generated avatars can convincingly mimic the appearance and voice of real people, making it difficult for the public to discern authenticity [4][35].

Group 2: Impact on Individuals and Society
- The misuse of AI technology has raised significant concerns about personal identity and privacy, as anyone's likeness can potentially be exploited [5][25].
- Ordinary individuals are also at risk, with reports of deepfake technology being used to create harmful content, such as fake adult videos, that damages victims' personal and professional lives [30][31].
- Victims of AI impersonation often face severe psychological distress and social repercussions, highlighting the urgent need for regulatory measures [31][43].

Group 3: Regulatory Challenges
- The rapid advancement of AI technology has outpaced existing legal frameworks, making it difficult to effectively regulate and combat deepfake-related crimes [39][41].
- There is a growing call for legislative action to address the challenges posed by AI impersonation, as seen in responses from stakeholders including celebrities and government officials [25][39].
- The article emphasizes the need for a comprehensive approach to the ethical and legal implications of AI technology, as the current state of regulation is inadequate [43][44].
An AI Jensen Huang Speech Fooled 100,000 Viewers: Fake GTC Livestream Drew 8x the Views of the Real One
36Kr· 2025-10-30 12:37
Core Points
- Deepfake technology has become pervasive, with a recent incident involving a deepfake of NVIDIA's CEO Jensen Huang garnering 96,000 views on YouTube, significantly surpassing the 12,000 views of the actual live stream [2][8]
- The deepfake live stream was misleadingly promoted as an official NVIDIA event, ranking first in YouTube search results for "Nvidia gtc dc" [5]
- The deepfake impersonated Huang to promote a cryptocurrency distribution scheme linked to NVIDIA, including requests for viewers to scan a QR code to make transfers [8]

Industry Implications
- The incident highlights the growing ease and realism of deepfake content generation, raising concerns about misinformation and scams targeting unsuspecting audiences [8][9]
- The recurrence of deepfake scams, including previous instances involving figures like Elon Musk, underscores the urgent need for regulatory frameworks governing the use of deepfake technology [9]
- The entertainment industry has also been affected, with deepfake technology used to create non-consensual adult content featuring celebrities, indicating a broader societal issue requiring legal intervention [9]
Behind the AI-Generated Fake Indecent Videos of a Top Porsche Salesperson: Some Face-Swapping Tools Sell for Just a Few Yuan
Nan Fang Du Shi Bao· 2025-10-11 10:53
Group 1
- A woman in Qingdao reported being defamed by AI-generated fake explicit videos, leading to a police report [1]
- The misuse of deepfake and similar technologies has given rise to a black market for such services [1][2]
- In a previous incident, a woman in Guizhou had her live-streamed image altered to create a nude photo, resulting in the arrest of the perpetrator [2]

Group 2
- AI technologies for face and clothing alteration have become highly advanced, enabling the creation of realistic images and videos [3]
- Detection methods for identifying fake images include analyzing inconsistencies in lighting and unnatural deformations, though these methods are not foolproof [3]
- Regulatory bodies and platforms are working to enhance public awareness of and detection capabilities for AI-generated content [3]
From Competition Arena to Market: Deepfake Image Detection Technology Builds a Financial Security Defense Line
Guo Ji Jin Rong Bao· 2025-09-24 13:02
Core Insights
- The rapid development of deepfake technology poses significant threats to personal privacy and financial security, with incidents of identity theft and fraud becoming increasingly common worldwide [1]

Group 1: Event Overview
- The 10th Xinyi Technology Cup Global Artificial Intelligence Algorithm Competition was held in Shanghai on September 24, focusing on developing algorithms capable of accurately distinguishing genuine from fake images to combat deepfake attacks across various scenarios [1]
- The competition featured experts from Fudan University, Zhejiang University, and the Chinese Academy of Sciences as judges, who evaluated the contestants' results in depth on technical approach, training method, and application value [1]

Group 2: Competition Highlights
- The champion team excelled at cross-domain recognition, maintaining high accuracy across diverse scenarios [1]
- Participants were encouraged to apply deep learning algorithms, training and validating models on both public and exclusive private datasets, the latter including 100,000 facial authentication images with varied regional, ethnic, lighting, and quality characteristics [1]
- The exclusive private dataset also introduced samples generated by the latest face-swapping technology, testing the algorithms' ability to handle unknown forgery methods and thereby increasing the challenge [1]

Group 3: Technical Insights
- Contestants demonstrated solid technical foundations and proposed innovative ideas, with some teams observing that face-swapping forgeries often leave traces in the high-frequency features of images [2]
- By employing frequency-domain analysis to capture forgery traces in high-frequency features and designing targeted processing based on statistical feature differences, teams significantly improved recognition performance [2]

Group 4: Industry Implications
- Xinyi Technology's Vice President Chen Lei noted that the contestants' explorations have practical applications in real-world scenarios, providing new reference paths for deepfake detection [4]
- He emphasized that while deepfake technology challenges financial security and social trust, technological advances also offer new protective possibilities [4]
- The company aims to continue promoting innovative exploration through the competition, integrating outstanding results with business scenarios to build a future-oriented financial security defense line [4]
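The frequency-domain idea described in Group 3 can be sketched in a few lines: face-swap pipelines tend to over-smooth the regions they blend in, disturbing an image's high-frequency content, so a high-pass view of the Fourier spectrum exposes a residual whose statistics differ between camera-captured and synthesized regions. This is a minimal illustration of the general technique, not the contestants' actual pipeline; the cutoff value and the energy statistic are assumptions chosen for clarity.

```python
# Minimal sketch: measure how much spectral energy an image patch carries
# outside a low-frequency block around DC. Over-smoothed (forged) regions
# typically score lower than genuine sensor-noise-bearing regions.
import numpy as np

def high_freq_energy(gray: np.ndarray, cutoff: int = 8) -> float:
    """Mean spectral energy outside a (2*cutoff)-wide low-frequency block around DC."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))  # center DC component
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w), dtype=bool)
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = False  # drop low freqs
    return float(np.mean(np.abs(spectrum[mask]) ** 2))

rng = np.random.default_rng(0)
noisy_patch = rng.standard_normal((64, 64))  # stand-in for a camera-noise-bearing patch
smooth_patch = np.zeros((64, 64))            # stand-in for an over-smoothed synthetic patch
print(high_freq_energy(noisy_patch) > high_freq_energy(smooth_patch))  # prints True
```

A real detector would of course learn a classifier over such residual statistics rather than compare raw energies, but the ordering shown here is the signal that the frequency-domain approaches exploit.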