AI Face-Swapping
A Peking Union Medical College Hospital doctor's face was stolen to sell weight-loss diets: why are fake AI "doubles" so hard to stamp out?
Bei Jing Shang Bao· 2025-11-17 15:44
On November 17, the hashtag "doctor's face swapped by AI to sell onion weight-loss recipes" climbed Weibo's Beijing local trending list, drawing more than 2.82 million views as of press time. According to reports, a number of well-known experts at Peking Union Medical College Hospital have had their faces swapped by AI into false "wellness" and "slimming" promotions, and some unscrupulous merchants have even used AI to generate fake doctors who sell all manner of goods under the hospital's name, misleading consumers. Analysts note that AI face-swapping now carries a low technical threshold and low cost; even when violations are investigated, the penalties are light and the consequences minor, so enforcement exerts little real deterrence. At the same time, the forged content spreads quickly, ...

The cost of breaking the law is low. On one social platform, a Bei Jing Shang Bao reporter searching for "Professor Yu Kang of Peking Union Medical College Hospital" found that many short "health science" videos tied to the professor are still circulating. In one, titled "Professor Yu Kang: did you know these 7 fruits are exceptionally good for you?", only the professor's head and shoulders are visible, set against a landscape-painting backdrop. After listing seven fruits said to benefit the liver, the video funnels viewers to a "wellness lecture hall," promising free dietary-therapy courses to anyone who follows its official WeChat account.

In the view of angel investor and veteran AI expert Guo Tao, the frequency of illegal commercial face-swapping is inseparable from its low barrier to entry. At the technical level, AI face-swapping tools are now ubiquitous: phone mini-programs and open-source software reduce the operation to "one-click generation," a single swap costs as little as a few fen, and no specialist knowledge is required.
AI face-swapping takes its first cut at celebrities: Yang Mi and Quan Hongchan both hit
36Ke· 2025-11-17 12:00
Core Viewpoint
- The rise of AI-generated impersonations of celebrities has created a significant threat to both individual rights and the integrity of the digital ecosystem, leading to widespread scams and erosion of trust in online interactions [2][17][44]

Group 1: AI Impersonation Incidents
- Celebrity impersonations using AI technology have proliferated, targeting individuals with public recognition, including actors and athletes, resulting in a pyramid-like gray traffic ecosystem [2][17]
- The case of actress Wen Zhengrong highlights the issue, as she was impersonated in multiple live streams simultaneously, leading to confusion among fans [12][25]
- Other celebrities, such as Li Zimeng and Olympic champions, have also been victims of AI impersonation, with fake accounts promoting fraudulent products [13][16]

Group 2: Impact on Trust and Consumer Behavior
- The AI impersonation phenomenon has led to a significant breach of trust, as fans are misled into believing they are interacting with their idols, resulting in financial losses [18][20]
- Many victims, particularly older individuals, are easily deceived by the realistic AI-generated content, leading to impulsive purchases of low-quality products [21][24]
- The emotional manipulation employed by these AI impersonators exploits the trust fans place in their favorite celebrities, making it a highly effective scam strategy [19][20]

Group 3: Regulatory and Platform Responses
- Platforms like Douyin have initiated actions against impersonation, including the removal of thousands of fraudulent accounts and products, but challenges remain in effectively identifying and managing AI-generated content [28][43]
- Despite existing regulations, enforcement is difficult due to the rapid evolution of AI technology and the complexity of the digital landscape [42][44]
- The need for clearer legal frameworks and responsibilities among technology providers, content creators, and platforms is critical to combat the misuse of AI in impersonation and fraud [43][44]

Group 4: The AI Impersonation Industry
- A black market for AI impersonation services has emerged, offering tools for creating realistic fake identities at low costs, further complicating the issue [36][39]
- The industry encompasses various stages, from data acquisition to content generation and application in scams, creating a closed-loop profit system [36][43]
- The availability of AI tools and services for impersonation highlights the urgent need for consumer awareness and protective measures against such fraudulent activities [38][44]
Direct Line to the Ministries | Commodity housing prices fell both month-on-month and year-on-year in October; China proves up its first thousand-tonne-class gold deposit
Xin Lang Cai Jing· 2025-11-14 10:17
Employment and Economic Stability
- The overall employment situation is stable, with the urban survey unemployment rate decreasing to 5.1% in October, down 0.1 percentage points from the previous month [1]
- The average urban survey unemployment rate from January to October was 5.2%, with local registered labor at 5.3% and migrant labor at 4.7% [1]

Real Estate Market Trends
- In October, new residential sales prices in first-tier cities fell by 0.8% year-on-year, while second-tier cities saw a 2.0% decline and third-tier cities a 3.4% decline [1]
- Second-hand housing prices in first-tier cities dropped by 4.4%, with second-tier cities down 5.2% and third-tier cities down 5.7% [1]
- The spokesperson from the National Bureau of Statistics indicated that the real estate market is undergoing a transformation that requires time, and fluctuations in certain indicators should be viewed objectively [1]

Bird Protection and Wildlife Crime
- The Ministry of Public Security has launched a campaign to combat wildlife crimes, particularly those harming bird species, with a focus on dismantling criminal networks and seizing illegal tools [4]
- The National Forestry and Grassland Administration reported an increase in protected bird species, with 1,028 species now on the "three-haves" protected list (species of significant ecological, scientific, or social value) [6]

Gold Mining Discovery
- A significant discovery of a low-grade, large-scale gold mine, the Dadonggou Gold Mine, has been made in Liaoning Province, with a total metal content of 1,444.49 tons, marking it as the largest single gold deposit discovered since the founding of New China [7]
- The mine has a total ore volume of 2.586 billion tons and an average grade of 0.56 grams per ton, with a promising economic outlook for development [7]

Healthcare Fraud Cases
- The National Healthcare Security Administration has reported four cases of healthcare fraud involving pharmacies, with the largest case amounting to over 3.3 million yuan [8]
- The cases include organized schemes to defraud insurance funds through false prescriptions and collusion with intermediaries [8]

E-commerce Trademark Infringement
- The State Administration for Market Regulation is seeking public input on new regulations to help address trademark infringement in e-commerce, highlighting the challenges posed by "ghost stores" with false registration information [9]
- In the first three quarters of the year, 27,000 trademark infringement cases were handled, involving 468 million yuan [9]

Three Gorges Project Development
- During the 14th Five-Year Plan, the Three Gorges Project has allocated 46.91 billion yuan for 1,235 projects aimed at improving the livelihoods of relocated residents and promoting economic development in the reservoir area [10]
- The average disposable income for rural relocated residents in the Three Gorges area is projected to reach 22,000 yuan in 2024, a 4.19-fold increase since 2010 [10]
Douyin's algorithm is so powerful, so why can't it stop the "AI Wen Zhengrong" impostors?
Sou Hu Cai Jing· 2025-11-13 01:32
Core Viewpoint
- The rise of AI impersonation in live streaming poses significant challenges for platforms like Douyin, as they struggle to identify and combat unauthorized use of public figures' likenesses [2][5][9]

Group 1: AI Impersonation Incidents
- The recent drama "News Queen 2" features a storyline where a TV station uses the likeness of a deceased anchor for an AI virtual host, paralleling real-life incidents where actress Wen Zhengrong's image was used without consent in live streams [2][4]
- Douyin has initiated a special governance action against impersonation, addressing over 11,000 accounts involved in infringement since October [5]

Group 2: Technical Challenges
- Douyin's Vice President Li Liang acknowledged that AI content infringement detection is a technical challenge, with malicious impersonation accounts continuously evolving [7]
- Experts explain that while AI-generated images may appear flawless to the human eye, they can be detected through specific technical analyses that reveal anomalies in altered areas [3][8]

Group 3: Legal and Regulatory Framework
- Legal experts assert that using public figures' likenesses for AI impersonation constitutes a clear violation of portrait rights and potentially defamation [9][10]
- The introduction of the "Artificial Intelligence Generated Synthetic Content Identification Measures" mandates that platforms label AI-generated content, ensuring transparency for consumers [10][11]

Group 4: Future Implications
- The emergence of entirely virtual personas, or "digital humans," presents a more severe challenge, as they can fabricate identities that mislead the public [11]
- Experts emphasize the necessity of stringent identification measures for AI-generated content to protect consumers and uphold platform accountability [11]
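The detection idea the experts describe (altered regions can look flawless to the eye yet carry statistical anomalies) can be illustrated with a crude frequency-domain heuristic. This is a minimal sketch under assumed conditions, not any platform's actual detector: the `high_freq_energy_ratio` function and its cutoff value are illustrative choices, and production systems rely on trained classifiers rather than a single hand-built statistic.

```python
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    Face-swapped regions are often resampled and blended, which can leave
    unusual high-frequency statistics; comparing this ratio between a face
    crop and the rest of the frame is one crude screening signal.
    (Illustrative heuristic only; real detectors use learned models.)
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = gray_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial distance of each bin from the spectrum center
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# A smooth gradient concentrates energy at low frequencies...
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
# ...while added pixel noise spreads energy across all frequencies.
noisy = smooth + 0.5 * np.random.default_rng(0).standard_normal((64, 64))
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(noisy))
```

The noisy image yields a markedly higher ratio than the smooth one; a real pipeline would compute many such signals over localized patches and feed them to a classifier.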
Is face-scan verification still safe? Details emerge in a new type of AI face-swap account-theft case
Huan Qiu Wang Zi Xun· 2025-11-11 10:40
Core Viewpoint
- The article highlights the alarming rise of AI face-swapping technology being exploited for identity theft and unauthorized access to e-commerce accounts, posing significant risks to personal information security [1][3][9]

Group 1: Incident Overview
- In June 2024, police in Hangzhou discovered a suspicious advertisement promoting AI face-swapping technology that could bypass platform verification using just a user's facial photo [1]
- The technology allows criminals to gain unauthorized access to user accounts, potentially compromising sensitive information such as chat records [1][3]
- Two victims, Mr. Liu and Ms. Zhang, experienced unauthorized transfers of their e-commerce accounts, with Ms. Zhang losing her account shortly after purchasing it [5][9]

Group 2: Criminal Network and Arrests
- Police identified a network of 150 abnormal e-commerce accounts linked to a criminal group utilizing AI face-swapping technology [10]
- The investigation led to the arrest of Zhang, who was found to be advertising the technology on foreign platforms; two accomplices, Wu and Wang, were subsequently apprehended [12][14]
- The trio established a cooperative scheme in which they charged fees for accessing personal information and facilitating account transfers, with profits exceeding 100,000 yuan [14]

Group 3: Legal Consequences
- The court sentenced Zhang to three years and two months in prison, while Wu and Wang received three years and a suspended sentence of four years, respectively, for illegally obtaining computer information [14]
360's Hu Zhenquan on the AI face-swap problem: current detection and forensics technology struggles to see through it
Nan Fang Du Shi Bao· 2025-11-09 01:38
Group 1
- The core issue of AI-generated content, particularly the risks associated with AI face-swapping technology, has gained significant attention following an incident involving actress Wen Zhengrong [1]
- Hu Zhenquan, president of 360 Digital Security Group, highlighted the challenges in identifying AI-generated content due to its realism, indicating a need for improved detection technologies [1][3]
- The 2025 World Internet Conference in Wuzhen served as the platform for the release of the "Large Model Security White Paper," which outlines the security vulnerabilities associated with large AI models [3][4]

Group 2
- The white paper identified 281 security vulnerabilities, 177 of which (over 60%) are unique to large models [3]
- Five key risk categories threatening large model security were outlined: infrastructure security risks, content security risks, data and knowledge base security risks, user-end security risks, and the complexities arising from the interconnection of these risks [4]
- The proposed dual governance strategy pairs "external security," focused on model protection, with "native platform security," which embeds security capabilities within core components [4]

Group 3
- Despite the controversies surrounding AI intelligent agents, Hu Zhenquan expressed optimism about their future, likening their current stage to the early days of personal computers [5]
- He emphasized that intelligent agents, as essential carriers for large model applications, are expected to evolve and become mainstream in AI applications [5]
- The development of intelligent agents is anticipated to bring significant advances in efficiency and capability in the near future [5]
"If you are Wen Zhengrong, then who am I?" AI face-swapped pirate streams of the actress tear open a corner of the black-and-gray market
第一财经· 2025-11-07 03:22
Core Viewpoint
- The article discusses the rising issue of AI-generated impersonation in live streaming, focusing on the case of actress Wen Zhengrong, a victim of AI face-swapping and unauthorized live streaming. The incident highlights the challenges of AI content infringement and the ongoing battle between technology and malicious actors in the digital space [3][4][10]

Group 1: Incident Overview
- Actress Wen Zhengrong revealed that her likeness was used in multiple live streaming sessions without her consent, leading to confusion among viewers [3][6]
- The incident gained significant attention, prompting Douyin's Vice President Li Liang to clarify that the unauthorized streams did not occur on their platform, although AI impersonation content was found there [4][10]
- Wen's team reported filing numerous complaints against fake accounts; some were taken down, but new ones quickly resurfaced, complicating their efforts [6][10]

Group 2: AI Technology and Its Implications
- The advancement of AI technology has lowered the barriers to creating fake content, making it easier for malicious actors to exploit [10][11]
- Third-party monitoring data show that Wen Zhengrong's Douyin account has 3.841 million followers, with 29.6% growth in the last three months, indicating her popularity and the potential for misuse of her image [10]
- AI-generated content can be created with minimal resources, as face recognition and video creation services are available at low cost [10][11]

Group 3: Legal and Regulatory Responses
- Douyin has implemented measures to combat unauthorized use of celebrity likenesses, including suspending accounts and removing infringing content [14][15]
- The article outlines the AI deepfake black market end to end, detailing the chain from data acquisition to the execution of scams and impersonation [12][14]
- Experts stress the need for ongoing legal and technological advances to address the evolving challenges posed by AI-generated content, likening the situation to a "cat-and-mouse game" [15][16]
One live stream, 100,000 people scammed! The "AI Jensen Huang" drew eight times the audience of the real one
猿大侠· 2025-11-02 07:54
Core Viewpoint
- The article recounts a bizarre incident during the GTC 2025 conference in which an AI-generated version of NVIDIA CEO Jensen Huang outdrew the real Huang in a fraudulent live stream, attracting a far larger audience while running a scam [1][2][6]

Group 1
- During the GTC 2025 conference, Jensen Huang delivered an enthusiastic keynote while an AI-generated version of him was simultaneously live-streamed, drawing nearly 100,000 viewers [2][6]
- The fraudulent live stream, branded "NVIDIA LIVE," attracted eight times more viewers than the official broadcast, highlighting the effectiveness of the scam [6][15]
- The AI-generated Huang not only mimicked the real Huang's appearance and voice but also engaged in fraudulent activities, deceiving many viewers [9][10][23]

Group 2
- The orchestrators of the scam, a channel named "Offxbeatz," used AI face-swapping and voice synthesis technologies to create a convincing imitation of Huang, capitalizing on audience interest in the GTC conference [19][20]
- Many viewers, including notable media outlets like CNBC, were misled by the realistic portrayal of the AI-generated Huang, resulting in significant financial losses, with approximately $115,000 (around 820,000 yuan) stolen [23][25]
- The incident underscores how advances in AI now allow the easy creation of realistic deepfake videos, raising concerns about similar scams in the future [42][44]
Wan2.2-Animate goes viral again: in five minutes, a scruffy guy becomes an aloof goddess
数字生命卡兹克· 2025-10-30 01:33
Core Viewpoint
- The article discusses the capabilities and implications of the open-source model Wan2.2-Animate, which allows users to create highly realistic face-swapping videos and animations, highlighting its potential in various creative fields while also addressing the ethical concerns associated with such technology [1][25][26]

Group 1: Technology and Features
- Wan2.2-Animate can generate natural face-swapping videos from a combination of user-uploaded videos and images, achieving impressive results in mimicking expressions and movements [1][4][6]
- The model allows for voice modulation alongside visual changes, enhancing the realism of the generated content [9]
- It supports both action imitation and character replacement, enabling users to create videos with different characters while maintaining the original background [14][15][16]

Group 2: Accessibility and Open Source
- Wan2.2-Animate is notable for being open source, which differentiates it from similar models that are not publicly available [14][25]
- The model can be accessed and used by anyone, significantly lowering the barrier to entry for animation and video creation [25][26]
- It can be deployed in various settings, including enterprises and film productions, allowing for cost-effective animation and special effects [25]

Group 3: Creative Applications
- The technology can be used for various creative projects, including recreating classic film scenes or generating dance videos with different characters [12][26]
- It opens up new possibilities for independent animators and filmmakers, enabling them to bring their characters to life with minimal investment [25][26]
- The potential for reviving deceased actors in new films through AI-generated likenesses is also discussed, showcasing the transformative impact of this technology on the film industry [26]

Group 4: Ethical Considerations
- The article raises concerns about the misuse of such technology, particularly in creating misleading or harmful content that could undermine trust in digital media [26]
- It emphasizes the importance of responsible use, likening the technology to fire that can either warm or destroy [26]
"Your grandson is in trouble and urgently needs money": AI scams target the elderly, with face and voice swaps for as little as 1 yuan
Xin Jing Bao· 2025-10-30 00:00
Core Viewpoint
- The rise of AI technology has created new risks, particularly for the elderly, who are increasingly targeted by scammers using deepfake techniques to impersonate family members and trusted figures [2][4][5]

Group 1: AI Technology and Scams
- AI deepfake technology has significantly lowered the barrier for scammers, allowing them to create convincing impersonations of voices and faces for fraudulent purposes [3][4]
- Scammers often exploit the emotional vulnerabilities of elderly individuals, using AI to mimic the voices of their relatives and solicit money under false pretenses [4][5]
- The availability of AI voice and face cloning services at low prices (as low as 1 yuan) has made it easier for scammers to execute their schemes [3][5]

Group 2: Legal and Regulatory Concerns
- The use of AI for impersonation and fraud raises serious legal issues, including potential violations of portrait rights and consumer protection laws [3][13]
- New regulations, such as the requirement for AI-generated content to be clearly labeled, aim to combat the misuse of AI technology in scams [15]
- Legal experts emphasize the importance of adhering to laws when using AI technologies, as violations can lead to significant legal repercussions [13]

Group 3: Prevention and Awareness
- Experts recommend that elderly individuals enhance their awareness of digital technologies to better recognize potential scams [16]
- Families are encouraged to support elderly relatives in understanding and navigating digital platforms safely, while maintaining open communication about potential risks [16]
- Authorities suggest practical measures for verifying identities during suspicious calls or video chats, such as calling back on known numbers or asking personal questions [14][16]
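The labeling requirement mentioned above can be made concrete with a toy provenance record. This is a hypothetical sketch: the `make_ai_label` helper and its field names are invented for illustration and do not reflect the regulation's actual schema, which calls for both an explicit, user-visible label and implicit metadata embedded in the file.

```python
import json
from datetime import datetime, timezone

def make_ai_label(generator: str, model: str) -> dict:
    """Build a machine-readable provenance record for AI-generated media.

    Field names here are illustrative assumptions, not the schema
    mandated by the Chinese AI-content labeling Measures.
    """
    return {
        "ai_generated": True,            # explicit flag for platforms to check
        "generator": generator,          # who produced the content
        "model": model,                  # which model synthesized it
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_ai_label("demo-studio", "demo-faceswap-v1")
print(json.dumps(record, indent=2))
```

A platform ingesting media could then refuse or flag uploads whose sidecar lacks such a record, shifting the default from "assume authentic" to "prove provenance."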