Reporter tests AI "doctoring" of celebrities: one-minute videos quoted at several hundred yuan, platform review has blind spots, multiple celebrities caught in the same scam
Xin Jing Bao · 2026-02-27 13:24
Core Viewpoint
- The incident involving actor Wang Jinsong highlights the risks of AI-generated videos, particularly the unauthorized use of celebrity likenesses in fraudulent advertising [1][11]

Group 1: Incident Overview
- On February 26, actor Wang Jinsong reported on social media that his likeness had been used without his consent in an AI-modified advertising video [1]
- The AI-modified video closely resembled a 2020 video in which he promoted anti-drug awareness, but it was altered to promote a so-called "wealth platform" [1][3]
- WeChat Video took the video down shortly after the incident was reported, raising questions about how easily AI-modified celebrity videos can be created and what preventive measures major video platforms have in place [3][8]

Group 2: Technical Aspects of AI Video Generation
- Producing a convincing AI-modified video of Wang Jinsong proved technically demanding, requiring skills beyond what typical consumer AI video tools offer [4][5]
- Tests of several AI video generation tools revealed limitations in producing seamless lip-syncing and realistic modifications [4][5]
- Despite these hurdles, a market for AI video modification services is growing, with a one-minute modified video quoted at 130 to 400 yuan [7]

Group 3: Legal and Ethical Implications
- Legal experts say responsibility for the unauthorized use of celebrity likenesses can fall on video creators, platforms, and AI tool providers, especially if they fail to implement adequate identification measures [8][9]
- The rapid advancement of AI technology complicates the detection of modified videos, making it difficult for platforms to enforce regulations effectively [9][10]
- The incident has raised concerns that AI-generated content can mislead the public, particularly in connection with fraudulent investment schemes promoted under terms such as "CRC" and "RWA" [11][14]

Group 4: Fraudulent Activities and Public Response
- The AI-modified video was linked to a broader scheme of illegal fundraising disguised as investment opportunities, which has been running for several years [14][15]
- Authorities have issued warnings about the fraudulent nature of these schemes, emphasizing the need for public vigilance against such deceptive practices [14][15]
- Using celebrity images in these scams not only infringes on the celebrities' rights but also risks damaging their reputations, exposing perpetrators to potential legal consequences [15]