AI Voice Synthesis Software

"Voice theft" runs rampant as fabricated celebrity endorsements spread; AI-generated synthetic content must be clearly labeled starting in September
Beijing Daily Client · 2025-08-28 02:39
Core Viewpoint
- The misuse of AI technology, particularly voice cloning and video manipulation, is fueling fraudulent activity on social media platforms, prompting calls for stricter regulation and mandatory identification of AI-generated content [1][12].

Group 1: AI Voice Cloning Incidents
- Olympic champions such as Quan Hongchan, Sun Yingsha, and Wang Chuqin have had their voices cloned by AI to promote products such as eggs, with consumer trust in the athletes driving more than 21,000 sales [3][5].
- Quan Hongchan's family has publicly stated that the AI-generated content is misleading; her brother confirmed that her voice had previously been cloned to sell honey, and the family continues to face legal challenges over such misuse [3][5].
- Other celebrities' voices have also been cloned for promotional purposes, with altered video clips featuring fabricated endorsements [5].

Group 2: AI Technology and Market Dynamics
- The rise of AI voice cloning is attributed to the abundance of celebrity voice samples and the falling barriers to entry for AI voice synthesis, with multiple AI voice synthesis tools released in June alone [7].
- Tutorials for AI voice cloning are sold online at prices ranging from 0.5 yuan to 400 yuan, with some claiming to deliver perfect voice cloning in just one minute [7].

Group 3: Regulatory and Industry Response
- The National Internet Information Office and other departments have introduced the "Identification Measures for AI-Generated Synthetic Content," mandating that AI-generated content carry identification markers [13].
- Experts emphasize that content publishers bear primary responsibility for the misuse of cloned voices, which infringes on the rights of the individuals being mimicked and constitutes consumer fraud [15].
- Recommendations for addressing the issue include establishing a warning mechanism for suspicious videos and creating a legal framework to hold accountable those responsible for harmful AI-generated content [15].