AI Voice Mimicry
"Hearing Can Deceive": AI Voice-Mimicry Scams Have Already Defrauded Multiple Elderly Victims
Yang Shi Wang· 2025-12-14 18:45
CCTV.com reports: Recently, the Huangshigang District People's Court in Huangshi, Hubei Province, heard a telecom-fraud case. Three elderly residents of Huangshi panicked after receiving calls for help made in the name of their grandsons and were induced by fraudsters into handing over tens of thousands of yuan in cash. All three victims insisted, without exception, that the voice on the phone was genuinely that of their own grandson.

A "High-Paying Job" That Only Required Collecting Cash: "Runner" Sentenced for Fraud

Recently, the Huangshigang District People's Court heard the case and handed down its judgment, sentencing Wu, the "runner" responsible for collecting the cash, to two years and one month in prison and a fine of 15,000 yuan.

Xiang Shuqing, head of the criminal trial division of the Huangshigang District People's Court: Wu knew that others were committing fraud, yet, at the direction of his upline, went to the victims to collect the fraud proceeds. Subjectively he had criminal intent; objectively he facilitated the transfer of the fraud proceeds, satisfying the elements of complicity in fraud. Under Article 266 of the Criminal Law of the People's Republic of China, we found that he committed the crime of fraud.

In April 2025, a friend introduced Wu to a so-called "high-paying job": all he had to do was "collect money," earning at least 1,500 yuan a day.

In April this year, Grandma Ding of Huangshi, Hubei, received a call in which she heard the familiar voice of her "grandson," who tearfully claimed he had gotten into a conflict with a classmate at school and accidentally injured him. Hearing that her grandson was in trouble, Grandma Ding panicked. The "grandson" on the phone told her to prepare cash, saying the other child's parents would ...
华商基金-2025年金融教育宣传周主题知识长图--一图读懂电信网络诈骗
Xin Lang Ji Jin· 2025-09-15 09:00
Group 1
- The article discusses the rise of telecom network fraud, which involves using telecommunications technology to illegally obtain public and private property through remote, non-contact methods [1]
- It highlights various types of scams, including those that exploit romantic relationships through dating platforms and social media to gain victims' trust before leading them to fraudulent investment platforms [3][5]
- New AI technologies are being utilized in scams, such as AI-generated voice synthesis and deepfake technology, which can impersonate individuals to deceive victims [9][10]

Group 2
- Recommendations for protecting personal information include not sharing sensitive data such as ID numbers and addresses, and minimizing the exposure of personal photos and videos [11][12]
- It is advised to set complex passwords for banking services and never to disclose or forward verification codes to anyone [15]
- The article emphasizes the importance of verifying requests for money transfers, especially from acquaintances, through multiple channels to confirm the requester's identity [16]
Celebrities Are Being Wrecked by "Fake Endorsements"
创业邦· 2025-05-18 23:55
Core Viewpoint
- The article discusses the rise of AI-generated voice scams, particularly in the context of celebrity endorsements, highlighting the ease with which these technologies can be misused for fraudulent advertising and the challenges in regulating such practices [3][11][17]

Summary by Sections

AI Voice Scams
- Numerous celebrities have been impersonated using AI technology to promote products without their consent, leading to widespread deception among consumers [3][9]
- The article cites specific instances, such as fake endorsements attributed to athletes and actors, which have resulted in significant consumer confusion and financial loss [7][10]

Impact on Consumers
- Consumers, especially older individuals, are particularly vulnerable to these scams, often being misled by realistic AI-generated content [11][13]
- The proliferation of these scams has created a gray market for AI voice-cloning services, making them accessible to anyone with minimal investment [11][15]

Regulatory Challenges
- Current platforms have failed to adequately warn users about the potential for AI-generated content, contributing to the problem [11][14]
- Legislative efforts are underway to address these issues, including the establishment of a "whitelist system" for AI-generated content and the recognition of voice rights in legal contexts [15][17]

Future Considerations
- The article raises concerns about the long-term implications of AI voice cloning for authenticity and trust in media, suggesting that society may need to develop new methods to verify the authenticity of content [15][17]
- Experts warn that as the technology advances, distinguishing real from AI-generated content will become increasingly difficult, necessitating a cultural shift toward skepticism and verification [14][17]
Celebrities Are Being Wrecked by "Fake Endorsements"
Hu Xiu· 2025-05-09 10:19
Core Viewpoint
- The rise of AI-generated voice fraud in advertising has led to widespread misuse of celebrity likenesses and voices, creating a new wave of commercial deception that is difficult for consumers to detect [1][10][18]

Group 1: AI Voice Fraud Incidents
- Numerous fake accounts have emerged on short-video platforms, using AI to create deceptive advertisements featuring celebrities such as Quan Hongchan, who are shown promoting unrelated products [2][3]
- High-profile figures, including Zhang Wenhong and Lei Jun, have also been victims of AI voice scams, with their likenesses used in misleading marketing campaigns [8][19]
- The technology allows for the creation of highly realistic fake videos, making it difficult for consumers to discern authenticity, as seen in the case of Zhang Xinyu, whose voice was cloned to promote weight-loss products [11][12]

Group 2: The Technology Behind AI Voice Cloning
- AI voice-cloning technology can replicate a person's voice from minimal samples, making it possible for anyone to create fake content [22][24]
- The proliferation of AI voice apps has made it easy for users to generate celebrity-like voices for as little as 10 yuan, leading to a surge in fraudulent activity [25][26]
- The low cost of and easy access to AI voice-cloning tools have fueled the rapid growth of this gray market, with many individuals unaware of the potential for misuse [15][27]

Group 3: Regulatory and Societal Responses
- There is growing recognition of the need for legal frameworks to address AI-generated content, with recent court rulings affirming the protection of individuals' voice rights against unauthorized use [28]
- New regulations, such as a "whitelist system," are being introduced to help identify AI-generated content, although the effectiveness of these measures remains uncertain [29]
- The societal implications of AI voice fraud raise concerns about the future of authenticity in media, necessitating a cultural shift toward skepticism and verification of content [27][29]