AI Face-Swapping
“Face-Swapping and Voice-Changing” Scams and Devices That Peep and Eavesdrop: How to Raise Awareness and Protect Personal Privacy?
Ren Min Ri Bao· 2025-08-25 01:58
Smart cameras, smart speakers, and other connected devices make daily life more convenient, but they can also become "backdoors" for privacy leaks. Keep an eye on the small devices around your home and take precautions against smart-device risks.

Using "AI face-swapping" to forge identities, smart devices that "eavesdrop," hijacked cameras... New technologies such as AI-generated video, smart home assistants, and facial-recognition payment make life more convenient while also carrying the risk that personal privacy will be leaked. How can people raise their awareness and keep their personal information safe?

Deepfake video and audio can easily create "illusions": seeing is no longer necessarily believing. Some criminals use AI to "swap faces and change voices," impersonating relatives, friends, colleagues, and supervisors to commit crimes, and this calls for heightened vigilance.

When a video call involves sensitive requests such as transactions or borrowing money, do not rush to respond if the other person's movements and expressions seem slightly "off." Ask them to turn their head in one continuous motion, or to cover their face completely with a palm and then quickly pull it away. Current AI technology tends to stutter, blur, or distort when simulating such complex facial occlusions and lighting changes. Also listen closely to the voice: AI-generated speech sometimes lacks natural breathing pauses and emotional variation, or carries a faint mechanical quality and unusual background noise. If the voice sounds disjointed or noticeably "wrong," stay on high alert.

No matter how realistic the video or audio is, whenever the other party asks for a money transfer or sensitive information, always confirm a second time through another reliable channel ...
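The audio cues described above (missing breathing pauses, flat delivery) suggest a toy heuristic: natural conversational speech contains regular near-silent gaps, so a clip with almost none is worth a second look. The sketch below is only an illustration of that idea in Python, not a real deepfake detector; the energy threshold and the 2% pause cutoff are invented for demonstration.

```python
# Toy heuristic inspired by the advice above: natural speech contains
# breathing pauses, so a clip with almost no low-energy frames is
# suspicious. Thresholds are illustrative assumptions, not calibrated
# values; real deepfake detection requires trained models.
import numpy as np

def pause_ratio(samples: np.ndarray, rate: int, frame_ms: int = 30) -> float:
    """Fraction of short frames whose RMS energy is near-silent."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    threshold = 0.1 * rms.max()  # "quiet" = under 10% of peak frame RMS
    return float((rms < threshold).mean())

def looks_suspicious(samples: np.ndarray, rate: int) -> bool:
    # Conversational speech normally has noticeable pauses; a clip with
    # almost none may be synthetic (assumed cutoff: 2% of frames).
    return pause_ratio(samples, rate) < 0.02

if __name__ == "__main__":
    rate = 16000
    t = np.linspace(0, 3, 3 * rate, endpoint=False)
    continuous = np.sin(2 * np.pi * 220 * t)   # tone with no pauses at all
    paused = continuous.copy()
    paused[rate:rate + rate // 2] = 0.0        # insert a half-second "breath"
    print(looks_suspicious(continuous, rate))  # True
    print(looks_suspicious(paused, rate))      # False
```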
“Face-Swapping and Voice-Changing” Scams and Devices That Peep and Eavesdrop: How to Raise Awareness and Protect Personal Privacy
Ren Min Ri Bao· 2025-08-25 00:13
When buying electronic devices, prefer major brands with a solid security reputation, as their privacy protections and security safeguards tend to be more complete. When a smart camera is not in use, cover it or unplug it for physical isolation. The first time a device connects to the internet, immediately change the default username and password to a strong, reasonably complex one. Also review regularly which permissions your phone apps hold over smart devices, grant only what is necessary, and revoke the rest, so that criminals cannot "slip in through the gaps."

Police remind the public that fraud carried out with new technologies such as "AI face-swapping" is in essence no different from traditional fraud. Where the conduct constitutes the crime of fraud, criminal liability is pursued under Article 266 of China's Criminal Law. Peeping at or eavesdropping on others' private lives by taking control of cameras or similar devices may constitute the crimes of illegally intruding into a computer information system, illegally obtaining computer information system data, or illegally controlling a computer information system.

In daily life, individuals must stay alert and build the first line of defense for their personal information security. If you suspect an information leak or fraud, preserve evidence such as screenshots, recordings, and transaction records, and call 110 to report it promptly. (Reporter: Zhang Tianpei) [Editor: Yuan Qing]
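The credential advice above can be made concrete with a small check run before setting a new device password. A minimal sketch follows, assuming an illustrative list of factory defaults and arbitrary strength rules (12+ characters, at least 3 of 4 character classes); real products should follow their vendor's guidance.

```python
# Minimal sketch of the "replace default credentials" advice: validate a
# candidate password before applying it to a newly connected device.
# The rules and the default-password list are illustrative assumptions.
import re

COMMON_DEFAULTS = {"admin", "password", "123456", "12345678", "root"}

def is_strong(password: str, min_len: int = 12) -> tuple[bool, str]:
    if password.lower() in COMMON_DEFAULTS:
        return False, "matches a common factory default"
    if len(password) < min_len:
        return False, f"shorter than {min_len} characters"
    classes = [r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"]
    if sum(bool(re.search(p, password)) for p in classes) < 3:
        return False, "use at least 3 of: lower, upper, digit, symbol"
    return True, "ok"

if __name__ == "__main__":
    for pw in ("admin", "hunter2", "Tr0ub4dor&3-extra"):
        print(pw, "->", is_strong(pw))
```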
Policy Easing? 111 Backlog Dramas Await Their Spring: A Full Accounting of 20.3 Billion Yuan in Investment
36Ke· 2025-08-20 00:50
Core Viewpoint
- The recent policy from the National Radio and Television Administration aims to enhance the supply of quality television content, particularly benefiting the drama industry by encouraging creativity and market-driven production [1][6].

Group 1: Impact on the Industry
- The new measures are seen as a significant boost for the drama industry, with expectations that long-pending dramas may finally be aired [1][3].
- Companies such as Baida Qiancheng, Huace Film & TV, and Mango Excellent Media saw their stock prices surge on the news, as they hold substantial numbers of backlog dramas [1][3].
- In 2025, 71 backlog dramas are scheduled for release, 36 fewer than the previous year, suggesting the backlog is being worked down even as overall numbers decline [3].

Group 2: Financial Implications
- 111 dramas produced between 2015 and 2022 remain unaired; 36 of them have a combined production cost of approximately 9.071 billion yuan, indicating significant sunk costs in the industry [3][6].
- The total estimated sunk cost of backlog dramas could reach 16.571 billion to 20.321 billion yuan, highlighting the financial stakes involved [3][6].

Group 3: Challenges and Opportunities
- A significant share of backlog dramas (24%) is delayed by controversies surrounding key actors, which has caused financial losses for production companies [7][8].
- The industry is exploring ways to revive backlog dramas, including AI-based actor replacement and sales to secondary platforms [25][31].
- Backlog dramas selling at low prices (2-5 million yuan per episode) reflects the financial strain on production companies, with many opting for revenue-sharing models to recoup costs [6][29].

Group 4: Market Dynamics
- The market has undergone a structural transformation: an earlier surge in production created an oversupply of content that policy changes are now addressing [15][18].
- A shift toward direct collaboration between platforms and production companies is reducing the role of intermediaries, which may squeeze the profitability of backlog dramas [32].
Can “AI Face-Swapping” Get Past Facial Recognition Defenses?
Yang Shi Wang· 2025-07-19 16:48
Core Viewpoint
- The case highlights vulnerabilities in facial recognition systems created by advances in AI, specifically the use of AI face-swapping software to commit fraud [1][2][4].

Group 1: Incident Overview
- The defendant, surnamed Fu, illegally obtained more than 1.95 million pieces of personal information and used AI face-swapping software to access the payment accounts of 23 victims [2][4].
- Fu changed the payment passwords and bound phone numbers of 5 victims, and fraudulently used one victim's bank card to buy two mobile phones totaling 15,996 RMB [2][4].

Group 2: Legal Consequences
- The court sentenced Fu to 4 years and 6 months in prison for multiple crimes, including violating citizens' personal information and credit card fraud, and ordered him to pay 15,996 RMB in damages [6].
- The case prompted the prosecution to issue a legal risk warning about vulnerabilities in the financial platform exploited in the fraud, which has since been rectified [6].

Group 3: Security Implications
- Experts caution that no network is completely secure and that each update may introduce new vulnerabilities into facial recognition systems [7].
- There is consensus that while vulnerabilities are inevitable, advances in technology can help mitigate the risks of facial recognition attacks [8].

Group 4: Recommendations for Improvement
- Organizations that use facial recognition technology should implement stricter security measures and strengthen their anti-fraud capabilities; one such measure is sketched after this summary [11].
- Individuals are encouraged to be more vigilant about protecting their personal information to prevent unauthorized access [11].
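One way to read the "stricter security measures" recommendation is that a face match alone should never authorize a sensitive action. Below is a minimal sketch pairing a hypothetical face-match score with an RFC 6238-style time-based one-time password, using only the Python standard library; the 0.9 threshold, the demo secret, and the face_match_score input are assumptions for illustration.

```python
# Sketch of one "stricter measure" in the spirit of the recommendations:
# a face-match score alone never authorizes a sensitive action; a
# time-based one-time password (RFC 6238 style) is also required.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authorize(face_match_score: float, user_code: str, secret_b32: str) -> bool:
    # Both factors must pass; 0.9 is an assumed face-match threshold.
    return face_match_score >= 0.9 and hmac.compare_digest(user_code, totp(secret_b32))

if __name__ == "__main__":
    secret = base64.b32encode(b"demo-secret-key!").decode()
    print(authorize(0.97, totp(secret), secret))  # True: both factors pass
    print(authorize(0.97, "000000", secret))      # False: bad one-time code
```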
Over 10 Fund Companies Issue Warnings! Financial “Li Gui” Impostors Are on the Prowl: How to Respond?
Quan Shang Zhong Guo· 2025-06-07 09:01
Core Viewpoint
- Recent announcements from over 10 fund companies warn investors about financial fraud, highlighting the evolving tactics of scammers who use fake apps and manipulate fund accounts to deceive investors [1][2][3].

Group 1: Fraud Tactics
- Scammers are increasingly using sophisticated methods, including fake apps that closely mimic legitimate fund company interfaces, to lure investors [4][5].
- Recent reports indicate that fraudsters are taking control of clients' fund accounts to make quick redemptions and transfers, thereby stealing funds [5].
- Fund companies note that scammers are using AI technologies, such as deepfakes, to enhance the deception [1][6].

Group 2: Company Responses
- Fund companies such as Dachen Fund and Hongde Fund have issued multiple warnings about impersonators using their names to promote fraudulent investment schemes [2][3].
- Several other fund companies, including Nuon Fund and Fuyong Fund, have released similar clarifications [3].
- The industry is actively raising awareness and providing guidance on identifying fraudulent activities [6][7].

Group 3: Prevention Guidelines
- A "three no's, three mores" principle has been proposed to help investors avoid scams: do not click unknown links, do not trust unsolicited information, and do not disclose personal information [7][8].
- Investors are encouraged to verify the identity of anyone claiming to be a fund company employee and to check product legitimacy through official regulatory websites; a minimal link-checking sketch follows this summary [8].
- It is crucial to confirm that any transferred funds go to official company accounts; personal accounts should never be used [8].
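The "verify through official channels" guideline can be partially mechanized: before trusting a link received in a chat, check that its host is exactly an allowlisted official domain or a subdomain of one. A minimal sketch follows; the examplefund.com entry is a hypothetical placeholder (csrc.gov.cn is the China Securities Regulatory Commission's site), and a real allowlist would come from regulator-published directories.

```python
# Sketch of the "verify through official channels" guideline: accept a
# link only if its host is an allowlisted official domain or one of its
# subdomains. The allowlist entries here are examples, not a real list.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"examplefund.com", "csrc.gov.cn"}

def is_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

if __name__ == "__main__":
    print(is_official("https://app.examplefund.com/login"))      # True
    print(is_official("https://examplefund.com.evil.io/login"))  # False: look-alike host
```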
Celebrities Are Being Ruined by “Fake Endorsements”
Chuang Ye Bang· 2025-05-18 23:55
Core Viewpoint
- The article discusses the rise of AI-generated voice scams, particularly in the context of celebrity endorsements, highlighting how easily these technologies can be misused for fraudulent advertising and how hard such practices are to regulate [3][11][17].

Summary by Sections

AI Voice Scams
- Numerous celebrities have been impersonated using AI technology to promote products without their consent, deceiving consumers on a wide scale [3][9].
- Specific instances, such as fake endorsements attributed to athletes and actors, have caused significant consumer confusion and financial loss [7][10].

Impact on Consumers
- Consumers, especially older individuals, are particularly vulnerable to these scams, often misled by realistic AI-generated content [11][13].
- The proliferation of these scams has created a gray market for AI voice-cloning services, accessible to anyone with minimal investment [11][15].

Regulatory Challenges
- Current platforms have failed to adequately warn users about the potential for AI-generated content, compounding the problem [11][14].
- Legislative efforts are under way to address these issues, including a "whitelist system" for AI-generated content and the recognition of voice rights in legal contexts [15][17].

Future Considerations
- The article raises concerns about the long-term impact of AI voice cloning on authenticity and trust in media, suggesting society may need new methods to verify the authenticity of content [15][17].
- Experts warn that as the technology advances, distinguishing real from AI-generated content will become increasingly difficult, necessitating a cultural shift toward skepticism and verification [14][17].
“AI Face-Swapping” Detectable in 5 Seconds? Hands-On Tests Find Three Phones Failed to Spot It
Nan Fang Du Shi Bao· 2025-05-14 00:58
Core Viewpoint
- The rise of AI deepfake technology has led to significant financial losses from scams, prompting mobile manufacturers to develop detection features to combat these fraudulent activities [3][4].

Group 1: Financial Impact
- From 2022 to early 2024, nearly 100 cases of AI deepfake scams in China resulted in economic losses exceeding 200 million yuan [3].
- The growing prevalence of AI deepfake scams has raised concerns among lawmakers, leading to calls for legislative action during the National People's Congress [2].

Group 2: Technological Developments
- Several mobile manufacturers, including Honor, Xiaomi, and OPPO, have introduced AI detection features in their operating systems to flag potential deepfake risks during video calls; a sketch of the general pattern follows this summary [3][4].
- Honor's MagicOS 9.0, Xiaomi's HyperOS 2.0, and OPPO's ColorOS integrate AI detection capabilities, with claimed accuracy rates above 96% [3][4].

Group 3: Testing and Effectiveness
- Tests conducted by reporters found that the AI detection features on Honor, Xiaomi, and OPPO devices failed to identify AI-generated video and audio during real-time calls [5][6].
- Despite the manufacturers' claims, the detection capabilities remain under development, with no definitive standards of effectiveness established [6][7].

Group 4: Expert Recommendations
- Experts suggest manufacturers should present AI detection features as supplementary tools rather than definitive solutions, emphasizing the importance of user vigilance [7].
- There are concerns about user privacy and the data-collection practices behind AI detection technologies, highlighting the need for transparency and user consent [7].
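The articles do not describe how the vendors' detectors work internally, but a generic pattern for in-call detection is to score each incoming frame with a classifier and warn once a rolling average crosses a threshold. The sketch below illustrates only that control flow; fake_probability is a stub standing in for a trained on-device model, and the window and threshold values are assumptions.

```python
# Generic pattern behind in-call detection features (not any vendor's
# actual implementation): score each frame and warn when the rolling
# average crosses a threshold. fake_probability is a stub; a real
# system would run a trained model on-device.
from collections import deque

def fake_probability(frame: bytes) -> float:
    """Hypothetical per-frame classifier; replace with a real model."""
    return 0.8 if frame.startswith(b"FAKE") else 0.1

def monitor(frames, window: int = 30, threshold: float = 0.6):
    recent = deque(maxlen=window)
    for i, frame in enumerate(frames):
        recent.append(fake_probability(frame))
        if len(recent) == window and sum(recent) / window > threshold:
            yield i  # frame index at which a warning would be raised

if __name__ == "__main__":
    stream = [b"REAL"] * 40 + [b"FAKE-frame"] * 40  # fakes begin at frame 40
    alerts = list(monitor(stream))
    print("first warning at frame:", alerts[0] if alerts else None)
```

Note the design trade-off the rolling window implies: a larger window suppresses one-frame false alarms but delays the warning, which may be one reason real-time detection is hard to get right in the tests the article describes.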
Celebrities Are Being Ruined by “Fake Endorsements”
Hu Xiu· 2025-05-09 10:19
Core Viewpoint
- The rise of AI-generated voice fraud in advertising has led to widespread misuse of celebrities' likenesses and voices, creating a new wave of commercial deception that is difficult for consumers to detect [1][10][18].

Group 1: AI Voice Fraud Incidents
- Numerous fake accounts have emerged on short-video platforms, using AI to create deceptive advertisements featuring celebrities such as Quan Hongchan, who are shown promoting unrelated products [2][3].
- High-profile figures, including Zhang Wenhong and Lei Jun, have also been victims of AI voice scams, with their likenesses used in misleading marketing campaigns [8][19].
- The technology produces highly realistic fake videos that consumers struggle to tell from authentic ones, as in the case of Zhang Xinyu, whose voice was cloned to promote weight-loss products [11][12].

Group 2: The Technology Behind AI Voice Cloning
- AI voice-cloning technology can replicate a person's voice from minimal samples, making it possible for anyone to create fake content [22][24].
- The proliferation of AI voice apps lets users generate celebrity-like voices for as little as 10 yuan, fueling a surge in fraudulent activity [25][26].
- The low cost and easy access of AI voice-cloning tools have driven the rapid growth of this gray market, with many individuals unaware of the potential for misuse [15][27].

Group 3: Regulatory and Societal Responses
- There is growing recognition of the need for legal frameworks covering AI-generated content, with recent court rulings affirming the protection of individuals' voice rights against unauthorized use [28].
- New regulations, such as a "whitelist system", are being introduced to help identify AI-generated content, although their effectiveness remains uncertain; a hypothetical sketch of such a labeling flow follows this summary [29].
- The societal implications of AI voice fraud raise concerns about the future of authenticity in media, necessitating a cultural shift toward skepticism and verification of content [27][29].
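The "whitelist system" is described here only at the policy level; one hypothetical way such labeling could work technically is a signed metadata label that platforms verify before display. The sketch below illustrates the idea with an HMAC-signed JSON label; the field names, key handling, and signing scheme are all assumptions for illustration, not the actual regulation's mechanism.

```python
# Hypothetical illustration of a labeling flow like the "whitelist
# system" above: a producer attaches a signed "ai_generated" label to
# content metadata, and a platform verifies it before display.
import hashlib, hmac, json

SECRET = b"registry-shared-key"  # hypothetical registry-issued key

def sign_label(content: bytes, ai_generated: bool) -> dict:
    label = {"sha256": hashlib.sha256(content).hexdigest(),
             "ai_generated": ai_generated}
    payload = json.dumps(label, sort_keys=True).encode()
    label["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return label

def verify_label(content: bytes, label: dict) -> bool:
    body = {k: v for k, v in label.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, label.get("sig", ""))
            and body["sha256"] == hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    clip = b"synthetic-voice-bytes"
    label = sign_label(clip, ai_generated=True)
    print(verify_label(clip, label))         # True: intact, labeled content
    print(verify_label(b"tampered", label))  # False: content does not match label
```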
Phones Add AI Face-Swap Detection, but Does It Work? Hands-On Tests of Three Brands: None Detected It
Nan Fang Du Shi Bao· 2025-05-09 04:10
Core Insights
- The rise of AI-generated content, particularly "deepfake" technology, has led to an increase in fraud cases, prompting calls for legislative action to regulate AI face-swapping and voice imitation [1][3][4].
- Several smartphone manufacturers have introduced AI detection features to combat these fraudulent activities, but initial tests show the features have not been effective at identifying AI-generated content [5][7].

Group 1: Fraud Incidents and Economic Impact
- From 2022 to early 2024, nearly 100 fraud cases involving "AI face-swapping" occurred in China, causing economic losses totaling approximately 200 million yuan [3].
- The Ministry of Industry and Information Technology has indicated that it is collaborating with mobile device manufacturers to launch risk-alert features for AI face-swapping scams [3].

Group 2: AI Detection Features by Manufacturers
- Multiple smartphone brands, including Honor, Xiaomi, and OPPO, have integrated AI detection capabilities into their operating systems to flag potentially AI-generated content during video calls [3][4].
- Honor's MagicOS 9.0 includes an AI detection module for video calls, while Xiaomi's HyperOS 2.0 provides alerts for potential AI face-swapping and voice-forgery risks [3][4].

Group 3: Testing and Effectiveness of AI Detection
- Tests on Honor, Xiaomi, and OPPO smartphones found that none successfully identified AI-generated video or audio in simulated fraud scenarios [5][7].
- Honor's detection feature prompted a user alert but ultimately detected no AI manipulation, while Xiaomi and OPPO provided minimal feedback during the tests [7].

Group 4: Expert Recommendations and User Awareness
- Experts suggest manufacturers should promote AI detection features as supplementary tools rather than definitive solutions, emphasizing the need for users to remain vigilant [8].
- There are concerns about manufacturers' responsibilities for user privacy, the accuracy of their fraud-detection claims, and the potential risks of ineffective detection [8].