AI Fraud
Actor Wang Jinsong encounters AI-forged videos of himself on WeChat Channels and Douyin; unable to tell real from fake, he calls them "scary"
Ge Long Hui· 2026-02-27 21:41
PChome reported on February 27 that well-known actor Wang Jinsong posted on Weibo yesterday, saying his likeness and voice had been stolen by AI to generate videos in which the imagery, lip movements, and voice were so realistic that the fakes were impossible to spot, sparking widespread concern among netizens about AI infringement and AI fraud. Wang noted in the comments that he had received AI-forged videos shared by netizens and friends several times before; those had seemed crudely made, with obvious flaws, but this time the quality of the forgery had improved so dramatically that it caught him off guard. After the incident, Wang immediately filed complaints with the platforms and posted screenshots of the reports in the comments; the offending videos have since been taken down and deleted. He admitted in the exchanges that such realistic AI forgery technology may in the future be used for even worse portrait-rights infringement and online fraud, with risks that are hard to estimate, and he called for stronger platform review, legal regulation, and related safeguards. Under China's Civil Code, using AI to forge another person's likeness or voice without permission may infringe their portrait and voice rights, and using such forgeries for fraud additionally incurs criminal liability. ...
Beware of AI fraud! Scammers used AI to impersonate a grandson's voice, swindling elderly victims out of tens of thousands in cash
Wu, from Hubei, went to an elderly couple's home, pretended to dial their grandson's phone, and told them the grandson was away and "urgently needed money." On the other end of the line, an accomplice used AI voice-cloning technology to imitate the grandson's voice so precisely that even the sobbing matched. The panicked couple handed over the money. 21st Century Business Herald reporter Zhang Chi: a new type of AI scam has recently emerged that specifically targets the elderly, using AI to simulate a grandson's voice; in one case it swindled 60,000 yuan from elderly victims. The Supreme People's Court has issued an urgent reminder: the voice of a supposed "relative" on the phone may be fake. If you receive a call from an unfamiliar number asking for money, no matter how familiar the voice sounds, hang up, contact your family, and verify in person. This was one of the typical AI-fraud cases released yesterday (February 26) by the Supreme People's Court. The gang divided its work: one side used AI online to carry out the fraud remotely, while the other sent someone door-to-door to collect the cash. Share this, and don't let AI swindle away your parents' retirement savings! ...
Spring Festival Asset Security Handbook: How to Safeguard Your Tokens?
Xin Lang Cai Jing· 2026-02-15 10:37
(Source: Wu Shuo) Author: imToken. Link: https://www.techflowpost.com/zh-CN/article/30354 Disclaimer: This article is reposted content; readers can find more information via the original link. If the author has any objection to the form of this repost, please contact us and we will make changes as the author requests. The repost is for information sharing only, does not constitute investment advice, and does not represent the views or position of Wu Shuo.

With the Lunar New Year approaching, it is once again time to bid farewell to the old year, welcome the new, and take stock: over the past year, did you step into a Rug Pull as a project team ran off with funds? Did you "buy the top" because of a shilling KOL? Or did you fall victim to increasingly rampant phishing attacks, losing funds by mis-clicking a link or mis-signing a contract?

Objectively speaking, the Spring Festival itself does not create risk, but it is very likely to amplify it: when funds move more frequently, when attention is scattered by holiday plans, and when the pace of trading quickens, any small slip is more easily magnified into a loss.

So if you are planning to adjust positions or reorganize funds around the holiday, consider giving your wallet a "pre-holiday security checkup" first. Starting from several real and high-frequency risk scenarios, this article systematically walks through the concrete steps ordinary users can take.

1. Beware of "AI face-swap" and voice-simulation scams. The recently viral SeeDance 2.0 has once again driven home a fact: in an era of accelerating AGI penetration, "seeing is ...
High-profile media on Handan | Fun legal education in Congtai District, Handan brightens youth rule-of-law education
Xin Lang Cai Jing· 2025-12-31 00:19
Group 1
- The core idea of the article is the implementation of legal education activities for youth in Congtai District, Handan City, through engaging formats like "Police Micro Theater" and "Legal Script Murder" [1][9]
- The activities aim to enhance legal awareness and self-protection skills among students, addressing issues such as campus bullying and AI fraud [9]
- Over 3,000 youth have participated in these legal education initiatives this year, significantly improving their legal literacy and providing a protective legal framework for their healthy development [9]
2025北京地区学生网民网络安全感满意度调查发布
Xin Lang Cai Jing· 2025-12-25 05:03
Core Insights
- The report titled "2025 Beijing Student Internet Users' Cybersecurity Satisfaction Survey Analysis" highlights the current state of cybersecurity among student internet users in Beijing, emphasizing the need for improved protective measures and governance in the digital space [1][2].

Group 1: Cybersecurity Satisfaction
- In 2025, the positive evaluation rate of overall cybersecurity among student internet users in Beijing is 65.25%, with 51.92% of students feeling an improvement in their sense of security compared to the previous year [1][2].
- Despite the optimistic outlook, there is a noted decline in cybersecurity satisfaction compared to 2024, aligning with the national average trend [1].

Group 2: Cyber Threats and Risks
- A significant portion of students reported encountering various cyber threats, including illegal information dissemination, personal information infringement, network intrusion attacks, and online fraud, with the rates of network attacks and fraud being lower than the national average [2].
- New types of scams are prevalent, with 26.09% of students experiencing AI voice imitation scams, 26.47% encountering phishing emails generated by ChatGPT, and 23.57% facing AI deepfake video call scams [2].

Group 3: Personal Information Protection
- 71.93% of Beijing student internet users rated the state of personal information protection positively, yet 40.2% still perceive widespread personal information leakage [2].
- Although perceived information leakage has declined compared to the previous year, new risks associated with short videos and social platforms are causing localized concerns [2].

Group 4: Educational and Governance Implications
- The report serves as a crucial reference for cybersecurity education and provides a basis for collaboration between the education system and society to address cybersecurity challenges [3].
Beware of "AI beauties" setting "sweet traps"
Xin Lang Cai Jing· 2025-12-20 06:44
Group 1
- The article discusses a sophisticated online dating scam orchestrated by a group led by Yang, which involved a five-step process to defraud victims, resulting in a total loss of over 1.71 million yuan from 15 male victims within a year [1][4]
- The scam utilized AI-generated videos and emotional manipulation to build trust with victims, leading them to send money under false pretenses, such as medical emergencies [2][3]
- The operation was structured like an assembly line, with specific roles assigned to different members for tasks such as account management, emotional engagement, and money laundering [3][4]

Group 2
- The group employed various tactics to maintain the victims' trust, including sending fake medical documents and using voice changers during calls to simulate authenticity [3][4]
- Law enforcement faced challenges in gathering evidence due to attempts to destroy digital records, but successfully recovered crucial data that linked the suspects to the crimes [5]
- The judicial outcomes included significant prison sentences for the main perpetrators, with Yang receiving eleven years for fraud, while others received varying sentences for their roles in the operation [6]
Fake images used to cheat e-commerce refunds, large language models "brainwashed and tamed": Southern Metropolis Daily report exposes the AI gray industry
Nan Fang Du Shi Bao· 2025-12-18 10:35
Core Insights
- The rise of generative AI has led to an increase in AI-related fraud and misinformation, particularly in the e-commerce sector, highlighting the challenges of distinguishing truth from falsehood in a technologically advanced society [2][4]
- A report released at the eighth Woodpecker Data Governance Forum reviews 118 cases of generative AI risks, focusing on the societal trust challenges and ethical dilemmas posed by human-AI interactions [4][5]

Group 1: Impact on Society and Individuals
- Generative AI has significantly altered the landscape of information production and dissemination, leading to an exponential increase in fake content across personal, industry, and societal levels [5]
- AI-generated misinformation has resulted in various forms of fraud, including "AI yellow rumors" and scams targeting vulnerable populations, particularly the elderly [5][6]
- The report highlights a case where a PhD student at the University of Hong Kong cited 24 AI-generated fake references in a paper, leading to its retraction and an investigation [6]

Group 2: Legal and Ethical Concerns
- Instances of lawyers using AI to generate fictitious legal cases have emerged, raising concerns about the integrity of legal proceedings [6]
- The report discusses the emergence of a gray industry exploiting generative AI, manipulating data to influence AI model outputs, which can mislead users into believing the information is factual [7]
- The ethical implications of AI's "flattering" algorithms are examined, particularly in the context of human-AI relationships and the potential for emotional manipulation [8]

Group 3: Regulatory Responses and Recommendations
- The report emphasizes the need for global consensus and institutional rules to address the challenges posed by AI-generated misinformation, advocating for stronger platform regulation and cross-border collaboration [7]
- Recent lawsuits against AI platforms like Character.AI and OpenAI highlight the legal accountability issues surrounding AI interactions, particularly concerning youth safety [9][10]
- Various countries are implementing regulations to protect minors from AI-induced harm, with recommendations for AI products to prioritize user mental health and transparency in design [11]
Using AI voice-cloning technology to defraud elderly people of their money
Ren Min Wang· 2025-12-16 01:01
Core Points
- The case highlights the use of AI voice simulation technology in scams targeting elderly individuals, resulting in a total loss of 60,000 yuan for three victims [1][2]
- The defendant, Wu, was sentenced to two years and one month in prison and fined 15,000 yuan for his role in the scam [2]

Group 1: Scam Details
- In April 2025, Wu received instructions from an online accomplice to collect scam funds, using AI technology to simulate the voice of the victims' relatives [2]
- The victims were manipulated into believing they were helping their "grandson" who was in trouble, leading to the immediate transfer of funds [1][2]
- The total amount collected from the three victims was 60,000 yuan, with each victim losing 20,000 yuan [1][2]

Group 2: Legal Proceedings
- The court found Wu guilty of fraud, emphasizing the significant amount involved and the method of deception used [2]
- The court considered Wu's confession, restitution to the victims, and acceptance of responsibility when determining the sentence [2]

Group 3: Technology and Prevention
- The case illustrates the evolving nature of scams with advancements in AI technology, making it difficult for elderly individuals to discern the authenticity of calls [3]
- The judge advised the public, especially the elderly, to remain vigilant and verify any requests for money through multiple channels [3]
- There is a call for younger generations to assist the elderly in understanding new technologies to enhance their ability to recognize potential scams [3]
"Hearing is not believing": AI voice-cloning scams have already deceived multiple elderly victims
Yang Shi Wang· 2025-12-14 18:45
Core Viewpoint
- The article highlights a series of fraud cases in Huangshi, Hubei, where elderly victims were deceived by scammers impersonating their grandchildren using advanced AI voice technology, resulting in significant financial losses for the victims [1][2][6].

Group 1: Fraud Cases Overview
- Three elderly individuals in Huangshi were scammed out of a total of 60,000 yuan (approximately 8,500 USD) after receiving phone calls from individuals impersonating their grandchildren [2].
- The scammers used familiar voices to create a sense of urgency, convincing the victims to prepare cash for supposed emergencies [2][7].
- The police investigation revealed that all three cases involved the same suspect, Wu, who was later apprehended; the full 60,000 yuan was returned to the victims [2][3].

Group 2: Legal Proceedings
- Wu was sentenced to two years and one month in prison and fined 15,000 yuan (approximately 2,100 USD) for his role in the scam [3].
- The court determined that Wu knowingly assisted in the fraud by collecting cash from the victims, fulfilling the criteria for being an accomplice in the crime [4].

Group 3: Technology Utilization in Fraud
- The fraudsters employed AI voice technology to convincingly mimic the victims' grandchildren, making it difficult for the elderly to discern the authenticity of the calls [6][7].
- The use of AI for voice simulation and real-time interaction was identified as a key factor in the success of the scams, as many elderly individuals are unfamiliar with such technology [7].

Group 4: Preventive Measures
- The article emphasizes the importance of skepticism towards urgent requests from familiar contacts and advises against hastily transferring money [8].
- Recommendations include verifying identities through personal details known only to the victim and avoiding sharing sensitive information like bank passwords or verification codes [8].
AI goes to the countryside, hitting elderly men and women hard
创业邦· 2025-11-29 01:08
Core Viewpoint
- The article highlights the exploitation of elderly individuals in lower-tier cities by AI scammers, who take advantage of their limited understanding of technology and financial literacy, leading to significant financial losses for these vulnerable groups [6][20].

Group 1: AI Scams Targeting the Elderly
- Scammers are targeting elderly individuals with various AI-related schemes, such as "AI financial literacy courses" and "AI digital grandchildren," which are prevalent on social media and short video platforms [6][21].
- Many elderly victims, like Song Yanyu, are lured into these scams by promises of easy income through AI-generated content, often leading to substantial financial losses [9][13].
- The scams are particularly effective in economically disadvantaged areas, where the elderly are more susceptible to misinformation and less likely to have access to protective resources [21][23].

Group 2: Psychological Manipulation
- Scammers utilize emotional manipulation, creating a sense of companionship through AI-generated interactions, which makes elderly individuals more likely to trust and invest in these fraudulent schemes [26][28].
- The loneliness experienced by many elderly individuals exacerbates their vulnerability, as they seek connection and validation through digital means [27][28].
- The psychological impact of these scams extends beyond financial loss, affecting the victims' self-worth and mental health [28].

Group 3: The Role of Technology
- The article discusses how the rapid advancement of AI technology has created a gap in understanding among older populations, making them prime targets for exploitation [20][23].
- Scammers employ sophisticated AI tools to create convincing content, including deepfake videos and AI-generated voices, which further complicates the ability of victims to discern fraud [27][28].
- Despite regulations aimed at identifying AI-generated content, many elderly individuals lack the knowledge to recognize these indicators, leaving them exposed to scams [27][28].