AI Fraud
Huatai Securities | Scammers' Thousand-Layer Playbook: The "High-Tech Tricks" of Investment Fraud in the AI Era
Xin Lang Ji Jin· 2025-09-24 09:21
Special topic: 2025 Financial Education Awareness Week: Safeguarding Financial Rights, Building a Better Life — the Fund Industry in Action. Technology is changing our lives, and as large AI models spread, many people have switched into "AI for everything" mode. In investing, however, AI is more of a double-edged sword: along with convenience, it breeds deeper risks. AI applications now permeate every corner of daily life, and in the public mind AI's impressive ability to gather and process information stands for "efficiency, professionalism, convenience, and authority." Many people are now used to putting every kind of question to AI, investment questions included, and this habit has become scammers' newest foothold. On social platforms, criminals claim that AI tools can precisely predict stock movements and post enticing investment tips, for example asserting that a certain stock has strong short-term upside. To prove the profitability of AI stock-picking, some accounts post screenshots from securities software showing large gains and profitable positions. Behind these screenshots, however, is usually a simulated trading book: criminals fabricate securities software and virtual trading accounts to manufacture fake profits and lure investors in. Whether in a livestream, a video, or even an ongoing video call, the person you see may not be the real person at all, but an AI-synthesized "fake face." Criminals use AI to forge brokerage employee credentials, synthesize analyst videos, and even impersonate professional investment institutions' official ...
From the Triads to AI Face-Swapping: Asia's Criminal Underworld Changes Protagonists for the First Time
Hu Xiu· 2025-09-22 09:05
Hello everyone, I'm Hua Zong. I've carried many labels over the years; since 2020 I've been making documentaries, and today I work mainly as an independent investigator and documentary director. The organizers have arranged two sessions for me; today's runs 30 minutes, and I'd like to share some thoughts drawn from three years of on-the-ground research. I want to start from telecom fraud and sketch the macro landscape of the black and gray industries in East and Southeast Asia. My views may well be entirely wrong; please treat them as just one perspective. First, a definition: "black industry" refers to organized, large-scale, industrialized criminal activity that reaps outsized profits through illegal or outright criminal means. My core argument today is that, among the underworld's many trades, this wave of telecom-fraud industrialization is the equivalent of the Industrial Revolution that began in eighteenth-century Britain: it is violently disrupting, and will inevitably overturn, Asia's traditional underworld. The "singularity moment" and the "East rising, West declining" shift are the outward signs of that process. As for what Asia's traditional underworld is, you have all seen plenty of it in film and television. As early as 68 years ago, the Hong Kong Police Force set up "O Department," formally the Organized Crime and Triad Bureau. The triads are a classic specimen of the traditional underworld. Their forerunner was the Tiandihui (Heaven and Earth Society), the one led by grand master Chen Jinnan. It also goes by the famous name "Hongmen"; Sun Yat-sen and Huang Xing were both members, and 68 of the 72 martyrs of the Guangzhou Huanghuagang Uprising came out of Hongmen. Within Hongmen there was a Zhigongtang, the ancestor of the Zhigong Party, one of today's eight minor political parties. The modern East Asian underworld is ...
AI Fraud vs. AI Anti-Fraud: A "Duel of Magic"?
Hu Xiu· 2025-09-03 02:49
Core Insights
- The article discusses the escalating battle between AI scams and AI anti-fraud measures, highlighting the sophistication of AI-generated deception and the response from technology in combating these threats [1]

Group 1: AI Scams
- Merchants on platforms like Taobao are using AI-generated images to deceive consumers, showcasing the increasing complexity of AI fraud techniques [1]
- Recent statistics indicate that Taobao has intercepted 100,000 fake AI images, reflecting the scale of the issue [1]

Group 2: AI Anti-Fraud Measures
- In response to traditional anti-fraud methods being ineffective, AI anti-fraud agents have emerged as a new technological countermeasure [1]
- The article emphasizes that AI can now perform tasks such as face-swapping, simulating friends and family, and employing persuasive AI-generated language to manipulate consumers [1]
New AI Scam Tactics! Beware of Traps Dressed Up in High Tech!
Xin Lang Cai Jing· 2025-08-24 05:23
Core Viewpoint
- The article highlights the emergence of new AI-related scams, urging the public to be cautious of deceptive practices disguised under advanced technology [1]

Group 1: AI Scams
- The article discusses various types of AI scams, including fraudulent phone calls claiming family emergencies [1]
- It questions the credibility of AI-generated recommendations that promise guaranteed profits, indicating a rise in such deceptive claims [1]
- The piece emphasizes the importance of awareness and education to avoid falling victim to these high-tech traps [1]
"AI Dan Bin" on the Loose! Investors Need to Keep Their Eyes Open
Core Viewpoint
- The financial industry is facing a surge in fraudulent activities, particularly involving impersonation and illegal stock recommendation schemes using AI-generated content [1][2][4].

Group 1: Fraudulent Activities
- Numerous new accounts have been registered on internet platforms that utilize AI technology to create images or videos of Dan Bin, engaging in illegal stock recommendation activities [2][3]
- Fraudsters have been using Dan Bin's personal information to impersonate him and promote various investment schemes, leading to significant financial losses for victims [3][4]

Group 2: Regulatory Warnings
- Multiple regional securities regulatory bodies have issued warnings about the rise of financial fraudsters impersonating legitimate financial institutions and professionals [4][6]
- Specific cases include fraudsters posing as private equity staff to lure investors into stock trading groups, promising unrealistic returns and using fake apps to facilitate scams [4]

Group 3: Investor Awareness
- Investors are urged to remain vigilant and verify the authenticity of investment opportunities through official channels, as fraudulent entities often exploit social media and messaging platforms [5][6]
- It is recommended that investors collect evidence of fraudulent activities and report them to relevant authorities promptly [6]
"AI Lovers" Are Harvesting Hearts and Cash Online
36Kr· 2025-08-04 07:57
Core Viewpoint
- The article discusses a growing online scam involving AI-generated personas that deceive individuals into emotional relationships, ultimately leading to financial exploitation.

Group 1: The Nature of the Scam
- The scam involves AI-generated images and scripted interactions that create the illusion of a romantic relationship, leading victims to emotionally invest and eventually send money [1][4][11]
- Victims, like the interviewee Xiao Wang, often believe they are engaging with a real person, only to discover that the persona is a product of a sophisticated system designed to extract money [9][10]

Group 2: Mechanisms of Operation
- The process begins with the creation of attractive AI-generated images and profiles, which are then used to engage potential victims through social media platforms [5][11]
- Operators of these scams utilize a standardized script for interactions, gradually building emotional connections before making subtle requests for money or gifts, especially around significant dates like holidays [6][8][11]

Group 3: Emotional Manipulation
- The emotional manipulation is profound, as victims often feel genuine affection and connection, leading to feelings of betrayal when they realize the truth [9][10]
- The article highlights that the scam does not just rob victims of money but also erodes their trust in real relationships and their expectations of love [10][12]

Group 4: Legal and Ethical Implications
- The article raises questions about the legal responsibilities of platforms hosting these scams, as current laws may not adequately address the nuances of AI-generated interactions [11][12]
- There is a lack of clarity on who should be held accountable in these scenarios, whether it be the platform, the creators of the AI personas, or the operators of the scam [12][13]
No Intelligence, All Manual Labor! India's "AI" Super-Scam
Jin Tou Wang· 2025-07-11 09:32
Core Insights
- Builder.ai, once valued at $1.5 billion, has filed for bankruptcy after being exposed as a fraudulent operation that relied on manual coding rather than AI technology [1][9][10]
- The founder, Duggal, leveraged the AI hype to attract significant investments, creating a facade of an AI-driven software development platform [3][6][10]

Company Overview
- Builder.ai was founded by Duggal in 2016, aiming to standardize software development using AI and crowdsourced labor [3][6]
- The company claimed to have developed "Natasha," the world's first AI product manager, which was later revealed to be a front for manual coding by a team of Indian programmers [4][6]

Investment Journey
- Builder.ai raised $29.5 million in its Series A round, marking one of the largest funding rounds in Europe at the time [4]
- Subsequent funding rounds included $65 million in Series B and $100 million in Series C, with major investors like SoftBank and Microsoft participating [6][7]

Financial Misrepresentation
- An audit revealed that Builder.ai's reported revenue for 2024 was inflated by 300%, with actual revenue only $55 million instead of the claimed $220 million [9][10]
- The company's financial troubles led to a $37 million seizure by creditors, culminating in its bankruptcy filing on May 20, 2025 [9][10]

Industry Implications
- The collapse of Builder.ai highlights the vulnerability of investors in the tech sector, particularly in the AI space, where technology can often be opaque and difficult to verify [10][12]
- The incident reflects a broader trend of fraudulent practices in the AI industry, where companies may use low-cost labor and open-source models to create the illusion of advanced technology [12]
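The 300% inflation figure in the audit findings is plain arithmetic: claimed revenue of $220 million against actual revenue of $55 million overstates reality by (220 − 55) / 55 = 300%. A quick sanity check of that calculation (variable names are illustrative):

```python
# Figures from the audit findings reported above (USD).
claimed_revenue = 220_000_000
actual_revenue = 55_000_000

# Overstatement expressed as a percentage of actual revenue:
# (claimed - actual) / actual * 100
inflation_pct = (claimed_revenue - actual_revenue) / actual_revenue * 100
print(f"Revenue inflated by {inflation_pct:.0f}%")  # Revenue inflated by 300%
```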
AI Fraud Prevention on a HarmonyOS Foundation: Huawei's Pura 80 Series Redraws the Boundaries of Digital Security
Di Yi Cai Jing· 2025-06-29 00:30
Core Viewpoint
- The article highlights the increasing threat of AI-driven scams, particularly through deepfake technology and voice cloning, which have significantly outpaced traditional fraud prevention methods [1][2][4]

Group 1: AI Fraud Trends
- The amount involved in AI fraud cases in China surged from 0.2 million yuan in 2020 to 1.67 billion yuan in 2023, a compound annual growth rate of 1928.8% [2]
- In the first half of 2024 alone, the amount involved in AI fraud cases exceeded 1.85 billion yuan, more than ten times the figure for the same period a year earlier [2]
- AI-based deepfake fraud increased by 3000% in 2023, while phishing emails grew by 1000% [6]

Group 2: Public Concerns and Responses
- Public anxiety regarding personal privacy and security is escalating, especially as AI technology is misused for scams [2][4]
- A significant 92% of surveyed victims expressed fear over the extent of personal information that scammers possess [7]
- The need for effective identification and prevention of AI-driven scams has become a focal point for society [2]

Group 3: Technological Countermeasures
- Huawei's Pura 80 series, equipped with HarmonyOS 5.1, introduces AI privacy protection features aimed at addressing these security concerns [9][18]
- The AI anti-peeping feature alerts users when someone is looking at their screen, enhancing privacy in public spaces [10]
- The AI anti-fraud protection can identify deepfake video calls and alert users to potential scams during phone calls [12]

Group 4: Security Architecture
- The security features of Huawei's Pura 80 series are supported by the HarmonyOS 5.1 Star Shield security architecture, which has received CC EAL6+ certification [16]
- The architecture includes a "pure ecology" that creates a full-lifecycle security loop, blocking unreasonable permission requests and malicious app installations [17]
- Cross-device encryption ensures data security during interactions between devices, preventing unauthorized access [17]

Group 5: Industry Implications
- The advancements in AI privacy solutions signify a shift in how technology companies approach user privacy, positioning Huawei as a leader in this domain [18]
- The article emphasizes that privacy protection should be an inherent capability of smart devices rather than a burden on users [18]
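The headline growth rate in the fraud-trend figures can be checked with the standard CAGR formula. Assuming the 2020 figure is 0.2 million yuan and the 2023 figure is 1.67 billion yuan (the pairing of the reported numbers that is consistent with the stated 1928.8% rate; the unit reading is an assumption), the three-year compound rate works out as follows:

```python
# Figures as interpreted from the article (yuan). The unit reading is an
# assumption -- it is the pairing consistent with the reported 1928.8% CAGR.
amount_2020 = 0.2e6    # 0.2 million yuan
amount_2023 = 1.67e9   # 1.67 billion yuan
years = 3

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (amount_2023 / amount_2020) ** (1 / years) - 1
print(f"CAGR ~ {cagr * 100:.1f}%")  # CAGR ~ 1928.8%
```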
Anti-Illegal-Finance Awareness Month | Guard Your Wallet: Keep This Guide Handy!
Zhong Tai Zheng Quan Zi Guan· 2025-06-11 10:30
Core Viewpoint
- The article highlights the increasing diversity and sophistication of illegal financial activities, emphasizing the need for public awareness and preventive measures against such scams [2].

Group 1: Types of Illegal Financial Activities
- Illegal financial activities encompass all unlawful financial operations, including those conducted by legitimate financial institutions and those outside the financial system [3]
- Specific forms include:
  1) Illegal absorption of public deposits or disguised public deposit absorption, promising high returns and capital protection [3]
  2) Unauthorized fundraising from unspecified individuals under false pretenses, such as claiming government support or backing from well-known enterprises [3][4]
  3) Illegal loan issuance and other financial services, including unauthorized settlement, bill discounting, and trust investments [4]
  4) Fraudulent financial pyramid schemes that rely on recruiting new participants to sustain operations [4]

Group 2: Consequences of Illegal Financial Activities
- Participation in these illegal activities can lead to significant economic losses for individuals, with severe cases resulting in total financial ruin, while also disrupting normal economic and financial order [5]

Group 3: Responding to Suspected Illegal Financial Activities
- Upon suspecting involvement in illegal financial activities, immediate and informed action is crucial for protecting personal and others' financial safety. Recommended measures include:
  1) Collecting evidence such as transaction records, contracts, promotional materials, and chat logs to substantiate claims of illegal activities [6]
  2) Reporting through designated channels, including national hotlines for illegal fundraising and financial supervision [7][8]
  3) On-site reporting to local law enforcement or financial regulatory bodies [9]
Bitget Anti-Scam Report: AI-Related Scams Caused $4.6 Billion in Cryptocurrency Losses in 2024
Globenewswire· 2025-06-11 09:45
Core Insights
- The report highlights a significant increase in global cryptocurrency fraud losses, reaching $4.6 billion in 2024, with deepfake technology and social engineering being the primary methods behind these high-value thefts [2][3]
- Bitget has launched a month-long initiative called "Anti-Fraud Month" aimed at enhancing security education and fraud awareness across the ecosystem [2][3]

Fraud Trends
- AI-driven scams have evolved from phishing emails to more sophisticated forms such as fake Zoom calls, synthetic videos of public figures, and job scams that carry malware [2][3]
- The report identifies three main types of scams as critical contributors to user losses: deepfake impersonation, social engineering scams, and Ponzi schemes disguised as DeFi or NFT projects [2][3]

Money Laundering Tactics
- Stolen funds are often transferred through cross-chain bridges and obfuscation tools before entering mixers or exchanges, complicating law enforcement and recovery efforts [2][3]

Case Studies and Observations
- The report includes analysis of significant fraud cases in Hong Kong and notes that platforms like Telegram and X (formerly Twitter) are increasingly becoming entry points for phishing attacks [2][3]
- It also discusses the ongoing expansion of cross-border professional fraud syndicates [2][3]

Company Initiatives
- Bitget is actively utilizing its Anti-Fraud Center, innovative detection systems, and a protection fund exceeding $500 million to mitigate user risks [3][4]
- The collaboration with SlowMist and Elliptic aims to enhance the understanding of evolving threats and provide users with self-protection tools [4]

Recommendations
- The report concludes with practical advice for users and institutions, covering warning signs of scams and best practices to avoid common pitfalls in DeFi, NFT, and Web3 environments [4]