Threat Hunter: 2025 Global E-commerce Fraud Risk Research Report
Sou Hu Cai Jing· 2026-02-06 11:32
Today's share: Threat Hunter's 2025 Global E-commerce Fraud Risk Research Report (28 pages).

Summary: In 2025, global e-commerce black- and gray-market risk grew explosively. Threat Hunter's monitoring data show 15 million e-commerce fraud risk leads for the year, up 226% year over year, and 1.6 million malicious accounts captured, up 55% year over year, a marked expansion in risk scale. By region, Europe, China, and the United States together contributed more than 70% of global e-commerce risk leads, making them the core concentration zones; the risk distribution closely tracks e-commerce business volume.

The criminal operations follow a distinctive "global-channel acquisition + localized-channel dealing" pattern: customers are attracted and funneled through global social platforms, then transactions are closed via local instant-messaging tools, forums, and trading platforms. The groups have also built a cross-regional attack chain with a clear division of labor, carrying out information collection, tool development, attack execution, and cash-out in different countries and regions. In the account-trading market, seller and buyer accounts are priced very differently; prices track a platform's regional attributes, tier system, operating permissions, and risk-control strength, so the scarcer the account and the stricter the risk controls, the higher the price.

E-commerce fraud in 2025 showed three core evolutionary trends. First, AI-driven "industrialization of evidence": generative AI lets identity documents, appeal evidence, and logistics receipts be produced from templates at scale, substantially raising fraud rings' success rate in review ...
AI vs. AI: Fintech Opens the Battle Against AI Fraud
Jing Ji Guan Cha Bao · 2025-11-07 09:08
Core Viewpoint
- The article discusses the ongoing battle between financial institutions and criminals using advanced AI techniques for fraud, highlighting the need for financial institutions to enhance their defenses in response to evolving threats [1][3]

Group 1: Fraud Techniques
- A case study illustrates how criminals exploited AI to bypass security measures, using a technique called an "injection attack" to manipulate a victim's phone camera and present a realistic video for identity verification [2][3]
- Fraud methods have evolved from simple presentation attacks to more sophisticated AI-generated images and videos, making detection increasingly challenging [5][6]

Group 2: AI Countermeasures
- Financial institutions are developing AI algorithms to detect signs of AI-generated content, focusing on identifying the algorithmic traces left by AI tools [5][6]
- Multi-dimensional defense strategies are necessary, combining image analysis with system-level checks to prevent injection attacks [5][6]

Group 3: Application of AI in Fraud Prevention
- AI anti-fraud technologies are being integrated into sectors requiring electronic identity verification, including banking, insurance, and e-commerce [9]
- The Hong Kong Monetary Authority is facilitating a sandbox program for banks to test AI fraud prevention technologies, promoting the use of AI to combat AI-generated fraud [10][11]

Group 4: Training and Data Utilization
- Continuous training of AI models on historical transaction data is essential for improving fraud detection accuracy and minimizing false positives [14][15]
- Financial institutions are focusing on targeted training and knowledge acquisition to make their AI systems more responsive to new fraud scenarios [14][15]
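The "algorithmic traces" mentioned above are often sought in the frequency domain: generative upsampling can leave unusually regular high-frequency energy that natural camera images lack. The sketch below is purely illustrative of that idea; the function names, the single spectral feature, and the threshold are hypothetical and nothing like any institution's actual detector, which combines many learned signals.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray) -> float:
    """Fraction of 2D spectral energy outside a central low-frequency disk.

    A crude stand-in for the learned artifact features real detectors use.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    power = np.abs(f) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    outer = power[radius > min(h, w) * 0.35].sum()
    return float(outer / power.sum())

def looks_synthetic(img: np.ndarray, threshold: float = 0.5) -> bool:
    # Illustrative decision rule only; production systems also run
    # system-level checks (camera-feed integrity, device attestation)
    # to catch injection attacks that image analysis alone would miss.
    return high_freq_energy_ratio(img) > threshold

rng = np.random.default_rng(0)
# Smooth, low-frequency-dominated image vs. flat-spectrum noise.
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
noisy = rng.normal(size=(64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

The two-pronged check in `looks_synthetic` mirrors the article's point that image analysis must be paired with system-level verification, since a perfect deepfake injected past the camera defeats pixel inspection entirely.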
AI vs. AI: Fintech Opens the Battle Against AI Fraud
Jing Ji Guan Cha Bao · 2025-11-07 01:53
Core Insights
- The rapid development of AI technology has led to the emergence of deepfake fraud techniques, posing significant risks to individuals and financial institutions [2][3]
- Ant Group's digital technology team has identified new fraud methods in which phishing attacks exploit personal information to bypass security measures [2][5]
- Financial institutions are engaged in a continuous "AI vs. AI" battle, developing advanced algorithms to counter increasingly sophisticated fraud techniques [3][6]

Fraud Techniques
- Fraudsters use phishing traps to impersonate banks and trick victims into providing sensitive information [2][5]
- New injection attacks allow criminals to hijack mobile devices and feed deepfake images or videos into identity-verification flows [2][5]
- Traditional fraud methods have evolved from simple presentation attacks to more complex AI-generated manipulations [5][7]

Defense Mechanisms
- Financial technology companies simulate fraud techniques themselves in order to understand and counteract them [6][7]
- Algorithms are being developed to detect AI-generated images and assess their authenticity from the technical traces left by AI tools [7][8]
- Multi-dimensional defense strategies are necessary, combining image recognition with system-level checks to prevent injection attacks [7][8]

Application Scenarios
- AI anti-fraud technologies are being integrated into sectors requiring electronic identity verification, including banking, insurance, and e-commerce [8][9]
- The Hong Kong Monetary Authority is facilitating AI fraud testing programs to help banks combat deepfake scams [8][9]
- AI models are being trained on historical transaction data to enhance real-time fraud detection [12][13]

Industry Collaboration
- Financial institutions are collaborating with regulatory bodies on a cross-bank fraud data exchange platform to share information on fraudulent activities [10][12]
- AI-based identity verification is being extended to government services, strengthening security for public applications [11][12]
- Companies like Dyna.AI are refining their models through compliance-driven data analysis to improve fraud detection accuracy [13]
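The training loop both articles allude to, fitting a scoring model on labeled historical transactions so that new transactions can be flagged in real time, can be sketched in miniature. Everything below is synthetic and hypothetical: the three features, the data, and the plain logistic-regression model are illustrative stand-ins, not any bank's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "historical transactions": three illustrative features
# (e.g. normalized amount, time-of-day oddity, device-mismatch signal).
# Label 1 = confirmed fraud in the historical record.
n = 1000
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, 0.8, 2.0])  # hidden rule generating the labels
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression via gradient descent: the simplest stand-in for
# the continuously retrained scoring models described in the article.
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n

def fraud_score(tx: np.ndarray) -> float:
    """Probability-like score; scores above a tuned threshold would be
    routed to manual review, trading recall against false positives."""
    return float(sigmoid(tx @ w))

train_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(round(train_acc, 2))
```

Retraining on fresh confirmed-fraud labels is what keeps such a model responsive to new scenarios; the threshold on `fraud_score` is where the false-positive trade-off mentioned in Group 4 is actually tuned.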