Workflow
AI对抗AI
icon
Search documents
AI对决AI!金融科技打响AI欺诈攻防战
Jing Ji Guan Cha Bao· 2025-11-07 01:53
Core Insights - The rapid development of AI technology has led to the emergence of deepfake fraud techniques, posing significant risks to individuals and financial institutions [2][3] - Ant Group's digital technology team has identified new fraudulent methods involving phishing attacks that exploit personal information to bypass security measures [2][5] - Financial institutions are engaged in a continuous "AI vs. AI" battle, developing advanced algorithms to counteract increasingly sophisticated fraud techniques [3][6] Fraud Techniques - Fraudsters use phishing traps to impersonate banks, tricking victims into providing sensitive information [2][5] - New injection attacks allow criminals to hijack mobile devices and use deepfake images or videos to bypass identity verification [2][5] - Traditional fraud methods have evolved from simple presentations to more complex AI-generated manipulations [5][7] Defense Mechanisms - Financial technology companies are implementing defensive strategies by simulating fraud techniques to better understand and counteract them [6][7] - Algorithms are being developed to detect AI-generated images and assess their authenticity based on technical traces left by AI tools [7][8] - Multi-dimensional defense strategies are necessary, combining image recognition with system-level checks to prevent injection attacks [7][8] Application Scenarios - AI anti-fraud technologies are being integrated into various sectors requiring electronic identity verification, including banking, insurance, and e-commerce [8][9] - The Hong Kong Monetary Authority is facilitating AI fraud testing programs to help banks combat deepfake scams [8][9] - AI models are being trained using historical transaction data to enhance real-time fraud detection capabilities [12][13] Industry Collaboration - Financial institutions are collaborating with regulatory bodies to create a cross-bank fraud data exchange platform to share information on fraudulent activities [10][12] - The integration of AI in identity verification processes is being expanded to government services, enhancing security for public applications [11][12] - Companies like Dyna.AI are focusing on refining their models through compliance-driven data analysis to improve fraud detection accuracy [13]
看AI攻防博弈:技术升级、人才仍缺
Zhong Guo Xin Wen Wang· 2025-09-29 10:10
Group 1 - The 22nd National Cybersecurity Publicity Week revealed the first real-world testing results for AI large models, identifying 281 security vulnerabilities, with over 60% being unique to large models, including risks like prompt injection and information leakage [1] - Attackers are studying AI learning preferences and deliberately feeding false information, with organized efforts to "data poison" AI by fabricating expert identities and creating fake research reports to manipulate AI outputs [1] - The regulatory framework is evolving, with the release of the 2.0 version of the "Artificial Intelligence Security Governance Framework" on September 15 [1] Group 2 - Ant Group's consumer finance division utilizes multimodal perception and collaboration between large and small models to accurately identify counterfeit documents and synthetic voices, achieving a 98% accuracy rate in fake document recognition [2] - New security assessment systems from Green Alliance Technology enable automated deep scanning of over 140 mainstream models, identifying risks related to content safety, adversarial attacks, data leakage, and component vulnerabilities [2] - The "AI Era Cybersecurity Talent Development Report (2025)" indicates a projected global cybersecurity talent gap of 4.8 million by 2025, with a 19% year-on-year increase, and highlights the need for cybersecurity professionals in the U.S. and China [2]
AI攻防博弈升级:2025国家网络安全宣传周呈现“智防”新布局
Huan Qiu Wang· 2025-09-18 01:57
Core Insights - The rise of AI technology has led to unprecedented challenges in cybersecurity, with AI-generated scams and vulnerabilities becoming more prevalent [1][8] - A significant portion of vulnerabilities identified in AI models are unique to them, highlighting the need for enhanced security measures [3][8] Group 1: AI Vulnerabilities and Risks - Over 60% of the 281 security vulnerabilities identified in AI models are unique to these models, with 177 specific vulnerabilities reported [1][3] - Common risks include improper outputs, information leakage, prompt injection, and traditional security vulnerabilities [1][3] Group 2: Security Governance and Collaboration - Experts emphasize the need for continuous improvement in security measures and the establishment of classification standards for AI vulnerabilities [3][4] - The release of the 2.0 version of the "Artificial Intelligence Security Governance Framework" aims to enhance global cooperation in AI security governance [4] Group 3: Technological Advancements in Cybersecurity - The cybersecurity industry is actively adopting AI technologies to create robust defenses against emerging threats, marking the transition to an "intelligent defense era" [3][8] - Companies like Huawei and Ant Group are developing comprehensive security solutions that leverage AI to enhance protection across various sectors [5][7] Group 4: Practical Applications and Innovations - Ant Group's gPass aims to create a secure digital ecosystem for AI devices, focusing on user identity verification and data security [5][6] - Huawei's AI network security solution offers a zero-trust protection system for enterprises, significantly improving threat detection and response capabilities [7]