Workflow
星河AI网络安全解决方案
icon
Search documents
看AI攻防博弈:技术升级、人才仍缺
Zhong Guo Xin Wen Wang· 2025-09-29 10:10
Group 1 - The 22nd National Cybersecurity Publicity Week revealed the first real-world testing results for AI large models, identifying 281 security vulnerabilities, with over 60% being unique to large models, including risks like prompt injection and information leakage [1] - Attackers are studying AI learning preferences and deliberately feeding false information, with organized efforts to "data poison" AI by fabricating expert identities and creating fake research reports to manipulate AI outputs [1] - The regulatory framework is evolving, with the release of the 2.0 version of the "Artificial Intelligence Security Governance Framework" on September 15 [1] Group 2 - Ant Group's consumer finance division utilizes multimodal perception and collaboration between large and small models to accurately identify counterfeit documents and synthetic voices, achieving a 98% accuracy rate in fake document recognition [2] - New security assessment systems from Green Alliance Technology enable automated deep scanning of over 140 mainstream models, identifying risks related to content safety, adversarial attacks, data leakage, and component vulnerabilities [2] - The "AI Era Cybersecurity Talent Development Report (2025)" indicates a projected global cybersecurity talent gap of 4.8 million by 2025, with a 19% year-on-year increase, and highlights the need for cybersecurity professionals in the U.S. and China [2]