Large Model Security Assessment System
Inside the AI Attack-Defense Game: Technology Upgrades, but Talent Remains Scarce
Zhong Guo Xin Wen Wang· 2025-09-29 10:10
Group 1
- The 22nd National Cybersecurity Publicity Week revealed the first real-world testing results for AI large models, identifying 281 security vulnerabilities, more than 60% of which are unique to large models, including risks such as prompt injection and information leakage [1]
- Attackers are studying AI learning preferences and deliberately feeding false information, with organized efforts to "data poison" AI by fabricating expert identities and creating fake research reports to manipulate AI outputs [1]
- The regulatory framework is evolving, with version 2.0 of the "Artificial Intelligence Security Governance Framework" released on September 15 [1]

Group 2
- Ant Group's consumer finance division utilizes multimodal perception and collaboration between large and small models to accurately identify counterfeit documents and synthetic voices, achieving a 98% accuracy rate in fake document recognition [2]
- New security assessment systems from Green Alliance Technology enable automated deep scanning of more than 140 mainstream models, identifying risks related to content safety, adversarial attacks, data leakage, and component vulnerabilities [2]
- The "AI Era Cybersecurity Talent Development Report (2025)" projects a global cybersecurity talent gap of 4.8 million by 2025, a 19% year-on-year increase, and highlights the demand for cybersecurity professionals in the U.S. and China [2]
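Prompt injection, flagged above as a risk unique to large models, typically works by embedding adversarial instructions in user input so the model treats them as commands. A minimal illustrative sketch of a heuristic input screen follows; the patterns and function names are hypothetical examples for explanation only, not taken from any product mentioned above, and real defenses are considerably more involved.

```python
import re

# Hypothetical example phrasings of prompt-injection attempts.
# Real systems use far richer detection (classifiers, canary tokens, etc.).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"you are now in developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches known injection phrasings (case-insensitive)."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Such keyword heuristics are easy to evade (paraphrasing, encoding tricks), which is one reason the testing above found so many model-specific vulnerabilities: the attack surface is natural language itself.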
Cybersecurity Firms Accelerate AI Innovation as New Products Race to Market
Zhong Guo Zheng Quan Bao· 2025-09-23 20:26
Core Insights
- Multiple cybersecurity companies are actively investing in AI technology development, enhancing their product capabilities and operational efficiency [1][2][3]
- The integration of AI in cybersecurity is a double-edged sword, presenting both new security risks and opportunities for improved efficiency [1][4]

Group 1: Company Developments
- Green Alliance Technology plans to launch AI security products, including an AI security all-in-one appliance and a large model security assessment system [1]
- North Trust has developed an AI capability platform that integrates large models and development tools, with applications delivered in the finance and energy sectors [1][2]
- Deepin Technology has incorporated large model technology into its cybersecurity products, including a security GPT and an AI firewall, with plans for further investment in AI R&D [2]
- Ant Group has released products that combine cybersecurity and AI technology, including a trusted connection framework for smart glasses [2]
- Starry Sky Technology's AI model has been applied in security operations and threat detection, significantly enhancing product capabilities [3]
- AsiaInfo reported significant growth in AI model applications and deliveries in the first half of the year, focusing on AI model applications, 5G private networks, and intelligent operations [3]

Group 2: Industry Trends and Challenges
- Gartner's report indicates a shift in cybersecurity focus toward securing AI systems, with 60% of large Chinese enterprises expected to adopt exposure management technology by 2027 [4]
- Companies need to be aware of risks in AI model applications, such as prompt injection and model manipulation [4][5]
- Supply chain security is critical in AI applications, with calls for stronger version vulnerability management and code security audits [5]
- The rapid adoption of AI models is expected to create significant security risks, necessitating a dynamic defense system and cross-departmental collaboration [5][6]

Group 3: Recommendations for AI Security
- Experts suggest mandatory registration for AI models to identify risks early and ensure a comprehensive understanding of their security and usability [6]
- Companies are encouraged to conduct compliance assessments and deploy specialized protections, such as AI security barriers, to defend against new types of attacks [6]
- Establishing trust through security measures is essential for promoting data flow and maximizing the value of AI applications across industries [6]
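The compliance assessments and output-side protections recommended above often include screening model responses for data leakage before they reach a user. A hedged sketch of such a screen, assuming a simple regex-based check; the pattern names and thresholds here are illustrative assumptions, not any vendor's actual rules.

```python
import re

# Hypothetical leakage patterns: an email address, or a long digit run
# that could be a phone or ID-like number. Real systems combine many
# detectors (PII classifiers, secret scanners, allowlists).
LEAK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "long_digit_run": re.compile(r"\b\d{11,}\b"),
}

def find_leaks(model_output: str) -> list[str]:
    """Return the names of leakage patterns found in a model response."""
    return [name for name, pattern in LEAK_PATTERNS.items()
            if pattern.search(model_output)]
```

A caller would block or redact any response where `find_leaks` returns a non-empty list; the design trade-off is false positives (legitimate contact details) versus the leak risk the articles describe.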