360 Releases the "Large Model Security White Paper," Promoting "Safe, Beneficial, Trustworthy, and Controllable" AI Applications

Core Insights
- The white paper systematically outlines five key risks threatening the security of large models: infrastructure security risks, content security risks, data and knowledge base security risks, agent security risks, and user-end security risks [1][3]
- The proposed dual governance strategy combines "external security" and "platform-native security" to create a comprehensive protection network for AI applications [1][3]

Group 1: Key Risks
- The first category is infrastructure security risks, which include device control, supply chain vulnerabilities, denial-of-service attacks, and misuse of computing resources [1]
- The second category is content security risks, involving non-compliance with core values, false or illegal content, model hallucinations, and prompt injection attacks [1]
- The third category is data and knowledge base security risks, highlighting issues such as data breaches, unauthorized access, privacy abuse, and intellectual property concerns [1]
- The fourth category is agent security risks, where the increasing autonomy of agents blurs security boundaries in areas such as plugin invocation, computing resource scheduling, and data flow [1]
- The fifth category is user-end security risks, which encompass permission control, API call monitoring, execution of malicious scripts, and security during MCP execution [1]

Group 2: Security Solutions
- The white paper emphasizes a dual governance strategy: "external security" acts as a flexible response to real-time risks, while "platform-native security" builds a robust security foundation from the ground up [1]
- 360's products, including enterprise-level knowledge bases and agent construction platforms, are designed to embed security deeply within the platform, ensuring compliance with national and industry standards [2]
- The three main platform products work together to address inherent security challenges, such as data leakage, uncontrolled agent behavior, and terminal misuse, thereby establishing a stable foundation for AI applications [2]
- 360 has implemented these capabilities across sectors including government, finance, and manufacturing, transforming theoretical security into practical solutions [2]
- The company aims to collaborate with academia and industry to promote security standards and technology sharing, contributing to a safer and more trustworthy AI ecosystem [2]