Core Insights - The rapid development of generative AI brings efficiency and model innovation, but also amplifies security risks such as model abuse and data leakage, necessitating higher demands for AI research, deployment, and risk management [2] Policy Section - The white paper identifies two core trends: the establishment of a global AI governance framework and the intensifying regulatory competition over open-source models. It predicts that 2025 will mark a turning point where AI governance shifts from "principle advocacy" to "institutional implementation," making compliance capabilities a core competitive barrier for enterprises [3] - The global AI compliance framework is accelerating collaboration and implementation, with China, the US, and the EU forming differentiated yet aligned governance frameworks. These frameworks emphasize "auditable and accountable" requirements, predicting that this capability will become a core threshold for AI systems entering critical sectors like finance and government [3] Risk Section - The white paper outlines three main challenges in AI security: the complexity of attack methods, the diversification of risk scenarios, and the expansion of harm impacts. It highlights that attackers are utilizing systematic methods across multiple modalities, leading to security issues being elevated to "complex system robustness" [4] - The report indicates that malicious instructions rewritten in various forms have a success rate exceeding 90% against multiple mainstream models, suggesting traditional filtering techniques are inadequate [4] Trend Section - AI security governance is transitioning from passive protection to proactive construction, with a focus on full lifecycle governance to establish a solid security foundation. The report emphasizes that the native security architecture is becoming a standard requirement [5] - The governance framework is evolving towards full lifecycle trustworthiness, with international efforts to cover the entire process from design to deployment through frameworks like NIST and the EU's AI Act [5] - The report highlights the importance of AI alignment research as a key to addressing security challenges, noting that this research is shifting from academic exploration to engineering practice, directly impacting the safety and societal acceptance of AI systems [6] - Content authenticity governance is becoming a foundational order in the digital society, with countries advancing legislation and technological traceability to combat deep forgery [6] - The expansion of computing power is driving the "AI-energy coupling" to become a national security issue, with a consensus on developing "green computing" and enabling mutual empowerment between AI and energy systems [6]
政策、风向与风险,AI安全十大趋势发布
Nan Fang Du Shi Bao·2026-01-06 09:07