Core Viewpoint
- The forum, themed "AI Security Boundaries: Technology, Trust, and a New Governance Order," emphasized the need for a structured approach to AI governance that balances innovation with safety [2][4].

Group 1: Technology as a Foundation
- Technology is the foundational support for AI safety governance and requires continuous innovation and iteration to keep systems secure [5].
- Concrete safety measures include improving model robustness through adversarial training and protecting data with differential privacy (both sketched after this summary) [5].
- Security should be built into AI development from the outset rather than patched in after deployment [5].

Group 2: Trust as a Bridge
- The spread of AI is fundamentally a process of earning societal trust, a prerequisite for deep deployment in critical sectors such as healthcare and education [6].
- Building trust requires greater transparency in algorithmic decision-making and serious attention to issues such as privacy and fairness [6].
- Public trust is vital for integrating AI technologies into everyday life, particularly in sensitive areas [6].

Group 3: Institutional Framework
- A robust governance framework combining laws, standards, and ethical guidelines is necessary to safeguard AI development [6].
- Governance should be tiered: stricter regulation for high-risk applications such as autonomous driving and smart healthcare, with room for innovation in lower-risk areas [6].
- Cross-departmental collaboration and international cooperation are essential to tackling global AI safety challenges [6][7].

Group 4: Agility in Governance
- Governance must remain agile and adapt to the rapid evolution of AI technologies, which calls for dynamic risk-assessment mechanisms [7].
- The industry association is committed to participating actively in AI safety governance, recognizing the challenges posed by new technologies [7].
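As a minimal illustration of the adversarial training mentioned in Group 1 (this is a generic textbook sketch, not code from the forum or the association), the snippet below trains a toy logistic-regression classifier on FGSM-perturbed inputs; the dataset, the perturbation budget `eps`, and all function names are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Craft FGSM adversarial examples for logistic regression.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (p - y) * w, so each feature is stepped eps in its sign direction.
    """
    p = sigmoid(x @ w + b)                  # predicted probabilities
    grad_x = (p - y)[:, None] * w[None, :]  # dL/dx per example
    return x + eps * np.sign(grad_x)

# Toy data: two Gaussian blobs (hypothetical stand-in for a real dataset).
rng = np.random.default_rng(0)
n = 200
x = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2
for _ in range(200):
    x_adv = fgsm(x, y, w, b, eps)                 # perturb inputs each step
    p = sigmoid(x_adv @ w + b)
    w -= lr * (x_adv.T @ (p - y)) / len(y)        # gradient step on the
    b -= lr * np.mean(p - y)                      # adversarial batch

acc = np.mean((sigmoid(fgsm(x, y, w, b, eps) @ w + b) > 0.5) == y)
print(f"accuracy on FGSM-perturbed inputs: {acc:.2f}")
```

Training on the perturbed batch rather than the clean one is what makes this "adversarial training": the model is optimized against the worst-case inputs it will later be evaluated on.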
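The differential-privacy point from Group 1 can likewise be illustrated with the standard Laplace mechanism. This too is a hedged, generic sketch rather than anything described in the article; the query (a bounded mean), the bounds, and the privacy budget `epsilon` are all assumptions chosen for the example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the mean age of a cohort of 100 people.
# Ages are bounded in [0, 100], so the mean has L1 sensitivity 100 / 100 = 1.
ages = np.random.uniform(0, 100, size=100)
private_mean = laplace_mechanism(ages.mean(), sensitivity=1.0, epsilon=0.5)
print(f"true mean: {ages.mean():.2f}, private release: {private_mean:.2f}")
```

The key design choice is calibrating the noise scale to the query's sensitivity: a smaller epsilon gives stronger privacy at the cost of a noisier released value.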
Lu Wei of the Cybersecurity Association of China: AI governance should be categorized by risk, with strict oversight of high-risk scenarios
Nan Fang Du Shi Bao·2025-12-20 15:36