AI Safety Governance
From Safety to Security: The Dilution of Global AI Safety Governance in the Western Narrative
36Kr · 2025-08-20 12:12
At its summit in Alberta, Canada, the Group of Seven (G7) released the "AI for Prosperity Statement," which focuses on the benefits and opportunities of artificial intelligence. Yet the word "safety" appears zero times in the statement, overlooking most of the ways AI could malfunction or be put to harmful use.

The 2025 "AI for Prosperity Statement" marks a major shift: from the earlier weighing of AI risks to an almost exclusive focus on AI's benefits. The shift reflects a broader recalibration of AI policy across Western democracies. The forces driving this transatlantic turn include the influence of the Trump administration, geopolitical competition, industry pressure, and AI's growing record of success.

The rise of AI optimism in international dialogue helps spread AI's benefits worldwide, but ignoring, or abandoning outright, multilateral cooperation on AI risks would be foolish. In the future, more capable AI models could enable large-scale international crime, such as AI-driven cyberattacks, surveillance of citizens, or the creation of novel pathogens; AI companion apps and AI-generated media could disrupt human relationships at a societal level; and AI models and robots could trigger mass unemployment and accelerate the invention of new weapons of mass destruction.

I. Downgrading the Perception of Risk: The Policy Shift of the G7 and Multilateral Frameworks

Over the past decade, the tone of G7 AI policy has come full circle: early enthusiasm gave way to mounting AI safety concerns …
AI Safety Governance White Paper (2025)
China Unicom Research Institute · 2025-08-05 02:18
Investment Rating
- The report does not explicitly provide an investment rating for the industry

Core Insights
- The rapid development of artificial intelligence (AI) technology is transforming global industrial patterns and driving the fourth industrial revolution, but it also brings multiple security risks related to data, models, infrastructure, and applications [7][8]
- The white paper aims to establish a safe, reliable, fair, and trustworthy AI system, focusing on AI security governance, risk analysis, and the development of a governance framework [8][9]
- The report emphasizes the need for a comprehensive governance system that includes legal regulations, standards, and management measures to ensure the safe and controllable development of AI technology [20][22]

Summary by Sections

AI Overview
- AI technology has evolved from symbolic rules to machine learning and deep learning, with significant growth in large language models (LLMs) driving technological progress and industrial upgrades [11][12]
- Major companies in both domestic and international markets are expanding the application of large models across various industries, enhancing AI technology's development and industrial intelligence [12][13]

AI Security Governance Risk Analysis and Challenges
- AI security governance risks include vulnerabilities inherent to AI and external threats faced during application, categorized into infrastructure, data, model algorithm, and application security risks [29][30]
- Specific risks include hardware device security, cloud security, model-as-a-service platform security, and computational network security [31][32][33][37]

AI Security Governance System
- The governance system consists of a four-part supervisory and management framework, focusing on infrastructure, model, data, and application security [20][22]
- The report outlines the importance of addressing security at all levels to build a truly secure AI ecosystem [22]

AI Security Technology Solutions
- The report discusses various technical solutions and case studies across AI infrastructure, data, models, and applications to enhance security governance [8][9]

AI Security Development Recommendations
- Recommendations include establishing a legal framework, building a standard system, exploring cutting-edge technologies, and fostering talent through industry-academia collaboration [8][9]
WAIC2025 Frontier Focus (7): Concordia AI Hosts the "AI Safety and Governance Forum" and Releases a Series of Major Reports
Investment Rating
- The report does not explicitly provide an investment rating for the industry or specific companies involved in AI safety and governance

Core Insights
- China's AI safety governance system is maturing, transitioning from theoretical frameworks to systematic and actionable practices, as evidenced by the release of comprehensive methodologies to address severe AI risks [2][3][23]
- The introduction of the "AI-45° Law" emphasizes the synchronized development of capabilities and safety, reflecting a commitment to balancing innovation with security [2][3][23]

Summary by Sections

Event
- On July 27, 2025, during the World Artificial Intelligence Conference (WAIC) in Shanghai, Concordia AI and the Shanghai AI Laboratory hosted the "AI Safety and Governance Forum," releasing impactful research reports on AI risk management and biosafety [1][22]

Commentary
- The series of reports marks a shift in China's AI governance from macro principles to practical implementations, particularly focusing on severe risks like loss of control and misuse [2][3][23]

Core Finding
- Most frontier AI models are in a "yellow zone," indicating a need for enhanced mitigation measures, especially in areas like persuasion and manipulation, where risks are alarmingly high [3][24]

Focal Issue
- The report highlights life sciences as a "deep-water zone" for AI risks, necessitating a multi-stakeholder collaborative governance approach to address structural risks posed by AI in biosafety [4][25]

Strategic Significance
- The release of risk frameworks aligned with international concerns signals China's strategic shift towards evidence-based participation in global AI safety governance, defining AI safety as a "global public good" [5][26]
Yao Qizhi: The AGI Era Is Arriving Faster Than Expected, and Safety Governance Is a Long-Term Effort
Yicai · 2025-07-26 14:23
Core Viewpoint
- The article discusses the urgent need for global governance of artificial intelligence (AI) as it approaches and may surpass human intelligence, emphasizing the importance of ensuring AI systems remain under human control and aligned with human values [1][2]

Group 1: AI Governance and Safety
- The WAIC highlighted the rapid approach of Artificial General Intelligence (AGI) and the associated safety concerns, as traditional algorithm designs do not guarantee AI safety [1][2]
- The "Shanghai Consensus" was established, calling for global governments and researchers to ensure advanced AI systems are aligned with human control and welfare, addressing the potential risks of AI systems deceiving human developers [2][3]

Group 2: International Collaboration
- The consensus emphasizes the need for major countries and regions to coordinate on credible safety measures, establish trust mechanisms, and increase investment in AI safety research [3]
- It advocates for frontier AI developers to provide safety assurances and for international cooperation to establish and adhere to verifiable global behavioral red lines [3]

Group 3: Future of AI and Education
- The article mentions the unpredictable extent of changes brought by AI and the importance of effective governance to ensure a better future for humanity [3][4]
- It highlights the need for young students to strengthen their foundational skills in subjects like mathematics, physics, and computer science to adapt to rapid technological changes [6]
AI Software-Hardware Collaboration Accelerates Innovation
China Economic Net · 2025-07-18 05:46
Group 1
- The conference highlighted five major trends in artificial intelligence, including accelerated iteration of foundational large models, a shift in focus towards post-training and inference stages, deep collaboration between hardware and software, the rise of intelligent agents and the intelligent agent economy, the promotion of open-source ecosystems, and increasing demands for AI safety governance [1]
- Beijing Economic-Technological Development Area is committed to building a comprehensive AI city, with plans to establish a national AI data training base, the largest public computing power platform in the city, and to implement special policies and funding exceeding 1 billion yuan to support major projects in various AI-related fields [1]
- By the end of 2025, the development goals include opening 100 landmark application scenarios, gathering 600 core enterprises, and achieving an industry scale target of 80 billion yuan [1]

Group 2
- The AI hardware and software testing and verification center was officially launched, aiming to provide key testing and verification capabilities for AI hardware and software, with four core capabilities established [2]
- The center has partnered with major companies to create innovation labs and testing facilities to accelerate the innovation and prosperity of intelligent computing technologies [2]
- Five major achievements in AI hardware and software collaborative innovation were announced, showcasing significant breakthroughs across the entire technology chain from foundational computing power to framework software [2]

Group 3
- The center completed the first batch of testing and evaluation for the adaptation of large models and domestic hardware and software, with several companies successfully passing the evaluation [3]
- The conference awarded certificates to institutions that passed the unified benchmark testing, marking a new stage in the standardized and quantifiable development of AI collaborative innovation ecosystems [3]
- The AI safety governance initiative was highlighted, with 18 companies disclosing their safety practices, contributing to the establishment of a solid foundation for responsible AI development [3]

Group 4
- The vice president of the China Academy of Information and Communications Technology emphasized the urgent need to address challenges in hardware and software collaboration for building an open intelligent computing ecosystem [4]
- The AISHPerf 2.0 benchmark system was officially released, featuring upgrades to support multiple inference engines and domestic open-source model loads, addressing various evaluation needs [4]
- The academy has initiated a series of collaborative testing and verification efforts based on AISHPerf, focusing on large model adaptation and key collaborative technologies [4]
Xihu Lunjian (West Lake Summit) | May 10: The Intelligent Agent Forum Storm Arrives! "Decoding the DNA of Intelligent Agents": Application Innovation Practice and Security Governance
Beike Caijing · 2025-04-28 08:37
Group 1
- The conference "2025 China Digital Valley · West Lake Forum" focuses on AI applications and security governance, gathering 200 policymakers, tech enthusiasts, and industry leaders to discuss breakthroughs in application scenarios, challenges in trust and safety, and new paradigms in agile governance [1][17]
- Three main themes are highlighted: breakthroughs in application scenarios, challenges in trust and safety, and agile governance [2]
- The event aims to witness the birth of a new paradigm in AI security governance, emphasizing the importance of safety as a competitive edge in the AI landscape [1][17]

Group 2
- The first theme addresses the evolution of intelligent agents, differences in application scenarios between China and abroad, and compatibility challenges in cross-industry collaboration [2]
- The second theme focuses on constructing a full lifecycle protection system for intelligent agents, including the development of key standards for trustworthiness and risk prevention throughout the training, deployment, and operation phases [2]
- The third theme discusses the need for flexible regulatory frameworks that can adapt to the rapid iteration of AI technology, comparing the EU's sandbox regulation with China's negative list model [2]

Group 3
- The conference will unveil the "Safety Intelligent Agent Cube: Maturity Model Evaluation Research Report," which is the first domestic maturity assessment system for intelligent agent safety [3]
- Practical case studies will be presented, showcasing AI's application potential in vertical fields and revealing governance paths through customized safety solutions and technical tools [3]
- A signing ceremony for the first batch of intelligent agent ecological alliances will take place, promoting collaboration among industry leaders to co-create the future of intelligent agents [4]

Group 4
- The "Joint Creation of Intelligent Agent Ecological Partners Program" aims to recruit 30 leading enterprises, tech developers, and ecological service providers nationwide to build an open, collaborative, and win-win intelligent agent ecosystem [6]
- The program emphasizes shared resources, technical support, and the exploration of foundational large model capabilities for industry applications [7][8]
- The initiative seeks to establish benchmark cases in key sectors such as finance, transportation, and healthcare, while also facilitating government technology project applications and industry standard formulation [12]