AI Safety Governance
White Paper on AI Safety Governance (2025)
China Unicom Research Institute · 2025-08-05 02:18
Investment Rating
- The report does not explicitly provide an investment rating for the industry

Core Insights
- The rapid development of artificial intelligence (AI) technology is transforming global industrial patterns and driving the fourth industrial revolution, but it also brings multiple security risks related to data, models, infrastructure, and applications [7][8]
- The white paper aims to establish a safe, reliable, fair, and trustworthy AI system, focusing on AI security governance, risk analysis, and the development of a governance framework [8][9]
- The report emphasizes the need for a comprehensive governance system that includes legal regulations, standards, and management measures to ensure the safe and controllable development of AI technology [20][22]

Summary by Sections

AI Overview
- AI technology has evolved from symbolic rules to machine learning and deep learning, with significant growth in large language models (LLMs) driving technological progress and industrial upgrades [11][12]
- Major companies in both domestic and international markets are expanding the application of large models across various industries, advancing AI technology and industrial intelligence [12][13]

AI Security Governance Risk Analysis and Challenges
- AI security governance risks include vulnerabilities inherent to AI and external threats faced during application, categorized into infrastructure, data, model algorithm, and application security risks (see the illustrative taxonomy sketch after this summary) [29][30]
- Specific risks include hardware device security, cloud security, model-as-a-service platform security, and computing-network security [31][32][33][37]

AI Security Governance System
- The governance system consists of a four-part supervisory and management framework covering infrastructure, model, data, and application security [20][22]
- The report stresses the importance of addressing security at all levels to build a truly secure AI ecosystem [22]

AI Security Technology Solutions
- The report discusses technical solutions and case studies across AI infrastructure, data, models, and applications to strengthen security governance [8][9]

AI Security Development Recommendations
- Recommendations include establishing a legal framework, building a standards system, exploring cutting-edge technologies, and fostering talent through industry-academia collaboration [8][9]
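The white paper's four-layer risk categorization (infrastructure, data, model algorithm, application) lends itself to a simple risk-register structure. The following is a minimal Python sketch of that idea; the category names follow the summary above, while the concrete field names and example items are illustrative assumptions, not taken from the report itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    """Four risk layers named in the white paper summary."""
    INFRASTRUCTURE = "infrastructure"
    DATA = "data"
    MODEL_ALGORITHM = "model_algorithm"
    APPLICATION = "application"


@dataclass
class RiskItem:
    """A single tracked risk; fields are illustrative, not from the report."""
    category: RiskCategory
    name: str
    mitigated: bool = False


@dataclass
class RiskRegister:
    items: list[RiskItem] = field(default_factory=list)

    def open_risks(self, category: RiskCategory) -> list[RiskItem]:
        """Return unmitigated risks in one category."""
        return [r for r in self.items if r.category is category and not r.mitigated]


# Example items drawn loosely from the summary (hardware, cloud,
# model-as-a-service platform security); the exact wording is hypothetical.
register = RiskRegister(items=[
    RiskItem(RiskCategory.INFRASTRUCTURE, "hardware device security"),
    RiskItem(RiskCategory.INFRASTRUCTURE, "cloud security"),
    RiskItem(RiskCategory.MODEL_ALGORITHM, "model-as-a-service platform security"),
])
print([r.name for r in register.open_risks(RiskCategory.INFRASTRUCTURE)])
```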
WAIC 2025 Frontier Focus (7): Concordia AI Hosts the "AI Safety and Governance Forum" and Releases a Series of Major Reports
Investment Rating
- The report does not explicitly provide an investment rating for the industry or for specific companies involved in AI safety and governance

Core Insights
- China's AI safety governance system is maturing, transitioning from theoretical frameworks to systematic and actionable practices, as evidenced by the release of comprehensive methodologies to address severe AI risks [2][3][23]
- The introduction of the "AI-45° Law" emphasizes the synchronized development of capabilities and safety, reflecting a commitment to balancing innovation with security [2][3][23]

Summary by Sections

Event
- On July 27, 2025, during the World Artificial Intelligence Conference (WAIC) in Shanghai, Concordia AI and the Shanghai AI Laboratory hosted the "AI Safety and Governance Forum," releasing research reports on AI risk management and biosafety [1][22]

Commentary
- The series of reports marks a shift in China's AI governance from macro principles to practical implementation, focusing in particular on severe risks such as loss of control and misuse [2][3][23]

Core Finding
- Most frontier AI models fall into a "yellow zone," indicating a need for stronger mitigation measures, especially in areas such as persuasion and manipulation where risks are alarmingly high (see the illustrative zoning sketch after this summary) [3][24]

Focal Issue
- The report highlights life sciences as a "deep-water zone" for AI risks, necessitating a multi-stakeholder collaborative governance approach to address the structural risks AI poses to biosafety [4][25]

Strategic Significance
- The release of risk frameworks aligned with international concerns signals China's strategic shift towards evidence-based participation in global AI safety governance, defining AI safety as a "global public good" [5][26]
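The "yellow zone" finding implies a zoned (e.g. green/yellow/red) risk rating assigned per capability area. The summary does not specify how zones are computed, so the Python sketch below is purely illustrative: the thresholds, scores, and capability names are assumptions used only to show how such a zoning scheme might be expressed.

```python
from enum import Enum


class Zone(Enum):
    GREEN = "green"    # risks adequately mitigated
    YELLOW = "yellow"  # stronger mitigation measures needed
    RED = "red"        # deployment should be blocked


def classify(risk_score: float) -> Zone:
    """Map a 0-1 risk score to a zone; thresholds are hypothetical,
    not the real framework's criteria."""
    if risk_score < 0.3:
        return Zone.GREEN
    if risk_score < 0.7:
        return Zone.YELLOW
    return Zone.RED


# Illustrative per-capability scores for one model (made-up numbers).
scores = {"persuasion_and_manipulation": 0.75, "biosafety": 0.55, "cyber": 0.40}
zones = {area: classify(s).value for area, s in scores.items()}
print(zones)  # e.g. {'persuasion_and_manipulation': 'red', 'biosafety': 'yellow', ...}
```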
Andrew Yao (Yao Qizhi): The AGI Era Is Arriving Faster Than Expected, and Safety Governance Is a Long-Term Task
Yicai (第一财经) · 2025-07-26 14:23
Core Viewpoint
- The article discusses the urgent need for global governance of artificial intelligence (AI) as it approaches and may surpass human intelligence, emphasizing the importance of ensuring AI systems remain under human control and aligned with human values [1][2]

Group 1: AI Governance and Safety
- The WAIC highlighted the rapid approach of Artificial General Intelligence (AGI) and the associated safety concerns, as traditional algorithm designs do not guarantee AI safety [1][2]
- The "Shanghai Consensus" was established, calling on governments and researchers worldwide to ensure advanced AI systems remain under human control and serve human welfare, and addressing the potential risk of AI systems deceiving their human developers [2][3]

Group 2: International Collaboration
- The consensus emphasizes the need for major countries and regions to coordinate on credible safety measures, establish trust mechanisms, and increase investment in AI safety research [3]
- It advocates that frontier AI developers provide safety assurances and that international cooperation establish, and adhere to, verifiable global behavioral red lines [3]

Group 3: Future of AI and Education
- The article notes that the extent of the changes AI will bring is unpredictable, and that effective governance is essential to ensure a better future for humanity [3][4]
- It highlights the need for young students to strengthen their foundations in subjects such as mathematics, physics, and computer science to adapt to rapid technological change [6]
AI Hardware-Software Collaboration Accelerates Innovation
China Economic Net · 2025-07-18 05:46
Group 1
- The conference highlighted major trends in artificial intelligence, including accelerated iteration of foundational large models, a shift in focus towards post-training and inference stages, deep collaboration between hardware and software, the rise of intelligent agents and the intelligent-agent economy, the promotion of open-source ecosystems, and growing demands for AI safety governance [1]
- Beijing Economic-Technological Development Area is committed to building a comprehensive AI city, with plans to establish a national AI data training base and the city's largest public computing power platform, and to implement special policies and funding exceeding 1 billion yuan to support major projects across AI-related fields [1]
- By the end of 2025, development goals include opening 100 landmark application scenarios, gathering 600 core enterprises, and reaching an industry scale of 80 billion yuan [1]

Group 2
- The AI hardware and software testing and verification center was officially launched, aiming to provide key testing and verification capabilities for AI hardware and software, with four core capabilities established [2]
- The center has partnered with major companies to create innovation labs and testing facilities to accelerate the innovation and growth of intelligent computing technologies [2]
- Five major achievements in AI hardware-software collaborative innovation were announced, showcasing breakthroughs across the entire technology chain, from foundational computing power to framework software [2]

Group 3
- The center completed the first batch of testing and evaluation for adapting large models to domestic hardware and software, with several companies passing the evaluation [3]
- The conference awarded certificates to institutions that passed the unified benchmark testing, marking a new stage in the standardized, quantifiable development of the AI collaborative innovation ecosystem [3]
- An AI safety governance initiative was highlighted, with 18 companies disclosing their safety practices, helping lay a solid foundation for responsible AI development [3]

Group 4
- The vice president of the China Academy of Information and Communications Technology emphasized the urgent need to address hardware-software collaboration challenges in building an open intelligent computing ecosystem [4]
- The AISHPerf 2.0 benchmark system was officially released, with upgrades supporting multiple inference engines and domestic open-source model workloads to address a range of evaluation needs (a configuration sketch follows this list) [4]
- The academy has initiated a series of collaborative testing and verification efforts based on AISHPerf, focusing on large model adaptation and key collaborative technologies [4]
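AISHPerf's stated support for multiple inference engines and domestic open-source model workloads suggests benchmark runs keyed by model, engine, and hardware. AISHPerf's actual schema is not described in this summary, so the following Python sketch is hypothetical: every field name, engine name, and metric is an assumption used only to illustrate how such a sweep might be configured.

```python
from dataclasses import dataclass


@dataclass
class BenchmarkRun:
    """One benchmark configuration; all fields are hypothetical,
    not the real AISHPerf schema."""
    model: str       # model workload under test
    engine: str      # inference engine, e.g. an open-source or vendor runtime
    hardware: str    # accelerator the run targets
    batch_size: int
    metric: str      # e.g. tokens per second or latency


runs = [
    BenchmarkRun("open-source-llm-7b", "engine-a", "domestic-npu", 8, "tokens_per_second"),
    BenchmarkRun("open-source-llm-7b", "engine-b", "domestic-npu", 8, "tokens_per_second"),
]

for run in runs:
    # A real harness would launch the engine and collect the metric;
    # here we only enumerate the configuration matrix to be swept.
    print(f"{run.model} on {run.engine}/{run.hardware}, "
          f"batch={run.batch_size}, metric={run.metric}")
```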
Xihu Lunjian (West Lake Sword Summit) | May 10: The Intelligent Agent Forum Storm Arrives! "Decoding the Intelligent Agent DNA": Application Innovation Practice and Security Governance
Beike Finance (Bei Ke Cai Jing) · 2025-04-28 08:37
Group 1
- The "2025 China Digital Valley · West Lake Forum" focuses on AI applications and security governance, gathering 200 policymakers, technology practitioners, and industry leaders to discuss breakthroughs in application scenarios, challenges of trust and safety, and new paradigms of agile governance [1][17]
- Three main themes are highlighted: breakthroughs in application scenarios, challenges in trust and safety, and agile governance [2]
- The event aims to witness the birth of a new paradigm in AI security governance, emphasizing safety as a competitive edge in the AI landscape [1][17]

Group 2
- The first theme addresses the evolution of intelligent agents, differences in application scenarios between China and abroad, and compatibility challenges in cross-industry collaboration [2]
- The second theme focuses on constructing a full-lifecycle protection system for intelligent agents, including key standards for trustworthiness and risk prevention across the training, deployment, and operation phases [2]
- The third theme discusses the need for flexible regulatory frameworks that can keep pace with the rapid iteration of AI technology, comparing the EU's regulatory sandbox with China's negative-list model [2]

Group 3
- The conference will unveil the "Safety Intelligent Agent Cube: Maturity Model Evaluation Research Report," the first domestic maturity assessment system for intelligent agent safety (see the illustrative scoring sketch after this summary) [3]
- Practical case studies will showcase AI's application potential in vertical fields and illustrate governance paths through customized safety solutions and technical tools [3]
- A signing ceremony for the first batch of intelligent agent ecosystem alliances will take place, promoting collaboration among industry leaders to co-create the future of intelligent agents [4]

Group 4
- The "Joint Creation of Intelligent Agent Ecological Partners Program" aims to recruit 30 leading enterprises, technology developers, and ecosystem service providers nationwide to build an open, collaborative, win-win intelligent agent ecosystem [6]
- The program emphasizes shared resources, technical support, and exploration of foundational large-model capabilities for industry applications [7][8]
- The initiative seeks to establish benchmark cases in key sectors such as finance, transportation, and healthcare, while also facilitating government technology project applications and the formulation of industry standards [12]
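A maturity assessment like the one described in Group 3 typically scores an agent's safety practices on several dimensions against graded levels. The report itself is not quoted here, so the sketch below is a generic illustration: the level names, dimensions, and scoring rule are assumptions, not the "Safety Intelligent Agent Cube" methodology.

```python
from dataclasses import dataclass

# Hypothetical maturity ladder; the actual report may define
# different levels and dimensions.
LEVELS = ["initial", "managed", "defined", "quantified", "optimizing"]


@dataclass
class DimensionScore:
    dimension: str   # e.g. a lifecycle phase: training, deployment, operation
    level: int       # index into LEVELS, 0..4


def overall_maturity(scores: list[DimensionScore]) -> str:
    """One common convention: overall maturity is capped by the weakest dimension."""
    return LEVELS[min(s.level for s in scores)]


scores = [
    DimensionScore("training", 3),
    DimensionScore("deployment", 2),
    DimensionScore("operation", 1),
]
print(overall_maturity(scores))  # -> "managed"
```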