AI Safety Governance
CAICT Releases the "Research Report on AI Safety Governance (2025)"
Renmin Caixun, January 9: The China Academy of Information and Communications Technology (CAICT) recently released the "Research Report on AI Safety Governance (2025): A Framework for Advancing Industry Practice in AI Safety Governance." The report points out that the AI industry faces multiple challenges across technology, applications, management, and collaborative governance. First, technological progress is widening new exposures in intrinsic security: the inherent characteristics of models give rise to complex problems of safety and controllability, and the intrinsic security situation is growing more severe. Compared with traditional cybersecurity, the attack-defense asymmetry in AI security has intensified, leaving technical security in a new "easy to attack, hard to defend" posture. Second, the expansion of applications is creating new derivative security problems: model applications bring external security challenges such as rapidly iterating application forms, abuse of the open-source ecosystem, and software supply-chain vulnerabilities, while also amplifying secondary risks that propagate at the individual, group, and societal levels. Third, the construction of organizational management systems faces new bottlenecks: the black-box nature of AI technology, the uncertainty of its applications, and the diversity of the industry chain are increasingly prominent, posing management challenges for the different organizations engaged in model development, system deployment, and application operation, as well as for organizations holding multiple roles at once. Fourth, multi-stakeholder collaborative governance mechanisms still need improvement: the industry broadly lacks joint effort in core governance links, unified standards have yet to take shape, and coordination mechanisms remain to be refined. ...
Futian Service Station of the Guangdong-Hong Kong-Macao Greater Bay Area Generative AI Safety Development Joint Laboratory Opens
Zhong Guo Jing Ji Wang· 2026-01-07 07:26
On January 6, the inauguration of the Futian Service Station of the Guangdong-Hong Kong-Macao Greater Bay Area Generative AI Safety Development Joint Laboratory, held together with a seminar on AI going overseas, took place in Shenzhen, hosted by the Joint Laboratory and the Futian District Committee of Shenzhen. The station is described as an innovative move by the Joint Laboratory to extend AI safety governance capacity down to the grassroots, and a key step in Shenzhen's effort to build an AI industry ecosystem that gives equal weight to safety and development.

The Joint Laboratory is an agile, flexible, and efficient governance consortium suited to the AI era. It is jointly led by the Office of the Cyberspace Affairs Commission of the Guangdong Provincial Party Committee and the Guangdong branch of the National Internet Emergency Center, with relevant Greater Bay Area government departments, enterprises, universities, and research institutes participating in its construction; its operating entity is based in the Shenzhen Park of the Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone. The Futian Service Station will handle guidance on large-model and algorithm filing, safety evaluation, compliance training, and policy outreach, providing enterprises with "one-stop, full-cycle, zero-distance" professional support.

"Data privacy issues arise not only in large language models but also in large vision models," said Wu Baoyuan. As a special data carrier and an important data asset, AI models and their associated data fall within the regulatory scope of cross-border data rules, and whether AI-generated content is compliant depends heavily on local laws and regulations, history and culture, religious customs, and regional characteristics, so customized safety guardrails for AI generation need to be configured according to the requirements of each destination market.

"The Futian Service Station is not simply a 'policy window,' but rather, for AI hardware enterprises, the key to resolving development pain points and opening up the commercialization chain ...
AI Safety Governance Capacity Extends to the Grassroots to Empower Enterprises with One-Stop Services
Nan Fang Du Shi Bao· 2026-01-06 23:10
Core Insights
- The Guangdong-Hong Kong-Macao Greater Bay Area has launched the Generative AI Safety Development Joint Laboratory, with a focus on enhancing AI safety and facilitating cross-border AI services [2][4][10].

Group 1: Event Overview
- The Guangdong-Hong Kong-Macao Greater Bay Area Generative AI Safety Development Joint Laboratory's Futian Service Station was inaugurated alongside an AI overseas expansion seminar in Shenzhen [2][10].
- The event marked the beginning of APEC "China Year" with the theme "Seizing APEC Opportunities, Setting Sail for New Blue Oceans" [2].

Group 2: Regional Advantages
- Zhuhai, as a key location due to its proximity to the Hong Kong-Zhuhai-Macao Bridge, is positioned to lead in the AI sector, ranking third in the province for generative AI model registrations [5][6].
- The establishment of the Zhuhai service station aims to leverage local advantages to create a cross-border AI safety service hub, reinforcing the security framework for the AI industry in the region [5][6].

Group 3: AI Safety Governance
- Zhuhai has developed a comprehensive AI governance system characterized by policy guidance, technical breakthroughs, and enterprise aggregation, with a focus on safety [8].
- The joint laboratory has issued certificates to seven enterprises for generative AI model registrations, highlighting the region's commitment to fostering a safe AI environment [8].

Group 4: Strategic Focus Areas
- The Zhuhai center and service station will concentrate on three main areas: strengthening cross-border safety collaboration, empowering the development of specialized industries, and building an open innovation ecosystem [9].
- Specific initiatives include exploring mutual recognition of AI regulatory rules between Guangdong and Macao, and establishing a platform for AI safety and industry scenario integration [9].

Group 5: AI Safety Trends
- The joint laboratory released the "2026 Annual AI Safety Top Ten Trends" report, emphasizing the shift from passive protection to proactive governance in AI safety [14][18].
- Key trends identified include the acceleration of global AI compliance frameworks, the increasing complexity of attack methods, and the need for a full lifecycle governance approach to AI safety [15][16][18].
Cyberspace Administration of China: Strengthen Cybersecurity Protection, Network Data Security Management, and AI Safety Governance
Mei Ri Jing Ji Xin Wen· 2026-01-06 16:07
NBD AI Express: The national conference of cyberspace administration directors was held in Beijing on January 5-6. The meeting stressed the need to hold firm to online positions and resolutely safeguard online political security, ideological security, and overall social stability; to deepen comprehensive measures, clean up all kinds of online disorder, and substantially improve the effectiveness of online ecosystem governance; to build a strong security barrier by strengthening cybersecurity protection, network data security management, and AI safety governance, and comprehensively advance the modernization of the national cybersecurity system and capabilities; to focus on empowerment and efficiency by pushing forward technological innovation in the cyberspace field, the building of the cyberspace industry ecosystem, information infrastructure construction, and informatization applications, using informatization to support high-quality development; to consolidate the rule-of-law foundation by coordinating legislation, law enforcement, and legal education in the online domain and advancing the rule of law in cyberspace; to deepen mutually beneficial cooperation by actively expanding international exchange and cooperation in cyberspace and building a community with a shared future in cyberspace; and to focus on comprehensive and strict governance by continuing to strengthen Party building and the cadre ranks within the cyberspace administration system. ...
Lu Wei of the Cybersecurity Association of China: AI Governance Should Be Classified and Tiered, with Strict Oversight of High-Risk Scenarios
Nan Fang Du Shi Bao· 2025-12-20 15:36
"比如,当前具身智能已从生产阶段进入到现实生活中,随着机器人逐步进入家庭,相关安全问题亟待 重视。"卢卫还提到,信任是安全治理的桥梁纽带,为人工智能发展凝聚社会共识。 12月18日,由南方都市报社、南都数字经济治理研究中心主办的"第九届啄木鸟数据治理论坛"在京举 行,主题聚焦 "AI安全边界:技术、信任与治理新秩序"。中国网络空间安全协会副理事长、人工智能 安全治理专业委员会主任卢卫为论坛致辞。他表示,AI治理应坚持"分类分级",比如对自动驾驶、智慧 医疗等高风险场景严格监管,对低风险应用则留出一定创新空间。 中国网络空间安全协会副理事长卢卫致辞。 随着大模型应用场景渗透到社会的每一个角落,曾经高居云端的AI智能已步入"寻常百姓家",越来越多 的普通人开始使用和掌控它。与此同时,新应用的不断涌现,也给网络生态治理带来全新挑战。卢卫认 为,建立健全面向未来的人工智能安全治理生态,需要依靠技术创新筑牢防线、依靠信任夯实凝聚共 识、依靠制度完善保驾护航,让技术、信任、制度三者形成合力。 首先,技术是安全治理的基础支撑,为人工智能发展筑牢安全底线。卢卫表示,人工智能技术日新月 异,发展离不开技术的创新和迭代,安全最终 ...
The Main Thread of AI Development Is Shifting: CAICT Offers These Assessments
AI is penetrating the core links of value creation

In 2025, industry applications of AI continued to deepen and land, while also exposing structural challenges in the deep waters of adoption. Wei Kai said that, based on an analysis of several hundred cases of large models applied in industry, their distribution along the value chain still follows a "smile curve" that is high at both ends and low in the middle, reflecting that the R&D/design and marketing/service links are more easily empowered by AI. A positive signal, however, is that the production and manufacturing link has shown a clear upward trend this year, with its share of cases rising from 19.9% last year to 25.9%.

[Photo caption: The 2026 CAICT Deep Observation Conference was held in Beijing on December 12-13, 2025. (Photo: Ran Lili)]

The 2026 CAICT Deep Observation Conference was held in Beijing on December 12-13, 2025, under the theme "Developing New Quality Productive Forces amid the AI Wave, Looking toward the 15th Five-Year Plan." On AI, the conference noted that continuous technological iteration has laid a solid foundation for the practical use of large models, and that AI agents have become the main form in which large-model applications land, showing the early shape of a "digital workforce." Meanwhile, AI is penetrating the core links of value creation, but the pace of that penetration is still constrained by the difficulty of acquiring industrial data, the degree to which process knowledge has been encapsulated, and extreme reliability requirements. In addition, embodied intelligence, the product of combining large models with robotics, has achieved breakthroughs in both cognitive and physical intelligence, but the model route, data paradigm, and optimal robot form factor remain unsettled, and large-scale deployment is still at an early ...
Experts Advise on Agile AI Governance: Take Generated Data Seriously and "Pre-Embed" Identifiers in Advance
Nan Fang Du Shi Bao· 2025-12-08 05:14
"人工智能创新与治理"圆桌交流。(图片来源:活动主办方) 交流中,四位专家针对人工智能的"敏捷治理"这一概念分享了见解。中国工程院院士、鹏城实验室主任 高文指出,以AlphaGo下围棋为例,其训练数据中80%都是生成式数据,正是通过生成数据与既有数据 的结合训练,才实现了对人类的超越。这一现象让我们必须重视生成式数据的治理问题,核心是要引导 其"向善"发展。他强调,"敏捷治理"绝非事后补救,而应注重事前预埋治理逻辑。例如对生成式数据的 管理上,需要进行标识,仅靠显式标识还不够——这类标识可能被人为移除,因此还需配套隐式标识, 确保其难以被随意篡改。基于此,相关立法机构与技术部门应提前介入治理过程,而非等问题发生后再 被动应对。 中国科学技术大学党委常委、副校长吴枫结合校园实践分享了看法。他提到,当前学校的信息化建设成 效显著,基础设施完备,但人工智能技术的落地应用仍存在较大空白。为此,校方曾尝试推动人工智能 在校园管理等场景中的应用,却发现各部门对数据安全高度敏感,普遍持谨慎态度,这让他意识到:人 工智能在校园的应用,必须先从后台场景起步,而非直接推进前端落地。随后,学校用了一年多时间, 重点推进数据治理工作 ...
Zhang Linshan: Strengthen the Deep Coupling of AI's Technological Advantages with the Industrial Foundation
Jing Ji Ri Bao· 2025-11-18 00:02
Core Insights
- The 20th Central Committee of the Communist Party of China emphasizes accelerating high-level technological self-reliance and innovation as a key driver for building a modern industrial system, highlighting the importance of integrating technological and industrial innovation [1][2].
- China's approach to transforming artificial intelligence (AI) into real productive forces is characterized by a strong connection between technological advantages and industrial foundations, creating a unique competitive edge [2][3].

Industry Overview
- China's core AI industry exceeded 900 billion yuan in 2024, with over 5,000 AI companies operating in the country, showcasing the rapid growth and integration of AI technologies across sectors [1].
- The transformation of AI into practical applications is evident in sectors such as manufacturing, logistics, and agriculture, where AI-driven systems have significantly improved efficiency and reduced costs [1][2].

Strategic Initiatives
- The government is urged to strengthen foundational computing power infrastructure, including the establishment of a national integrated computing network to ensure accessible and cost-effective high-performance computing resources for various industries [2].
- Emphasis is placed on the need for a cross-disciplinary talent cultivation system that combines AI technology with industry knowledge, facilitating talent flow between universities, research institutions, and enterprises [3].

Governance and Regulation
- The development of regulations, ethical guidelines, and standards tailored to AI's growth is crucial for balancing innovation and risk management, with a call for active participation in global AI governance [3].
Fastening the "Seat Belt" on AI Development
Jing Ji Ri Bao· 2025-10-17 21:41
Core Insights
- The release of the 2.0 version of the "Artificial Intelligence Security Governance Framework" aims to provide clear guidelines for managing AI risks across different industries, enhancing the operability of AI safety governance and contributing to global AI governance with a Chinese solution [1][2].

Industry Overview
- The AI industry in China has become a significant driver of economic growth, with its scale exceeding 900 billion yuan in 2024, representing a 24% year-on-year increase, and the number of enterprises surpassing 5,300, forming a relatively complete industrial system [1].
- AI is profoundly transforming traditional industries, with widespread applications in manufacturing, finance, and healthcare, showcasing significant potential in cost reduction, efficiency enhancement, and resource optimization [1].

Risks and Challenges
- Despite the benefits, AI also poses risks such as data breaches, model defects, and ethical issues, with approximately 74% of AI-related risk events from 2019 to 2024 directly linked to safety concerns [1].
- From June 2024 to July 2025, there were 59 publicly reported safety incidents globally, involving issues like forgery fraud, algorithmic discrimination, and autonomous driving decision errors, highlighting the urgent need for a scientific AI governance system [1].

Governance Principles
- The governance approach should focus on three key areas: governance principles, risk classification, and collaborative governance [2].
- The principle of inclusive prudence emphasizes the need for trustworthy AI that actively prevents uncontrolled risks, ensuring that AI remains under human control and aligns with fundamental human interests [2].

Risk Classification
- AI risks can be categorized into three types: inherent technical defects, interference during usage (e.g., hacking), and cascading effects (e.g., job market disruption) [2].
- Targeted governance measures should be implemented based on risk types, clarifying obligations for stakeholders at each stage [2].

Collaborative Governance
- A comprehensive governance strategy involves participation from government, enterprises, research institutions, and the public, utilizing regulations, technological safeguards, and ethical guidance for full-chain management of AI [3].
- Existing regulatory frameworks, such as the "Interim Measures for the Management of Generative AI Services," and academic proposals like the "AI Model Law 3.0," represent significant steps toward establishing a governance system with Chinese characteristics [3].

Conclusion
- Ensuring safety is a prerequisite for development, and governance is essential for innovation, as AI safety governance impacts social security, industrial development, and economic growth [4].
- A systematic and effective governance framework is necessary for AI to become a safe and vital engine for high-quality economic development [4].
AI Regulation Should Adapt to the Times (Micro Perspective)
Ren Min Ri Bao· 2025-10-15 22:17
Group 1
- The core viewpoint of the articles emphasizes the urgent need for governance and regulation of generative artificial intelligence (AI) technologies due to their rapid development and the associated risks of misuse and misinformation [1][2][3].
- As of 2024, the user base for generative AI products in China has reached 249 million, indicating a significant growth in the adoption of AI-generated content across various platforms [1].
- The implementation of the "Artificial Intelligence Generated Synthetic Content Identification Measures" mandates explicit and implicit labeling of AI-generated content, which is crucial for user awareness and content traceability [1].

Group 2
- The 20th Central Committee's Third Plenary Session proposed establishing a regulatory system for AI safety, highlighting the importance of legal frameworks such as the Cybersecurity Law and the Interim Measures for the Management of Generative AI Services [2].
- The Supreme People's Court's 2024 judicial interpretation on antitrust civil litigation aims to regulate competitive behaviors by internet platforms using AI, showcasing the judiciary's role in refining legislative principles [2].
- The establishment of an AI regulatory sandbox in Beijing aims to explore flexible governance and risk compensation rules, which could facilitate the industrial application of AI while managing compliance costs [3].

Group 3
- The articles stress that AI governance should not merely focus on restriction but should also foster an environment where technology can thrive within a well-defined regulatory framework [3].
- Future advancements in AI require a comprehensive rule system, ethical constraints, and enhanced governance effectiveness to ensure that the benefits of technological development are shared widely [3].