AI Safety Governance
Asia-Pacific Artificial Intelligence Security Research Association (APAIS) founded, focusing on large-model safety governance and risk prevention in intelligent finance
Cai Jing Wang· 2026-02-26 03:44
Against the backdrop of accelerating real-world deployment of generative AI and large-model technology, the Asia-Pacific Artificial Intelligence Security Research Association (APAIS) recently announced its formal establishment. The association will conduct systematic research and exchange on AI safety governance, model risk control, and data compliance systems, promoting the healthy development of the regional AI safety ecosystem.

In recent years, large models and agent technology have been rapidly adopted in finance, government, healthcare, industry, and other sectors. While raising efficiency and innovation capacity, they have also introduced new challenges: training-data security, privacy-inference risks, algorithmic bias, accountability for automated decisions, and tool-invocation safety. How to build systematic risk-prevention mechanisms while still promoting AI innovation has become a key concern for both industry and regulators.

According to the announcement, preparations for APAIS began in early 2025, jointly initiated by experts with long experience in AI safety research, enterprise application practice, and compliance management. During the preparatory period, the founding team held multiple rounds of closed-door discussions on large-model security risks, agent toolchain security, and data-governance systems, producing interim research reports. After a thorough survey of the state of AI safety development across the Asia-Pacific region, they decided to establish a dedicated research institution to advance cross-regional collaboration and standards building. …
AI faces multiple security risks; collaborative governance mechanisms still need improvement
第一财经· 2026-02-04 05:13
Core Viewpoint
- The article discusses the ongoing challenges and risks associated with artificial intelligence (AI), including data security, algorithmic bias, and model hallucination, emphasizing the need for effective governance and industry-wide collaboration to address these issues [2][4][5]

Group 1: AI Security Risks
- Major challenges facing large models include the generation of inappropriate content, which can lead to compliance risks and reputational damage [3]
- Carefully designed prompts can bypass safety boundaries and induce models to disclose sensitive information [3]
- Compliance risks arise from training data that may contain copyright, privacy, and intellectual-property issues, complicating legal responsibility [3]
- The unpredictability and lack of explainability in model outputs pose significant challenges for safety and control [3]
- Multi-modal models face risks when processing varied input types, potentially creating security blind spots [3]
- Over-reliance on computational resources can lead to service delays and rising costs, posing operational risks [3]

Group 2: AI Governance and Regulation
- The China Academy of Information and Communications Technology (CAICT) highlights the need for a systematic approach to AI risk management, noting that no single entity can address these challenges alone [5]
- The Chinese government is accelerating the establishment of an AI safety governance framework, focusing on security capabilities across algorithms, data resources, and application systems [6][7]
- Recent government initiatives aim to promote responsible innovation and ethical management in AI, including guidelines for AI-human interaction services [7]

Group 3: Ethical Considerations in AI
- Key ethical issues include explainability and transparency, alignment with human values, and a safety governance framework for responsible model iteration [8]
- The practice of AI explainability is still at an early stage; industry self-regulation and competitive improvement are called for [8]
- Future AI safety governance may involve measures that keep AI systems aligned with human values while providing transparency to users [8]
AI faces multiple security risks; collaborative governance mechanisms still need improvement
Di Yi Cai Jing· 2026-02-04 04:50
Core Insights
- The meeting highlighted unresolved issues in AI, including data security, algorithmic bias, model hallucination, emotional dependency, and data pollution [1][3]

Group 1: AI Security Risks
- Major challenges include the generation of inappropriate content, which can lead to compliance risks and reputational damage [2]
- Carefully designed prompts can bypass safety boundaries [2]
- Compliance risks arise from training data that may contain copyright, privacy, and intellectual-property issues, leading to legal ambiguities [2]
- The unpredictability and instability of model outputs pose challenges for safety protection, as similar questions can yield inconsistent results [2]
- Multi-modal input risks occur when models process various types of data, potentially creating security blind spots [2]
- Excessive reasoning and computational demands can lead to service delays and resource depletion [2]

Group 2: AI Governance and Regulation
- The China Academy of Information and Communications Technology (CAICT) emphasizes the need for a collaborative, industry-wide approach to AI risk management [4]
- The Chinese government is accelerating the establishment of an AI safety governance framework, with a focus on enhancing security capabilities [4]
- The State Council's directive of August 2025 aims to improve safety levels in model algorithms, data resources, and application systems [4]
- New regulations are being proposed to address ethical risks in AI, including guidelines for AI companionship services [5]

Group 3: Ethical Considerations in AI
- Public understanding of AI capabilities lags behind the rapid advancement of the technology [5]
- Key ethical issues include explainability and transparency, value alignment, safety governance frameworks, and the moral implications of AI [5][6]
- The future of AI safety governance may involve measures that keep AI aligned with human values and enhance human-AI collaboration [6]
China Academy of Information and Communications Technology releases the "AI Safety Governance Research Report (2025)"
Zheng Quan Shi Bao Wang· 2026-01-09 06:49
Core Viewpoint
- The report by the China Academy of Information and Communications Technology highlights multiple challenges facing the artificial intelligence industry: technical, application, management, and collaborative-governance issues [1]

Group 1: Technical Challenges
- Technological development has widened internal security vulnerabilities, and the situation is increasingly severe; inherent model characteristics make security control complex, and the attack-defense asymmetry in AI security is becoming more pronounced [1]
- Compared with traditional cybersecurity, AI security presents a new situation that is "easy to attack, hard to defend" [1]

Group 2: Application Challenges
- Expanding applications bring new security challenges, including fast-iterating application forms, misuse of open-source ecosystems, and software supply-chain vulnerabilities; these external security issues also amplify secondary risks at the individual, group, and societal levels [1]

Group 3: Management Challenges
- Building organizational management systems faces new bottlenecks due to the black-box nature of AI technology, application uncertainty, and the diversity of the industry chain, creating management challenges for the organizations involved in model development, system deployment, and application operation [1]

Group 4: Collaborative Governance Challenges
- Collaboration in core governance areas remains insufficient: unified standards have not yet been established, and collaborative mechanisms still need improvement [1]
Futian service station of the Guangdong-Hong Kong-Macao Greater Bay Area Generative AI Safety Development Joint Laboratory opens
Zhong Guo Jing Ji Wang· 2026-01-07 07:26
Core Viewpoint
- The establishment of the Guangdong-Hong Kong-Macao Greater Bay Area Generative Artificial Intelligence Safety Development Joint Laboratory and its service station in Futian marks a significant step in promoting AI safety governance and development in Shenzhen, aiming to create a balanced AI industry ecosystem centered on safety and growth [1]

Group 1: Joint Laboratory and Service Station
- The joint laboratory is a collaborative governance body involving government departments, enterprises, universities, and research institutions, aimed at providing agile, efficient governance for the AI era [1]
- The Futian service station will offer comprehensive support services, including guidance on model and algorithm filing, safety assessments, compliance training, and policy dissemination, providing "one-stop, full-cycle, zero-distance" professional support for enterprises [1]

Group 2: AI Empowerment Center
- The Futian AI Empowerment Center has built a closed-loop service system of pre-review, guidance, connection, and tracking, having served over 130 enterprises and assisted in initiating 71 filing processes [2]
- The center aims to enhance compliance and safety services, facilitate cross-border collaboration with Hong Kong and Macao, and leverage local resources to validate and promote cutting-edge AI technologies [2]

Group 3: Industry Support and Collaboration
- The Futian service station serves as a critical support platform for AI hardware companies, helping them navigate compliance requirements and reduce trial-and-error costs, thereby fostering innovation and market entry [3]
- The district has developed a comprehensive service network, integrating resources to support AI enterprises in technology upgrades, capital connections, and application scenarios, promoting a virtuous cycle of technology research, scenario validation, and commercialization [4]

Group 4: International AI Governance
- The large-model industry is at a crucial phase of scaling and global expansion, with discussions focusing on international rules and the data-related legal risks of AI going abroad [5]
- Experts emphasized a governance approach that balances technological logic with geopolitical factors, advocating continuous policy refinement to strengthen domestic compliance capabilities and help shape international regulation [6]
AI safety governance reaches the grassroots: one-stop services empower enterprises
Nan Fang Du Shi Bao· 2026-01-06 23:10
Core Insights
- The Guangdong-Hong Kong-Macao Greater Bay Area has launched the Generative AI Safety Development Joint Laboratory, focused on enhancing AI safety and facilitating cross-border AI services [2][4][10]

Group 1: Event Overview
- The joint laboratory's Futian service station was inaugurated alongside an AI overseas-expansion seminar in Shenzhen [2][10]
- The event marked the beginning of APEC "China Year," under the theme "Seizing APEC Opportunities, Setting Sail for New Blue Oceans" [2]

Group 2: Regional Advantages
- Zhuhai, a key location due to its proximity to the Hong Kong-Zhuhai-Macao Bridge, is positioned to lead in the AI sector, ranking third in the province for generative AI model registrations [5][6]
- The Zhuhai service station aims to leverage local advantages to create a cross-border AI safety service hub, reinforcing the security framework for the region's AI industry [5][6]

Group 3: AI Safety Governance
- Zhuhai has developed a comprehensive AI governance system characterized by policy guidance, technical breakthroughs, and enterprise aggregation, with safety as the focus [8]
- The joint laboratory issued certificates to seven enterprises for generative AI model registrations, underscoring the region's commitment to fostering a safe AI environment [8]

Group 4: Strategic Focus Areas
- The Zhuhai center and service station will concentrate on three areas: strengthening cross-border safety collaboration, empowering specialized industries, and building an open innovation ecosystem [9]
- Specific initiatives include exploring mutual recognition of AI regulatory rules between Guangdong and Macao, and building a platform that integrates AI safety with industry scenarios [9]

Group 5: AI Safety Trends
- The joint laboratory released the "2026 Annual AI Safety Top Ten Trends" report, emphasizing the shift from passive protection to proactive governance in AI safety [14][18]
- Key trends include the acceleration of global AI compliance frameworks, increasingly complex attack methods, and the need for full-lifecycle governance of AI safety [15][16][18]
Office of the Central Cyberspace Affairs Commission: strengthen cybersecurity protection, network data security management, and AI safety governance
Mei Ri Jing Ji Xin Wen· 2026-01-06 16:07
Core Viewpoint
- The meeting emphasized maintaining online political security, ideological security, and social stability while improving the effectiveness of internet governance [1]

Group 1: Online Security and Governance
- The meeting highlighted the need to strengthen online security defenses and improve the management of network data security and artificial intelligence governance [1]
- It called for comprehensively advancing the national cybersecurity system and modernizing related capabilities [1]

Group 2: Technological Innovation and Infrastructure
- The meeting focused on promoting technological innovation in the internet sector and building an ecosystem for the internet industry [1]
- It stressed developing information infrastructure and applications to support high-quality development [1]

Group 3: Legal Framework and International Cooperation
- The meeting underscored advancing legislation and law enforcement in the online space, strengthening the rule of law in cyberspace [1]
- It encouraged expanding international exchange and cooperation in cyberspace to build a community with a shared future [1]

Group 4: Party Building and Workforce Development
- The meeting emphasized continuously strengthening party building within the internet information system and developing its workforce [1]
Lu Wei of the Cybersecurity Association of China: AI governance should be tiered by category, with strict oversight of high-risk scenarios
Nan Fang Du Shi Bao· 2025-12-20 15:36
Core Viewpoint
- The forum, themed "AI Security Boundaries: Technology, Trust, and a New Governance Order," emphasized a structured approach to AI governance that balances innovation with safety [2][4]

Group 1: Technology as a Foundation
- Technology is the foundational support for AI safety governance, requiring continuous innovation and iteration to ensure security [5]
- Measures include improving model robustness through adversarial training and protecting data with differential privacy [5]
- Security should be designed into AI systems from the outset rather than patched in after deployment [5]

Group 2: Trust as a Bridge
- The spread of AI is fundamentally a process of earning societal trust, which is essential for deep applications in critical sectors such as healthcare and education [6]
- Building trust requires greater transparency in algorithmic decision-making and attention to issues such as privacy and fairness [6]
- Public trust is vital for integrating AI technologies into everyday life, particularly in sensitive areas [6]

Group 3: Institutional Framework
- A robust governance framework combining laws, standards, and ethical guidelines is needed to safeguard AI development [6]
- Governance should be tiered: stricter regulation for high-risk applications such as autonomous driving and smart healthcare, with more room for innovation in lower-risk areas [6]
- Cross-departmental and international cooperation is essential for tackling global AI safety challenges [6][7]

Group 4: Agility in Governance
- Governance must remain agile and adapt to the rapid evolution of AI technologies, which requires dynamic risk-assessment mechanisms [7]
- The industry association is committed to participating actively in AI safety governance, recognizing the challenges posed by new technologies [7]
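One of the data-protection measures named above, differential privacy, can be illustrated with a minimal sketch. The following is not from the forum; it is an assumed example of the standard Laplace mechanism for a private count query, where the function names, dataset, and epsilon value are chosen purely for illustration.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling: u ~ Uniform(-0.5, 0.5), X = -b * sgn(u) * ln(1 - 2|u|)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records: list, predicate, epsilon: float) -> float:
    """Differentially private count. A count query has sensitivity 1
    (adding/removing one record changes it by at most 1), so Laplace
    noise with scale 1/epsilon satisfies epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical use: publish a flagged-transaction count without exposing
# whether any individual record is in the dataset.
data = [{"amount": a, "flagged": a > 900} for a in range(0, 1200, 100)]
noisy = dp_count(data, lambda r: r["flagged"], epsilon=1.0)
```

A smaller epsilon means more noise and stronger privacy; the released value stays close to the true count in expectation but no single record is identifiable from it.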
The main thread of AI development is shifting: CAICT offers these assessments
21 Shi Ji Jing Ji Bao Dao· 2025-12-14 00:38
Group 1
- The core theme of the China Academy of Information and Communications Technology's 2026 deep-observation report is the development of new productive forces in the context of the "15th Five-Year Plan" and the artificial intelligence wave [1]
- Continuous technology iteration lays a solid foundation for the practical application of large models; intelligent agents are emerging as the primary application form, an early form of "digital labor" [2][3]
- In 2024, China's core AI industry is expected to exceed 900 billion yuan, growing at 24%, and is projected to surpass 1.2 trillion yuan in 2025 [2]

Group 2
- AI is penetrating core value-creation segments, with manufacturing's share of AI applications rising from 19.9% to 25.9% [4][6]
- The pace of AI penetration is still limited by challenges such as the difficulty of obtaining industrial data and the degree to which process knowledge has been encapsulated [6]
- Developing the intelligent economy is seen as a necessary path toward new growth momentum, with the goal of making it a significant growth driver by 2030 [7][8]

Group 3
- "Embodied intelligence" is developing rapidly, with over 40 billion yuan in financing and more than 350 companies in the industry, although the optimal model and robot form remain undefined [9]
- Debates continue over model routes, data-training paradigms, and robot form factors, indicating the industry is still at an early stage of large-scale deployment [10][11]
- The Central Economic Work Conference emphasized deepening and expanding the "AI+" initiative, highlighting the importance of governance alongside technological development [12][13]
Experts advise on agile AI governance: pay attention to generative data and "pre-embed" labels
Nan Fang Du Shi Bao· 2025-12-08 05:14
Group 1
- The forum, themed "AI Innovation and Governance," discussed the advantages and challenges of AI development in China, including computing-power support, talent cultivation, corporate responsibility, and agile-governance mechanisms [1][3]
- Experts emphasized "agile governance" of AI, highlighting the need to manage generative data proactively to ensure its ethical development, rather than reacting after problems arise [3][4]
- The forum called for a new AI ecosystem that is "safe, trustworthy, innovative, and inclusive," aligning technological progress with human welfare [5][7]

Group 2
- The Vice President of the University of Science and Technology of China noted significant achievements in information-technology infrastructure but identified a gap in the practical application of AI, stressing the need for data governance before deploying AI in sensitive environments such as schools [4]
- Representatives from Baidu and JD.com agreed that "agile governance" means developing and governing AI technologies simultaneously, so that governance keeps pace with technological advances [4][5]
- Baidu's CTO outlined a comprehensive AI safety governance approach that includes strict data screening and model safety measures, while JD.com focuses on three levels of security: data protection, content labeling, and monitoring of sensitive content [5][6]
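The "pre-embedded label" idea for generative data can be sketched in code. This is not any vendor's actual labeling scheme; it is an assumed, minimal illustration in which generated content is wrapped with provenance metadata plus an HMAC tag, so downstream tools can verify that the label was attached by the key holder and has not been altered. All field names and the key are hypothetical.

```python
import hashlib
import hmac
import json

def embed_label(content: str, generator_id: str, secret_key: bytes) -> dict:
    """Attach a provenance label to generated text: metadata plus an HMAC
    over a canonical serialization, so the label cannot be silently altered
    or forged without the key. Field names here are illustrative only."""
    meta = {"generator": generator_id, "ai_generated": True}
    payload = json.dumps({"content": content, "meta": meta}, sort_keys=True)
    tag = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "meta": meta, "tag": tag}

def verify_label(record: dict, secret_key: bytes) -> bool:
    """Recompute the HMAC from the record and compare in constant time."""
    payload = json.dumps({"content": record["content"], "meta": record["meta"]},
                         sort_keys=True)
    expected = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

key = b"demo-key"  # placeholder; a real deployment would manage keys securely
rec = embed_label("model output ...", "demo-model-v1", key)
assert verify_label(rec, key)
```

Embedding the label at generation time, rather than trying to detect AI content after the fact, is the "pre-embed" approach the headline refers to: provenance travels with the data from the moment it is created.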