Core Insights
- The Chinese Artificial Intelligence Industry Development Alliance has launched the "Artificial Intelligence Safety Commitment" to strengthen AI safety governance, marking a significant step toward systematic and transparent practices in the industry [1][3]
- The initiative aims to address the growing safety risks associated with rapid AI development and is aligned with the "Global AI Governance Initiative" [3]

Group 1: Initiative Overview
- The "Artificial Intelligence Safety Commitment" was released during the 15th plenary session of the Alliance, involving key leaders from the Ministry of Industry and Information Technology and representatives from major tech companies [1]
- The commitment emphasizes a human-centered, benevolent approach to AI, contributing a Chinese solution to global AI governance [1]

Group 2: Participation and Engagement
- To date, 22 companies have signed the commitment, 18 of which have actively disclosed their safety measures [3]
- The Alliance encourages voluntary participation and self-regulation among enterprises to improve safety practices [3]

Group 3: Key Focus Areas
- The initiative outlines six core commitment areas: risk management, model safety, data security, infrastructure security, transparency, and cutting-edge safety research [3]
- A total of 20 key safety labels have been identified, covering aspects such as safety team organization, risk management plans, safety risk baselines, red team testing methods, and emergency response mechanisms [3]
- The Alliance has publicly shared 43 typical practices from disclosing companies to promote concrete action in AI safety governance [3]
18 Companies Disclose Implementation Results of the "Artificial Intelligence Safety Commitment," Advancing AI Safety Governance
Huan Qiu Wang·2025-07-17 10:34