Core Viewpoint
- The article examines the evolving landscape of artificial intelligence (AI) governance, comparing how the EU, California, and China regulate AI models, particularly general-purpose and frontier models, and how each balances innovation with safety and control.

Group 1: EU Approach
- The EU has established a risk-tiered governance framework that sorts AI systems into four levels: prohibited, high-risk, limited-risk, and minimal-risk, with stricter obligations at higher tiers [2][3]
- For models, the EU distinguishes between those with and without "systemic risk": all model providers must disclose technical documentation and training summaries, while providers of systemic-risk models must additionally conduct model evaluations and implement mitigation measures [2][3]
- Because model-level and application-level standards overlap, the framework is complex and burdensome, prompting the European Commission to push for simplification of the related regulations [3][4]

Group 2: California Approach
- California's SB 53 takes a narrower regulatory scope, targeting "frontier developers" who train models using more than 10^26 FLOPs, and imposes lighter obligations than the EU [4][5]
- Obligations under SB 53 are limited to basic transparency requirements, such as publishing information on a website and maintaining communication channels, in contrast to the EU's extensive documentation requirements [4][5]
- California's legislative approach aims to promote industry growth and competitiveness while avoiding excessive regulatory constraints [5]

Group 3: China Approach
- China's governance is application-driven: it regulates models indirectly through the practical services built on them rather than targeting the models themselves [6][7]
- The regulatory framework has evolved from algorithm governance to model governance, establishing institutional constraints through regulations that address algorithmic risks and model training [7][8]
- China's approach emphasizes risk identification and management, categorizing risks into internal, application, and derivative risks, with a clear distinction between model risks and application risks [8][9]

Group 4: Commonalities and Future Directions
- Despite their differing approaches, the EU, California, and China all tend toward "flexible governance" and industry-led initiatives that grant greater compliance autonomy [9][10]
- All three regions recognize the need to build an assessment ecosystem to address uncertainty about model capabilities, pointing toward community-driven evaluation mechanisms [10][11]
- Transparency serves as a core governance tool in all three regions, enabling control without foreclosing innovation, with each region developing its own transparency framework [11]
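The SB 53 compute threshold described above can be made concrete with a rough back-of-envelope sketch. The 6*N*D approximation for dense-transformer training compute (N = parameters, D = training tokens) and the illustrative model sizes below are assumptions for illustration, not figures from the article; only the 10^26 FLOP threshold comes from the text.

```python
# Sketch: checking whether a training run crosses SB 53's 10^26 FLOP
# threshold for "frontier developers" (threshold per the article).
# Assumption: the common 6*N*D heuristic for training compute.

SB53_THRESHOLD_FLOPS = 1e26


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens


def is_frontier_model(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the SB 53 threshold."""
    return estimated_training_flops(n_params, n_tokens) > SB53_THRESHOLD_FLOPS


# Hypothetical runs (illustrative numbers, not real models):
print(is_frontier_model(1e12, 2e13))  # ~1.2e26 FLOPs -> True
print(is_frontier_model(7e9, 2e12))   # ~8.4e22 FLOPs -> False
```

Under this heuristic, only models trained at roughly trillion-parameter scale on tens of trillions of tokens would fall within SB 53's scope, which illustrates how much narrower its reach is than the EU's all-provider disclosure duties.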
On Model Governance: Differences and Consensus among China, the US, and Europe
36Kr·2025-11-14 11:07