Core Insights
- The global attitude toward AI has shifted from "apocalyptic fears" to "releasing real industrial potential" by 2025, indicating a significant change in governance priorities [1][3]

Group 1: Macro Landscape
- The Paris AI Action Summit in February 2025 marked a shift from "safety anxiety" to "innovation and action," reflecting a restructuring of global governance logic [2]
- The EU is adjusting its regulatory approach by introducing the "Digital Omnibus" proposal to simplify rules and delay high-risk obligations in order to enhance industrial competitiveness [2]
- The U.S. is moving toward deregulation, with the Trump administration focusing on a unified federal framework to eliminate barriers for the industry [2]
- China emphasizes a pragmatic approach, balancing specific regulatory measures with an application-oriented strategy to create a layered governance system [2]

Group 2: Data Governance
- The AI industry faces a structural shortage of high-quality data, driving a search for synthetic data as a key solution [4]
- Legislative efforts in the EU and Japan are establishing frameworks for "text and data mining," while U.S. court rulings are leaning toward recognizing the use of legally acquired books for training as "fair use" [4]
- Future regulations may evolve beyond simple prohibitions toward a commercially viable mechanism that balances rights protection with technological advancement [4]

Group 3: Model Governance
- The U.S. is shifting from comprehensive coverage to targeted regulation, exemplified by California's SB 53 law, which imposes transparency requirements on only a few large-scale models [7]
- The EU's complex regulatory framework faces challenges from high compliance costs, prompting frequent legislative adjustments [7]
- China's "scene slicing" strategy applies penetrating regulation across specific AI services, creating a governance system spanning data to application [7]
- The rise of open-source models such as DeepSeek-R1 is reshaping the global AI landscape, highlighting the importance of establishing a "safe harbor" for contributors [8]

Group 4: Application Scenarios
- The transition of AI from the cloud to real-world applications raises new privacy challenges, particularly with intelligent agents that require extensive permissions [10]
- AI's evolution into emotional companions introduces risks of emotional dependency, prompting diverse regulatory approaches to protect vulnerable groups [10]
- The struggle against deepfakes highlights the limitations of watermarking technologies, suggesting a focus on high-risk scenarios for precise governance [11]

Group 5: Future Outlook
- The discussion around AI consciousness and welfare is evolving from philosophical debate to scientific validation, indicating a potential need for governance frameworks that address AI as a rights-bearing entity [13]
2025 AI Governance Report: A Return to Realism
36Kr · 2026-01-22 11:37