Core Viewpoint
- By 2025, the global attitude towards AI has shifted from "apocalyptic fear" to "releasing real industrial potential," indicating a significant change in AI governance priorities [2].

Macro Landscape
- The emphasis is on development, with a "soft landing" for safety [3].
- The Paris AI Action Summit in February 2025 marked a shift from "safety anxiety" to "innovation and action," reflecting a restructuring of global governance logic [4].
- The EU is adjusting its regulatory stance, introducing the "Digital Omnibus" proposal to simplify rules and delay high-risk obligations in order to enhance industrial competitiveness [4].
- The U.S. is moving towards deregulation, with the Trump administration pursuing a unified federal framework to eliminate barriers for the industry [4].
- China is adopting a pragmatic approach, emphasizing application-oriented governance while maintaining specific regulatory measures [4][5].

Data Governance
- By 2025, the AI industry faces a severe "structural shortage" of high-quality data, making synthetic data a key path to technological breakthroughs [6][7].
- Legislative efforts in the EU and Japan are establishing frameworks for "text and data mining," while U.S. court rulings are leaning towards recognizing the use of legally acquired books for training as "fair use" [7].

Model Governance
- The U.S. is shifting from comprehensive coverage to a focus on major models, as seen in California's SB 53 bill, which reduces stringent requirements for developers [10].
- The EU is attempting to build a detailed regulatory system but faces high compliance costs, necessitating frequent legislative adjustments [10].
- China is implementing a "scene slicing" strategy for governance, focusing on specific services and building a layered governance system from data to application [10].
Application Scenarios
- The emergence of edge AI agents poses significant privacy challenges: they require extensive permissions that blur data boundaries and raise security concerns [12].
- As AI evolves from productivity tool to emotional companion, new risks of emotional dependency are emerging, prompting diverse regulatory approaches to protect vulnerable groups [12].
- AI watermarking technology struggles to prevent misuse effectively, highlighting the need for targeted governance strategies in high-risk scenarios [13].

Outlook
- The discussion of AI consciousness and welfare is evolving from philosophical debate to scientific validation, raising questions about the future of human-AI relationships and governance [18].
2025 AI Governance Report: A Return to Realism
Tencent Research Institute · 2026-01-22 08:44