Big Names Gather! A Preview of the 9th Woodpecker Data Governance Forum on AI Safety Boundaries
Nan Fang Du Shi Bao·2025-12-16 03:35

Core Insights
- The wave of generative artificial intelligence has moved from a phase of technological enthusiasm into a period of deep application and reflection on safety boundaries [1]
- The upcoming "Woodpecker Data Governance Forum" will address the core theme of "AI Safety Boundaries: Technology, Trust, and a New Governance Order" [1]

Group 1: Forum Overview
- The forum will take place on December 18 in Beijing, featuring authoritative policy interpretations, cutting-edge legal and ethical discussions, and practical industry observations [1]
- Keynote speeches will be delivered by prominent figures, including Lu Wei, who has previously warned of the need for proactive assessment of AI's safety and ethical risks [1][2]

Group 2: Expert Contributions
- Four experts will share insights during the keynote session, covering AI governance philosophy, copyright issues raised by generative AI, and practical judicial experience with AI-related disputes [2]
- A report titled "Generative AI Application: Transparency Assessment and Case Analysis Report (2025)" will be released, surveying the current state of AI applications' transparency and accountability [2]

Group 3: Technical Demonstrations
- A live demonstration by the technical lead of GEEKCON will showcase the physical security challenges AI poses when embedded in robots and other devices [3]
- A roundtable discussion will focus on the new ethical and safety governance challenges arising from the development of AI technology [3]

Group 4: Forum Mission
- Since its inception in 2017, the "Woodpecker Data Governance Forum" has aimed to build a diverse dialogue platform that promotes effective governance in the digital economy [4]
- The forum seeks to contribute wisdom toward building a trustworthy, accountable, and effective governance order for AI [4]