Core Viewpoint
- The forum focused on the governance of AI, emphasizing the need for a balanced approach that aligns AI development with human values and societal norms, as articulated in Wang Jiangping's concept of "Shangshan AI" [2][10].

Group 1: AI Governance Challenges
- The transition of AI systems from "technical tools" to "intelligent entities" is driving exponential growth in both positive and negative impacts, while governance progress remains limited [5].
- AI safety risks are increasingly manifesting across domains such as content ecosystems and physical safety, potentially affecting economic and social stability [3][5].
- The complexity and dynamism of human values make it difficult to define a universal, actionable objective function for AI value alignment [6].

Group 2: Human-Machine Alignment
- Human-machine alignment is identified as a core issue of the intelligent era, aiming to keep AI systems' goals and outputs consistent with human values and societal norms [5][6].
- Current mainstream models use techniques such as Reinforcement Learning from Human Feedback (RLHF) and Retrieval-Augmented Generation (RAG) to better align outputs with human preferences (an illustrative sketch follows this summary) [5].

Group 3: Cultural and Value Alignment
- The concept of "sovereign AI" has gained traction, highlighting the importance of aligning AI with national cultural and economic interests [7].
- Value alignment should adopt a multi-layered structure, following a "common baseline + diverse branches + dynamic evolution" principle [8].

Group 4: Addressing the AI Divide
- Disparities in AI technology access and application across countries and groups raise concerns about an "intelligent divide" [11].
- To bridge the AI divide, strategies such as open-source sharing, technology transfer, and capacity building are recommended [12].

Group 5: Regulatory Perspectives
- The debate over strict versus lenient regulation of AI development continues, with calls to establish firm ethical boundaries while leaving room for innovation [12].
- A potential AI investment bubble, driven by concentrated capital and unclear business models, makes it necessary to focus on genuine societal needs to mitigate risks [13].

Group 6: Practical Implementation of Governance
- Implementing the "Shangshan AI" philosophy requires collaborative effort from developers, regulators, and society to prioritize social value and inclusivity in AI technology [13].
- The governance approach should remain flexible and open, encouraging experimentation while maintaining clear ethical and safety boundaries [13].
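The summary above mentions RLHF only by name. As a rough illustration of what the first stage of RLHF involves, the sketch below trains a tiny reward model on pairwise human preferences so that responses annotators preferred score higher than rejected ones. This is a minimal, generic sketch in PyTorch, not the forum's or any specific system's method; `RewardModel`, `FEATURE_DIM`, and the toy tensors are hypothetical placeholders standing in for real response embeddings.

```python
# Minimal, illustrative sketch of RLHF's reward-modeling stage.
# All names and data here are hypothetical; real systems score responses
# with a large language-model backbone rather than a linear layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEATURE_DIM = 16  # assumed size of a response embedding


class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)


def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push preferred responses' rewards above rejected ones'."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel(FEATURE_DIM)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Toy stand-ins for embeddings of human-preferred vs. rejected responses.
    chosen = torch.randn(32, FEATURE_DIM) + 0.5
    rejected = torch.randn(32, FEATURE_DIM) - 0.5

    for step in range(100):
        loss = preference_loss(model(chosen), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final preference loss: {loss.item():.4f}")
```

In a full RLHF pipeline, the learned reward would then guide fine-tuning of the language model itself, typically with a policy-optimization method such as PPO; RAG, by contrast, grounds outputs by retrieving reference documents at inference time rather than changing the training objective.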
Wang Jiangping: Using the Eastern wisdom of "Shangshan AI" to balance the aggressiveness and anxiety of technological development
Nan Fang Du Shi Bao·2025-12-20 05:26