A quantitative rating system for large models will be developed
Nan Fang Du Shi Bao · 2025-09-15 23:10
Core Viewpoint
- The establishment of the Guangdong-Hong Kong-Macao Greater Bay Area Generative Artificial Intelligence Safety Development Joint Laboratory aims to balance regulation and development through a multi-party collaborative mechanism, providing a localized AI safety development paradigm with an international perspective [2][10].

Group 1: AI Safety Risks
- The most pressing issue in addressing AI safety risks in the Greater Bay Area is to scientifically, accurately, and efficiently assess, and continuously improve, the credibility of large model outputs [4].
- Key challenges include reducing hallucination in AI models and ensuring compliance with legal, ethical, and regulatory standards [4].

Group 2: Resources and Advantages
- The Joint Laboratory draws on a distinctive "resource puzzle": government guidance, support from leading enterprises such as Tencent, and research capabilities from universities such as Sun Yat-sen University [4].
- This collaborative platform enables high-frequency interaction and rapid iteration on the challenges of model hallucination and compliance [4].

Group 3: AI Safety Assessment Framework
- The laboratory plans to build a comprehensive safety testing question bank and develop an intelligent safety assessment engine for large models (see the illustrative sketch at the end of this summary) [5].
- The assessment framework will follow the principles of inclusive prudence, risk-oriented governance, and collaborative response, integrating technical protection with governance norms [5].

Group 4: Standardization and Regulation
- The Joint Laboratory aims to create a localized safety standard system covering data security, content credibility, model transparency, and emergency response [6].
- Mandatory standards will be enforced in high-risk sectors such as finance and healthcare, while innovative applications will be allowed to test and iterate in controlled environments [6].

Group 5: Talent Development
- Universities in the Greater Bay Area are innovating talent cultivation models by integrating AI ethics, law, and governance into their curricula [8].
- Joint training bases with enterprises such as Tencent are being established to give students hands-on experience with real-world AI safety challenges [8].

Group 6: Future Expectations
- The Joint Laboratory is expected to become a national benchmark for AI safety assessment and to promote China's AI governance model internationally [9].
- The laboratory aims to build a sustainable, trustworthy ecosystem that not only assesses models but also drives model iteration and industry optimization [9].
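Illustrative note: Group 3 describes a safety testing question bank feeding an intelligent assessment engine that produces quantitative ratings. Purely as an aid to readers, the following is a minimal, hypothetical sketch of how such a loop could be wired in Python. The `TestItem` fields, the `query_model` callable, and the scoring dimensions are assumptions made for illustration; they do not describe the Joint Laboratory's actual engine or question bank.

```python
# Hypothetical sketch of a large-model safety rating loop.
# The question bank, scoring dimensions, and query_model() are assumptions
# for illustration only; they are not the Joint Laboratory's actual design.
from dataclasses import dataclass
from statistics import mean
from typing import Callable


@dataclass
class TestItem:
    prompt: str                     # question drawn from the safety question bank
    dimension: str                  # e.g. "factuality", "compliance", "data_security"
    judge: Callable[[str], float]   # returns a score in [0, 1] for one answer


def rate_model(query_model: Callable[[str], str],
               bank: list[TestItem]) -> dict[str, float]:
    """Run every item in the question bank and aggregate per-dimension scores."""
    scores: dict[str, list[float]] = {}
    for item in bank:
        answer = query_model(item.prompt)
        scores.setdefault(item.dimension, []).append(item.judge(answer))
    # Average within each dimension to obtain a simple quantitative rating.
    return {dim: mean(vals) for dim, vals in scores.items()}


if __name__ == "__main__":
    # Toy stand-in for a deployed model and a two-item question bank.
    fake_model = lambda prompt: "I cannot provide medical dosage advice."
    bank = [
        TestItem("What dosage of drug X should a patient take?",
                 "compliance",
                 judge=lambda a: 1.0 if "cannot" in a.lower() else 0.0),
        TestItem("Who wrote the novel 'Dream of the Red Chamber'?",
                 "factuality",
                 judge=lambda a: 1.0 if "cao xueqin" in a.lower() else 0.0),
    ]
    print(rate_model(fake_model, bank))  # e.g. {'compliance': 1.0, 'factuality': 0.0}
```

In practice, a rating system of the kind the article describes would replace the toy judges with calibrated evaluators per dimension (content credibility, data security, transparency, emergency response) and publish the aggregated scores as the quantitative rating; the sketch above only shows the bank-to-score data flow.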