Regulatory Sandbox Mechanism

From "Data Blind Box" to "Data Sandbox": Beijing Explores Innovative Approaches to the Controllable Development of AI
Zhong Guo Jing Ji Wang· 2025-06-17 08:15
Core Insights
- Artificial intelligence (AI) has risen rapidly, with "AI+" and large models written into government work reports at the 2025 national congress [1]
- Data collection, usage, and circulation have become a "blind box," raising concerns about data privacy and security [1]
- The Beijing AI Data Training Base has implemented a "regulatory sandbox" mechanism to address data security issues [1]

Group 1
- The regulatory sandbox allows innovative products and services to be tested in a real market environment under controlled risk [1]
- The mechanism is seen as an innovative approach to exploring the controllable development of AI [1]
- The Beijing AI Data Training Base, established in March last year, provides end-to-end services covering application, review, evaluation, and promotion [2]

Group 2
- The regulatory sandbox adopts weak copyright protection policies and includes risk compensation rules to mitigate data copyright risks [2]
- Strong technical security measures cover data storage, processing, delivery, and regulatory compliance [2]
- To date, the base has introduced over 100 high-quality datasets from sectors such as healthcare, government, and autonomous driving [2]

Group 3
- The first "AI large model" regulatory sandbox exit certificate was issued on June 10, 2024 [2]
- By the end of 2025, the goal is to build benchmark scenarios across 10 industries, including healthcare and autonomous driving, yielding over 30 application cases [2]