Group 1
- The core viewpoint emphasizes the need for responsible innovation in artificial intelligence (AI) to address emerging risks from human-like interactions, which could threaten citizens' rights and social ethics [1][2]
- The draft regulation aims to promote the orderly development of humanized AI services and to establish a governance framework centered on human welfare [1][2]

Group 2
- The regulation is anchored in national strategic directions, highlighting the principles of responsible innovation, scientific legislation, and a multi-dimensional governance system to ensure the healthy development of AI [2][5]
- It identifies key risks stemming from the characteristics of "humanization" and "emotional interaction," focusing on the blurred boundary between humans and machines [3][4]

Group 3
- A transparent identity system is proposed to mitigate risks such as cognitive confusion and trust erosion, safeguarding users' rights to information and choice [3][4]
- Special attention is given to protecting vulnerable groups, such as minors and the elderly, from potential emotional dependency, and to ensuring equitable access to technological benefits [4][5]

Group 4
- The regulation promotes a comprehensive governance approach, integrating responsibility across all stages of humanized AI services, from design to operation [5][6]
- It encourages collaborative governance involving the government, industry organizations, and the public to create a balanced environment for innovation and risk prevention [6][7]

Group 5
- The introduction of a "regulatory sandbox" allows for flexible regulatory frameworks that support innovation while managing risks effectively [7]
- Overall, the regulation translates the concept of responsible innovation into actionable legislative rules, providing stable expectations for the healthy development of humanized AI services in China [7]
The Measures (《办法》): Establishing an Identity Transparency System and Drawing Safety Red Lines for Humanized AI
Sina Finance · 2025-12-27 11:31