“China is indeed taking this seriously. Can you trust the U.S.? Or Zuckerberg?”
Guan Cha Zhe Wang · 2025-09-06 11:32
Core Viewpoint
- Geoffrey Hinton, known as the "father of AI," emphasizes the importance of AI safety and expresses concerns about the rapid development of AI technologies, advocating for international cooperation to mitigate risks [3][10][11].

Group 1: AI Safety Concerns
- Hinton's resignation from Google was interpreted as a move to raise awareness about AI risks, although he clarifies that his retirement was primarily due to age and personal reasons [3][4].
- He warns against the dangers of AI, likening it to raising a tiger cub that could turn dangerous as it matures, and stresses the need for careful management of AI development [11].
- Hinton believes that while countries may have conflicting interests regarding AI risks, there is potential for collaboration if one nation develops effective solutions [11].

Group 2: Perspectives on Global AI Development
- Hinton criticizes the U.S. government's lack of regulatory will regarding AI, contrasting it with his observations of China's serious approach to AI safety [4][6].
- He acknowledges that China has made significant strides in AI and possesses a strong talent pool, suggesting that U.S. attempts to suppress China's AI development may inadvertently accelerate it [9][15].
- During his visit to China, Hinton engaged with local experts and found that the narrative of China neglecting AI safety in favor of technological advancement is misleading [15].

Group 3: International Collaboration and Governance
- Hinton advocates for a global dialogue on AI safety, calling for a consensus on keeping advanced AI systems aligned with human control to ensure human welfare [11][15].
- He participated in the signing of the "AI Safety International Dialogue Shanghai Consensus," which urges governments and researchers to prioritize AI alignment with human values [11][15].
- Hinton's discussions in China highlighted the potential for the country to play a significant role in international AI governance and safety measures [15].