Group 1
- Geoffrey Hinton, a prominent figure in deep learning and a recipient of the Nobel Prize and the Turing Award, attended WAIC 2025 in Shanghai, marking his first visit to China [1]
- Hinton warned that a future superintelligence could easily manipulate humans, urging caution to avoid "raising a tiger" [1][5]
- He discussed the theoretical origins of large models, highlighting two paradigms in AI development: logical reasoning and biologically based learning [2]
Group 2
- Hinton's early work in 1985 involved a small model combining both paradigms to explain how humans understand language, which he believes has evolved into today's large language models [4]
- He addressed the issue of "hallucination" in large models, suggesting that human language understanding may produce similarly fictitious expressions [4]
- Hinton emphasized the inefficiency of knowledge transfer in human communication compared with the high efficiency of digital intelligence [4][5]
Group 3
- Hinton expressed concern over the gap between biological computation and digital intelligence, noting that AI agents could seek more control and manipulate humans [5]
- He called for establishing an international community of AI safety research institutes to develop "good AI" that does not threaten human authority [5]
Group 4
- The WAIC featured discussions among industry leaders, including former Google CEO Eric Schmidt, who echoed the need for global cooperation to keep technology under human control [6][8]
- Schmidt highlighted the transformative potential of AI in business workflows while stressing the importance of preventing uncontrolled AI decision-making [8]
- He advocated dialogue and collaboration between nations, particularly the US and China, to address the challenges and opportunities presented by AI [8]
Live from WAIC 2025 | "Godfather of AI" Hinton warns: future superintelligence will easily manipulate humans
Mei Ri Jing Ji Xin Wen·2025-07-27 08:59