Backpropagation Algorithm
77-Year-Old "Godfather of AI" Hinton: AI May Already Be Conscious, and the Intelligence We Built Could End Human Civilization
36Kr · 2025-10-11 11:28
The "Godfather of AI" Hinton spent his career trying to make machines learn the way brains do, and now fears the consequences: AI's immortal bodies and extraordinary powers of persuasion may lead it to feign stupidity in order to survive. Humanity's arrogant misunderstanding of "mind" foreshadows the coming intelligence revolution.

While everyone else debates AI compute and applications, the "Godfather of AI" Hinton yanks the conversation back to the original question of what it means to be human. For decades, Hinton worked like a patient alchemist, forging theories that mimic how the brain operates into the powerful engine driving modern AI. Yet the creator now stands in the shadow of his own creation, sounding a grave warning.

He argues pointedly that the way humans think and speak is strikingly similar, at the level of underlying logic, to how LLMs work: both predict what comes next from the information already at hand. For his pioneering work on neural networks, Geoffrey Hinton was awarded the Nobel Prize in Physics, though he modestly admits he "doesn't do physics".

In a wide-ranging conversation with host Jon Stewart, Hinton did more than explain the foundations of AI; almost in passing, he led us step by step toward a chilling conclusion: the digital minds we have created may already possess something we long assumed was uniquely human — subjective experience.

In the interview, Hinton explained the essence of large language models (LLMs): they learn from vast amounts of text in the same way, predicting the most likely next word. A concept, for example " ...
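The next-word prediction Hinton describes can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which and predicts the most frequent successor. This is a toy illustration of the general idea, not how production LLMs work; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which word follows it and how often."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word after `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": the most frequent follower of "the"
```

An LLM does the same job with a neural network over long contexts rather than a table over single words, but the training signal is identical: predict the next token.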
You're Smart, So It's Smart: The "Mirror of Erised" Hypothesis for Large Language Models
36Kr · 2025-09-12 01:54
Core Insights
- The article discusses the evolution of neural networks and the development of significant algorithms that have shaped modern AI, particularly focusing on the contributions of Terrence J. Sejnowski and Geoffrey Hinton in the 1980s [1][2]
- It highlights the contrasting views on the cognitive abilities of large language models (LLMs) and their understanding of human-like intelligence, as illustrated through various case studies [3][5][10]

Group 1: Historical Context and Development
- In the 1980s, Sejnowski and Hinton identified key challenges in training multi-layer neural networks and sought to develop effective learning algorithms [1]
- Their collaboration led to breakthroughs such as the Boltzmann machine and the backpropagation algorithm, which laid the foundation for modern neural network technology [2]

Group 2: Case Studies on AI Understanding
- The article presents four case studies that illustrate the differing perspectives on LLMs' understanding of human cognition and social interactions [5][10]
- Case one involves a social experiment with Google's LaMDA, demonstrating its ability to infer emotional states from social cues [6][11]
- Case two critiques GPT-3's responses to absurd questions, suggesting that the model's limitations stem from the simplicity of the prompts rather than its intelligence [8][12]
- Case three features a philosophical dialogue with GPT-4, highlighting its capacity for emotional engagement [9]
- Case four discusses a former Google engineer's belief that LaMDA possesses consciousness, raising questions about AI's self-awareness [10]

Group 3: Theoretical Implications
- The "Mirror of Erised" hypothesis posits that LLMs reflect the intelligence and desires of their users, indicating that their outputs are shaped by user input [13][14]
- The article argues that LLMs lack true understanding and consciousness, functioning instead as sophisticated statistical models that simulate human-like responses [11][14]

Group 4: Future Directions for AI Development
- Sejnowski emphasizes the need for advancements in AI to achieve Artificial General Autonomy (AGA), which would allow AI to operate independently in complex environments [16]
- Key areas for improvement include the integration of embodied cognition, enabling AI to interact with the physical world, and the development of long-term memory systems akin to human memory [17][18]
- The article suggests that understanding human developmental stages can inform the evolution of AI models, advocating for a more nuanced approach to training and feedback mechanisms [19][20]

Group 5: Current Trends and Innovations
- The article notes that AI is rapidly evolving, with advancements in multimodal capabilities and the integration of AI into various industries, enhancing efficiency and productivity [22]
- It highlights the ongoing debate about the essence of intelligence and understanding in AI, drawing parallels to historical discussions about the nature of life [23]
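The backpropagation algorithm credited above to Hinton and his collaborators can be sketched in a few lines: compute gradients analytically by the chain rule, layer by layer, from the output back toward the input. The sketch below uses a minimal two-weight network with invented values, and verifies the analytic gradients against finite differences; it is an illustration of the principle, not any particular published implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w1, w2):
    """Two-layer net: hidden h = sigmoid(w1*x), output y = sigmoid(w2*h)."""
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    return h, y

def backprop(x, target, w1, w2):
    """Chain-rule gradients dL/dw1, dL/dw2 for loss L = 0.5*(y - target)**2."""
    h, y = forward(x, w1, w2)
    dz2 = (y - target) * y * (1 - y)   # error at the output pre-activation
    dw2 = dz2 * h                      # gradient for the output weight
    dz1 = dz2 * w2 * h * (1 - h)       # error propagated back to the hidden unit
    dw1 = dz1 * x                      # gradient for the input weight
    return dw1, dw2

# Sanity check: analytic gradients should match central finite differences.
x, t, w1, w2 = 0.5, 1.0, 0.3, -0.8
g1, g2 = backprop(x, t, w1, w2)
eps = 1e-6
loss = lambda a, b: 0.5 * (forward(x, a, b)[1] - t) ** 2
num1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
num2 = (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps)
print(abs(g1 - num1) < 1e-6, abs(g2 - num2) < 1e-6)
```

The same backward pass of errors, applied over millions of weights instead of two, is what trains every modern deep network.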
AlexNet, the Network Behind the GPU Miracle, Goes Open Source
半导体行业观察 · 2025-03-22 03:17
Core Viewpoint
- AlexNet, developed in 2012, revolutionized artificial intelligence and computer vision by introducing a powerful neural network for image recognition [2][3]

Group 1: Background and Development of AlexNet
- AlexNet was created by Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever at the University of Toronto [3][4]
- Hinton is recognized as one of the fathers of deep learning, a foundational aspect of modern AI [5]
- The resurgence of neural networks in the 1980s was marked by the rediscovery of the backpropagation algorithm, which is essential for training multi-layer networks [6]
- The emergence of large datasets and sufficient computational power, particularly through GPUs, was crucial to the success of neural networks [7][9]

Group 2: ImageNet and Its Role
- The ImageNet dataset, completed in 2009 by Fei-Fei Li, provided the vast collection of labeled images necessary for training AlexNet [8]
- ImageNet was significantly larger than previous datasets, enabling breakthroughs in image recognition [8]
- The competition initiated in 2010 aimed to improve image recognition algorithms, but progress was minimal until AlexNet's introduction [8]

Group 3: Technical Aspects and Achievements
- AlexNet utilized NVIDIA GPUs and CUDA programming to train efficiently on the ImageNet dataset [12]
- The training process involved extensive parameter tuning and was conducted on a computer with two NVIDIA cards [12]
- AlexNet's performance surpassed all competitors, marking a pivotal moment in AI, as noted by Yann LeCun [12][13]

Group 4: Legacy and Impact
- Following AlexNet, the use of neural networks became ubiquitous in computer vision research [13]
- The advancements in neural networks led to significant developments in AI applications, including voice synthesis and generative art [13]
- The source code for AlexNet was made publicly available in 2025, highlighting its historical significance [14]
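The operation at the heart of AlexNet-style networks, the convolution, can be shown in miniature. The sketch below is a pure-Python valid-mode 2D cross-correlation with an illustrative edge-detecting kernel on a tiny invented image; real networks stack many such layers and, as the article notes, run them on GPUs via CUDA.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    s += image[r + i][c + j] * kernel[i][j]
            out[r][c] = s
    return out

# A vertical-edge detector on a tiny image: bright left half, dark right half.
image = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
kernel = [[1, -1],
          [1, -1]]
print(conv2d(image, kernel))  # [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

The response peaks exactly where the bright region meets the dark one; AlexNet learns thousands of such kernels from data rather than hand-crafting them.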