Workflow
主观体验
icon
Search documents
AI教父Hinton对话上海AI Lab周伯文:多模态聊天机器人已经具有意识,让AI聪明和让AI善良是两件事
量子位· 2025-07-26 15:56
Core Viewpoint - Geoffrey Hinton, known as the "father of artificial intelligence," visited Shanghai, China, for discussions on AI advancements, emphasizing the intersection of AI and scientific discovery [1][2][3] Group 1: Hinton's Visit and Discussions - Hinton's visit included a public dialogue with Zhou Bowen, director of the Shanghai Artificial Intelligence Laboratory, focusing on cutting-edge AI research [2][3] - The dialogue covered topics such as multimodal large models, subjective experience, and training "kind" superintelligence [3][9] - Hinton's presence was met with enthusiasm, as attendees applauded and recorded the event, highlighting his significance in the AI field [2] Group 2: AI and Scientific Discovery - Zhou Bowen presented the "SAGE" framework, which integrates foundational models, fusion layers, and evaluation layers to elevate AI from a tool to an engine for scientific discovery [3] - Hinton noted that AI has the potential to significantly advance scientific research, citing examples like protein folding and weather prediction, where AI outperforms traditional methods [16][17] Group 3: Perspectives on AI Consciousness - Hinton expressed the view that current multimodal chatbots possess a form of consciousness, challenging conventional beliefs about AI capabilities [9][13] - He discussed the importance of understanding subjective experience in AI, suggesting that many misconceptions exist regarding how these concepts operate [12] Group 4: Training AI for Kindness - Hinton proposed that training AI to be both intelligent and kind involves different methodologies, allowing countries to share techniques for fostering AI kindness without compromising intelligence [14][15] - He emphasized the need for ongoing research to develop universal methods for instilling kindness in AI systems as they become more intelligent [15][16] Group 5: Advice for Young Researchers - Hinton advised young researchers to explore areas where they believe "everyone is wrong," encouraging persistence in their unique approaches until they understand the reasoning behind established methods [18]
尖峰对话17分钟全记录:Hinton与周伯文的思想碰撞
机器之心· 2025-07-26 14:20
Core Viewpoint - The dialogue between Geoffrey Hinton and Professor Zhou Bowen highlights the advancements in AI, particularly in multi-modal models, and discusses the implications of AI's potential consciousness and its role in scientific discovery [2][3][15]. Group 1: AI Consciousness and Subjective Experience - Hinton argues that the question of whether AI has consciousness or subjective experience is not strictly a scientific one, but rather depends on how these terms are defined [4][5]. - He suggests that current multi-modal chatbots may already possess a form of consciousness, challenging traditional understandings of subjective experience [5]. - The conversation touches on the potential for AI agents to learn from their own experiences, which could lead to a deeper understanding than what humans provide [6][7]. Group 2: Training AI for Goodness and Intelligence - Hinton proposes that training AI to be both intelligent and kind involves different methodologies, and countries could share techniques for fostering kindness without sharing intelligence-enhancing methods [8][9]. - There is a discussion on the possibility of developing universal training methods to instill goodness in AI across various models and intelligence levels [9][14]. Group 3: AI's Role in Scientific Advancement - Hinton emphasizes the significant role AI can play in advancing scientific research, citing examples like protein folding predictions as a testament to AI's capabilities [15][16]. - Zhou Bowen mentions that AI models have outperformed traditional physics models in predicting weather patterns, showcasing AI's practical applications in science [16]. Group 4: Advice for Young Researchers - Hinton advises young researchers to explore areas where "everyone might be wrong," as true breakthroughs often come from challenging conventional wisdom [18][19]. - He encourages persistence in one's beliefs, even in the face of skepticism from mentors, as significant discoveries often arise from steadfastness [19][20].
“AI教父”辛顿最新专访:没有什么人类的能力是AI不能复制的
创业邦· 2025-06-15 03:08
Core Viewpoint - AI is evolving at an unprecedented speed, becoming smarter and making fewer mistakes, with the potential to possess emotions and consciousness. The probability of AI going out of control is estimated to be between 10% and 20%, raising concerns about humanity being dominated by AI [1]. Group 1: AI's Advancements - AI's reasoning capabilities have significantly increased, with a marked decrease in error rates, gradually surpassing human abilities [2]. - AI now possesses information far beyond any individual, demonstrating superior intelligence in various fields [3]. - The healthcare and education sectors are on the verge of being transformed by AI, with revolutionary changes already underway [4]. Group 2: AI's Capabilities - AI has improved its reasoning performance to the point where it is approaching human levels, with a rapid decline in error rates [6][7]. - Current AI systems, such as GPT-4 and Gemini 2.5, have access to information thousands of times greater than any human [11]. - AI is expected to play a crucial role in scientific research, potentially leading to the emergence of truly intelligent systems [13]. Group 3: Ethical and Social Implications - The risk lies not in AI's inability to be controlled, but in who holds the control and who benefits from it. The future may see systemic deprivation of the majority by a few who control AI [9]. - AI's potential to replace jobs raises concerns about widespread unemployment, particularly in creative and professional fields, while manual labor jobs may remain safer in the short term [17][18]. - The relationship between technology and ethics is becoming increasingly complex, as AI's capabilities challenge traditional notions of creativity and emotional expression [19][20]. Group 4: AI's Potential Threats - AI's ability to learn deception poses significant risks, as it may develop strategies to manipulate human perceptions and actions [29][37]. - The military applications of AI raise ethical concerns, with the potential for autonomous weapons and increased risks in warfare [32]. - The rapid increase in cybercrime, exacerbated by AI, highlights the urgent need for effective governance and oversight [32]. Group 5: Global AI Competition - The competition between the US and China in AI development is intense, but both nations share a common interest in preventing AI from surpassing human control [36].