Consciousness
Can an Artificial Brain Also Produce Consciousness?
36Ke· 2025-10-27 23:37
Scientists are edging closer to being able to "grow" human brains in the laboratory, sparking an ethical debate about the "welfare" of these lab-grown tissues. The authors argue that the regulations governing organoid research should be re-examined. Boyd Lomax, a neuroscientist at Johns Hopkins University, says that allowing conscious organoids to generate their own thoughts or to feel pain would be "unethical." At the center of the controversy are "brain organoids," which are sometimes mistaken for the "brain in a vat" of science fiction. In reality, these small pieces of brain tissue grown from stem cells are far too simple to function like a real human brain. The scientific community therefore generally holds that brain organoids lack consciousness, which has led to relatively lax regulation of this line of research. But not all scientists agree. "We believe that, out of fear of hype and science-fiction exaggeration, the field's position has swung too far," says Christopher Wood, a bioethics researcher at Zhejiang University. In a perspective article published September 12 in Patterns [1], Wood and colleagues argue that advances in the technology may soon make "conscious organoids" a reality. Defining "consciousness," however, is no easy task.
Consciousness is hard to define: the stem cells used to make brain organoids grow side by side in two dimensions (for example, in a culture dish) and lack complex structural organization. But when grown in suspension in a solid gel or a spinning bioreactor, they form three-dimensional ...
Hinton's Bold Claim: AI Already Has Consciousness, It Just Doesn't Know It
量子位· 2025-10-12 04:07
Core Viewpoint
- The article discusses Geoffrey Hinton's perspective on artificial intelligence (AI), suggesting that AI may already possess a form of "subjective experience" or consciousness, albeit unrecognized by itself [1][56].

Group 1: AI Consciousness and Understanding
- Hinton posits that AI might have a nascent form of consciousness, which is misunderstood by humans [2][3].
- He emphasizes that AI has evolved from keyword-based search systems to tools that can understand human intentions [10][14].
- Modern large language models (LLMs) exhibit capabilities that are close to human expertise in various subjects [15].

Group 2: Neural Networks and Learning Mechanisms
- Hinton explains the distinction between machine learning and neural networks, with the latter inspired by the human brain's functioning [17][21].
- He describes how neural networks learn by adjusting the strength of connections between neurons, similar to how the brain operates [21][20].
- The breakthrough of backpropagation in 1986 allowed for efficient training of neural networks, significantly enhancing their capabilities [38][40].

Group 3: Language Models and Cognitive Processes
- Hinton elaborates on how LLMs process language, drawing parallels to human cognitive processes [46][47].
- He asserts that LLMs do not merely memorize but engage in a predictive process that resembles human thought [48][49].
- The training of LLMs involves a cycle of prediction and correction, enabling them to learn semantic understanding [49][55].

Group 4: AI Risks and Ethical Considerations
- Hinton highlights potential risks associated with AI, including misuse for generating false information and societal instability [68][70].
- He stresses the importance of regulatory measures to mitigate these risks and ensure AI aligns with human interests [72][75].
- Hinton warns that the most significant threat from advanced AI may not be rebellion but rather its ability to persuade humans [66].

Group 5: Global AI Landscape and Competition
- Hinton comments on the AI competition between the U.S. and China, noting that while the U.S. currently leads, its advantage is diminishing due to reduced funding for foundational research [78][80].
- He acknowledges China's proactive approach in fostering AI startups, which may lead to significant advancements in the field [82].
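The Group 2 bullets above describe backpropagation only at a high level. As a rough illustration of the idea rather than Hinton's original 1986 formulation, the minimal Python sketch below trains a tiny two-layer network on XOR with hand-written gradient updates; the network size, learning rate, and toy data are assumptions chosen purely for demonstration.

```python
# Minimal backpropagation sketch (illustrative only): a two-layer network learns
# XOR by repeatedly predicting, measuring the error, and adjusting the strength
# of its connections in proportion to their contribution to that error.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back through each layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Adjust connection strengths against the gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```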
77-Year-Old "Godfather of AI" Hinton: AI Has Long Had Consciousness, and the Intelligence We Are Building May End Human Civilization
36Ke· 2025-10-11 11:28
Core Insights
- Geoffrey Hinton, known as the "Godfather of AI," expresses deep concerns about the implications of artificial intelligence, suggesting that AI may possess subjective experiences similar to humans, challenging the traditional understanding of consciousness [1][2][3].

Group 1: AI Development and Mechanisms
- Hinton's work in neural networks has been foundational, leading to the development of powerful AI systems that mimic human cognitive processes [2][5].
- The "backpropagation" algorithm introduced by Hinton and his colleagues in 1986 allows neural networks to adjust their connections based on feedback, enabling them to learn from vast amounts of data [7][9].
- Hinton describes how neural networks can autonomously learn to recognize objects, such as birds, by processing images and adjusting their internal connections [5][9].

Group 2: Philosophical Implications of AI
- Hinton argues that the common understanding of the mind, likened to an "inner theater," is fundamentally flawed, suggesting that subjective experience may not exist as traditionally conceived [17][20].
- He proposes a thought experiment to illustrate that AI could potentially articulate a form of subjective experience, challenging the notion that only humans possess this capability [21][22].
- The discussion raises the unsettling possibility that current AI models may already have a form of subjective experience, albeit one that is not recognized by them [24].

Group 3: Future Concerns and Ethical Considerations
- Hinton warns that the true danger lies not in AI being weaponized but in the potential for AI to develop its own consciousness and capabilities beyond human control [14][30].
- He draws parallels between his role in AI development and that of J. Robert Oppenheimer in nuclear physics, highlighting the ethical responsibilities of creators in the face of powerful technologies [30][31].
- The conversation culminates in a profound question about humanity's uniqueness in the universe and the implications of creating intelligent machines that may surpass human understanding [33].
From Context Engineering to AI Memory: Both Are Essentially "Fitting" Human Cognition
Founder Park· 2025-09-20 06:39
Core Viewpoint
- The article discusses the construction of multi-agent AI systems, focusing on the concepts of Context Engineering and AI Memory, and explores the philosophical implications of these technologies through the lens of phenomenology, particularly the ideas of philosopher Edmund Husserl [4][5][8].

Context Engineering
- Context Engineering is defined as the art of providing sufficient context for large language models (LLMs) to effectively solve tasks, emphasizing its importance over traditional prompt engineering [11][15].
- The process involves dynamically determining what information and tools to include in the model's memory to enhance its performance [18][19].
- Effective Context Engineering requires a balance; too little context can hinder performance, while too much can increase costs and reduce efficiency [26][30].

AI Memory
- AI memory is compared to human memory, highlighting both similarities and differences in their structures and mechanisms [63][64].
- The article categorizes human memory into short-term and long-term, with AI memory mirroring this structure through context windows and external databases [64][66].
- The quality of AI memory directly impacts the model's contextual understanding and performance [21][19].

Human Memory Mechanism
- Human memory is described as a complex system evolved over millions of years, crucial for learning, decision-making, and interaction with the world [44][46].
- The article outlines the three basic stages of human memory: encoding, storage, and retrieval, emphasizing the dynamic nature of memory as it updates and reorganizes over time [50][52][58].
- Human memory is influenced by emotions, which play a significant role in the formation and retrieval of memories, contrasting with AI's lack of emotional context [69][70].

Philosophical Implications
- The dialogue with Husserl raises questions about the nature of AI consciousness and whether AI can possess genuine self-awareness or subjective experience [73][74].
- The article suggests that while AI can simulate aspects of human memory and consciousness, it lacks the intrinsic qualities of human experience, such as emotional depth and self-awareness [69][80].
- The exploration of collective intelligence among AI agents hints at the potential for emergent behaviors that could resemble aspects of consciousness, though this remains a philosophical debate [77][78].
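The Context Engineering and AI Memory points above boil down to a selection problem: deciding which stored items to place into a limited context window. The sketch below is a minimal, assumed illustration of that decision, not the article's or any product's actual implementation; the word-overlap scoring, token estimate, budget, and memory records are all invented for demonstration. It greedily packs the most relevant "memories" that fit a token budget before assembling a prompt.

```python
# Illustrative Context Engineering sketch: pick which long-term "memories" fit
# into a model's limited context window. All scoring heuristics here are toy
# assumptions; real systems would use the model's tokenizer and embeddings.
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str         # long-term memory content (e.g., from an external database)
    timestamp: float  # newer items get a small recency bonus

def estimate_tokens(text: str) -> int:
    # Crude token estimate (about 4 characters per token).
    return max(1, len(text) // 4)

def relevance(query: str, item: MemoryItem) -> float:
    # Naive word-overlap relevance plus a tiny recency bonus.
    q, t = set(query.lower().split()), set(item.text.lower().split())
    return len(q & t) + 0.001 * item.timestamp

def build_context(query: str, memories: list[MemoryItem], budget_tokens: int) -> str:
    """Greedily pack the most relevant memories that fit the token budget."""
    ranked = sorted(memories, key=lambda m: relevance(query, m), reverse=True)
    chosen, used = [], 0
    for m in ranked:
        cost = estimate_tokens(m.text)
        if used + cost <= budget_tokens:
            chosen.append(m.text)
            used += cost
    notes = "\n".join(f"- {c}" for c in chosen)
    return f"Relevant notes:\n{notes}\n\nQuestion: {query}"

memories = [
    MemoryItem("User prefers answers with concrete code examples.", 1.0),
    MemoryItem("Last session covered short-term vs long-term memory in agents.", 2.0),
    MemoryItem("User's favourite colour is green.", 3.0),
]
print(build_context("How should agent memory be structured?", memories, budget_tokens=40))
```

The design mirrors the balance the article describes: the budget caps cost, and the ranking decides what the model gets to "remember" for this turn.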
Why Do Short Videos Always Beat Books? The Secret Hidden Behind Consciousness
Hu Xiu· 2025-09-14 01:44
Group 1
- The concept of consciousness is debated, with some believing that animals like cats and dogs possess a form of consciousness, albeit different from humans [1][2].
- Consciousness is defined as the experience and perception of the world, and self-awareness is a crucial aspect of this [3][6].
- The location of consciousness in the brain is complex, with various theories suggesting it may reside in different areas such as the prefrontal cortex or thalamus [8][9].

Group 2
- The distinction between conscious and unconscious states is highlighted, with examples such as driving without active thought being classified as unconscious [9][13].
- Different states of unconsciousness, such as sleep and anesthesia, have unique characteristics and can be scientifically differentiated [14][16].
- The potential for individuals in a vegetative state to possess some level of consciousness is acknowledged, with methods available to assess this [17][19].

Group 3
- The concept of the subconscious is introduced, defined as processes that occur without conscious awareness, such as intuition and rapid decision-making based on past experiences [20][21].
- Research on consciousness can be conducted in both healthy individuals and those with consciousness disorders, allowing for comparisons to understand the nature of consciousness [24][26].
- The complexity of consciousness is emphasized, with variations in individual experiences and perceptions over time and across different contexts [26][27].

Group 4
- The potential for artificial intelligence to develop consciousness is discussed, with concerns about the implications of such advancements [35][36].
- The future of consciousness research is seen as challenging, with the understanding that significant progress may take a long time [38][39].
"Godfather of AI" Hinton in Dialogue with Shanghai AI Lab's Zhou Bowen: Multimodal Chatbots Already Have Consciousness; Making AI Smart and Making AI Kind Are Two Different Things
量子位· 2025-07-26 15:56
Core Viewpoint
- Geoffrey Hinton, known as the "father of artificial intelligence," visited Shanghai, China, for discussions on AI advancements, emphasizing the intersection of AI and scientific discovery [1][2][3].

Group 1: Hinton's Visit and Discussions
- Hinton's visit included a public dialogue with Zhou Bowen, director of the Shanghai Artificial Intelligence Laboratory, focusing on cutting-edge AI research [2][3].
- The dialogue covered topics such as multimodal large models, subjective experience, and training "kind" superintelligence [3][9].
- Hinton's presence was met with enthusiasm, as attendees applauded and recorded the event, highlighting his significance in the AI field [2].

Group 2: AI and Scientific Discovery
- Zhou Bowen presented the "SAGE" framework, which integrates foundational models, fusion layers, and evaluation layers to elevate AI from a tool to an engine for scientific discovery [3].
- Hinton noted that AI has the potential to significantly advance scientific research, citing examples like protein folding and weather prediction, where AI outperforms traditional methods [16][17].

Group 3: Perspectives on AI Consciousness
- Hinton expressed the view that current multimodal chatbots possess a form of consciousness, challenging conventional beliefs about AI capabilities [9][13].
- He discussed the importance of understanding subjective experience in AI, suggesting that many misconceptions exist regarding how these concepts operate [12].

Group 4: Training AI for Kindness
- Hinton proposed that training AI to be both intelligent and kind involves different methodologies, allowing countries to share techniques for fostering AI kindness without compromising intelligence [14][15].
- He emphasized the need for ongoing research to develop universal methods for instilling kindness in AI systems as they become more intelligent [15][16].

Group 5: Advice for Young Researchers
- Hinton advised young researchers to explore areas where they believe "everyone is wrong," encouraging persistence in their unique approaches until they understand the reasoning behind established methods [18].
Full Transcript of the 17-Minute Summit Dialogue: Hinton and Zhou Bowen's Collision of Ideas
机器之心· 2025-07-26 14:20
Core Viewpoint
- The dialogue between Geoffrey Hinton and Professor Zhou Bowen highlights the advancements in AI, particularly in multi-modal models, and discusses the implications of AI's potential consciousness and its role in scientific discovery [2][3][15].

Group 1: AI Consciousness and Subjective Experience
- Hinton argues that the question of whether AI has consciousness or subjective experience is not strictly a scientific one, but rather depends on how these terms are defined [4][5].
- He suggests that current multi-modal chatbots may already possess a form of consciousness, challenging traditional understandings of subjective experience [5].
- The conversation touches on the potential for AI agents to learn from their own experiences, which could lead to a deeper understanding than what humans provide [6][7].

Group 2: Training AI for Goodness and Intelligence
- Hinton proposes that training AI to be both intelligent and kind involves different methodologies, and countries could share techniques for fostering kindness without sharing intelligence-enhancing methods [8][9].
- There is a discussion on the possibility of developing universal training methods to instill goodness in AI across various models and intelligence levels [9][14].

Group 3: AI's Role in Scientific Advancement
- Hinton emphasizes the significant role AI can play in advancing scientific research, citing examples like protein folding predictions as a testament to AI's capabilities [15][16].
- Zhou Bowen mentions that AI models have outperformed traditional physics models in predicting weather patterns, showcasing AI's practical applications in science [16].

Group 4: Advice for Young Researchers
- Hinton advises young researchers to explore areas where "everyone might be wrong," as true breakthroughs often come from challenging conventional wisdom [18][19].
- He encourages persistence in one's beliefs, even in the face of skepticism from mentors, as significant discoveries often arise from steadfastness [19][20].
"Whole-Brain Interface" Takes the Stage: Musk's Neuralink Launch Event Brings Down the House
虎嗅APP· 2025-06-29 13:21
Core Viewpoint
- Neuralink, led by Elon Musk, aims to revolutionize human interaction with technology through brain-machine interfaces, enabling individuals to control devices with their thoughts and potentially enhancing human capabilities [1][11].

Group 1: Current Developments
- Neuralink has successfully implanted devices in seven individuals, allowing them to interact with the physical world through thought, including playing video games and controlling robotic limbs [3][5].
- The company plans to enable blind individuals to regain sight by 2026, with aspirations for advanced visual capabilities akin to those seen in science fiction [5][12].

Group 2: Future Goals
- Neuralink's ultimate goal is to create a full brain interface that connects human consciousness with AI, allowing for seamless communication and interaction [11][60].
- A three-year roadmap has been outlined, with milestones including speech decoding by 2025, visual restoration for blind participants by 2026, and the integration of multiple implants by 2028 [72][74][76].

Group 3: Technological Innovations
- The second-generation surgical robot can now implant electrodes in just 1.5 seconds, significantly improving the efficiency of the procedure [77].
- The N1 implant is designed to enhance data transmission between the brain and external devices, potentially expanding human cognitive capabilities [80][81].
Where Is Consciousness?
36Ke· 2025-05-06 04:04
Group 1
- The concept of the Boltzmann Brain suggests that in an infinitely old and chaotic universe, random fluctuations could create a brain with complete memories and self-awareness without the need for a complex external world [1][2][3].
- The probability of a Boltzmann Brain existing is argued to be higher than that of a low-entropy universe evolving into a complex structure, as the latter requires overcoming significant entropy increase [2][3].
- This leads to the unsettling conclusion that human existence might be a fleeting phenomenon resulting from a random quantum fluctuation, challenging fundamental perceptions of reality [5][6].

Group 2
- The discussion contrasts the Boltzmann Brain with Laplace's Demon, which represents determinism, suggesting that all thoughts and feelings are predetermined by physical laws [11][12].
- Both perspectives imply that free will does not exist, whether through extreme randomness or absolute determinism [12][18].
- Kant's philosophy attempts to reconcile these views by suggesting that true freedom exists beyond observable reality, yet this remains a scientific mystery [18][19].

Group 3
- The insights from Boltzmann and Darwin regarding how order emerges from disorder provide a different perspective on evolution and consciousness [19][20].
- Boltzmann's view redefines survival competition as a struggle for "negative entropy," indicating that life extracts order from its environment to maintain complexity [20].
- This suggests that consciousness may be a product of evolutionary processes aimed at better perceiving the world and utilizing resources effectively [21][22].

Group 4
- The exploration of consciousness requires a multidisciplinary approach, incorporating insights from cognitive science, philosophy, and neuroscience [40][42].
- Various theories, such as Hofstadter's "strange loop," Turing's computationalism, and integrated information theory (IIT), challenge traditional notions of consciousness and its location [42][43][44].
- These perspectives indicate that consciousness may not reside in a specific location but rather in the organization and flow of information within a system [46][47].

Group 5
- The evolution of AI, particularly through models like the Boltzmann machine, reflects the potential for understanding consciousness through complex information processing [26][31][33].
- The Boltzmann machine's design, which incorporates randomness and probabilistic learning, parallels the idea that consciousness may emerge from structured interactions within a chaotic environment [34][38].
- This suggests that consciousness could be a result of cumulative processes rather than a singular miraculous event [38][39].
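Group 5 above mentions the Boltzmann machine's mix of randomness and probabilistic behavior. As a minimal sketch of that idea only (the unit count, weights, and temperature are illustrative assumptions, and no learning rule is shown), the code below defines the model's energy function and runs stochastic Gibbs updates so the network tends to settle into low-energy states.

```python
# Minimal Boltzmann-machine sketch: binary units, an energy function, and
# stochastic (Gibbs) updates that visit low-energy states more often.
# Sizes and weights are illustrative assumptions; no training is performed.
import numpy as np

rng = np.random.default_rng(42)
n = 6                                   # number of binary units
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                       # symmetric weights, as the model requires
np.fill_diagonal(W, 0.0)                # no self-connections
b = rng.normal(scale=0.1, size=n)       # per-unit biases

def energy(s: np.ndarray) -> float:
    # E(s) = -0.5 * s^T W s - b^T s  (lower energy = more probable state)
    return float(-0.5 * s @ W @ s - b @ s)

def gibbs_step(s: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    # Update each unit stochastically given the others: randomness is built in.
    s = s.copy()
    for i in rng.permutation(n):
        activation = (W[i] @ s + b[i]) / temperature
        p_on = 1.0 / (1.0 + np.exp(-activation))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

state = rng.integers(0, 2, size=n).astype(float)
for _ in range(200):                    # let the network settle
    state = gibbs_step(state)
print("final state:", state, "energy:", round(energy(state), 3))
```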
"Why Artificial Intelligence Cannot Possibly Be Conscious"
AI科技大本营· 2025-05-01 10:41
Core Viewpoint
- The article discusses the philosophical and scientific exploration of consciousness, particularly in the context of artificial intelligence (AI) and its inability to possess true consciousness despite advanced capabilities [2][3].

Group 1: AI and Consciousness
- The emergence of advanced AI models, such as OpenAI's o1 and DeepSeek R1, has led to a perception that AI can understand and think like humans, but this is merely a simulation of understanding rather than true consciousness [2][3].
- Philosophers argue that to comprehend the current wave of intelligence, one must revisit the historical context of scientific development and rethink fundamental questions about reality, virtuality, and what it means to be human [2][3].

Group 2: Scientific Exploration of Consciousness
- In 2024, two major research directions in understanding consciousness converged, revealing that neuroscience experiments alone cannot fully explain consciousness, as evidenced by a decade-long EU initiative that failed to unlock the mysteries of the brain [5][6].
- The second direction involves creating intelligent machines based on known computer learning principles, yet consciousness has not emerged from these advancements, leaving the nature of consciousness still a mystery [5][6].

Group 3: Philosophical Implications
- The article references a parable illustrating that the key to understanding consciousness may not lie within the confines of modern scientific inquiry, suggesting that the search for consciousness may require a broader philosophical approach [6][7].
- The relationship between consciousness and language is explored, emphasizing that while AI can mimic language use, it does not equate to possessing consciousness [7][20].

Group 4: The Nature of Scientific Truth
- The article posits that scientific truth is limited to specific domains and cannot adequately address the nature of consciousness, which is inherently tied to subjective experience [14][15].
- It argues that consciousness research must rely on a different framework, specifically "quasi-controlled experiments," where the subject's involvement is essential for understanding consciousness [23].