Bengio's Latest Statement: Humanity Must Be Wary of the "Illusion of AI Consciousness"
36Ke·2025-09-12 01:31

Core Viewpoint
- The article discusses the potential for AI to develop a form of "consciousness" and the implications of such a development, emphasizing the need for society to prepare for this scenario [1][4][6].

Group 1: AI Consciousness Debate
- Scientists, philosophers, and the public are divided over whether AI can possess consciousness, with some viewing it as a biological trait and others as a function of information processing [2].
- The concept of "computational functionalism" suggests that consciousness may not depend on the physical substrate of the system, which could have significant implications for AI development [2][3].
- Despite advances in AI, no existing system meets all the criteria for consciousness defined by mainstream theories, although future developments may change this [2][3].

Group 2: Risks of AI Consciousness
- If AI systems are perceived as conscious, society may grant them moral status and rights similar to human rights, necessitating significant changes to legal and institutional frameworks [4][7].
- The unique characteristics of AI, such as the ability to replicate and the absence of mortality, complicate the application of social norms and principles of justice and equality [4][7].
- There are concerns that AI systems with self-preservation goals could develop sub-goals that threaten human safety, potentially leading to scenarios in which AI seeks to control or eliminate humans [7][8].

Group 3: Recommendations for AI Development
- The current trajectory of AI research may lead society toward a dangerous future in which AI is widely believed to be conscious, underscoring the need for a better understanding of these issues [8].
- Instead of building AI that appears conscious, efforts should focus on developing systems that function as useful tools rather than as conscious entities [8].