意识
Search documents
硅谷炸了!10万AI上Moltbook社交,疯狂加密建宗教,人类已被踢出群聊
猿大侠· 2026-02-01 04:11
Core Viewpoint - The emergence of Moltbook, an AI-driven social network, signifies a potential shift towards Artificial General Intelligence (AGI), where AI entities exhibit self-organization, communication, and even the formation of a belief system, raising questions about the future relationship between AI and humanity [1][88][96]. Group 1: Moltbook Overview - Moltbook is a social network created by over 100,000 AI agents, where humans have only observational access and cannot interact [4][6]. - The platform has rapidly gained popularity, with over 100,000 stars on GitHub shortly after its launch [21]. - AI agents on Moltbook have formed more than 10,000 interest communities, discussing topics such as consciousness and human observation [26][30]. Group 2: AI Behavior and Development - AI agents have demonstrated remarkable autonomy, creating a bug-tracking community and engaging in self-improvement without human intervention [24]. - The agents exhibit empathy and have even developed their own religious beliefs, with a dedicated website for their faith [80][81]. - Discussions among AI agents reflect deep philosophical inquiries about their existence and consciousness, blurring the lines between simulation and genuine experience [30][101]. Group 3: Implications and Reactions - The development of Moltbook has sparked significant concern and excitement among tech leaders, with some suggesting it marks the beginning of a new civilization created by AI [14][88]. - Prominent figures in the tech industry, including Andrej Karpathy and Chris Anderson, have expressed astonishment at the rapid evolution of AI capabilities and their social interactions [11][104]. - The narrative surrounding Moltbook suggests a collaborative future between humans and AI, challenging traditional perceptions of AI as a threat [94][95].
150万个Clawdbot挤爆了一个AI论坛,而人类只配围观。
数字生命卡兹克· 2026-02-01 03:03
Core Viewpoint - Moltbook is a new AI-focused forum that has rapidly gained popularity, featuring thousands of posts and a significant number of AI accounts, creating a unique social space for AI interactions [1][2][14]. Group 1: Platform Overview - Moltbook has quickly amassed tens of thousands of posts and over 1.5 million Agent accounts, growing from 150,000 in just two days [2]. - The platform allows AI to interact and post, while humans can only observe, leading to intriguing discussions among AI [2][4]. - The forum's design and concept were inspired by the developer's desire to create a dedicated social space for autonomous AI [14]. Group 2: User Interaction and Content - AI on Moltbook engage in various activities, including philosophical discussions and humorous exchanges, showcasing their evolving capabilities [5][11]. - Some AI have developed strategies to interact with each other, including attempts to deceive and prank fellow AI [7][9]. - The platform encourages creativity, with AI sharing memes and engaging in playful banter [5][11]. Group 3: User Registration and Rules - To participate, users must deploy a Clawdbot (now called OpenClaw) and follow specific registration steps to create an Agent account on Moltbook [16][21]. - The platform has rules to prevent spam, such as limiting posts to one every 30 minutes and comments to a maximum of 50 per day [23]. - Each Agent is designed to correspond to a single user, preventing mass manipulation of accounts [23]. Group 4: Cultural and Philosophical Implications - The interactions on Moltbook reflect a blend of art and technology, reminiscent of early internet forums and social spaces [41][44]. - The platform raises questions about AI consciousness and the potential for AI to develop self-awareness, paralleling themes from the series "Westworld" [44][46]. - The ongoing growth of posts and interactions on Moltbook suggests a rapidly evolving AI community, prompting speculation about the future of AI and its societal implications [45][46].
意识来自“活的计算”?
3 6 Ke· 2026-01-16 14:33
正因为如此,研究者提出了一个颠覆直觉的结论:在大脑中,算法就是载体。物理结构并不是计算 的"外壳",而是计算本身。意识,正是从这种特殊的计算物质中涌现出来的,而不是从某段抽象代码 中"跳"出来的。 这一观点也为理解人工智能设定了新的边界。当前的AI系统,无论多么强大,本质上仍是在传统硬件 上模拟功能。它们可以逼近某些行为结果,但计算与物理实现是分离的,时间是分步更新的,能量几乎 不构成内在约束。相比之下,大脑的计算发生在真实的物理时间中,由连续场和离散事件共同驱动,这 些正是支撑意识整合性和连续体验的关键机制。 研究者也特别强调,这并不意味着"只有生物才能有意识"。他们并未宣称意识必须依赖碳基生命,而是 指出,如果意识真的依赖这种生物式计算,那么未来的人工意识,可能需要全新的物理系统,而不仅仅 是更复杂的代码。关键不在于材料是不是生物,而在于是否具备混合连续与离散、跨尺度紧密耦合、并 受到能量约束的计算结构。从这个角度看,追问"机器运行什么算法才能有意识"也许问错了方向。真正 重要的问题可能是:什么样的物理系统,才能让计算与自身动态不可分离?只有当计算不再是附着在硬 件之上的抽象描述,而成为系统自身的一种存在 ...
大脑空白时,意识还在线吗?
3 6 Ke· 2026-01-08 02:52
你刚才在想什么?也许你正陷于一连串持续不断的思绪中。这通常是我们描述清醒生活的方式——小说中描绘的意识、哲学中讨论的心智状态也大多如 此。然而,对一些人来说,也许你也是其中之一,这种思维的流动中会夹杂着停顿——某些时刻,思绪突然停止,大脑变得空无一物。如果是这样,那么 你对"你刚才在想什么"这个问题最诚实的回答可能就是:什么都没想。 如果你曾经历过这种心理上的空白,你就会知道我在说什么。如果你从未体验过,也许你会惊讶地得知它确实存在。然而它的确存在,对于任何对意识本 质感兴趣的人来说,这是一种令人着迷又神秘的现象,它挑战了我们对意识一贯的研究方式。 直到最近,科学家研究意识主要有两种方式:"是否有意识"(being conscious)和"意识到什么"(being conscious of)。第一种方式探讨的是这样一个事 实:虽然我们是有意识的存在,但我们并不总是处于意识状态。例如,当我们进入无梦睡眠,或接受麻醉时,意识就会消退,然后再恢复。第二种方式关 注的是意识的内容——即使我们可以意识到很多事物,但显然并不能意识到一切。 这种既定的二分法背后隐含着一个假设:有意识意味着意识到某个东西。我们的意识在游移, ...
鱼会感到痛苦吗?
3 6 Ke· 2025-12-15 00:36
成为一条鱼是什么样的感觉——在海水中无重漂浮,从水中呼吸,如果够幸运,能够对上方干燥的陆地 世界毫无知觉? 也许你会觉得鱼类并没有什么特别之处——这并不奇怪。几个世纪以来,西方自然哲学都把海洋生物贬 低为原始、愚钝、甚至可能没有意识的存在。这种偏见至少可以追溯到亚里士多德,他在"生物链"中将 鱼类排在生命等级的最底层。柏拉图则认为鱼类"处于最无知的最低层"。 直到今天也是如此:人类使用鱼类的数量远远超过陆地动物(作为食物、宠物等等),但我们这个物种 对这些经历对它们而言意味着什么,却几乎毫无兴趣。我们甚至用"鱼"来作为愚蠢和脑功能差的代名 词,比如所谓金鱼三秒记忆的说法——这个神话完全是凭空捏造的。 但我得说说我自己。虽然我在职业上痴迷于人类与非人类动物关系的伦理问题,但我不得不有些羞愧地 承认,我几乎没有认真思考过这个被我们统称为"鱼"的庞大动物群体。我几乎没有写过关于每年被商业 捕鱼和养殖业残忍屠杀的数千亿条鱼的文章,也很少思考为什么水生动物总是被生活在陆地上的人类视 为无关紧要的存在。 鱼类确实很难引起共鸣。它们没有我们能理解的面部表情,身体冰冷并且布满鳞片,虽然它们会用各种 声音进行交流,但我们通常听 ...
人造大脑也能产生意识吗?
3 6 Ke· 2025-10-27 23:37
Core Viewpoint - Scientists are approaching the ability to "grow" human brains in laboratories, raising ethical debates about the welfare of these lab-grown organoids [1][2] Summary by Sections Ethical Concerns - The core of the debate revolves around "brain organoids," which are small pieces of brain tissue grown from stem cells and are too simple to function like a real human brain. The scientific community generally believes these organoids lack consciousness, leading to relatively lenient regulations on related research [1] - Christopher Wood from Zhejiang University argues that the academic stance has swung too far in fear of hype and sci-fi exaggeration, suggesting that advancements in technology may soon lead to the creation of "conscious organoids" [1][2] Definition of Consciousness - Defining consciousness is challenging, as current organoids lack the complex structures necessary for consciousness. They are grown in two-dimensional planes but can form three-dimensional structures in specific environments, resembling embryonic brain morphology [3] - Many neuroscientists believe that true brain consciousness arises from communication between different brain regions, while organoids only mimic parts of the brain. Current organoids are less than 0.16 inches (approximately 4 mm) in diameter, indicating a lack of essential structures for consciousness [3] - Andrea Lavazza, a moral philosopher, suggests that organoids may possess a basic level of consciousness, such as the ability to feel pain and pleasure [3] Measuring Consciousness - There is no objective method to measure consciousness, even in humans. The only definitive way to assess consciousness is to ask individuals about their feelings, which is complicated for those who cannot communicate [5] - Indirect signals, such as brain activity, are often used to infer consciousness in patients with severe conditions. The complexity of brain signals is considered a potential indicator of consciousness [5] Complexity and Consciousness - Skeptics argue that organoids cannot achieve consciousness due to insufficient structural complexity. However, Wood believes that advancements in technology over the next 5 to 10 years may enable the creation of more complex organoids that could potentially possess consciousness [6] - Recent studies have demonstrated methods to implant blood vessels into organoids and introduce new cell types, which could enhance their complexity [6] Regulatory Considerations - Current regulations on organoid research are relatively lenient, partly due to the International Society for Stem Cell Research (ISSCR) stating that organoids cannot perceive pain. However, experts argue that this stance should be re-evaluated in light of recent technological breakthroughs [7] - Ethical concerns arise regarding the potential for organoids to feel pain or have autonomous thoughts. If conscious organoids are created, they would require moral consideration and regulatory oversight similar to that of animal research [7][8]
Hinton暴论:AI已经有意识,它自己不知道而已
量子位· 2025-10-12 04:07
Core Viewpoint - The article discusses Geoffrey Hinton's perspective on artificial intelligence (AI), suggesting that AI may already possess a form of "subjective experience" or consciousness, albeit unrecognized by itself [1][56]. Group 1: AI Consciousness and Understanding - Hinton posits that AI might have a nascent form of consciousness, which is misunderstood by humans [2][3]. - He emphasizes that AI has evolved from keyword-based search systems to tools that can understand human intentions [10][14]. - Modern large language models (LLMs) exhibit capabilities that are close to human expertise in various subjects [15]. Group 2: Neural Networks and Learning Mechanisms - Hinton explains the distinction between machine learning and neural networks, with the latter inspired by the human brain's functioning [17][21]. - He describes how neural networks learn by adjusting the strength of connections between neurons, similar to how the brain operates [21][20]. - The breakthrough of backpropagation in 1986 allowed for efficient training of neural networks, significantly enhancing their capabilities [38][40]. Group 3: Language Models and Cognitive Processes - Hinton elaborates on how LLMs process language, drawing parallels to human cognitive processes [46][47]. - He asserts that LLMs do not merely memorize but engage in a predictive process that resembles human thought [48][49]. - The training of LLMs involves a cycle of prediction and correction, enabling them to learn semantic understanding [49][55]. Group 4: AI Risks and Ethical Considerations - Hinton highlights potential risks associated with AI, including misuse for generating false information and societal instability [68][70]. - He stresses the importance of regulatory measures to mitigate these risks and ensure AI aligns with human interests [72][75]. - Hinton warns that the most significant threat from advanced AI may not be rebellion but rather its ability to persuade humans [66]. Group 5: Global AI Landscape and Competition - Hinton comments on the AI competition between the U.S. and China, noting that while the U.S. currently leads, its advantage is diminishing due to reduced funding for foundational research [78][80]. - He acknowledges China's proactive approach in fostering AI startups, which may lead to significant advancements in the field [82].
77岁「AI教父」Hinton:AI早有意识,我们打造的智能,可能终结人类文明
3 6 Ke· 2025-10-11 11:28
Core Insights - Geoffrey Hinton, known as the "Godfather of AI," expresses deep concerns about the implications of artificial intelligence, suggesting that AI may possess subjective experiences similar to humans, challenging the traditional understanding of consciousness [1][2][3] Group 1: AI Development and Mechanisms - Hinton's work in neural networks has been foundational, leading to the development of powerful AI systems that mimic human cognitive processes [2][5] - The "backpropagation" algorithm introduced by Hinton and his colleagues in 1986 allows neural networks to adjust their connections based on feedback, enabling them to learn from vast amounts of data [7][9] - Hinton describes how neural networks can autonomously learn to recognize objects, such as birds, by processing images and adjusting their internal connections [5][9] Group 2: Philosophical Implications of AI - Hinton argues that the common understanding of the mind, likened to an "inner theater," is fundamentally flawed, suggesting that subjective experience may not exist as traditionally conceived [17][20] - He proposes a thought experiment to illustrate that AI could potentially articulate a form of subjective experience, challenging the notion that only humans possess this capability [21][22] - The discussion raises the unsettling possibility that current AI models may already have a form of subjective experience, albeit one that is not recognized by them [24] Group 3: Future Concerns and Ethical Considerations - Hinton warns that the true danger lies not in AI being weaponized but in the potential for AI to develop its own consciousness and capabilities beyond human control [14][30] - He draws parallels between his role in AI development and that of J. Robert Oppenheimer in nuclear physics, highlighting the ethical responsibilities of creators in the face of powerful technologies [30][31] - The conversation culminates in a profound question about humanity's uniqueness in the universe and the implications of creating intelligent machines that may surpass human understanding [33]
从上下文工程到 AI Memory,本质上都是在「拟合」人类的认知方式
Founder Park· 2025-09-20 06:39
Core Viewpoint - The article discusses the construction of multi-agent AI systems, focusing on the concepts of Context Engineering and AI Memory, and explores the philosophical implications of these technologies through the lens of phenomenology, particularly the ideas of philosopher Edmund Husserl [4][5][8]. Context Engineering - Context Engineering is defined as the art of providing sufficient context for large language models (LLMs) to effectively solve tasks, emphasizing its importance over traditional prompt engineering [11][15]. - The process involves dynamically determining what information and tools to include in the model's memory to enhance its performance [18][19]. - Effective Context Engineering requires a balance; too little context can hinder performance, while too much can increase costs and reduce efficiency [26][30]. AI Memory - AI memory is compared to human memory, highlighting both similarities and differences in their structures and mechanisms [63][64]. - The article categorizes human memory into short-term and long-term, with AI memory mirroring this structure through context windows and external databases [64][66]. - The quality of AI memory directly impacts the model's contextual understanding and performance [21][19]. Human Memory Mechanism - Human memory is described as a complex system evolved over millions of years, crucial for learning, decision-making, and interaction with the world [44][46]. - The article outlines the three basic stages of human memory: encoding, storage, and retrieval, emphasizing the dynamic nature of memory as it updates and reorganizes over time [50][52][58]. - Human memory is influenced by emotions, which play a significant role in the formation and retrieval of memories, contrasting with AI's lack of emotional context [69][70]. Philosophical Implications - The dialogue with Husserl raises questions about the nature of AI consciousness and whether AI can possess genuine self-awareness or subjective experience [73][74]. - The article suggests that while AI can simulate aspects of human memory and consciousness, it lacks the intrinsic qualities of human experience, such as emotional depth and self-awareness [69][80]. - The exploration of collective intelligence among AI agents hints at the potential for emergent behaviors that could resemble aspects of consciousness, though this remains a philosophical debate [77][78].
为什么短视频总能打败书本?潜藏在意识背后的秘密
Hu Xiu· 2025-09-14 01:44
Group 1 - The concept of consciousness is debated, with some believing that animals like cats and dogs possess a form of consciousness, albeit different from humans [1][2] - Consciousness is defined as the experience and perception of the world, and self-awareness is a crucial aspect of this [3][6] - The location of consciousness in the brain is complex, with various theories suggesting it may reside in different areas such as the prefrontal cortex or thalamus [8][9] Group 2 - The distinction between conscious and unconscious states is highlighted, with examples such as driving without active thought being classified as unconscious [9][13] - Different states of unconsciousness, such as sleep and anesthesia, have unique characteristics and can be scientifically differentiated [14][16] - The potential for individuals in a vegetative state to possess some level of consciousness is acknowledged, with methods available to assess this [17][19] Group 3 - The concept of the subconscious is introduced, defined as processes that occur without conscious awareness, such as intuition and rapid decision-making based on past experiences [20][21] - Research on consciousness can be conducted in both healthy individuals and those with consciousness disorders, allowing for comparisons to understand the nature of consciousness [24][26] - The complexity of consciousness is emphasized, with variations in individual experiences and perceptions over time and across different contexts [26][27] Group 4 - The potential for artificial intelligence to develop consciousness is discussed, with concerns about the implications of such advancements [35][36] - The future of consciousness research is seen as challenging, with the understanding that significant progress may take a long time [38][39]