Connectionism
The proposer of the "Chinese Room" has died; Hinton remembered being publicly "teased" by him for half a lifetime
36Kr · 2025-11-30 06:10
Core Viewpoint
- The article discusses the life and impact of philosopher John Searle, particularly focusing on his famous thought experiment, the "Chinese Room," which challenges the understanding of artificial intelligence and the nature of comprehension [1][3][35]

Group 1: John Searle's Contributions
- Searle's "Chinese Room" thought experiment, proposed in 1980, is a significant philosophical argument against strong artificial intelligence, asserting that machines can simulate understanding without possessing true comprehension [35][38]
- The experiment illustrates that while a system can manipulate symbols (syntax), this does not equate to understanding their meanings (semantics), thus questioning the validity of the Turing Test [38][40]
- Searle's view emphasizes that human understanding involves more than processing symbols; it requires grasping the meanings behind them [39][40]

Group 2: Impact on AI Discourse
- The "Chinese Room" continues to influence discussions of modern AI, such as large language models like GPT, which are often described as simulating understanding rather than genuinely comprehending language [41][43]
- Critics of Searle, including AI pioneers, argue that his distinction between understanding and simulation may overlook the complexity of cognitive processes in both humans and machines [44][46]
- Hinton, a key figure in deep learning, argues that large language models do exhibit a form of understanding through the interaction of numerous features, in a way that closely parallels human cognition [47][48]

Group 3: Searle's Legacy and Controversies
- Searle's career combined significant philosophical contributions with controversy, including allegations of sexual harassment that led to the revocation of his honorary title at Berkeley [27][28]
- Despite the controversies, Searle's philosophical legacy remains influential, and his ideas continue to provoke thought and debate in philosophy and artificial intelligence [31][32]
- The choice of "Chinese" in the "Chinese Room" serves as a metaphor for the complexities of understanding and the cultural perceptions surrounding language comprehension [50][52]
The proposer of the "Chinese Room" has died; Hinton remembered being publicly "teased" by him for half a lifetime
量子位 · 2025-11-30 05:09
Core Viewpoint
- The article discusses the legacy of philosopher John Searle, particularly his famous "Chinese Room" thought experiment, which challenges the notion of machine understanding in artificial intelligence [1][3][4]

Group 1: John Searle's Contributions
- John Searle passed away at the age of 93, leaving a significant mark on the philosophy of artificial intelligence [1]
- The "Chinese Room" thought experiment, proposed in 1980, is considered a classic of the philosophy of AI, questioning whether machines can truly "understand" or merely simulate understanding [3][4]
- Searle argued that while machines can manipulate symbols, they do not possess genuine understanding, emphasizing the difference between syntax (form) and semantics (meaning) [52][54]

Group 2: The Chinese Room Experiment
- The experiment imagines an English speaker in a room who uses a rulebook to respond to Chinese characters without knowing the language, illustrating that the person inside produces correct responses without comprehending Chinese [49][52]
- Searle concludes that computational processes do not equate to human understanding: machines operate at a syntactic level without grasping semantic content [53][56]
- The debate over AI's ability to understand language continues, with the "Chinese Room" serving as a reference point for discussions about the nature of understanding in AI systems [57][59]

Group 3: Academic and Cultural Context
- Searle's choice of Chinese for the thought experiment reflects cultural stereotypes and the idea of a language that is operationally tractable yet opaque to English speakers [70][73]
- The article highlights the philosophical tension between Searle and AI pioneers such as Geoffrey Hinton, who later argued that large language models do exhibit a form of understanding through their statistical processing of language [64][65]
- Searle's legacy is marked by both his intellectual contributions and the controversies of his later years, including allegations of sexual harassment that damaged his reputation [41][42]
Valued at 84 billion yuan, they have just released their first AI result
量子位 · 2025-09-11 01:58
Core Insights
- Thinking Machines, valued at $12 billion, has released its first research blog post, on overcoming nondeterminism in large language model (LLM) inference [1][51]
- The research attributes the reproducibility problem in LLM outputs to batch non-invariance [3][12]

Group 1: Research Focus
- The post, "Defeating Nondeterminism in LLM Inference," addresses why LLM inference results are often non-reproducible [3][8]
- The root cause identified is batch non-invariance: the output for a single request is influenced by how many requests share its batch [14][15]

Group 2: Technical Findings
- Floating-point non-associativity combined with concurrent execution does lead to differing results in LLM inference, but the post argues this common explanation is incomplete [9][10]
- The primary issue is the lack of batch invariance: serving systems adjust batch sizes dynamically during deployment, which changes the computation (reduction) order of key operations [15][16]

Group 3: Proposed Solutions
- To achieve batch invariance, the research fixes the reduction order in operations such as RMSNorm and matrix multiplication, regardless of batch size [18][19]
- The proposed method compiles a single unified kernel configuration for all input shapes, avoiding switches between parallel strategies as batch size changes, even at a performance cost of about 20% [22][21]

Group 4: Experimental Validation
- Three kinds of experiments validated the findings: inference-determinism verification, performance verification, and verification in a true on-policy reinforcement-learning application [25]
- With batch-invariant kernels, 1000 runs produced 1000 identical outputs, achieving deterministic inference, while non-invariant kernels produced 80 distinct results [27][28]

Group 5: Company Background
- Thinking Machines was co-founded by Mira Murati, former CTO of OpenAI, with a team of notable AI-industry figures, many of them from OpenAI [36][38][46]
- The company recently closed a $2 billion seed round, a record for AI funding, and is now valued at $12 billion despite not yet having a product [51][50]
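The mechanism described above can be sketched in a few lines of numpy. This is an illustrative toy, not Thinking Machines' actual kernels: `chunked_sum` and the chunk sizes stand in for a GPU reduction whose split strategy changes with batch size.

```python
import numpy as np

# 1) Floating-point addition is not associative: the same numbers summed
#    in a different grouping can give a different result.
vals = np.array([1e8, 1.0, -1e8, 0.1], dtype=np.float32)
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]  # 1.0 absorbed by 1e8
regrouped = (vals[0] + vals[2]) + (vals[1] + vals[3])      # large terms cancel first
print(left_to_right, regrouped)  # ~0.1 vs ~1.1

# 2) A reduction whose chunking depends on batch size is therefore
#    batch-NON-invariant: the same request row can sum differently.
def chunked_sum(x, chunk):
    """Sequentially sum per-chunk partials, then sum the partials."""
    total = np.float32(0.0)
    for start in range(0, len(x), chunk):
        partial = np.float32(0.0)
        for v in x[start:start + chunk]:
            partial += v
        total += partial
    return total

row = np.tile(vals, 64)  # one request's activations (256 values)
# Pretend the kernel picks chunk = len(row) // batch_size:
out_batch1 = chunked_sum(row, len(row) // 1)
out_batch8 = chunked_sum(row, len(row) // 8)
print(out_batch1 == out_batch8)  # False: same row, different batch, different sum

# 3) Batch-invariant fix (at some performance cost): always reduce with
#    one fixed chunk size, regardless of batch size.
FIXED_CHUNK = 16
inv_batch1 = chunked_sum(row, FIXED_CHUNK)
inv_batch8 = chunked_sum(row, FIXED_CHUNK)
assert inv_batch1 == inv_batch8  # identical, run to run and batch to batch
```

The ~20% slowdown mentioned above corresponds to step 3: a fixed reduction order forgoes the batch-size-dependent parallelization a tuned kernel would pick.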
Professor Xiao Yanghua: how far is embodied intelligence from "emergence"?
36Kr · 2025-06-27 11:30
Group 1
- The development of artificial intelligence (AI) has followed two clear trajectories: one represented by AIGC (AI-generated content) and the other by embodied intelligence [3][6]
- AIGC is considered a technological revolution due to its foundational nature, its ability to significantly enhance productivity, and its profound impact on societal structures [10][11]
- Embodied intelligence aims to replicate human sensory and motor capabilities, but its impact on productivity is seen as limited compared with cognitive intelligence [11][13]

Group 2
- The current stage of AI development emphasizes data quality and training strategy over sheer data volume and computational power [3][15]
- The scaling law, which highlights the importance of large datasets and computational resources, is crucial for both AIGC and embodied intelligence [14][15]
- The industry faces challenges in gathering sufficient high-quality data for embodied intelligence, which currently lags far behind what is available for language models [20][21]

Group 3
- The future of embodied intelligence relies on machines' ability to understand and interact with human emotions, making emotional intelligence a core requirement for consumer applications [5][28]
- The development of embodied AI is hindered by the difficulty of accurately modeling human experience and environmental interaction [30][32]
- Innovative data-acquisition strategies, such as combining real, synthetic, and simulated data, are needed to overcome current limits in embodied-intelligence training [22][23]
Professor Xiao Yanghua: how far is embodied intelligence from "emergence"? | AI&Society, A Hundred People, A Hundred Questions
腾讯研究院 · 2025-06-27 06:59
Core Viewpoint
- The article discusses the transformative impact of generative AI and embodied intelligence on technology, business, and society, emphasizing the need for a multi-faceted exploration of AI's opportunities and challenges [1]

Group 1: AI Development Trends
- AI development in recent years has followed two clear trajectories: generative AI (AIGC) and embodied intelligence [5][9]
- Generative AI aims to equip machines with human-like cognitive abilities, while embodied intelligence focuses on enabling machines to mimic human sensory and motor capabilities [10][11]
- The current AI landscape prioritizes data quality and training strategy over sheer data volume and computational power [6][19]

Group 2: Embodied Intelligence
- The next phase of embodied intelligence is expected to involve mind-body coordination, echoing the philosophical question of how human-level intelligence arises [6][11]
- The consumer-market adoption of embodied intelligence hinges on machines' ability to empathize with and understand human emotional needs [6][10]
- There is a significant gap in the data required for embodied intelligence to reach its potential, with current datasets lacking the scale necessary for generalization [7][24]

Group 3: AI as a Technological Revolution
- Generative AI qualifies as a technological revolution on three criteria: foundational nature, exponential productivity enhancement, and profound societal impact [13][14]
- The societal implications of AI's cognitive capabilities are vast, potentially affecting all human activities and raising concerns about cognitive laziness among humans [14][16]
- By contrast, embodied intelligence's impact on productivity is seen as limited relative to the cognitive advances of generative AI [15][16]

Group 4: Data and Model Relationships
- The relationship between model algorithms and data is crucial: algorithms determine the lower bound of model performance, while data defines the upper bound [20][21]
- The current focus in AI development is on enhancing data quality and training strategies, particularly for embodied intelligence [19][22]
- The industry faces challenges in data acquisition for embodied intelligence, necessitating innovative approaches to collection and synthesis [25][26]

Group 5: Future Directions
- To overcome data scarcity in embodied intelligence, strategies leveraging real, simulated, and synthetic data are being explored [25][26]
- Wearable devices capable of capturing real-world actions could provide a substantial data foundation for embodied intelligence [26]
- The complexity of human experience and environmental interaction remains a major challenge for the data-driven advancement of embodied intelligence [34][35]