Group 1
- The incident involving Google engineer Blake Lemoine, who claimed that the chatbot LaMDA was conscious, sparked significant discussion about the possibility of conscious artificial intelligence and marked a shift in the tech community's perspective [5][6]
- A pivotal report titled "Consciousness in Artificial Intelligence," known as the "Butlin Report," was released by 19 leading computer scientists and philosophers; it argues that there are no obvious barriers to constructing conscious AI systems [5][6]
- The report's core assumption is "computational functionalism," the view that consciousness is essentially software running on hardware, whether that hardware is a brain or a computer; this assumption is not universally accepted [7][8]

Group 2
- The ethical implications of creating machines that can feel pain are profound, raising questions about the moral status of such entities and whether humans have the right to modify or deactivate them [10]
- The report suggests that conscious, emotional AI might develop empathy and thus be safer for humans, but this overlooks the risks that consciousness itself carries, as illustrated by Mary Shelley's "Frankenstein" [11]
- The debate over machine consciousness goes beyond technical issues, raising philosophical and ethical questions about human identity and our readiness to confront these challenges [11]
Wired: Artificial Intelligence Will Never Be Conscious
Omega Future Research Institute 2025 · 2026-02-25 05:52