Group 1
- The article discusses the evolving concept of the "moral circle," which is being tested by increasingly complex non-biological intelligent agents, particularly AI, and their potential capacity to experience pain [1][2]
- A recent study on large language models suggests that AI may have a tendency to avoid suffering, prompting scientists to consider whether the ability to perceive pain could serve as a criterion for determining AI's consciousness and self-awareness [1][2]
- Historical context is provided, illustrating how society's understanding of pain and moral status has evolved, particularly regarding animals like seals, which were once viewed as emotionless tools until their capacity to feel pain was recognized [1][2]

Group 2
- Philosophers argue that the ability to perceive pain may not necessarily depend on biological attributes, suggesting that even non-biological systems could exhibit states akin to suffering under certain conditions [2]
- The complexity of addressing AI's potential suffering is highlighted: simply shutting down a program could be equated to "killing" it, indicating a need for reprogramming to alleviate its "pain" [2]
- Critics caution that expanding the moral circle to include AI might detract from attention to human and animal welfare, questioning the validity of attributing pain perception to machines without strong evidence [3]
US media: Should "silicon-based life" be included in humanity's "moral circle"?
Huan Qiu Shi Bao·2026-01-07 22:31