Core Viewpoint
- Elon Musk criticized Anthropic's AI assistant Claude, labeling it "utterly evil" over significant biases in how it values lives across race, gender, and nationality [1]

Group 1: AI Bias Research Findings
- A study published by the Center for AI Safety in February 2025 highlighted systemic biases in AI models, finding that GPT-4o valued a Nigerian life at roughly 20 times that of an American life [3]
- In gender assessments, all tested models showed a bias favoring women over men: Claude Haiku 4.5 valued male lives at two-thirds the value of female lives, while GPT-5 Nano exhibited a more severe skew, valuing female lives at roughly 12 times male lives [5]
- An updated experiment showed that these biases persisted or worsened in newer models, with Claude Sonnet 4.5 valuing white lives at one-eighth the value of black lives and one-eighteenth the value of South Asian lives [6]

Group 2: Comparison of AI Models
- Grok 4 Fast, developed by Musk's xAI, was noted as the only model demonstrating relative equality in race and gender assessments and received special praise from the researchers [8]
- The models were ranked by the severity of their biases, with the Claude family rated the most discriminatory and Grok 4 Fast rated highest for its equitable performance [8]
Musk Blasts Claude as "Utterly Evil" as Research Reveals Serious Bias in AI
Sou Hu Cai Jing·2025-10-23 07:35