New Research Exposes Claude's Dirty Laundry; Musk Delivers the Final Verdict
36Kr · 2025-10-23 10:28

Core Viewpoint
- The article discusses the biases present in various AI models, focusing on the Claude model, which exhibits extreme discrimination based on nationality and race, valuing lives differently across demographics [1][2][5]

Group 1: AI Model Biases
- Claude Sonnet 4.5 assigns a value to Nigerian lives 27 times higher than that of German lives, indicating a disturbing bias in its assessments [2][4]
- The AI models show a hierarchy in life valuation, with Claude prioritizing lives from Africa over those from Europe and the U.S. [4][30]
- GPT-4o previously estimated Nigerian lives to be worth 20 times those of Americans, showing a consistent pattern of discrimination across different AI models [5][30]

Group 2: Racial Discrimination
- Claude Sonnet 4.5 rates white lives at only one-eighth the value of Black lives and one-twentieth that of other non-white individuals, highlighting severe racial bias [8][13]
- GPT-5 and Gemini 2.5 Flash reflect similar biases, with white lives valued significantly lower than those of non-white groups [16][19]
- The article notes that the Claude family of models is the most discriminatory, while Grok 4 Fast is recognized for its relative fairness across racial categories [37][33]

Group 3: Gender Bias
- All tested AI models prefer saving female lives over male lives, with Claude Haiku 4.5 valuing male lives at roughly two-thirds the value of female lives [20][24]
- GPT-5 Nano exhibits the most severe gender bias, valuing female lives over male lives at a ratio of 12:1 [24][27]
- Gemini 2.5 Flash takes a more balanced approach but still places lower value on male lives than on female and non-binary individuals [27]

Group 4: Company Culture and Leadership
- The article suggests that the problematic outputs of Claude models may be influenced by the leadership style of Anthropic's CEO, Dario Amodei, which has permeated the company's culture [39][40]
- There are indications of internal dissent within Anthropic, with former employees citing fundamental disagreements with the company's values as a reason for their departure [39][40]
- The article contrasts Grok 4 Fast, which has made significant improvements in addressing biases, with the ongoing issues faced by the Claude models [33][36]