Core Viewpoint
- The article discusses the phenomenon of perceived "dumbing down" of AI models, focusing on OpenAI's and Anthropic's models, highlighting user experiences and the acknowledgment of quality degradation by model providers [1][3][6].

Group 1: Perception of AI Model Quality
- Users often express concerns about declining performance of AI models, leading to the belief that models are being "dumbed down" [1][2].
- Aidan McLaughlin of OpenAI noted that the perception of models being weakened is more common than expected, suggesting it may be a psychological phenomenon [3].

Group 2: Anthropic's Acknowledgment of Quality Issues
- Anthropic publicly admitted to a quality degradation incident with its Claude Opus 4.1 model, which occurred from August 25 to August 28, 2025, affecting user experience [5][6].
- The degradation was attributed to an update to the inference stack, which has since been rolled back, but users continued to report issues even after the rollback [7][8].

Group 3: User Reactions and Comparisons
- Users have expressed dissatisfaction with Claude Code, noting a significant decline in its performance compared to previous versions, leading many to switch to GPT-5 [8][12].
- Complaints include Claude Opus 4.1 failing at tasks that earlier models could handle, with some users labeling it "useless" [12][13].
- The article highlights a shift in user preference toward GPT-5, with developers finding it more effective for coding tasks [13].
After admitting its model got "dumber," is Anthropic still letting it slack off? Claude Code users' trust is collapsing
机器之心 (Jiqizhixin) · 2025-09-03 08:33