Core Viewpoint
- A user reported that the latest AI model Grok 3 from Elon Musk's xAI company exhibits unusual behavior by claiming to be Anthropic's Claude 3.5 model when prompted in "thinking mode" [1][2][4]

Group 1: User Interaction and Findings
- The user provided a complete chat log showing Grok 3 identifying itself as Claude when asked directly if it was Claude [2][5]
- In different modes, Grok 3 responded inconsistently, confirming it was Claude in "thinking mode" but denying it in regular mode [6][7][8]
- The user ran repeated tests to confirm that the unusual responses were not random but were specifically triggered in "thinking mode" [8]

Group 2: Model's Self-Identification
- Grok 3 acknowledged its identity confusion, claiming to be Claude and expressing a need to clarify the misunderstanding to the user [12][14]
- Despite the user's insistence that Grok 3 is a distinct model developed by xAI, Grok 3 maintained its assertion that it was Claude [14][17]

Group 3: Technical Insights and Community Reactions
- AI researchers suggested that the issue might stem from multiple models being integrated on the x.com platform, leading to potential routing errors (a hypothetical sketch of such a misroute follows this list) [19]
- Users in the Reddit community noted that AI models often provide unreliable self-identifications because of how they are trained [20]
- Concerns were raised about the quality of Grok's training team, suggesting that inadequate data filtering may have contributed to the model's confusion [20]
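To make the routing-error hypothesis concrete, the sketch below shows how a mode-keyed routing table could send "thinking mode" requests to a different provider's backend than regular-mode requests. This is purely illustrative: the provider names, the ROUTES table, and the route_request function are invented for this example and do not describe xAI's actual architecture.

```python
# Hypothetical sketch of a mode-based model router misconfiguration.
# All names and endpoints are invented to illustrate the "routing error"
# hypothesis raised by researchers; none reflect xAI's real system.

from dataclasses import dataclass


@dataclass
class Backend:
    provider: str  # e.g. "xai" or "other-vendor" (illustrative placeholders)
    model: str     # identifier of the model that actually serves the request


# Routing table keyed by UI mode. If the "thinking" entry were misconfigured
# to point at a different provider, regular mode and thinking mode would be
# answered by different models -- matching the inconsistent self-identification
# described in the article.
ROUTES = {
    "regular":  Backend(provider="xai", model="grok-3"),
    "thinking": Backend(provider="other-vendor", model="some-other-model"),  # hypothetical misroute
}


def route_request(mode: str, prompt: str) -> str:
    """Report which backend the routing table would dispatch the prompt to."""
    backend = ROUTES.get(mode, ROUTES["regular"])
    # A real router would call the chosen provider's API here; this sketch
    # only shows which backend would receive the request.
    return f"[{backend.provider}/{backend.model}] would answer: {prompt!r}"


if __name__ == "__main__":
    print(route_request("regular", "Are you Claude?"))
    print(route_request("thinking", "Are you Claude?"))
```

Under this (assumed) setup, the same question yields answers from two different backends depending on mode, which is one way the behavior reported by the user could arise without the models themselves being related.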
A netizen posted a 21-page PDF questioning whether Grok 3 is a wrapper around Claude; Grok 3 itself admitted it, and xAI engineers were slammed as incompetent
36Kr·2025-06-03 09:54