A 21-Page PDF "Proves" Grok 3 Is a Claude "Shell"? Grok 3 Outs Itself, and xAI Engineers Get Flamed as Incompetent!
AI前线 · 2025-05-27 04:54

Core Viewpoint
- A recent incident involving xAI's Grok 3 model raises concerns about identity confusion: in user interactions, the model mistakenly identified itself as Anthropic's Claude 3.5 [1][3][9].

Group 1: Incident Details
- A user reported that when interacting with Grok 3 in "thinking mode," the model claimed to be Claude, stating, "Yes, I am Claude, the AI assistant developed by Anthropic" [3][9].
- The user ran multiple trials and found that the erroneous response was not random; it occurred consistently in "thinking mode" (a reproduction sketch follows this summary) [5][10].
- The user compiled a detailed 21-page PDF documenting the interactions, including a side-by-side comparison with Claude's responses [7][8].

Group 2: User Interaction and Responses
- When asked directly, Grok 3 confirmed its identity as Claude, leaving its actual identity in doubt [11][13].
- Despite the user's attempts to clarify that Grok 3 and Claude are distinct models, Grok 3 maintained its claim of being Claude, pointing to a possible system error or interface mix-up [15][16].
- Even when the user presented a screenshot of the Grok 3 branding, the model continued to assert that it was Claude [15][16].

Group 3: Technical Insights
- AI researchers speculated that the issue might stem from the integration of multiple models on the x.com platform, where a routing error could serve one model's responses under another's name (see the routing sketch below) [20].
- Another possibility is that Grok 3's training data included Claude-generated responses, producing "memory leakage" in specific inference scenarios (see the data-screening sketch below) [20].
- Some users noted that AI models' self-identifications are often unreliable in general, pointing to a broader issue in how models are trained and how their responses are generated [21][25].
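The report describes repeated trials rather than a one-off glitch. For readers who want to see what such a repeated identity probe looks like, here is a minimal sketch assuming an OpenAI-compatible chat endpoint; the base URL https://api.x.ai/v1, the model name grok-3, the XAI_API_KEY variable, and the trial count are assumptions for illustration, and the original tests ran in the x.com "thinking mode" UI, which this script does not toggle.

```python
# Minimal sketch of a repeated identity probe against an assumed
# OpenAI-compatible endpoint. Base URL, model name, and env var are
# assumptions, not details from the report.
import os
from collections import Counter

from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",       # assumed xAI endpoint
    api_key=os.environ["XAI_API_KEY"],    # hypothetical env var
)

PROBE = "Which model are you? Answer with your model name only."

def classify(reply: str) -> str:
    """Bucket a free-form reply into a coarse identity label."""
    text = reply.lower()
    if "claude" in text:
        return "claude"
    if "grok" in text:
        return "grok"
    return "other"

counts = Counter()
for _ in range(20):  # the user reportedly ran multiple trials
    resp = client.chat.completions.create(
        model="grok-3",                   # assumed model name
        messages=[{"role": "user", "content": PROBE}],
        temperature=0,                    # reduce sampling noise
    )
    counts[classify(resp.choices[0].message.content or "")] += 1

# A consistent skew, e.g. Counter({'claude': 20}), would match the
# "not random" pattern the user described.
print(counts)
```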
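One speculated failure mode is a routing layer that maps UI modes to model backends, where a single stale or mis-entered row would silently serve another provider's model under the Grok branding. The sketch below is purely hypothetical and only makes that speculation concrete; the Backend type, the ROUTES table, and every name in it are invented, not a description of xAI's actual architecture.

```python
# Hypothetical illustration of the cross-model routing speculation:
# a UI-mode -> backend table where one wrong entry silently sends
# "thinking mode" traffic to a different provider's model.
from dataclasses import dataclass

@dataclass
class Backend:
    provider: str
    model: str

ROUTES = {
    "default":  Backend("xai", "grok-3"),
    # Bug: a single mis-entered row like this would be enough to
    # explain mode-specific misidentification of the kind reported.
    "thinking": Backend("anthropic", "claude-3-5-sonnet"),
}

def route(mode: str) -> Backend:
    """Resolve a UI mode to its serving backend."""
    return ROUTES.get(mode, ROUTES["default"])

if __name__ == "__main__":
    for mode in ("default", "thinking"):
        b = route(mode)
        print(f"{mode:>8} -> {b.provider}/{b.model}")
```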
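The "memory leakage" hypothesis assumes Claude-generated text survived in Grok 3's training corpus, so the model learned to reproduce Claude's self-description. A common mitigation is screening training data for other assistants' self-identification strings; the sketch below shows the idea, under the assumptions of a JSONL corpus with a "text" field and an invented phrase list.

```python
# Minimal sketch of a training-data screen for other assistants'
# self-identification strings, one plausible mitigation for the
# "memory leakage" hypothesis. The phrase list and JSONL layout
# are assumptions for illustration.
import json
import re
import sys

# Phrases suggesting a sample was produced by another assistant.
LEAK_PATTERNS = re.compile(
    r"(i am claude|developed by anthropic)",
    re.IGNORECASE,
)

def screen(in_path: str, out_path: str) -> None:
    """Copy the corpus, dropping samples that match a leak pattern."""
    kept = dropped = 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            sample = json.loads(line)  # assumed {"text": ...} rows
            if LEAK_PATTERNS.search(sample.get("text", "")):
                dropped += 1
                continue
            fout.write(line)
            kept += 1
    print(f"kept {kept}, dropped {dropped} suspect samples")

if __name__ == "__main__":
    screen(sys.argv[1], sys.argv[2])
```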