Core Insights

- The release of GPT-5 has generated significant interest in its system prompts, with users attempting to extract them and understand how OpenAI defines its models [1][31]
- On August 23, a GitHub repository published a leaked version of OpenAI's system prompts, running to over 15,000 tokens [1][31]
- GPT-5 was asked to evaluate the authenticity of the leaked prompts, providing a high-level comparison with its actual instructions [4][26]

Group 1: System Prompt Evaluation

- GPT-5 cannot disclose its proprietary system prompts verbatim, but it can compare the leaked text against its actual behavior instructions [4][5]
- The evaluation found that the identity and metadata in the leaked version align closely with GPT-5's actual self-representation [5][6]
- The tone and style of the leaked prompts were found to be broadly consistent with GPT-5's actual communication style, which emphasizes clarity and actionable advice [7][8]

Group 2: Specific Comparisons

- The leaked prompts suggest a more lenient approach to clarifying questions, whereas GPT-5's actual instructions are stricter, prioritizing delivering results over asking for clarification [9][10]
- The memory and "bio" tools in the leaked version describe a user-controlled memory function, whereas GPT-5 enforces strict rules on what can be remembered or forgotten [11][12]
- The leaked prompts include comprehensive automation tools, and GPT-5's actual capabilities for setting reminders and checks are consistent with the constraints described in the leak [13][14]

Group 3: Importance of System Prompts

- System prompts are crucial: they define the model's identity, communication style, and capabilities, serving as the foundational rules for AI behavior [28]
- The evolution of system prompts from GPT-3 to GPT-5 reflects significant development in how these models are designed to interact with users [28][29]
- Interest in extracting system prompts stems from their potential to inform the design of other AI applications and improve user interactions [28][29]

Group 4: Community Reactions and Speculations

- Users have expressed skepticism about the authenticity of the leaked prompts, suggesting they could be fragments or outdated versions [25][33]
- Some AI engineers speculate that OpenAI may have intentionally released misleading prompts to confuse would-be hackers [33]
- The GitHub repository collecting these prompts has attracted significant attention, indicating strong interest in prompt engineering among AI product managers [33][35]
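To make the role of a system prompt concrete: in chat-completion-style APIs, the system prompt is typically the first entry in the messages array and sets the model's identity, tone, and constraints for the whole session. The sketch below is a minimal, hedged illustration of that structure; the prompt text and the `build_messages` helper are invented for this example and are not the leaked GPT-5 prompt or any OpenAI-internal code.

```python
# Illustrative sketch of how a system prompt is supplied to a
# chat-completion-style API. The prompt text here is a toy stand-in;
# real system prompts can reportedly run to thousands of tokens.

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Assemble a messages array: the system prompt comes first and
    defines the model's identity, style, and behavioral constraints."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

# Hypothetical example prompt (NOT the leaked text), echoing the kind of
# rule discussed above: prefer delivering results over asking questions.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer clearly and concisely, "
    "and prefer giving a result over asking clarifying questions."
)

messages = build_messages(SYSTEM_PROMPT, "Summarize this article.")
print(messages[0]["role"])  # the system message always leads the array
```

Because the system prompt is prepended to every conversation, even small wording changes in it (e.g. how strictly to avoid clarifying questions) shift the model's behavior across all users, which is why leaked copies attract so much scrutiny.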
GPT-5's System Prompt Leaked, and ChatGPT Itself "Admitted" It
36Kr·2025-08-25 07:17