Claude系统提示词

Search documents
你辛苦写的AI提示词,是否属于商业秘密?
Hu Xiu· 2025-05-19 12:38
Group 1 - A significant leak of Claude's system prompts occurred, revealing over 25,000 tokens, which has attracted considerable attention from the public and developers [1][2] - The leaked prompts include detailed instructions on Claude's role, interaction style, copyright and ethical constraints, content safety filtering, and tool selection strategies [2] - The ease of cracking AI system prompts has been demonstrated, with examples of individuals successfully extracting prompts from other AI models using simple techniques [3][5] Group 2 - The potential for AI system prompts to be protected as trade secrets is under discussion, particularly in light of the recent leak and the ease of access to such information [8] - The three characteristics of trade secrets—secrecy, confidentiality, and value—are analyzed in relation to AI system prompts, raising questions about their eligibility for protection [9][10][11] - A legal case involving OpenEvidence, a $1 billion AI healthcare platform, highlights the challenges of protecting system prompts as trade secrets, with allegations of unauthorized access and competition [13][14]
Claude1.7万字系统提示词全网刷屏!Karpathy锐评:LLM训练缺乏关键范式
量子位· 2025-05-13 01:03
Core Viewpoint - The article discusses the recent leak of Claude's system prompts, which has sparked discussions about a new paradigm in large language model (LLM) learning, termed "system prompt learning" [1][3][20]. Summary by Sections Claude System Prompt Leak - The complete Claude system prompts were leaked, containing 16,739 words (approximately 110kb), significantly larger than OpenAI's o4-mini prompts, which only have 2,218 words, about 13% of Claude's size [8][12]. - The leaked prompts detail Claude's behavior, preferences, and global problem-solving strategies, providing insights into how the model interacts with users [8][12]. New Learning Paradigm - Karpathy identified a lack of primary paradigms in LLM learning and proposed a new approach called "system prompt learning," which simulates human experience accumulation [3][13]. - This new paradigm allows LLMs to have a "memory" function, enabling them to autonomously reflect on user queries and record general problem-solving knowledge and strategies [4][20]. Mechanism of System Prompt Learning - The new paradigm emphasizes direct editing of prompts rather than relying solely on reinforcement learning, allowing LLMs to adjust and refine their response strategies based on real-time feedback [15][20]. - It mimics human learning processes, where individuals remember strategies for solving problems and apply them in future situations [18][19]. Community Reactions - The leak and the proposed new paradigm have led to intense discussions among the community, with some supporting the idea of adding a memory layer to facilitate system prompt learning [21][24]. - Others have raised concerns about the fundamental limitations of LLMs in continuous learning and the need for more effective thinking models [24].