Prompt Techniques
An overlooked prompt technique turns out to be copy + paste.
数字生命卡兹克· 2026-01-22 03:09
Core Viewpoint
- The article discusses a technique from a Google paper showing that repeating a prompt can raise the accuracy of non-reasoning large language models (LLMs) from 21.33% to 97.33% [1][7].

Group 1: Experiment Overview
- Google ran experiments on seven popular non-reasoning models, including Gemini 2.0 Flash, GPT-4o, and Claude 3, to test the effectiveness of prompt repetition [13].
- This simple technique won 47 of 70 tests and lost none, a clear performance improvement across all tested models [25].

Group 2: Mechanism of Improvement
- The improvement is attributed to the nature of causal language models, which predict tokens left to right. With the prompt repeated, tokens in the second copy can "look back" at a complete first pass over the question, enhancing the model's understanding [28][30].
- In effect, the model gets a second chance to process the information, leading to more accurate responses [39][40].

Group 3: Implications for Prompt Engineering
- For many straightforward Q&A scenarios, simply repeating the question can be a powerful optimization strategy, with no need for complex prompt structures [50].
- Future directions mentioned in the paper include integrating this repetition technique into model training, which could further enhance performance [52].
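As a rough illustration of the technique, a repeated prompt can be built by simply concatenating the question with itself before sending it to a model. The helper below is a minimal sketch; the function name, default separator, and repetition count are illustrative assumptions, not taken from the paper:

```python
def repeat_prompt(question: str, times: int = 2, sep: str = "\n\n") -> str:
    """Build a prompt that restates the question several times.

    Because causal LLMs read left to right, every token in the second
    copy can attend back to a full first pass over the question.
    """
    return sep.join([question] * times)

# The model would receive the question twice, separated by a blank line.
prompt = repeat_prompt("Which month has 28 days?")
```

The resulting string would then be passed as the user message in an ordinary model call; nothing else about the request changes.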
The harsher you scold AI, the smarter it gets
投资界· 2025-10-25 06:33
Core Insights
- The article discusses the surprising findings of a study on how politeness affects the performance of AI language models: rudeness can draw better results from AI [5][11][21].

Group 1: Study Findings
- The study, conducted by researchers from Penn State University, found that rude prompts yielded higher accuracy from AI models than polite ones [11][15].
- Very polite prompts scored 80.8% accuracy, while very rude prompts reached 84.8%, a 4 percentage point improvement [15].
- The study involved 250 questions across various subjects, with each prompt tested multiple times to ensure reliability [12][14].

Group 2: Implications of Politeness
- Politeness often conveys uncertainty in human communication, which may lead AI to interpret polite requests as less clear and more ambiguous [16][17].
- Conversely, rude prompts provide clear and direct instructions, leading to more precise responses from AI [18][19].
- The article suggests that the effectiveness of communication with AI reflects broader human communication patterns, where assertiveness often yields better results [20][21].

Group 3: Philosophical Reflections
- The relationship between humans and AI raises questions about communication styles and the efficiency of directness versus politeness [20][21].
- The article posits that AI, trained on vast amounts of human data, mirrors human tendencies, revealing insights about human behavior and communication [21].
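The study's reliability step, testing each prompt multiple times and averaging, can be sketched as a small scoring helper. This is an illustrative reconstruction of the evaluation arithmetic, not the researchers' actual code:

```python
from statistics import mean

def accuracy_over_trials(results: list[list[bool]]) -> float:
    """Mean accuracy across questions, each tested several times.

    `results[i]` holds the correctness of every repeated trial of
    question i; per-question rates are averaged, then pooled.
    """
    per_question = [mean(trials) for trials in results]
    return mean(per_question)

# Two questions, three trials each: one answered 2/3, one 3/3.
acc = accuracy_over_trials([[True, True, False], [True, True, True]])
```

Averaging per question first keeps a frequently retried question from dominating the pooled score.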
The harsher you scold AI, the smarter it gets?
Hu Xiu· 2025-10-17 02:59
Core Insights
- The article discusses a study titled "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy," which concludes that using polite language with AI results in poorer performance than using rude or aggressive prompts [4][30].

Group 1: Study Findings
- The study, conducted by researchers from Penn State University, involved 50 multiple-choice questions across various subjects, testing different levels of politeness in prompts [29].
- Accuracy rose from 80.8% with very polite prompts to 84.8% with very rude prompts, a 4 percentage point improvement [32][34].
- Accuracy by tone: Very Polite 80.8%, Polite 81.4%, Neutral 82.2%, Rude 82.8%, Very Rude 84.8% [35].

Group 2: Implications of Communication Style
- Politeness often conveys uncertainty, leading AI to provide more cautious and vague responses [46][56].
- In contrast, aggressive prompts signal clarity and certainty, prompting AI to deliver more precise and direct answers [60][62].
- The findings reflect broader human communication patterns, where assertiveness can lead to more effective outcomes in ambiguous situations [70][72].

Group 3: Philosophical Reflections
- The article raises questions about the nature of human-AI interaction, suggesting the relationship may require a more direct and clear communication style rather than one based on politeness [75][79].
- It posits that AI, trained on human data, reflects human communication flaws, highlighting the need for more straightforward expression of intentions [77][86].
- The conclusion emphasizes the importance of sincerity and clarity in communication, advocating a balance between respect and directness in interactions with AI [85][89].
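Generating the tone variants of a question can be sketched as prefixing it with tone-specific wording. The prefixes below are hypothetical placeholders for the five levels the study names; the paper's exact phrasings are not reproduced here:

```python
# Hypothetical tone prefixes for the study's five politeness levels.
TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to answer this question? ",
    "polite": "Please answer the following question. ",
    "neutral": "",
    "rude": "Answer this question. ",
    "very_rude": "You'd better get this right. Answer the question: ",
}

def with_tone(question: str, tone: str) -> str:
    """Prepend a tone prefix to a multiple-choice question."""
    return TONE_PREFIXES[tone] + question

# Five phrasings of the same question, one per tone level.
variants = {t: with_tone("What is 2 + 2? A) 3 B) 4", t) for t in TONE_PREFIXES}
```

Holding the question fixed while varying only the prefix is what lets the study attribute any accuracy gap to tone alone.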
The harsher you scold AI, the smarter it gets?
数字生命卡兹克· 2025-10-17 01:32
Core Viewpoint
- The article discusses a study with a counterintuitive finding: the more polite the prompt given to AI, the worse its performance, while rudeness leads to better results [3][26].

Group 1: Study Findings
- The study, conducted by researchers from Pennsylvania State University, involved 50 multiple-choice questions across various subjects, testing different levels of politeness in prompts [22][25].
- "Very polite" prompts reached 80.8% accuracy, while "very rude" prompts reached 84.8%, a 4 percentage point improvement with rudeness [26][27].
- The study suggests that less capable models respond better to rude prompts, a trend of "the more you insult it, the smarter it gets" [28][29].

Group 2: Human Communication Insights
- The article posits that politeness often conveys uncertainty in human interactions, as people tend to be polite when they are unsure or seeking help [34][38].
- In contrast, direct and rude communication signals clarity and certainty, prompting more effective responses from AI [42][44].
- The author draws parallels between human communication and AI interactions, suggesting that the AI's training data reflects a preference for directness over politeness [40][58].

Group 3: Philosophical Implications
- The article raises philosophical questions about communicating with AI, asking whether humans should treat AI as a subordinate tool requiring harsh commands or reflect on their own communication habits [56][60].
- It emphasizes the importance of clear and direct language with AI, advocating expressing needs without unnecessary politeness [62][65].
- The conclusion suggests that AI serves as a mirror reflecting human communication flaws, urging a shift toward more sincere and straightforward interactions [57][66].
o3-pro's answer to a tough word puzzle draws a crowd; former OpenAI employee mocks Apple: if this isn't reasoning, what is
量子位· 2025-06-13 02:25
Core Viewpoint
- OpenAI's latest reasoning model, o3-pro, demonstrates strong reasoning capabilities but mixed performance across evaluations, indicating a need for context and specific prompts to maximize its potential [1][2][3][4].

Evaluation Results
- o3-pro produced a correct answer in 4 minutes and 25 seconds during a reasoning test, showcasing its ability to process complex queries [2].
- In official evaluations, o3-pro surpassed previous models like o3 and o1-pro, becoming OpenAI's best coding model [8].
- On the LiveBench leaderboard, however, o3-pro led o3 by only 0.07 points, and it trailed o3 on agentic coding (31.67 vs. 36.67) [11].

Contextual Performance
- o3-pro excels in short-context scenarios, improving on o3, but struggles with long-context processing, scoring 65.6 versus Gemini 2.5 Pro's 90.6 in 192k-context tests [15][16].
- The model's performance is highly dependent on the background information provided, as noted by user experiences [24][40].

User Insights
- Bindu Reddy, a former executive at Amazon and Google, pointed out that o3-pro lacks proficiency in tool usage and agent capabilities [12].
- Ben Hylak, a former engineer at Apple and SpaceX, emphasized that o3-pro's effectiveness increases significantly when it is treated as a report generator rather than a chat model, requiring ample context for optimal results [22][24][26].

Comparison with Other Models
- Hylak found o3-pro's outputs superior to those of Claude Opus and Gemini 2.5 Pro, highlighting its unique value in practical applications [39].
- The model's ability to understand its environment and accurately describe tool usage has improved, making it a better coordinator in tasks [30][31].

Conclusion
- While o3-pro has advanced reasoning capabilities, its performance is contingent on the context and prompts provided, necessitating a strategic approach to maximize its utility in various applications [40][41].
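Hylak's "report generator" approach amounts to packing all relevant background into one long, structured request instead of chatting turn by turn. The helper below is a minimal sketch of assembling such a prompt; the function name and section labels are illustrative, not a documented o3-pro format:

```python
def build_report_prompt(goal: str, context_docs: list[str],
                        constraints: list[str]) -> str:
    """Assemble one context-rich prompt for a report-style model call.

    All background documents and constraints are packed into a single
    request up front, rather than spread across chat turns.
    """
    parts = ["## Goal", goal, "## Context"]
    parts += [f"- {doc}" for doc in context_docs]
    parts.append("## Constraints")
    parts += [f"- {c}" for c in constraints]
    parts += ["## Task", "Produce a detailed report addressing the goal."]
    return "\n".join(parts)

prompt = build_report_prompt(
    goal="Evaluate migrating our service to a new database",
    context_docs=["current schema overview", "traffic statistics for Q1"],
    constraints=["no downtime during business hours"],
)
```

The assembled string would be sent as a single message; the point is that the model sees every relevant document before it starts reasoning.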