An Overlooked Prompt Trick: Copy and Paste

Core Viewpoint
- The article discusses a technique from a Google paper showing that simply repeating the prompt can raise the accuracy of non-reasoning large language models (LLMs) dramatically, from 21.33% to 97.33% [1][7].

Group 1: Experiment Overview
- Google ran experiments on seven popular non-reasoning models, including Gemini 2.0 Flash, GPT-4o, and Claude 3, to test the effectiveness of prompt repetition [13].
- This simple technique won 47 of 70 tests and lost none, demonstrating a clear performance improvement across all tested models [25].

Group 2: Mechanism of Improvement
- The improvement is attributed to the nature of causal language models, which predict tokens left to right. With the prompt repeated, the model can "look back" at the first copy while reading the second, enhancing its understanding of the question [28][30].
- In effect, the model gets a second pass over the information, leading to more accurate responses [39][40].

Group 3: Implications for Prompt Engineering
- For many straightforward Q&A scenarios, simply repeating the question can be a powerful optimization on its own, without resorting to complex prompt structures [50].
- As future work, the paper suggests integrating this repetition technique into model training, which could further improve performance [52].
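To make the technique concrete, here is a minimal sketch of what "repeating the prompt" looks like in practice. The function name, separator, and example question are illustrative choices, not details from the paper; the output string would be sent as the user message to whichever model is being tested.

```python
def repeat_prompt(question: str, times: int = 2, separator: str = "\n\n") -> str:
    """Build a prompt that simply repeats the question `times` times.

    Because a causal LM reads left to right, the second copy of the
    question is processed with the first copy already in context, giving
    the model a "second pass" over the question before it answers.
    """
    return separator.join([question] * times)


# The doubled prompt is then sent as an ordinary user message.
doubled = repeat_prompt("Which is larger, 9.11 or 9.9?")
print(doubled)
```

The separator is a free choice; the key point from the article is only that the full question appears twice, so the answer tokens can attend to either copy.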
