Gemini's accuracy jumps from 21% to 97%! Google's one trick: copy and paste
猿大侠·2026-01-19 04:11

Core Insights
- A recent study by Google Research reveals that simply repeating a question can raise the accuracy of large language models (LLMs) from 21.33% to 97.33% without requiring reasoning capabilities [1][4][18]
- This technique, termed "prompt repetition," challenges the need for complex prompting strategies like "Chain of Thought" and "Multi-shot" [1][9][10] (a minimal sketch of the technique follows this summary)

Group 1: Effectiveness of Prompt Repetition
- The study showed that prompt repetition outperformed the baseline in 47 out of 70 tests, with no losses recorded [12][13]
- In a test that asked the model to identify the 25th name in a list of 50, the accuracy of Gemini 2.0 Flash-Lite improved from 21.33% to 97.33% with repetition [16][18] (a sketch of this test setup also follows)
- Repetition gives the model a "look-back" opportunity, letting it attend to information it has already seen and thereby improving performance [29][32]

Group 2: Efficiency and Cost-Effectiveness
- Prompt repetition does not significantly affect generation speed, because processing of the repeated prompt is highly parallelizable [36][40]
- Developers can therefore reach high accuracy without moving to larger, more expensive models, making the technique cost-effective [41][42]
- Raising smaller models' performance to match or exceed that of larger models represents a significant practical advance [42]

Group 3: Limitations and Safety Considerations
- While effective for retrieval tasks, prompt repetition is not suitable for reasoning tasks, where models may already repeat the prompt internally [46][52]
- The stronger attention that repetition induces could amplify certain instructions, raising security concerns about model vulnerabilities [56][58]
- Developers should weigh the implications of prompt repetition for both performance and security, and may even use it as a defensive strategy [60][61]
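To make the idea concrete, here is a minimal sketch of prompt repetition: the question is simply pasted into the input a second time before the request is sent. The `repeat_prompt` and `send_to_llm` helpers, the separator, and the example question are illustrative assumptions rather than code from the Google Research paper; `send_to_llm` stands in for whatever LLM client you actually use.

```python
# Minimal sketch of "prompt repetition": the same question is pasted into
# the input twice before it is sent to the model. send_to_llm is a
# hypothetical placeholder for a real LLM client, not an API from the paper.

def repeat_prompt(prompt: str, copies: int = 2, separator: str = "\n\n") -> str:
    """Return the prompt concatenated with itself `copies` times."""
    return separator.join([prompt] * copies)


def send_to_llm(full_prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. a Gemini client)."""
    return f"<model response to {len(full_prompt)} input characters>"


if __name__ == "__main__":
    question = ("Here is a list of 50 names: ... "
                "What is the 25th name in the list?")
    baseline_answer = send_to_llm(question)                           # question sent once
    repeated_answer = send_to_llm(repeat_prompt(question, copies=2))  # question sent twice
    print(baseline_answer)
    print(repeated_answer)
```

Because the duplicated text is consumed in the parallel prefill pass rather than during token-by-token generation, the extra copy adds little latency, while the second pass over the question provides the "look-back" effect described above.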

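For the retrieval test mentioned in Group 1 (finding the 25th name in a list of 50), a small evaluation harness might look like the sketch below. The name pool, the `ask_model` placeholder, the answer-matching rule, and the trial count are all assumptions made for illustration; the article reports only the resulting accuracies (21.33% without repetition, 97.33% with it, for Gemini 2.0 Flash-Lite), not this harness.

```python
import random
from typing import Callable

# Small pool of base names; an index suffix keeps the 50 entries distinct.
FIRST_NAMES = ["Alice", "Bob", "Carol", "David", "Erin", "Frank", "Grace",
               "Heidi", "Ivan", "Judy", "Karl", "Liam", "Mona", "Nina",
               "Oscar", "Peggy", "Quinn", "Rita", "Sam", "Tina"]


def make_trial(rng: random.Random, list_len: int = 50, target_idx: int = 25):
    """Build one prompt (a list of names plus the question) and its gold answer."""
    names = [rng.choice(FIRST_NAMES) + str(i) for i in range(list_len)]
    prompt = ("Here is a list of names: " + ", ".join(names) + ". "
              f"What is the {target_idx}th name in the list? "
              "Answer with the name only.")
    return prompt, names[target_idx - 1]


def accuracy(ask_model: Callable[[str], str], repeat: bool, trials: int = 20) -> float:
    """Fraction of trials where the model returns the correct name."""
    rng = random.Random(0)
    correct = 0
    for _ in range(trials):
        prompt, gold = make_trial(rng)
        # Prompt repetition: paste the identical prompt a second time.
        full = (prompt + "\n\n" + prompt) if repeat else prompt
        if ask_model(full).strip().lower() == gold.lower():
            correct += 1
    return correct / trials

# Usage (ask_model would wrap a real LLM client, e.g. a Gemini call):
#   baseline = accuracy(ask_model, repeat=False)
#   repeated = accuracy(ask_model, repeat=True)
```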