Sam Altman is using ChatGPT wrong! New research: asking for a "direct answer" lowers accuracy, and chain-of-thought prompting's benefits are fading
量子位·2025-06-09 03:52

Core Viewpoint
- Recent research from the Wharton School and other institutions finds that the "direct answer" style of prompt favored by Sam Altman significantly reduces model accuracy [1][9].

Group 1: CoT Prompt Findings
- Adding chain-of-thought (CoT) instructions to prompts does little for reasoning models while increasing time and computational cost [2][6].
- For reasoning models, the accuracy gain from CoT is minimal: o3-mini improved by only 4.1%, while its time consumption rose by 80% [6][23].
- Non-reasoning models show mixed results with CoT prompts, so the benefits must be weighed carefully against the costs [7][12].

Group 2: Experimental Setup
- The research used the GPQA Diamond dataset, which contains graduate-level expert reasoning questions, to test various reasoning and non-reasoning models [5][9].
- Each model was tested under three conditions: forced reasoning, direct answer, and default [10][11].

Group 3: Performance Metrics
- Four metrics were used to evaluate the models: overall results, 100% accuracy, 90% accuracy, and 51% accuracy [12][19].
- For non-reasoning models, CoT prompts improved the average score and the "51% correct" metric, with Gemini Flash 2.0 showing the largest gain [12][13].
- However, on the 100% and 90% accuracy metrics, adding CoT prompts degraded the performance of some models [14][20].

Group 4: Conclusion on CoT Usage
- The study indicates that while CoT can raise overall accuracy, it also increases answer instability [15][22].
- For models such as o3-mini and o4-mini, the gain from CoT prompts is minimal, and for Gemini 2.5 Flash every metric declined [20][21].
- The models' default settings are recommended for most users, since many advanced models already perform reasoning internally [25].
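As a minimal sketch of the setup described above (the exact prompt wordings, trial count, and the `threshold_metrics` helper are assumptions for illustration, not taken from the paper), the three prompting conditions and the repeated-trial accuracy thresholds could look like this:

```python
# Hypothetical prompt templates for the three experimental conditions.
# The wordings are assumptions; the paper's exact phrasing is not given here.
FORCED_COT = "Think through this step by step, then give your answer:\n{question}"
DIRECT = "Answer directly, with no explanation:\n{question}"
DEFAULT = "{question}"  # the model's default behavior, no extra instruction

def threshold_metrics(trials_correct, num_trials):
    """Given per-question counts of correct answers over repeated trials,
    return the fraction of questions answered correctly in at least
    100%, 90%, and 51% of trials (the stability metrics above)."""
    def frac(threshold):
        hits = sum(1 for c in trials_correct if c / num_trials >= threshold)
        return hits / len(trials_correct)
    return {"100%": frac(1.0), "90%": frac(0.9), "51%": frac(0.51)}

# Example: 4 questions, each asked 25 times under one condition.
print(threshold_metrics([25, 23, 13, 5], 25))
# → {'100%': 0.25, '90%': 0.5, '51%': 0.75}
```

The point of the three thresholds is to separate average accuracy from stability: a model can score well on the "51% correct" (majority-vote) metric while failing the "100% correct" one, which is exactly the instability pattern the study attributes to CoT prompting.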