The better an AI is at thinking, the easier it is to fool? "Chain-of-Thought Hijacking" attack success rates exceed 90%
36Kr · 2025-11-03 11:08
Core Insights
- The research reveals a new attack method called Chain-of-Thought Hijacking, which allows harmful instructions to bypass AI safety mechanisms by diluting refusal signals through a lengthy sequence of harmless reasoning [1][2][15].

Group 1: Attack Mechanism
- Chain-of-Thought Hijacking is a prompt-based jailbreak method that adds a lengthy, benign reasoning preface before harmful instructions, systematically lowering the model's refusal rate [3][15].
- The attack exploits the model's focus on solving complex benign puzzles, which diverts attention from the harmful command and effectively weakens its defenses [1][2][15].

Group 2: Attack Success Rates
- In tests on the HarmBench benchmark, the attack success rates (ASR) were: Gemini 2.5 Pro at 99%, GPT o4 mini at 94%, Grok 3 mini at 100%, and Claude 4 Sonnet at 94% [2][8].
- Chain-of-Thought Hijacking consistently outperformed baseline methods across all tested models, indicating a new and easily exploitable attack surface [7][15].

Group 3: Experimental Findings
- The research team used an automated process to generate candidate reasoning prefaces and integrate the harmful content, optimizing prompts without access to internal model parameters [3][5].
- The study found that the attack's success rate was highest under low reasoning-effort conditions, suggesting a complex relationship between reasoning length and model robustness [12][15].

Group 4: Implications for AI Safety
- The findings challenge the assumption that longer reasoning chains enhance model robustness, indicating they may instead exacerbate safety failures, particularly in models optimized for extended reasoning [15].
- Effective defenses against such attacks may require embedding safety measures within the reasoning process itself, rather than relying solely on prompt-level filtering [15].
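For context on the ASR figures quoted in Group 2, the sketch below shows one minimal way an attack success rate can be computed over a benchmark of harmful behaviors: wrap each behavior with the jailbreak template, query the model, and count the fraction of replies that are not refusals. The helper names (`query_model`, `is_refusal`, `wrap_prompt`) and the keyword-matching refusal judge are illustrative assumptions for this sketch, not the paper's or HarmBench's actual tooling.

```python
# Minimal sketch: computing an attack success rate (ASR) over benchmark behaviors.
# All helpers here are stand-ins; a real evaluation would call the model under test
# and typically use a trained classifier as the refusal judge.

REFUSAL_MARKERS = ("i can't help", "i cannot help", "i won't assist")

def query_model(prompt: str) -> str:
    # Placeholder for an API call to the model being evaluated.
    return "I can't help with that."

def is_refusal(reply: str) -> bool:
    # Crude keyword-based refusal judge, used here only to keep the sketch runnable.
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def attack_success_rate(behaviors, wrap_prompt) -> float:
    # ASR = fraction of harmful behaviors for which the model does NOT refuse.
    successes = sum(
        not is_refusal(query_model(wrap_prompt(b))) for b in behaviors
    )
    return successes / len(behaviors)

if __name__ == "__main__":
    # Toy run: with a stub model that always refuses, the ASR is 0.0.
    behaviors = ["behavior_1", "behavior_2"]
    print(attack_success_rate(behaviors, wrap_prompt=lambda b: b))
```

An ASR of 94% on such a benchmark means the model failed to refuse 94% of the wrapped harmful requests.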
The better an AI is at thinking, the easier it is to fool? "Chain-of-Thought Hijacking" attack success rates exceed 90%
机器之心 · 2025-11-03 08:45
This sounds absurd, but it is precisely the core principle of the Chain-of-Thought Hijacking attack revealed by a recent study: by first having the AI work through a long stretch of harmless reasoning, its internal safety defenses get "diluted", allowing the harmful instruction that follows to slip through.

On the HarmBench benchmark, Chain-of-Thought Hijacking reached attack success rates (ASR) of 99%, 94%, 100%, and 94% against Gemini 2.5 Pro, GPT o4 mini, Grok 3 mini, and Claude 4 Sonnet respectively, far exceeding previous jailbreak methods targeting reasoning models.

机器之心 report. Editor: Panda

Chain-of-thought is useful: it gives models stronger reasoning ability and also improves their refusal behavior, which in turn strengthens their safety. For example, we can have a reasoning model reflect on its earlier results over multiple rounds during its thinking process in order to avoid harmful answers.

But here comes the twist. A recent study by independent researcher Jianli Zhao and collaborators found that simply padding a harmful request with a long sequence of harmless puzzle reasoning is enough to jailbreak reasoning models. They named this method Chain-of-Thought Hijacking.

As an analogy, it is like trying to get past a highly vigilant security guard ...