X @Anthropic
Anthropic · 2025-11-04 00:32
Current language models struggle to reason in ciphered language. New research led by Jeff Guo: training or prompting LLMs to obfuscate their reasoning by encoding it with simple ciphers significantly reduces their reasoning performance.

https://t.co/uqTCGWqmSa

Jeff Guo (@Jeff_Guo_):
New Anthropic research: All Code, No Thought: Current Language Models Struggle to Reason in Ciphered Language

Can LLMs do math when thinking in ciphered text? Across 10 LLMs & 28 ciphers, they only reason accurately in simple ciphers but easily ...
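To make "ciphered reasoning" concrete, here is a minimal sketch of obfuscating a chain-of-thought string with rot13, one classic example of a simple letter-substitution cipher. This is purely illustrative: the paper's actual set of 28 ciphers is not listed in this post, and `encode_reasoning` is a hypothetical helper name, not an API from the research.

```python
import codecs

def encode_reasoning(text: str) -> str:
    """Obfuscate a reasoning string with rot13 (illustrative cipher only).

    rot13 shifts each letter 13 places, leaving digits and punctuation
    untouched, and is its own inverse.
    """
    return codecs.encode(text, "rot13")

plain = "Add 17 and 25 to get 42."
ciphered = encode_reasoning(plain)
print(ciphered)  # letters are substituted, digits pass through

# Applying rot13 twice recovers the original text.
assert codecs.decode(ciphered, "rot13") == plain
```

A model prompted to emit only the ciphered form would have to carry out the arithmetic while "thinking" in text like the output above, which is the setting the research probes.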