X @Anthropic
Anthropic·2025-10-09 16:28
Previous research suggested that attackers might need to poison a percentage of an AI model’s training data to produce a backdoor. Our results challenge this: we find that even a small, fixed number of documents can poison an LLM of any size. Read more: https://t.co/HGMA7k1Lnf