Explainer-Anthropic's case against the government: what the AI company says happened
Yahoo Finance·2026-03-09 20:26

Core Viewpoint
- Anthropic has filed a lawsuit against the U.S. government, claiming retaliation for its refusal to remove safety limits on its AI model, Claude, while stating it remains willing to work with the military under specific terms [1][2].

Group 1: Dispute Background
- Anthropic developed Claude into the most widely deployed frontier AI model in government, including a specialized version for military use, and loosened some restrictions for national security purposes [3].
- The conflict originated in fall 2025 during negotiations over the Pentagon's GenAI.mil platform, where the Department of Defense demanded that Anthropic abandon its usage policy and allow Claude for "all lawful uses"; Anthropic largely agreed, carving out only lethal autonomous warfare and mass surveillance of Americans [4].

Group 2: Company Position
- Anthropic asserts that Claude has not been tested for lethal autonomous warfare or mass surveillance and cannot perform these tasks safely, and it offered to help transition the work to another provider if no agreement could be reached [5].
- The company rejected an ultimatum from Secretary of Defense Pete Hegseth requiring compliance within four days or facing penalties under the Defense Production Act or expulsion from the defense supply chain [7].

Group 3: Government Response
- After Anthropic rejected the ultimatum, President Donald Trump ordered all federal agencies to stop using Anthropic's technology, labeling the company a "RADICAL LEFT, WOKE COMPANY" [8].