Explainer: Anthropic's case against the government: what the AI company says happened

Core Viewpoint
- Anthropic has filed a lawsuit against the U.S. government, claiming retaliation for its refusal to remove safety limits on its AI model, Claude, and challenging the government's designation of the company as a national security risk [1]

Group 1: Dispute Background
- Anthropic has developed Claude into a widely deployed frontier AI model for the government, including a specialized version for military use, but has resisted demands to allow its use in lethal autonomous warfare and mass surveillance [1]
- The conflict began during negotiations over the Pentagon's GenAI.mil platform, where the Department of Defense demanded that Anthropic abandon its usage policy entirely [1]

Group 2: Government's Actions
- Secretary of Defense Pete Hegseth issued an ultimatum to Anthropic, demanding compliance within four days or consequences, including expulsion from the defense supply chain [1]
- Following Anthropic's refusal, President Trump ordered all federal agencies to cease using Anthropic's technology, labeling the company a "RADICAL LEFT, WOKE COMPANY" [1]

Group 3: Legal Claims
- Anthropic argues that the supply chain designation lacks factual basis, citing its FedRAMP authorization and years of government praise for Claude's capabilities [1]
- The company has raised five legal claims against the government, alleging violations of the Administrative Procedure Act, the First Amendment, and the Fifth Amendment, as well as unauthorized agency sanctions [1]