Core Viewpoint
- The U.S. Department of Defense is nearing a decision to sever ties with Anthropic, potentially designating the AI company as a supply chain risk due to dissatisfaction with the limits on how its technology may be used [2][5].

Group 1: Relationship Dynamics
- Discussions between the U.S. military and Anthropic over use of the Claude tool have been intense and prolonged, nearly leading to a breakdown in relations [2][5].
- Anthropic aims to ensure that its AI technology is not used for mass surveillance of citizens or for developing autonomous weapons that can be deployed without human involvement [2][5].
- The U.S. government wants to use Claude for "all legitimate purposes," indicating a fundamental disagreement over the scope of use [2][5].

Group 2: Implications of a Supply Chain Risk Designation
- If Anthropic is classified as a supply chain risk, any company wishing to do business with the Department of Defense would be required to distance itself from Anthropic [2][5].
- A senior Pentagon official emphasized the importance of partnerships that support military operations and the safety of U.S. citizens [2][5].

Group 3: Previous Agreements and Future Negotiations
- Last year, Anthropic secured a two-year contract with the Department of Defense covering the Claude Gov model prototype and the Claude for Enterprise version [6].
- The negotiations between Anthropic and the military may set a precedent for future talks with other AI companies such as OpenAI, Google, and xAI, which have not yet engaged in classified work [6].
- Anthropic, founded by former OpenAI researchers, positions itself as a responsible AI company aiming to prevent catastrophic risks associated with advanced technology [6].
Axios: U.S. Department of Defense close to severing ties with Anthropic amid disagreement over military uses of AI
Sina Finance (Xin Lang Cai Jing) · 2026-02-16 15:19