Its Own Product Used in the Abduction of Maduro; Anthropic: Any Use Must Comply with the Rules
Sina Finance · 2026-02-14 10:20

Core Viewpoint

The use of the AI tool Claude by the U.S. military in operations against Venezuelan President Maduro has drawn objections from its developer, Anthropic, and may lead to a reevaluation of Anthropic's $200 million contract with the Pentagon [1][4][5].

Group 1: AI Tool Usage

- The U.S. military used Anthropic's AI tool Claude for intelligence analysis and operational execution during the operation to capture Maduro [1][3].
- Claude was deployed on a classified platform through a partnership between Anthropic and Palantir Technologies, giving military users access to the AI model [3].
- The Pentagon values the real-time data-processing capabilities of AI models, especially in chaotic military environments, and seeks the right to use AI models within the bounds of legal compliance [3].

Group 2: Company Concerns and Contract Implications

- Anthropic has objected to the use of Claude in violent actions, emphasizing its commitment to safety and to compliance with its usage policies [1][4].
- Following reports of Claude's involvement in military actions, the Pentagon is reconsidering its partnership with Anthropic, indicating that any company seen as jeopardizing operational success may face contract reevaluation [4].
- Anthropic's CEO has publicly voiced concerns about the implications of AI in lethal operations and domestic surveillance, issues that are central to the ongoing contract negotiations with the Pentagon [5].