Core Viewpoint
- The U.S. military's use of Anthropic's AI tool Claude in operations against Venezuelan President Maduro has drawn objections from the company, putting its $200 million contract with the Pentagon up for potential reevaluation [1][4][5].

Group 1: AI Tool Usage
- The U.S. military used Anthropic's AI tool Claude for intelligence analysis and operational execution during the capture of Venezuelan President Maduro [1][3].
- Claude was deployed on a classified platform through Anthropic's partnership with Palantir Technologies, giving military users access to the model [3].
- The Pentagon values the real-time data-processing capabilities of AI models, especially in chaotic combat environments, and seeks the right to use such models within the bounds of legal compliance [3].

Group 2: Company Concerns and Contract Implications
- Anthropic has expressed dissatisfaction with Claude's use in violent actions, stressing its commitment to safety and to compliance with its usage policies [1][4].
- Following reports of Claude's involvement in the military operation, the Pentagon is reconsidering its partnership with Anthropic, citing potential risks to operational success [4].
- Anthropic's CEO has publicly voiced concerns about the application of AI in lethal actions and domestic surveillance, concerns that are now central to the ongoing contract negotiations with the Pentagon [5].
Its own product was used in the abduction of Maduro, and this American AI company is not happy about it
Guancha (观察者网) · 2026-02-14 09:49