U.S. media report: "AI model used in U.S. military operation against Venezuela"
Xin Lang Cai Jing (Sina Finance) · 2026-02-14 06:19

Core Insights
- The U.S. military conducted a large-scale military operation against Venezuela on January 3, forcibly detaining President Maduro and his wife, reportedly utilizing the AI model "Claude" developed by Anthropic [1][4]
- The deployment of "Claude" was achieved through a collaboration between Anthropic and Palantir, a big data analytics company commonly used by the U.S. Department of Defense [2][5]
- Anthropic's spokesperson stated that any use of "Claude" must comply with the company's usage policy, which prohibits applications that promote violence or weapon development [2][5]

Company Insights
- Anthropic is the first AI model developer whose model has been used in classified operations by the U.S. Department of Defense, raising concerns about the implications of AI in military actions [3][6]
- The recent military action has prompted discussions within the U.S. government about potentially canceling a $200 million contract with Anthropic due to concerns over the use of "Claude" [2][5]
- Anthropic has positioned itself as a safer alternative in the AI industry, emphasizing its commitment to AI safety, a stance now challenged by the military's use of its technology [2][5]

Industry Insights
- The increasing application of AI models within the Pentagon indicates a growing trend toward integrating advanced technologies into military operations, with potential implications for future conflicts [3][6]
- The military's use of AI tools spans a wide range of functions, from document summarization to controlling autonomous drones, highlighting both the versatility and the potential risks of AI in defense [3][6]
- The international response to the U.S. military action has been critical, with multiple countries condemning the operation and calling for adherence to international law and the principles of the United Nations Charter [3][6]