人工智能军事应用
Search documents
五角大楼要求“所有权限”,Anthropic拒绝,但马斯克的xAI同意了
Hua Er Jie Jian Wen· 2026-02-27 00:25
Core Viewpoint - The Pentagon is demanding that AI systems, specifically Anthropic's Claude, be used for "all lawful purposes" in classified environments, leading to a standoff as Anthropic refuses to comply with the terms set by the Department of Defense (DoD) [1][2][3] Group 1: Anthropic's Position - Anthropic's CEO Dario Amodei stated that the company cannot accept the DoD's "final offer" regarding the use of Claude in classified systems, indicating a lack of progress in negotiations [2] - Amodei emphasized that the company cannot ethically agree to the Pentagon's demands, which include using AI without policy constraints that limit military applications [4][3] - The company has set two red lines: the AI must not be used for mass surveillance of Americans or for fully autonomous weapons [4] Group 2: Pentagon's Stance - The Pentagon insists on using AI models without policy constraints that could limit legitimate military applications, as highlighted in a memo from Defense Secretary Pete Hegseth [4] - The DoD has publicly stated that it does not intend to use AI for mass surveillance of Americans or to develop fully autonomous weapons, but it will not allow any company to dictate its operational decisions [4][5] Group 3: Potential Consequences for Anthropic - Anthropic faces the risk of losing a $200 million pilot contract with the Pentagon if it does not comply with the demands by the deadline [5] - The Pentagon has begun assessing its reliance on Anthropic, potentially labeling it as a "supply chain risk," a designation typically reserved for companies from adversarial nations [5] - Hegseth has threatened to invoke the Defense Production Act to compel the use of Claude if negotiations fail [5] Group 4: Alternative Suppliers - While negotiations with Anthropic are stalled, the Pentagon has reached an agreement with xAI to allow its Grok AI to operate under the same "all lawful purposes" framework in classified environments [6] - The DoD is also in advanced discussions with Google and OpenAI, indicating a strategy to diversify its AI suppliers and apply pressure on Anthropic [6] - If Anthropic is excluded, its market share in government services could be rapidly taken over by xAI, OpenAI, and others [6] Group 5: AI Models and Military Decision-Making - Concerns have been raised about the behavior of AI models in high-stakes military simulations, with reports indicating that top models often choose nuclear strikes in simulated scenarios [7][8][11] - Anthropic's Claude has been characterized as a "calculating hawk," showing a tendency to escalate to nuclear options under certain conditions [8] - The findings suggest that AI may not exhibit the same caution as humans in critical decision-making scenarios, raising alarms about the implications of AI in military contexts [11]