Core Viewpoint
- Anthropic's CEO Dario Amodei has publicly rejected the U.S. military's demand for unrestricted use of its AI model Claude, emphasizing the company's commitment to ethical standards and its position that it, not the Department of Defense, must decide which uses of its technology are acceptable [1][2].

Group 1: Military Pressure and Company Response
- The Pentagon has exerted extreme pressure on Anthropic, threatening to label the company a supply-chain risk if it does not comply with military demands by a specified deadline [2][3].
- Amodei stated that certain applications, such as large-scale domestic surveillance and fully autonomous weapons, lie beyond the current safety scope of the company's technology and should be excluded from contracts with the Department of Defense [2][3].

Group 2: Ethical Considerations and Future Engagement
- The Department of Defense has indicated that it will contract only with AI companies that agree to permit AI use for "any lawful purpose" without safety restrictions, threatening to exclude Anthropic from its systems if the company maintains its safeguards [3].
- Despite the threats, Amodei expressed hope that the Department of Defense would reconsider its stance, highlighting the significant value Anthropic's technology could bring to the military while the company keeps its ethical safeguards in place [3].
U.S. Military Demands Unrestricted Use of Its Large AI Model; AI Company Refuses
Xin Lang Cai Jing·2026-02-27 03:23