Refusing to Become a "War Machine": Over 100 Google Employees Sign Joint Letter Demanding Red Lines in U.S. Military Contracts
Hua Er Jie Jian Wen·2026-02-27 07:48

Core Viewpoint
- Tense negotiations between the Pentagon and AI startup Anthropic over the boundaries of military AI use have triggered a strong reaction in Silicon Valley, with employees at major tech companies voicing concerns about the ethical implications and potential misuse of AI technology [1][2].

Group 1: Employee Reactions
- Over 100 Google AI employees submitted a letter to management demanding clear boundaries for collaboration with the military, specifically opposing the use of their technology for mass surveillance or autonomous weapons systems [1].
- Nearly 50 OpenAI employees and 175 Google employees also published an open letter criticizing the Pentagon's strategy of dividing tech companies and urging them to unite against unethical practices [1].

Group 2: Pentagon Pressure and Responses
- The Pentagon has pressed Anthropic hard to allow the military to use its Claude model for "all legitimate purposes," a demand that Anthropic CEO Dario Amodei has firmly rejected on moral grounds [2].
- Google employees said they want to block any deal that would cross ethical boundaries, signaling strong internal resistance to military collaborations [2].

Group 3: Google Executives' Stance
- Jeff Dean, a prominent Google engineer, publicly supported Anthropic's position, emphasizing that mass surveillance violates constitutional rights and can be abused for political or discriminatory ends [3].
- Google has a complicated history with employee activism, having previously faced protests that led to the cancellation of military contracts, underscoring ongoing internal ethical scrutiny [3].

Group 4: Military Strategy and AI Risks
- In response to Anthropic's stance, the Pentagon is seeking alternatives and has already reached an agreement with xAI to use its Grok model for military applications [4].
- The Pentagon's negotiations with Google are ongoing, and it has threatened to invoke the Defense Production Act to compel Anthropic's compliance, indicating a high-stakes environment for AI in military applications [4].
- Concerns about the risks of AI in military contexts are underscored by simulations showing that top AI models could opt to use nuclear weapons under pressure, raising alarms about AI's role in decision-making [5][6].