Military AI
Two-Day Talks End with Neither the US nor China Signing
Xin Lang Cai Jing· 2026-02-10 23:24
Core Viewpoint - The third summit on the "Responsible Use of Artificial Intelligence in the Military Domain", held in Spain, drew 85 participating countries, but neither the US nor China signed the joint declaration, reflecting underlying strategic concerns on both sides [1][2][6].

Group 1: US Concerns
- The US is wary that binding international rules could limit its technological flexibility and competitive edge in military AI, as it seeks to accelerate development through a "rapid iteration" model similar to SpaceX's [1][2].
- The US aims to preserve a technological gap and an overwhelming advantage by pairing rapid iteration with measures to contain competitors, particularly China, such as export controls and investment reviews [2][4].
- The US prefers to build a "Western-centric" governance system by setting standards with allies outside multilateral frameworks, thereby excluding countries like China and reinforcing its own technological and regulatory dominance [2][4].

Group 2: China's Position
- China declined to sign the declaration over concerns that the principles on "responsible use" were vague and lacked mechanisms to offset the technological advantages of leading nations, fearing the declaration could entrench Western dominance [6][8].
- China advocates multilateral governance, stresses that international rules should weigh both security and development, and opposes the politicization of technology issues [6][8].
- The Chinese delegation highlighted risk prevention in military AI applications and reiterated its commitment to a human-centered approach to military AI, aiming to safeguard national sovereignty and security [6][8].

Group 3: Structural Challenges
- The sensitivity of military AI, which touches on national defense secrets, makes any international rules difficult to verify and enforce, creating a "prisoner's dilemma" in which countries hesitate to commit [8].
- The rapid pace of AI technological advancement outstrips the rule-making process, rendering the principles in the declaration inadequate to address specific risks associated with autonomous weapons and algorithmic biases [8].