具身智能面临“视觉攻击”风险
Ke Ji Ri Bao·2026-01-28 01:19

Core Insights - The research highlights a new threat to autonomous systems, specifically "visual attacks" that can hijack decision-making processes of self-driving cars and drones through malicious text embedded in the environment [1][2] Group 1: Research Findings - Scientists from the University of California, Santa Cruz, have revealed that attackers can manipulate autonomous systems by embedding specific text information in physical environments, leading to dangerous behaviors [1] - The study introduces the concept of "environmental indirect prompts," where malicious text can be placed on road signs or posters to mislead AI systems that rely on visual language models [2] - A framework named "CHAI" was designed to demonstrate command hijacking of embodied AI, optimizing attack text using generative AI and adjusting visual attributes to enhance attack effectiveness [2] Group 2: Experimental Results - The CHAI attack framework was tested in three scenarios: autonomous driving, emergency landings of drones, and target searches, showing a success rate of up to 95.5% in manipulating autonomous systems [2] - In real-world tests, misleading images successfully interfered with the navigation judgments of test vehicles, confirming the feasibility of such attacks in physical environments [2] Group 3: Industry Implications - The findings serve as a warning for the industry, emphasizing the need for new safety standards and protective mechanisms as AI becomes increasingly integrated into physical systems [1][2][3] - The research calls for more comprehensive considerations and proactive studies to ensure the safety of AI technologies as they are deployed in real-world applications [3]

具身智能面临“视觉攻击”风险 - Reportify