CHAI攻击
Search documents
路标文字可“劫持”自动驾驶汽车与无人机 具身智能面临“视觉攻击”风险
Ke Ji Ri Bao· 2026-01-28 01:56
Core Insights - The research highlights a new threat to autonomous systems, specifically "visual attacks" that can hijack decision-making processes of self-driving cars and drones through malicious text embedded in the environment [1][2][3] - The study emphasizes the urgent need for the industry to establish new safety standards and protective mechanisms to counteract these vulnerabilities [1][2] Group 1: Research Findings - Researchers from the University of California, Santa Cruz, have demonstrated that attackers can manipulate autonomous systems by embedding specific text in physical objects like road signs and posters, leading to dangerous behaviors [2][3] - A framework named "CHAI" was developed to test this concept, which optimizes attack text using generative AI and adjusts visual attributes such as color, size, and position to enhance the effectiveness of the attack [2] - In tests, the CHAI attack successfully interfered with navigation judgments of self-driving vehicles, achieving a manipulation success rate of up to 95.5% in simulated drone scenarios [2] Group 2: Implications for the Industry - The findings indicate that such attacks are feasible in the physical world, posing a real threat to the safety of intelligent systems as AI becomes more integrated into physical environments [2] - The research serves as a warning for the industry to consider the broader implications of AI safety and to conduct more proactive studies to strengthen the security foundation of these technologies [3]
具身智能面临“视觉攻击”风险
Ke Ji Ri Bao· 2026-01-28 01:19
Core Insights - The research highlights a new threat to autonomous systems, specifically "visual attacks" that can hijack decision-making processes of self-driving cars and drones through malicious text embedded in the environment [1][2] Group 1: Research Findings - Scientists from the University of California, Santa Cruz, have revealed that attackers can manipulate autonomous systems by embedding specific text information in physical environments, leading to dangerous behaviors [1] - The study introduces the concept of "environmental indirect prompts," where malicious text can be placed on road signs or posters to mislead AI systems that rely on visual language models [2] - A framework named "CHAI" was designed to demonstrate command hijacking of embodied AI, optimizing attack text using generative AI and adjusting visual attributes to enhance attack effectiveness [2] Group 2: Experimental Results - The CHAI attack framework was tested in three scenarios: autonomous driving, emergency landings of drones, and target searches, showing a success rate of up to 95.5% in manipulating autonomous systems [2] - In real-world tests, misleading images successfully interfered with the navigation judgments of test vehicles, confirming the feasibility of such attacks in physical environments [2] Group 3: Industry Implications - The findings serve as a warning for the industry, emphasizing the need for new safety standards and protective mechanisms as AI becomes increasingly integrated into physical systems [1][2][3] - The research calls for more comprehensive considerations and proactive studies to ensure the safety of AI technologies as they are deployed in real-world applications [3]