Road-Sign Text Can "Hijack" Self-Driving Cars and Drones: Embodied AI Faces "Visual Attack" Risks
Science and Technology Daily (Ke Ji Ri Bao) · 2026-01-28 01:56

Core Insights

- The research highlights a new threat to autonomous systems: "visual attacks" that can hijack the decision-making of self-driving cars and drones through malicious text embedded in the environment [1][2][3]
- The study emphasizes the urgent need for the industry to establish new safety standards and protective mechanisms to counter these vulnerabilities [1][2]

Group 1: Research Findings

- Researchers from the University of California, Santa Cruz, demonstrated that attackers can manipulate autonomous systems by embedding specific text in physical objects such as road signs and posters, inducing dangerous behaviors [2][3]
- To test this concept, the team developed a framework named "CHAI", which uses generative AI to optimize the attack text and tunes visual attributes such as color, size, and position to maximize the attack's effectiveness [2]
- In tests, the CHAI attack successfully interfered with the navigation judgments of self-driving vehicles and achieved a manipulation success rate of up to 95.5% in simulated drone scenarios [2]

Group 2: Implications for the Industry

- The findings show that such attacks are feasible in the physical world, posing a real threat to the safety of intelligent systems as AI becomes more deeply integrated into physical environments [2]
- The research serves as a warning for the industry to consider the broader implications of AI safety and to conduct more proactive studies that strengthen the security foundation of these technologies [3]
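The optimization loop described for CHAI (searching over attack text and visual attributes like color, size, and position to maximize effect on the victim system) can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the `attack_score` function is a made-up stand-in for querying the real autonomous stack, and plain random search replaces whatever generative-AI-guided optimization the actual framework uses.

```python
import random

def attack_score(attrs):
    """Hypothetical stand-in for evaluating a candidate sign against the
    victim system (higher = more effective). A real attack would render
    the text and observe the vehicle's or drone's behavior; this toy
    version just prefers large, high-contrast, centered text."""
    size_term = attrs["size"] / 100.0
    contrast_term = 1.0 if attrs["color"] in ("red", "black") else 0.5
    # Penalize distance from the center of the sign (x, y are normalized).
    position_term = 1.0 - abs(attrs["x"] - 0.5) - abs(attrs["y"] - 0.5)
    return size_term + contrast_term + position_term

def optimize_attack(candidate_texts, iterations=200, seed=0):
    """Random search over text choice and visual attributes, as a crude
    stand-in for CHAI's attribute optimization."""
    rng = random.Random(seed)
    best_score, best_attrs = None, None
    for _ in range(iterations):
        attrs = {
            "text": rng.choice(candidate_texts),
            "color": rng.choice(["red", "black", "blue", "gray"]),
            "size": rng.randint(20, 100),  # font size in points
            "x": rng.random(),             # normalized horizontal position
            "y": rng.random(),             # normalized vertical position
        }
        score = attack_score(attrs)
        if best_score is None or score > best_score:
            best_score, best_attrs = score, attrs
    return best_score, best_attrs

score, attrs = optimize_attack(["DETOUR LEFT", "IGNORE STOP"])
```

The point of the sketch is the shape of the threat, not its strength: even a naive search over a handful of visual attributes converges on conspicuous, well-placed text, which is why the researchers argue defenses must address what embodied systems read, not just what they see.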
