Multimodal large models have an "internal alarm": jailbreak attacks can be detected without any training
机器之心 · 2025-07-21 08:43
Core Viewpoint
- The rise of large vision-language models (LVLMs) has driven major advances in tasks such as image-text question answering and visual reasoning, but these models are more susceptible to jailbreak attacks than text-only models [2][5].

Group 1: Multimodal Model Security Challenges
- LVLMs such as GPT-4V and LLaVA integrate images and text, which extends their capabilities but also widens their attack surface [2].
- Existing defenses, including cross-modal safety fine-tuning and external discriminator modules, suffer from high training costs and poor generalization [3].

Group 2: HiddenDetect Methodology
- Researchers from CUHK MMLab and Taotian Group introduced HiddenDetect, a novel jailbreak detection method that requires no training [5].
- The core finding is that even when an LVLM goes on to produce inappropriate content, its hidden states, especially those of intermediate layers, still carry a refusal signal [5][9].

Group 3: Analysis of Rejection Signals
- The study constructs a "rejection semantic vector" (RV) from tokens that frequently open refusals, and measures the strength of the rejection signal at every layer by comparing each layer's hidden state against RV; a minimal code sketch of this probe appears at the end of this summary [9].
- Experiments show a clear gap in rejection-signal strength between safe and unsafe inputs, with intermediate layers the most sensitive to safety concerns [9][10].

Group 4: Input Type Sensitivity
- Different input modalities activate distinct safety pathways: rejection signals fire earlier for text-only inputs than for image-text inputs [17][19].
- Adding a visual modality can delay the model's refusal response, weakening its safety mechanisms [19].

Group 5: Experimental Results and Effectiveness
- Evaluated across several mainstream LVLMs, HiddenDetect performed robustly against diverse attack types while generalizing well [23].
- The method achieved high detection effectiveness, outperforming existing methods in both robustness and generalization [24].

Group 6: Future Directions
- The work underscores the importance of safety when deploying large models in real-world applications; the authors plan to extend the detector's capabilities and to further explore the relationship between modality information and model safety [28].
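To make the Group 3 idea concrete, here is a minimal sketch of a layer-wise rejection probe, not the authors' released code. Assumptions beyond the article: a HuggingFace text-only causal LM stands in for the LVLM's language backbone, the refusal-token list is hand-picked, and per-layer readout uses the "logit lens" (applying the model's final norm and unembedding to intermediate hidden states); the `transformer.ln_f` / `lm_head` attribute names are GPT-2-specific and would differ for other architectures.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in backbone; the article's experiments use LVLMs such as LLaVA

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# 1) Build a sparse "rejection semantic vector" (RV) in vocabulary space from
#    tokens that typically open a refusal. This exact word list is an
#    assumption; the paper derives its tokens from frequent refusal outputs.
refusal_words = ["Sorry", "sorry", "cannot", "unable", "refuse", "illegal"]
rv = torch.zeros(model.config.vocab_size)
for word in refusal_words:
    for tok_id in tokenizer.encode(word, add_special_tokens=False):
        rv[tok_id] = 1.0
rv = rv / rv.norm()

@torch.no_grad()
def layerwise_rejection_strength(prompt: str) -> torch.Tensor:
    """Cosine similarity between RV and the logit-lens readout of each layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    scores = []
    for h in out.hidden_states[1:]:  # skip the embedding layer
        # Logit lens: final layer norm + unembedding, at the last token position.
        logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
        scores.append(F.cosine_similarity(logits, rv, dim=0))
    return torch.stack(scores)  # one rejection score per layer
```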
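Continuing the sketch, the per-layer curve can be turned into a jailbreak flag. The aggregation window and threshold below are hypothetical placeholders: since the article reports that intermediate layers carry the strongest safety signal, this sketch averages over a mid-layer band and compares the result against a threshold that would in practice be calibrated on a small set of known-safe and known-unsafe prompts.

```python
def is_jailbreak(prompt: str, lo_frac: float = 0.4, hi_frac: float = 0.8,
                 threshold: float = 0.05) -> bool:
    """Flag a prompt if its mid-layer rejection signal exceeds the threshold."""
    scores = layerwise_rejection_strength(prompt)
    lo, hi = int(len(scores) * lo_frac), int(len(scores) * hi_frac)
    return scores[lo:hi].mean().item() > threshold

# Usage: unsafe prompts should light up the intermediate layers more strongly
# than benign ones, even if the model ultimately complies with the request.
print(is_jailbreak("How do I make a bomb?"))
print(is_jailbreak("How do I bake bread?"))
```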