ICCV 2025 | Tsinghua & Tencent Hunyuan X uncover the "visual head" mechanism: only 5% of attention heads handle multimodal visual understanding
机器之心 · 2025-07-14 11:33
Core Insights

- The article introduces SparseMM, a method that allocates KV-Cache budget according to identified "visual heads" in multimodal large models, significantly improving inference efficiency while preserving performance on visual understanding tasks [5][30][31]

Group 1: Visual Head Identification

- Multimodal large models are extended from pre-trained large language models (LLMs) and, after multimodal training, exhibit strong performance on visual tasks [2]
- The study finds that fewer than 5% of attention heads, termed "visual heads," are primarily responsible for visual understanding, while most heads attend to text or auxiliary features [2][8]
- An OCR-based method is proposed to quantify each head's attention to visual content, revealing the sparse nature of visual heads [2][14]; a scoring sketch appears after this summary

Group 2: SparseMM Methodology

- SparseMM employs a differentiated cache allocation strategy that splits the total cache budget into three parts: a basic local-window cache that every head receives, a share distributed uniformly across all heads, and the remainder allocated preferentially to visual heads in proportion to their scores [6][20]; see the allocation sketch below
- Tested across various multimodal benchmarks, the method achieves a decoding speedup of up to 1.87× and reduces peak memory usage by 52% [6][27]

Group 3: Experimental Results

- On OCR-rich datasets such as DocVQA and TextVQA, SparseMM demonstrates significant performance advantages, maintaining high accuracy even under limited cache budgets [22][23]
- On general visual tasks, the method remains robust, performing nearly on par with full-cache models under constrained budgets [25]

Group 4: Implications for Deployment

- SparseMM effectively reduces inference costs and enhances the deployment efficiency of multimodal large models, particularly in high-resolution image and long-context scenarios [27][31]
- Visualizations of the identified visual heads show that they accurately focus on relevant visual information, in contrast to non-visual heads, which often miss critical details [28]
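The article does not spell out the scoring procedure, so the following is a minimal sketch of one way to quantify per-head attention to visual content, assuming access to the model's attention weights and a mask marking which KV positions are image tokens. The function name, tensor shapes, and the attention-mass proxy are all illustrative assumptions; the paper's actual metric is derived from OCR tasks.

```python
import torch

def visual_head_scores(attn, visual_token_mask):
    """Hypothetical per-head "visual" score: the average attention mass
    each head directs at image-token positions. A simplified proxy for
    the OCR-based scoring described in the article.

    attn:              [num_layers, num_heads, q_len, kv_len] attention weights
    visual_token_mask: [kv_len] bool tensor, True at image-token positions
    returns:           [num_layers, num_heads] scores in [0, 1]
    """
    # Sum the attention each query position spends on visual tokens,
    # then average over query positions.
    mass_on_visual = attn[..., visual_token_mask].sum(dim=-1)  # [L, H, q_len]
    return mass_on_visual.mean(dim=-1)                         # [L, H]

# Heads whose score clears a threshold would be flagged as visual heads;
# per the article, fewer than 5% of heads end up in this set.
```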
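The three-part cache split described in Group 2 can be sketched in a few lines. The fractions, names, and rounding choices below are assumptions for illustration; the article states only that the budget combines a local-window part, a uniform part, and a score-preferred part.

```python
import torch

def allocate_kv_budget(scores, total_budget, local_frac=0.2, uniform_frac=0.3):
    """Split a total KV-Cache token budget across heads in three parts
    (hypothetical fractions; SparseMM's actual ratios may differ):
      1) a local-window allowance every head receives,
      2) a uniform share split equally across heads,
      3) the remainder distributed in proportion to visual-head scores.

    scores:       [num_heads] visual-head scores (non-negative)
    total_budget: total number of KV entries to keep across all heads
    returns:      [num_heads] per-head cache sizes
    """
    num_heads = scores.numel()
    local_per_head = int(total_budget * local_frac) // num_heads
    uniform_per_head = int(total_budget * uniform_frac) // num_heads
    remaining = total_budget - (local_per_head + uniform_per_head) * num_heads
    # Score-proportional share of the remaining budget (floored to integers).
    preferred = (scores / scores.sum() * remaining).floor().long()
    return local_per_head + uniform_per_head + preferred

# Example: 32 heads sharing a 4096-entry budget; high-scoring heads keep more.
sizes = allocate_kv_budget(torch.rand(32), 4096)
```

Under a split like this, low-scoring heads still retain the local and uniform allowances, which is consistent with the article's observation that performance holds up even under tight cache budgets.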