A 10,000-Word Deep Dive into Recent Progress in Multimodal Large Models (Modality Bridging)
自动驾驶之心·2025-11-15 03:03

Core Insights
- The article surveys the emergence of Multimodal Large Language Models (MLLMs) as a major research focus, highlighting capabilities such as generating stories from images and performing mathematical reasoning without OCR, and framing MLLMs as a potential pathway toward general artificial intelligence [2][4].

Group 1: MLLM Architecture and Training
- MLLMs typically undergo large-scale pre-training on paired data to align different modalities, using datasets such as image-text pairs or automatic speech recognition (ASR) corpora [2].
- The Perceiver Resampler module maps variable-sized spatiotemporal visual features from a vision encoder to a fixed number of visual tokens, reducing the computational cost of visual-text cross-attention [6][8] (see the resampler sketch after Group 3).
- Training follows a two-phase strategy: the first phase performs visual-language representation learning against a frozen image encoder, and the second phase guides visual-to-language generation learning with a frozen LLM [22][24] (see the staging sketch below).

Group 2: Instruction Tuning and Data Efficiency
- Instruction tuning is crucial for improving the model's ability to follow user instructions; learned queries are introduced that interact with both visual and textual features [19][26] (see the instruction-conditioned query sketch below).
- Diverse, high-quality instruction data is emphasized as key to improving performance across tasks such as visual question answering (VQA) and OCR [44][46].
- Data-efficiency experiments indicate that shrinking the training set can still maintain high performance, suggesting room for further gains in data utilization [47].

Group 3: Model Improvements and Limitations
- LLaVA-NeXT improves reasoning, OCR, and world knowledge, surpassing previous models on several benchmarks [40].
- Despite these advances, limitations remain, including weak handling of multiple images and the risk of hallucinations in critical applications [39][46].
- The article calls for efficient sampling methods and a balance between data annotation quality and model processing capacity to mitigate hallucinations [48].
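To make the Perceiver Resampler idea concrete, below is a minimal PyTorch sketch rather than the Flamingo implementation: the single-layer structure, dimensions, and module names are illustrative assumptions. A fixed set of learned latent queries cross-attends to a variable-length sequence of visual features, so every input is compressed to the same number of visual tokens regardless of how many frames or patches it contains.

```python
import torch
import torch.nn as nn

class PerceiverResamplerSketch(nn.Module):
    """Minimal sketch: compress a variable-length visual feature sequence into
    a fixed number of visual tokens via learned latent queries.
    Hyperparameters (num_latents=64, dim=1024, num_heads=8) are illustrative."""

    def __init__(self, dim=1024, num_latents=64, num_heads=8):
        super().__init__()
        # Fixed set of learned queries; their count determines the output length.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, visual_feats):
        # visual_feats: (batch, seq_len, dim), where seq_len varies per image or clip.
        b = visual_feats.size(0)
        q = self.latents.unsqueeze(0).expand(b, -1, -1)        # (batch, num_latents, dim)
        attended, _ = self.cross_attn(q, visual_feats, visual_feats)
        x = self.norm(q + attended)                            # residual + norm
        return x + self.ffn(x)                                 # fixed-length output

# Usage: inputs with different numbers of patches yield the same token count.
resampler = PerceiverResamplerSketch()
one_image = torch.randn(1, 196, 1024)       # one image worth of patch features
five_frames = torch.randn(1, 5 * 196, 1024) # five frames worth of patch features
print(resampler(one_image).shape, resampler(five_frames).shape)  # both (1, 64, 1024)
```

Because the output length is fixed by the number of latents, the later visual-text cross-attention layers always see a constant-size key/value set, which is where the computational savings come from.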
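The two-phase training recipe is, in practice, a question of which parameters receive gradients in each stage. The sketch below uses hypothetical encoder, bridge, projection, and llm modules as placeholders, not the article's code: the image encoder stays frozen throughout, the bridging module is always trainable, and the LLM is introduced frozen in the second stage with a small trainable projection in front of it.

```python
import torch
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Disable gradients for every parameter of a module."""
    for p in module.parameters():
        p.requires_grad = False

def trainable_params(encoder: nn.Module, bridge: nn.Module,
                     llm: nn.Module, projection: nn.Module, stage: int):
    """Return the parameter groups to optimize in each training stage.

    Stage 1 (representation learning): only the bridging module is updated;
    the image encoder is frozen and the LLM is not involved yet.
    Stage 2 (generation learning): the LLM is attached but kept frozen; the
    bridge plus a projection into the LLM's embedding space are trained.
    """
    freeze(encoder)   # frozen in both stages
    freeze(llm)       # frozen when it is used in stage 2
    if stage == 1:
        return list(bridge.parameters())
    return list(bridge.parameters()) + list(projection.parameters())

# Usage sketch with stand-in modules in place of the real components.
encoder = nn.Linear(3, 1024)        # stand-in for a frozen vision encoder
bridge = nn.Linear(1024, 1024)      # stand-in for a Q-Former-style connector
projection = nn.Linear(1024, 4096)  # maps bridge outputs into the LLM space
llm = nn.Linear(4096, 4096)         # stand-in for the frozen LLM
optimizer = torch.optim.AdamW(trainable_params(encoder, bridge, llm, projection, stage=2), lr=1e-4)
```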
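For the learned queries that interact with both visual and textual features, one plausible reading is an instruction-aware query module in the spirit of InstructBLIP's Q-Former. The sketch below is an illustrative assumption of that mechanism, not the article's exact design: query tokens first self-attend together with the embedded instruction, then cross-attend to the visual features, so the extracted visual tokens depend on what the user asked.

```python
import torch
import torch.nn as nn

class InstructionAwareQueriesSketch(nn.Module):
    """Sketch of instruction-conditioned learned queries; names and sizes are
    illustrative. Queries self-attend jointly with instruction embeddings,
    then read out from the visual features via cross-attention."""

    def __init__(self, dim=768, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.num_queries = num_queries

    def forward(self, instruction_embeds, visual_feats):
        # instruction_embeds: (batch, text_len, dim); visual_feats: (batch, vis_len, dim)
        b = instruction_embeds.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Queries and instruction tokens attend to each other.
        joint = torch.cat([q, instruction_embeds], dim=1)
        joint, _ = self.self_attn(joint, joint, joint)
        q = joint[:, : self.num_queries]           # keep only the query slots
        # Instruction-conditioned queries extract task-relevant visual tokens.
        out, _ = self.cross_attn(q, visual_feats, visual_feats)
        return out                                  # (batch, num_queries, dim)
```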