From LLaVA to Qwen3-VL: The Evolution of Mainstream Multimodal Large Model Architectures
自动驾驶之心· 2025-12-03 00:04
Core Insights
- The article traces the evolution of AI from text-only large language models to multimodal large models (MLLMs) that perceive and interact with the physical world through vision and language [3][4].
- It highlights two successful technical evolution paths for MLLMs: the LLaVA series, which emphasizes simplicity, and Qwen3-VL, which focuses on deep integration [3][4].

Group 1: MLLM Architecture
- An MLLM follows a "trinity" architecture consisting of a visual encoder (a Vision Transformer), a language model (LLM), and a connector that links the two [6][10].
- The visual encoder transforms images into mathematical representations, while the LLM processes these representations to generate coherent text responses [10][22].
- The connector acts as a bridge, translating visual features into a format the LLM can understand and ensuring seamless integration of visual and textual information [36][37].

Group 2: Vision Transformer (ViT)
- ViT reframes image processing by treating an image as a sequence of patches, allowing the transformer architecture to be applied directly to visual understanding [11][13].
- The pipeline segments the image into patches, flattens each patch into a vector, and adds positional information to preserve spatial context (see the patch-embedding sketch below) [13][16].
- ViT's multi-head attention mechanism captures relationships between distant elements in an image, enhancing its ability to understand complex visual scenes [21][22].

Group 3: Language Model (LLM)
- The LLM serves as the cognitive core of the MLLM, integrating visual and textual information to generate contextually relevant responses [22][23].
- The input to the LLM is a single combined sequence of visual and language tokens, allowing a comprehensive understanding of the context [24][25].
- The LLM uses autoregressive generation to predict the next token from the entire preceding context, producing coherent and contextually appropriate outputs (see the generation-loop sketch below) [26][30].

Group 4: Connector Design
- The connector's design is crucial for bridging the gap between the visual and textual modalities, with two main approaches: the minimalist projector of LLaVA and the more complex Q-Former used in BLIP-2 [38][40].
- LLaVA's connector is a simple linear transformation that relies on the LLM's strength to learn the mapping between modalities (see the projector sketch below) [40][41].
- Q-Former, by contrast, actively extracts and distills key information from the visual features before passing it to the LLM, improving efficiency and reducing computational load (see the resampler sketch below) [42][53].

Group 5: Challenges and Solutions
- The article addresses the challenge of processing high-resolution images without overwhelming the model's computational capacity, which has led to divergent design philosophies [64].
- LLaVA's AnyRes solution lets the model handle images of arbitrary resolution by relying on preprocessing (tiling) rather than restructuring the model (see the tiling sketch below) [65].
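To make the patch-sequence idea in Group 2 concrete, here is a minimal sketch of how an image is cut into patches, flattened into vectors, and given positional information before entering the transformer encoder. This is not the actual LLaVA or Qwen3-VL vision tower; the class name, image size, patch size, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Minimal ViT-style patch embedding: image -> sequence of patch tokens."""
    def __init__(self, img_size=224, patch_size=14, in_chans=3, dim=1024):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution slices the image into non-overlapping patches
        # and projects each patch to a `dim`-dimensional vector in one step.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable positional embeddings preserve each patch's spatial location.
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, dim))

    def forward(self, x):                       # x: (B, 3, 224, 224)
        x = self.proj(x)                        # (B, dim, 16, 16)
        x = x.flatten(2).transpose(1, 2)        # (B, 256, dim) patch tokens
        return x + self.pos_embed               # add positional information

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)                             # torch.Size([1, 256, 1024])
```

The resulting 256 patch tokens are what the transformer's multi-head attention layers then relate to one another, which is how distant parts of the image can inform each other.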
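Group 3 describes the LLM consuming one interleaved sequence of visual and text tokens and decoding autoregressively. The sketch below uses a toy causal block standing in for the real LLM and shows only the sequence concatenation and the greedy next-token loop; every name and dimension here is hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim = 1000, 64

embed = nn.Embedding(vocab_size, dim)            # text token embeddings
lm_head = nn.Linear(dim, vocab_size)             # hidden states -> vocabulary logits
decoder = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)  # toy causal block

visual_tokens = torch.randn(1, 256, dim)         # output of the connector (see Group 4)
prompt_ids = torch.randint(0, vocab_size, (1, 8))  # tokenized user question

generated = prompt_ids
for _ in range(5):                               # autoregressive next-token loop
    text_emb = embed(generated)
    # The LLM sees a single combined sequence: [visual tokens ; text tokens].
    seq = torch.cat([visual_tokens, text_emb], dim=1)
    mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
    hidden = decoder(seq, src_mask=mask)
    next_id = lm_head(hidden[:, -1]).argmax(-1, keepdim=True)  # greedy pick
    generated = torch.cat([generated, next_id], dim=1)

print(generated.shape)                           # (1, 13): 8 prompt + 5 generated ids
```

A real decoder would cache keys and values instead of re-encoding the whole sequence each step, but the loop structure is the same.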
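For Group 4, a LLaVA-style connector is essentially a learned projection from the ViT feature space into the LLM embedding space: one LLM token per image patch. The sketch below is a plausible reconstruction, not LLaVA's released code; the original LLaVA used a single linear layer and LLaVA-1.5 widened it to a small two-layer MLP, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LlavaStyleProjector(nn.Module):
    """Maps ViT patch features into the LLM's token-embedding space."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # A deliberately simple mapping: the LLM does the heavy lifting of
        # learning how to use the projected visual tokens.
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats):             # (B, 256, 1024) patch features
        return self.mlp(vision_feats)            # (B, 256, 4096): one LLM token per patch

llm_tokens = LlavaStyleProjector()(torch.randn(1, 256, 1024))
print(llm_tokens.shape)                          # torch.Size([1, 256, 4096])
```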
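The Q-Former path instead compresses the visual features through a small set of learnable queries that cross-attend to the patch tokens, so the LLM receives far fewer visual tokens. The sketch below shows only that compression step with a single cross-attention layer, not the full BLIP-2 Q-Former; class name, query count, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class QFormerStyleResampler(nn.Module):
    """Compresses many patch tokens into a fixed, small set of query tokens."""
    def __init__(self, num_queries=32, dim=768, llm_dim=4096):
        super().__init__()
        # Learnable queries "interrogate" the visual features and keep only
        # the information they attend to.
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(dim, llm_dim)      # into the LLM embedding space

    def forward(self, vision_feats):             # (B, 256, 768) patch features
        q = self.queries.expand(vision_feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, vision_feats, vision_feats)
        return self.proj(out)                    # (B, 32, 4096): 256 patches -> 32 tokens

compressed = QFormerStyleResampler()(torch.randn(1, 256, 768))
print(compressed.shape)                          # torch.Size([1, 32, 4096])
```

The trade-off versus the LLaVA projector is clear from the shapes: fewer visual tokens means a cheaper LLM forward pass, at the cost of an extra module that must itself be trained to keep the right information.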
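Group 5's AnyRes idea handles arbitrary resolutions by preprocessing rather than by changing the model: the image is split into a grid of fixed-size tiles plus a downscaled global view, and each view is encoded by the same ViT. The function below is a hedged sketch of that tiling step only; the real implementation's grid selection and padding rules are more involved, and the tile size and grid here are assumptions.

```python
import torch
import torch.nn.functional as F

def anyres_tiles(image, tile=336, grid=(2, 2)):
    """Split a high-resolution image into fixed-size tiles plus a global thumbnail.

    image: (3, H, W) tensor. Returns (grid_h * grid_w + 1, 3, tile, tile).
    """
    gh, gw = grid
    # Resize so the image divides evenly into the chosen grid of tiles.
    resized = F.interpolate(image.unsqueeze(0), size=(gh * tile, gw * tile),
                            mode="bilinear", align_corners=False).squeeze(0)
    tiles = [resized[:, i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
             for i in range(gh) for j in range(gw)]
    # A downscaled global view preserves the overall layout alongside the crops.
    thumbnail = F.interpolate(image.unsqueeze(0), size=(tile, tile),
                              mode="bilinear", align_corners=False).squeeze(0)
    return torch.stack(tiles + [thumbnail])      # each view is encoded by the ViT separately

views = anyres_tiles(torch.randn(3, 1080, 1920))
print(views.shape)                               # torch.Size([5, 3, 336, 336])
```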