It's 2025: how far have unified multimodal large models for generation and understanding come?
自动驾驶之心 · 2025-08-25 23:34
Core Viewpoint
- The article surveys the development of unified multimodal large models through mid-2025, focusing on image understanding and generation and highlighting the major advances and remaining challenges in the field [1][2].

Group 1: Overview of Multimodal Large Models
- "Unified multimodal large models" here refers primarily to models that integrate image understanding and image generation; directions such as Omni-LLM-style any-to-any models are excluded because relatively few academic papers cover them [3].
- Notable early works in this line include Google's Unified-IO, Alibaba's OFA, and Fudan's AnyGPT, all of which strongly influenced subsequent research [3].

Group 2: Key Research Directions
- Research on unifying generation and understanding in multimodal large models centers on two questions: how to design the visual tokenizer, and how to build a model architecture suited to both tasks [14].
- ByteDance's TokenFlow uses separate visual encoders for the two tasks, feeding high-level semantic features to understanding and low-level features to generation [16][17]; a minimal sketch of this dual-branch design appears after this summary.

Group 3: Model Architectures and Techniques
- The Semantic-Priority Codebook (SPC) approach was introduced to improve image-reconstruction quality, underscoring the importance of semantic features in the quantization step [19][23].
- QLIP, from UT Austin and NVIDIA, optimizes the visual tokenizer by aligning generation-oriented visual features with semantic information, using a single unified visual encoder for both tasks [28][30].

Group 4: Training Strategies
- QLIP is trained in two phases: the first learns semantically rich feature representations, while the second focuses on improving image-reconstruction quality [30][32]; an alignment-loss sketch in this spirit follows the summary.
- UniTok employs multi-codebook quantization to raise codebook utilization, integrating visual features for both understanding and generation [35][36]; see the multi-codebook sketch below.

Group 5: Recent Innovations
- DualToken extracts features for both tasks from a single visual encoder, routing semantic and pixel features through separate visual codebooks [39][41].
- Tencent's TokLIP likewise adopts a single-encoder design, aligning visual features with text features through a combination of loss functions [42][44].
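To make the dual-branch tokenizer idea concrete, here is a minimal PyTorch sketch in the spirit of the single-encoder DualToken design described above (TokenFlow differs mainly in using two separate encoders): shallow features feed a pixel-level codebook for generation, while deeper features feed a semantic codebook for understanding. The layer choices, dimensions, and all names are illustrative assumptions, not the papers' implementations.

```python
import torch
import torch.nn as nn


def vq_lookup(feat, codebook):
    """Nearest-neighbour vector quantization with a straight-through pass."""
    flat = feat.reshape(-1, feat.size(-1))
    idx = torch.cdist(flat, codebook.weight).argmin(dim=-1)
    q = codebook(idx).view_as(feat)
    # Straight-through estimator: quantized values forward, identity backward.
    return feat + (q - feat).detach(), idx.view(feat.shape[:-1])


class DualBranchTokenizer(nn.Module):
    """Single encoder whose shallow features go to a 'pixel' codebook
    (generation) and whose deep features go to a 'semantic' codebook
    (understanding). Hypothetical layer depths for illustration."""

    def __init__(self, dim=128, codebook_size=512):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, dim, 4, stride=4), nn.GELU())
        self.shallow = nn.Conv2d(dim, dim, 3, padding=1)      # low-level block
        self.deep = nn.Sequential(                            # high-level blocks
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1),
        )
        self.pixel_codebook = nn.Embedding(codebook_size, dim)
        self.semantic_codebook = nn.Embedding(codebook_size, dim)

    def forward(self, image):
        h = self.shallow(self.stem(image))                    # fine-grained map
        g = self.deep(h)                                      # coarser, semantic
        # Channels-last so each spatial position is one token.
        pix_q, pix_idx = vq_lookup(h.permute(0, 2, 3, 1), self.pixel_codebook)
        sem_q, sem_idx = vq_lookup(g.permute(0, 2, 3, 1), self.semantic_codebook)
        return pix_q, sem_q, pix_idx, sem_idx


if __name__ == "__main__":
    tok = DualBranchTokenizer()
    pix_q, sem_q, pix_idx, sem_idx = tok(torch.randn(2, 3, 64, 64))
    print(pix_idx.shape, sem_idx.shape)  # (2, 16, 16) and (2, 4, 4)
```

Note the asymmetry this buys: the pixel branch keeps a dense 16x16 token grid for faithful reconstruction, while the semantic branch compresses to a 4x4 grid of higher-level tokens better matched to an LLM's understanding pathway.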
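The multi-codebook quantization attributed to UniTok can also be sketched briefly. The idea is to split each latent vector into chunks and quantize every chunk against its own small codebook, which is easier to keep fully utilized than one large codebook. This is a generic sketch of the technique under assumed sizes (`dim=256`, `num_codebooks=8`), not UniTok's actual code.

```python
import torch
import torch.nn as nn


class MultiCodebookQuantizer(nn.Module):
    """Chunk-wise vector quantization: each slice of the latent vector
    has its own embedding table, raising overall codebook utilization."""

    def __init__(self, dim=256, num_codebooks=8, codebook_size=1024):
        super().__init__()
        assert dim % num_codebooks == 0
        self.num_codebooks = num_codebooks
        self.chunk_dim = dim // num_codebooks
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, self.chunk_dim)
            for _ in range(num_codebooks)
        )

    def forward(self, z):
        # z: (batch, tokens, dim) continuous encoder features.
        B, T, _ = z.shape
        quantized, indices = [], []
        for chunk, codebook in zip(z.chunk(self.num_codebooks, -1),
                                   self.codebooks):
            flat = chunk.reshape(-1, self.chunk_dim)
            # Nearest codebook entry by L2 distance.
            idx = torch.cdist(flat, codebook.weight).argmin(-1).view(B, T)
            q = codebook(idx)
            # Straight-through estimator so gradients reach the encoder.
            quantized.append(chunk + (q - chunk).detach())
            indices.append(idx)
        return torch.cat(quantized, -1), torch.stack(indices, -1)


if __name__ == "__main__":
    quantizer = MultiCodebookQuantizer()
    q, idx = quantizer(torch.randn(2, 16, 256))
    print(q.shape, idx.shape)  # (2, 16, 256) and (2, 16, 8)
```

Each token is thus represented by 8 small indices instead of one index into a huge table; the effective vocabulary is combinatorial (1024^8) while every individual table stays small enough to be well covered during training.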
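Finally, the two-phase, alignment-plus-reconstruction training the article attributes to QLIP (and the text-alignment losses attributed to TokLIP) can be illustrated with a CLIP-style contrastive term combined with a pixel reconstruction term. The weighting schedule, the 0.1 coefficients, and all function names are placeholders for illustration, not the papers' exact recipes.

```python
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched (image, text) pairs in a batch."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def tokenizer_loss(recon, image, img_emb, txt_emb, phase):
    """Phase 1 emphasizes semantic alignment with text; phase 2 shifts
    the weight toward reconstruction quality, mirroring the two-stage
    schedule described for QLIP. Coefficients are placeholders."""
    align = contrastive_alignment_loss(img_emb, txt_emb)
    recon_l = F.mse_loss(recon, image)
    if phase == 1:
        return align + 0.1 * recon_l
    return recon_l + 0.1 * align


if __name__ == "__main__":
    B = 4
    loss = tokenizer_loss(
        recon=torch.randn(B, 3, 32, 32),
        image=torch.randn(B, 3, 32, 32),
        img_emb=torch.randn(B, 512),
        txt_emb=torch.randn(B, 512),
        phase=1,
    )
    print(loss.item())
```

The point of staging the objective is the tension the article highlights: a tokenizer pulled only toward text alignment loses pixel fidelity, while one trained only for reconstruction yields tokens with weak semantics, so the two terms are balanced differently at different stages of training.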