ICML 2025 | Latest Advances in Multimodal Understanding and Generation: HKUST and Snap Research Release ThinkDiff, Giving Diffusion Models a Brain
机器之心 · 2025-07-16 04:21

Core Viewpoint
- The article introduces ThinkDiff, a new method for multimodal understanding and generation that enables diffusion models to perform reasoning and creative tasks with minimal training data and computational resources [3][36].

Group 1: Introduction to ThinkDiff
- ThinkDiff is a collaboration between the Hong Kong University of Science and Technology and Snap Research, aimed at giving diffusion models reasoning capabilities with only limited data [3].
- The method lets diffusion models understand the logical relationships between input images and text prompts, leading to high-quality image generation [7].

Group 2: Algorithm Design
- ThinkDiff transfers the reasoning capabilities of large vision-language models (VLMs) to diffusion models, combining the strengths of both for improved multimodal understanding [7].
- The architecture aligns VLM-generated tokens with the diffusion model's decoder through an alignment network, so the diffusion model inherits the VLM's reasoning abilities [15] (a minimal aligner sketch appears after this summary).

Group 3: Training Process
- Training includes a vision-language pretraining task that aligns the VLM with an LLM decoder, facilitating the transfer of multimodal reasoning capabilities [11][12].
- A masking strategy is applied during training so the alignment network learns to recover semantics from incomplete multimodal information [15] (see the masking sketch after this summary).

Group 4: Variants of ThinkDiff
- ThinkDiff has two variants: ThinkDiff-LVLM, which aligns large vision-language models with diffusion models, and ThinkDiff-CLIP, which aligns CLIP with diffusion models for stronger text-image composition [16].

Group 5: Experimental Results
- ThinkDiff-LVLM significantly outperforms existing methods on the CoBSAT benchmark, demonstrating high accuracy and quality in multimodal understanding and generation [18].
- Training is efficient: ThinkDiff-LVLM reaches its best results with only 5 hours of training on 4 A100 GPUs, whereas competing methods require substantially more resources [20][21].

Group 6: Comparison with Other Models
- In everyday image reasoning and generation tasks, ThinkDiff-LVLM shows capabilities comparable to commercial models such as Gemini [25].
- The method also shows potential for multimodal video generation, by adapting the diffusion decoder to generate high-quality videos from input images and text [34].

Group 7: Conclusion
- ThinkDiff represents a significant advance in multimodal understanding and generation, providing a unified model that excels in both quantitative and qualitative evaluations and is relevant to research and industrial applications [36].
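
The aligner sketch referenced in Group 2: a minimal illustration of projecting VLM token features into a diffusion decoder's conditioning space. The module name, dimensions, and the two-layer MLP design are illustrative assumptions, not ThinkDiff's released implementation.

```python
import torch
import torch.nn as nn

class VLMToDiffusionAligner(nn.Module):
    """Maps VLM-generated token embeddings into the diffusion decoder's
    conditioning space, so the decoder can consume the VLM's "reasoned" tokens.
    Dimensions below are placeholders."""

    def __init__(self, vlm_dim: int = 4096, cond_dim: int = 2048, hidden: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vlm_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, cond_dim),
        )
        self.norm = nn.LayerNorm(cond_dim)

    def forward(self, vlm_tokens: torch.Tensor) -> torch.Tensor:
        # vlm_tokens: (batch, seq_len, vlm_dim) features produced by the VLM
        # returns:    (batch, seq_len, cond_dim) conditioning for the diffusion decoder
        return self.norm(self.proj(vlm_tokens))

if __name__ == "__main__":
    aligner = VLMToDiffusionAligner()
    fake_vlm_tokens = torch.randn(2, 77, 4096)  # placeholder VLM outputs
    cond = aligner(fake_vlm_tokens)
    print(cond.shape)  # torch.Size([2, 77, 2048])
```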
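The masking sketch referenced in Group 3: random token masking applied to the aligner's input during pretraining, forcing it to recover semantics from incomplete multimodal tokens. The mask ratio and the use of a learned mask embedding are assumptions for illustration.

```python
import torch
import torch.nn as nn

def mask_tokens(tokens: torch.Tensor, mask_embedding: torch.Tensor, mask_ratio: float = 0.3):
    """Randomly replaces a fraction of token embeddings with a learned mask vector.

    tokens:         (batch, seq_len, dim) multimodal token features
    mask_embedding: (dim,) learnable vector substituted at masked positions
    """
    batch, seq_len, _ = tokens.shape
    # Boolean mask selecting roughly mask_ratio of the positions per sample
    mask = torch.rand(batch, seq_len, device=tokens.device) < mask_ratio
    masked = tokens.clone()
    masked[mask] = mask_embedding.to(tokens.dtype)
    return masked, mask

if __name__ == "__main__":
    dim = 2048
    mask_vec = nn.Parameter(torch.zeros(dim))   # learned mask embedding (assumed)
    toks = torch.randn(2, 77, dim)              # placeholder multimodal tokens
    masked_toks, mask = mask_tokens(toks, mask_vec)
    print(mask.float().mean().item())           # roughly the mask ratio
```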