Core Insights
- UniVideo demonstrates strong performance in video understanding, generation, and editing within a unified framework, built on a dual-stream architecture that pairs a multimodal large language model (MLLM) with a multimodal diffusion Transformer (MM-DiT) [2][9][32]
- The model matches or surpasses state-of-the-art (SoTA) performance across benchmarks without task-specific designs, indicating that it generalizes to unseen tasks and to new task combinations [2][24][33]

Model Architecture
- UniVideo consists of two main components: an MLLM for multimodal instruction understanding and semantic reasoning, and an MM-DiT for high-fidelity visual content generation [9][10]
- The dual-stream design provides both a robust semantic foundation and high-quality visual reconstruction, which is crucial for video editing and in-context generation tasks [11]

Unified Multimodal Tasks
- UniVideo folds multiple video generation and editing tasks into a single multimodal instruction paradigm, enabling flexible task scheduling and generation [12][13]
- The model handles a range of tasks, including multimodal understanding (Image/Video to Text), text-to-image/video generation, image-to-video generation, and image/video editing [13][16][20]

Experimental Results
- In quantitative evaluations, UniVideo outperforms task-specific baselines across a range of metrics, achieving superior results in most experimental setups [24][32]
- On in-context generation and editing tasks, the model posts competitive scores in identity alignment, video quality, and aesthetic ratings relative to other models [26][27]

Generalization Capabilities
- UniVideo exhibits strong generalization, successfully transferring image-editing skills to video editing despite never being explicitly trained on free-form video-editing instructions [28]
- The model also generalizes to new task combinations that were not explicitly included during training, showcasing the advantages of a unified multimodal framework [29][33]
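To make the "single multimodal instruction paradigm" concrete, here is a minimal sketch of how one instruction schema could cover the task families listed above and be routed to them. This is an illustrative assumption, not UniVideo's actual API: every class, field, and function name here (`MultimodalInstruction`, `classify_task`, the task labels) is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical unified instruction: one schema for understanding,
# generation, and editing tasks. Field names are illustrative only.
@dataclass
class MultimodalInstruction:
    text: str                                        # natural-language instruction
    images: List[str] = field(default_factory=list)  # reference image ids, if any
    video: Optional[str] = None                      # input video id, if any
    target: str = "video"                            # output modality: "text" | "image" | "video"

def classify_task(inst: MultimodalInstruction) -> str:
    """Route a single instruction schema to one of the task families."""
    if inst.target == "text":
        return "multimodal_understanding"   # Image/Video -> Text
    if inst.video is not None:
        return "video_editing"              # edit an existing clip
    if inst.images and inst.target == "video":
        return "image_to_video"             # animate reference image(s)
    if inst.target == "image":
        return "image_editing" if inst.images else "text_to_image"
    return "text_to_video"

print(classify_task(MultimodalInstruction("Describe this clip", video="v1", target="text")))
# multimodal_understanding
print(classify_task(MultimodalInstruction("A cat surfing at sunset")))
# text_to_video
```

The point of the sketch is the design choice the paper highlights: because every task is expressed in the same instruction format, new combinations (e.g. reference images plus an input video plus a free-form edit) need no task-specific head, only a model trained to interpret the unified schema.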
ICLR 2026 | University of Waterloo and Kling (可灵) propose UniVideo: unified multimodal video understanding, generation, and editing
机器之心 · 2026-03-05 07:43