UniPixel
Multimodal large model achieves pixel-level reasoning for the first time; 3B parameters surpass a 72B traditional model; accepted at NeurIPS 2025
36Ke· 2025-10-16 07:39
Core Insights
- The article discusses the introduction of UniPixel, a unified pixel-level multimodal large model developed by a research team from Hong Kong Polytechnic University and Tencent ARC Lab, which can perform referring, segmentation, and reasoning tasks effectively [1][3][4].

Model Capabilities
- UniPixel can accomplish three major tasks: target referring, pixel-level segmentation, and region reasoning, showcasing flexibility, precision, and scalability [3][4].
- The model has been accepted for presentation at NeurIPS 2025, and its code, data, and demo have been open-sourced [3].

Technical Innovations
- UniPixel redefines visual reasoning by enabling precise perception of specific regions or targets within images or videos, addressing limitations of traditional visual question-answering systems [4][6].
- The architecture is based on the Qwen2.5-VL model and supports various input types and visual prompts, producing natural-language responses along with spatio-temporal masks [6][8].

Key Modules
- The model incorporates three critical modules: a prompt encoder for visual prompts, an object memory bank for storing user-specified targets, and a mask decoder for generating precise spatio-temporal masks [8][12]; a code sketch of this flow follows this summary.
- UniPixel extends its language model vocabulary with special tokens to facilitate the integration of visual prompts and memory retrieval [9].

Performance Evaluation
- Extensive experiments on ten public benchmark datasets demonstrate UniPixel's superior performance across nine visual-language understanding tasks, particularly in segmentation, where it outperformed existing models [19][20].
- On the ReVOS reasoning segmentation benchmark, UniPixel achieved a J&F score of 62.1, surpassing all other models and indicating strong associative modeling between complex text prompts and pixel-level mask generation [20].

Training Data and Methodology
- The training dataset comprises approximately 1 million samples covering text, images, and videos, which enhances the model's adaptability across task settings [17].
- The training strategy is modular and phased, allowing collaborative training of the visual encoder and language model without overfitting to specific tasks [16].

Future Implications
- The introduction of UniPixel marks a significant milestone in multimodal AI, transitioning from modality alignment to fine-grained understanding, potentially leading to intelligent agents capable of precise focus and natural interaction [34].
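The "prompt encoder → object memory bank → mask decoder" flow described under Key Modules can be made concrete with a minimal Python sketch. All class, method, and token names below (VisualPrompt, ObjectMemoryBank, "<obj_1>", encode_prompts) are hypothetical illustrations based on the summary, not the authors' actual API.

```python
# A minimal sketch, assuming a prompt encoder that maps a visual prompt to an
# embedding, a memory bank keyed by special tokens added to the LLM vocabulary,
# and a downstream mask decoder that reads those embeddings back out.
from dataclasses import dataclass, field
from typing import Dict, List, Literal
import torch


@dataclass
class VisualPrompt:
    kind: Literal["point", "box", "mask"]   # the three prompt types the article lists
    data: torch.Tensor                      # coordinates or a binary mask
    frame_idx: int = 0                      # which video frame the prompt refers to


@dataclass
class ObjectMemoryBank:
    """Stores embeddings of user-specified targets, keyed by an object id that
    corresponds to a special token (e.g. "<obj_1>") in the extended vocabulary."""
    slots: Dict[str, torch.Tensor] = field(default_factory=dict)

    def write(self, obj_id: str, embedding: torch.Tensor) -> None:
        self.slots[obj_id] = embedding

    def read(self, obj_id: str) -> torch.Tensor:
        return self.slots[obj_id]


def encode_prompts(prompts: List[VisualPrompt], prompt_encoder, memory: ObjectMemoryBank) -> List[str]:
    """Encode each visual prompt into a target embedding, register it in memory,
    and return the special tokens the language model can later refer to."""
    tokens = []
    for i, p in enumerate(prompts, start=1):
        obj_id = f"<obj_{i}>"
        memory.write(obj_id, prompt_encoder(p))
        tokens.append(obj_id)
    return tokens


# Example usage (hypothetical encoder): register a point prompt, then let the
# language model refer to "<obj_1>" in its response.
# memory = ObjectMemoryBank()
# tokens = encode_prompts([VisualPrompt("point", torch.tensor([120.0, 64.0]))],
#                         prompt_encoder=lambda p: torch.zeros(256), memory=memory)
```

In the actual model, the mask decoder would consume the stored target embeddings together with the language model's hidden states to produce the spatio-temporal masks the summaries describe.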
Multimodal large model achieves pixel-level reasoning for the first time! 3B parameters surpass a 72B traditional model; accepted at NeurIPS 2025
量子位 (QbitAI)· 2025-10-16 06:11
Core Insights
- The article discusses the introduction of UniPixel, a unified pixel-level multimodal model developed by a research team from Hong Kong Polytechnic University and Tencent ARC Lab, which aims to enhance visual reasoning capabilities in AI systems [2][4].

Group 1: Model Overview
- UniPixel is designed to perform three major tasks, referring, pixel-level segmentation, and reasoning, within a single model, showcasing flexibility, precision, and scalability [4][8].
- The model has been accepted for presentation at NeurIPS 2025, and its code, data, and demo are fully open-sourced [5].

Group 2: Technical Innovations
- UniPixel redefines visual reasoning by addressing the limitations of traditional visual question-answering systems, which often lack precise perception of specific regions or targets within images [8][9].
- The model incorporates an "Object Memory Bank" and supports three types of visual prompts (point, box, mask), enabling a complete "perception-memory-reasoning" process [9][12].

Group 3: Architecture and Functionality
- The architecture of UniPixel is based on the Qwen2.5-VL model, allowing it to process various inputs, including images, videos, and text prompts, and to generate natural-language responses along with spatio-temporal masks [12][14].
- Key components include a Prompt Encoder for unified encoding of visual prompts, an Object Memory Bank for storing user-specified targets, and a Mask Decoder for generating precise spatio-temporal masks [19][21].

Group 4: Training and Evaluation
- The training process for UniPixel followed a modular and phased strategy, utilizing approximately 1 million samples across various datasets to enhance its adaptability to different tasks [28][29].
- Extensive experiments were conducted on 10 public benchmark datasets covering 9 major visual-language understanding tasks, demonstrating superior performance in complex reasoning and segmentation tasks [31][33].

Group 5: Performance Metrics
- On the ReVOS reasoning segmentation benchmark, UniPixel-3B achieved a J&F score of 62.1, surpassing all existing models and indicating a strong capability to associate complex text prompts with pixel-level mask generation [33]; a sketch of how J&F is computed follows this summary.
- The model also excelled on other datasets such as MeViS, Ref-YouTube-VOS, and RefCOCO, showing leading performance across various visual understanding tasks [33][34].

Group 6: Future Implications
- The introduction of UniPixel marks a significant milestone in multimodal AI, transitioning from "modal alignment" to "fine-grained understanding," effectively merging object referring and segmentation with language reasoning [47][48].
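For reference, J&F is the standard video object segmentation metric behind the 62.1 figure cited above: J is region similarity (mask IoU) and F is contour accuracy (a boundary F-measure), averaged over frames and then over the two terms. The sketch below is a simplified, self-contained version of that computation; the boundary-tolerance handling in official benchmark toolkits is more elaborate, so treat this as illustrative rather than the benchmark's reference implementation.

```python
# Simplified J&F computation over per-frame binary masks (numpy + scipy only).
import numpy as np
from scipy import ndimage


def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, gt).sum() / union


def boundary(mask: np.ndarray) -> np.ndarray:
    """One-pixel-wide boundary of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)


def boundary_f(pred: np.ndarray, gt: np.ndarray, tol: int = 2) -> float:
    """Contour accuracy F: F-measure between boundaries with a pixel tolerance."""
    pb, gb = boundary(pred), boundary(gt)
    if pb.sum() == 0 and gb.sum() == 0:
        return 1.0
    # Dilate each boundary so matches within `tol` pixels count as hits.
    struct = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    gb_dil = ndimage.binary_dilation(gb, structure=struct)
    pb_dil = ndimage.binary_dilation(pb, structure=struct)
    precision = (pb & gb_dil).sum() / max(pb.sum(), 1)
    recall = (gb & pb_dil).sum() / max(gb.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def j_and_f(pred_masks, gt_masks) -> float:
    """J&F: mean of region similarity and contour accuracy across all frames."""
    js = [jaccard(p, g) for p, g in zip(pred_masks, gt_masks)]
    fs = [boundary_f(p, g) for p, g in zip(pred_masks, gt_masks)]
    return (np.mean(js) + np.mean(fs)) / 2
```

A reported value such as 62.1 J&F corresponds to the output of a computation like `j_and_f(...) * 100`, averaged over all annotated objects and videos in the benchmark.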