AGILE

AGILE: A New Paradigm for Visual Learning! Self-Supervision Plus Interactive Reinforcement Learning Comprehensively Boost VLMs' Perception and Reasoning
机器之心 · 2025-10-20 07:48
Core Insights
- Existing Vision-Language Models (VLMs) show significant limitations in understanding fine-grained visual information, and their reasoning capabilities have not been fully activated [2]
- AGILE introduces a novel self-supervised learning paradigm that strengthens VLMs' visual perception and reasoning through an interactive, agent-based approach [2][22]

Methodology
- AGILE uses a jigsaw-puzzle task as an efficient agent task that couples perception with reasoning, structured as a controllable and verifiable interactive environment (a minimal environment sketch appears at the end of this note) [8]
- Training proceeds in two phases: a cold-start phase that uses Gemini 2.5 Pro to generate 1.6K high-quality expert puzzle-interaction trajectories, followed by a reinforcement learning phase that trains on 15.6K images with the GRPO algorithm (see the GRPO sketch below) [9][10]

Experimental Results
- On the simplest 2x2 puzzle task, AGILE raised accuracy from 9.5% to 82.8%, surpassing Gemini 2.5 Pro by 36.4 percentage points; on the more challenging 3x3 puzzle, accuracy rose from 0.4% to 20.8% [13]
- Performance was evaluated with two metrics: Acc, the proportion of puzzles in which every block is placed correctly, and Score, the proportion of individual blocks placed correctly (a sketch of both metrics follows below) [13][14]

Generalization Capability
- After puzzle training, the model improved by an average of 3.1% across nine general visual tasks, indicating strong generalization capabilities [15]

Scaling Experiments
- The study explored how the scale of puzzle data affects performance: as training data grew from 0 to 16K samples, puzzle-task accuracy climbed from 22.0% to 82.8% [20]
- In a 20K-sample training set, replacing 10K conventional QA samples with puzzle data yielded a better model, highlighting the potential of puzzle tasks to alleviate data scarcity in multi-modal reinforcement learning [20]
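To make the puzzle setup concrete, here is a minimal sketch of a controllable, verifiable jigsaw environment in the spirit of the description above. All names (PuzzleEnv, grid_size, check) are illustrative assumptions, not AGILE's actual implementation.

```python
# Illustrative sketch of a verifiable jigsaw environment (names hypothetical,
# not AGILE's code). An image is cut into an n x n grid, the tiles are
# shuffled, and any proposed arrangement can be checked exactly.
import random
import numpy as np

class PuzzleEnv:
    def __init__(self, image: np.ndarray, grid_size: int = 2, seed: int = 0):
        h, w = image.shape[:2]
        th, tw = h // grid_size, w // grid_size
        self.n = grid_size
        # Cut the image into grid_size^2 tiles, row-major order.
        self.tiles = [
            image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(grid_size) for c in range(grid_size)
        ]
        # Ground truth: shuffled position i holds original tile truth[i].
        self.truth = list(range(grid_size ** 2))
        random.Random(seed).shuffle(self.truth)
        self.shuffled = [self.tiles[i] for i in self.truth]

    def check(self, proposal: list[int]) -> tuple[bool, float]:
        """Verify a proposal: exact-solve flag and per-tile fraction correct."""
        correct = sum(p == t for p, t in zip(proposal, self.truth))
        return correct == self.n ** 2, correct / self.n ** 2

# Usage: a 2x2 puzzle over a dummy image; the check is fully automatic,
# which is what makes the task verifiable for reinforcement learning.
env = PuzzleEnv(np.zeros((64, 64, 3), dtype=np.uint8), grid_size=2)
solved, fraction = env.check([0, 1, 2, 3])
```

Because the ground-truth permutation is known to the environment, rewards require no human labeling, which is the self-supervised aspect highlighted in the article.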
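The RL phase is reported to use GRPO. Below is a minimal sketch of GRPO's core step, the group-relative advantage: each prompt gets a group of rollouts, and each rollout's reward is normalized by the group's mean and standard deviation. The group size and reward values here are made up for illustration.

```python
# Minimal sketch of GRPO's group-relative advantage computation.
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Advantage of each rollout relative to its own group (one prompt)."""
    mean, std = group_rewards.mean(), group_rewards.std()
    return (group_rewards - mean) / (std + eps)

# Example: 4 puzzle rollouts for one image. A fully solved puzzle earns
# reward 1.0; partial placements earn the fraction of correct tiles.
rewards = np.array([1.0, 0.25, 0.5, 0.0])
print(grpo_advantages(rewards))  # positive for above-average rollouts
```

Normalizing within the group removes the need for a separate value network, which is one reason GRPO is popular for this kind of verifiable-reward training.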
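For clarity, here is a sketch of how the two reported metrics could be computed over a batch of puzzles; the function and variable names are illustrative, not taken from the paper.

```python
# Sketch of the two evaluation metrics described above (names hypothetical):
# Acc   - fraction of puzzles where every tile is placed correctly,
# Score - fraction of individual tiles placed correctly, averaged over puzzles.
def evaluate(predictions: list[list[int]],
             truths: list[list[int]]) -> tuple[float, float]:
    solved, tile_fractions = 0, []
    for pred, true in zip(predictions, truths):
        correct = sum(p == t for p, t in zip(pred, true))
        solved += correct == len(true)
        tile_fractions.append(correct / len(true))
    n = len(truths)
    return solved / n, sum(tile_fractions) / n

# Example: two 2x2 puzzles, one solved exactly, one half right.
acc, score = evaluate([[0, 1, 2, 3], [1, 0, 2, 3]],
                      [[0, 1, 2, 3], [0, 1, 2, 3]])
# acc = 0.5 (one of two puzzles fully solved), score = 0.75
```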