Visual Reinforcement Learning
VinciCoder: a unified multimodal code generation framework with visual-feedback reinforcement learning; data, code, and model weights are open-sourced
机器之心· 2025-11-17 04:23
Training for multimodal code generation has long relied heavily on task-specific supervised fine-tuning (SFT). Although this paradigm has delivered notable success on single tasks such as chart-to-code, its narrow training scope fundamentally limits generalization and holds back progress toward generalized vision code intelligence (Generalized VIsioN Code Intelligence). The SFT-only paradigm also faces clear bottlenecks in guaranteeing code executability and high visual fidelity. Against this backdrop, a research team from the Chinese Academy of Sciences and Meituan has released VinciCoder, a unified multimodal code generation model designed to break through these SFT limits. VinciCoder is the first to move the reinforcement-learning reward signal from the text domain to the visual domain, proposing Visual Reinforcement Learning (ViRL) to attack the visual-fidelity problem that SFT alone cannot solve. Through a two-stage "large-scale SFT + coarse-to-fine ViRL" strategy, the framework unifies diverse code generation tasks spanning charts, web pages, SVG, and scientific drawing (LaTeX, chemical molecules). Paper title: VinciCoder: Unifying Multimodal Code Gener ...
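To make the "coarse-to-fine ViRL" idea concrete, here is a minimal, hypothetical sketch of a visual-feedback reward: render the generated code, then score the result against the target image. The `render` callable and the pixel-level similarity metric are illustrative assumptions, not VinciCoder's released implementation; the paper's actual reward design may differ.

```python
# Hypothetical sketch of a visual-feedback reward for code generation.
# The renderer is passed in as a callable (e.g. headless browser, LaTeX,
# or matplotlib backend); nothing here is VinciCoder's actual code.
from typing import Callable
import numpy as np
from PIL import Image

def visual_fidelity_reward(generated_png: str, reference_png: str) -> float:
    """Return a reward in [0, 1]: 1.0 for a pixel-perfect match, lower otherwise."""
    gen = Image.open(generated_png).convert("RGB")
    ref = Image.open(reference_png).convert("RGB")
    gen = gen.resize(ref.size)                        # align resolutions before comparing
    a = np.asarray(gen, dtype=np.float32) / 255.0
    b = np.asarray(ref, dtype=np.float32) / 255.0
    return float(1.0 - np.abs(a - b).mean())          # 1 - mean absolute pixel error

def code_reward(code: str, reference_png: str,
                render: Callable[[str], str]) -> float:
    """Coarse signal: does the code render at all?  Fine signal: visual fidelity."""
    try:
        generated_png = render(code)                  # assumed external renderer
    except Exception:
        return 0.0                                    # non-executable code earns no reward
    return visual_fidelity_reward(generated_png, reference_png)
```

In practice a perceptual metric (e.g., SSIM or a learned visual encoder) would likely replace raw pixel error, but the two-level structure, a coarse "does it run?" check plus a fine visual-similarity score, is the point of the sketch.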
Better than Nano Banana at Chinese and fine-grained control! 兔展 & Peking University's UniWorld-V2 sets a new SOTA
量子位· 2025-11-05 05:39
Core Viewpoint
- The article introduces UniWorld-V2, a new image editing model that excels in detail and in understanding Chinese-language instructions, outperforming previous models such as Nano Banana [1][4][6].

Group 1: Model Features
- UniWorld-V2 demonstrates superior fine-grained control in image editing, achieving results that surpass those of SFT models [11].
- The model can accurately interpret complex Chinese characters and phrases, showcasing its proficiency in rendering artistic fonts [11].
- Users can specify editing areas through bounding boxes, allowing precise operations such as moving objects out of designated areas [14].
- The model effectively understands commands such as "re-light the scene," integrating objects naturally into the environment with coherent lighting and shadows [15].

Group 2: Technical Innovations
- The core innovation behind UniWorld-V2 is the UniWorld-R1 framework, which applies reinforcement learning (RL) strategies to image editing [18].
- UniWorld-R1 is the first unified architecture based on RL, utilizing Diffusion Negative-aware Finetuning (DiffusionNFT) for efficient training without likelihood estimation [19].
- The framework employs a multimodal large language model (MLLM) as the reward model, using its implicit feedback to align the editor with human intent (a rough sketch of this idea follows this summary) [19].

Group 3: Performance Metrics
- On GEdit-Bench, UniWorld-V2 scored 7.83, surpassing GPT-Image-1 (7.53) and Gemini 2.0 (6.32) [24].
- The model also led ImgEdit with a score of 4.49, outperforming all known models [24].
- The method significantly improved the underlying base models: FLUX.1-Kontext rose from 3.71 to 4.02, and Qwen-Image-Edit from 4.35 to 4.48 [25].

Group 4: Generalization and User Preference
- UniWorld-R1 demonstrated strong generalization, lifting FLUX.1-Kontext's GEdit-Bench score from 6.00 to 6.74 [26].
- User preference studies indicated that participants favored UniWorld-FLUX.1-Kontext for its superior instruction alignment and editing capabilities, while the official model held a slight edge in image quality [27].

Group 5: Historical Context
- UniWorld-V2 builds upon the earlier UniWorld-V1, the first unified understanding-and-generation model, released three months before notable models such as Google's Nano Banana [29].
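As a rough illustration of the "MLLM as reward model" idea described above, the sketch below turns an MLLM's 0-10 rating of an edited image into a scalar reward. The `ask_mllm` callable and the rating prompt are assumptions for illustration only; the article does not describe UniWorld-R1's actual scoring interface.

```python
# Hypothetical sketch: use a multimodal LLM's rating as the reward signal
# for an image-editing policy.  `ask_mllm(prompt, image_path) -> str` is an
# assumed interface, not a released API.
from typing import Callable

def edit_reward(instruction: str, edited_image_path: str,
                ask_mllm: Callable[[str, str], str]) -> float:
    """Query an MLLM for a 0-10 rating and map it to a [0, 1] reward."""
    prompt = (
        "Rate from 0 to 10 how faithfully the edited image follows this "
        f"instruction: '{instruction}'. Answer with a single number."
    )
    answer = ask_mllm(prompt, edited_image_path)
    try:
        score = float(answer.strip().split()[0])      # tolerate trailing text
    except (ValueError, IndexError):
        return 0.0                                    # unparseable answer -> no reward
    return max(0.0, min(score, 10.0)) / 10.0          # clamp and normalize
```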
VLA+RL or pure RL? Tracing the development of reinforcement learning through 200+ papers
具身智能之心· 2025-08-18 00:07
Core Insights
- The article provides a comprehensive analysis of the intersection of reinforcement learning (RL) and visual intelligence, focusing on the evolution of strategies and key research themes in visual reinforcement learning [5][17][25].

Group 1: Key Themes in Visual Reinforcement Learning
- The article categorizes over 200 representative studies into four main pillars: multimodal large language models, visual generation, unified model frameworks, and visual-language-action models [5][17].
- Each pillar is examined for algorithm design, reward engineering, and benchmark progress, highlighting trends and open challenges in the field [5][17][25].

Group 2: Reinforcement Learning Techniques
- Various reinforcement learning techniques are discussed, including Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO), which are used to enhance stability and efficiency in training (compact sketches of both follow this summary) [15][16].
- The article emphasizes the importance of reward models, such as those based on human feedback and verifiable rewards, in guiding the training of visual reinforcement learning agents [10][12][21].

Group 3: Applications in Visual and Video Reasoning
- The article outlines applications of reinforcement learning in visual reasoning tasks, including 2D and 3D perception, image reasoning, and video reasoning, showing how these methods improve task performance [18][19][20].
- Specific studies are highlighted that use reinforcement learning to enhance capabilities on complex visual tasks such as object detection and spatial reasoning [18][19][20].

Group 4: Evaluation Metrics and Benchmarks
- The article discusses the need for new evaluation metrics tailored to large-model visual reinforcement learning, combining traditional metrics with preference-based assessments [31][35].
- It provides an overview of benchmarks that support training and evaluation in the visual domain, emphasizing the role of human preference data in shaping reward models [40][41].

Group 5: Future Directions and Challenges
- The article identifies key challenges in visual reinforcement learning, such as balancing depth and efficiency in reasoning, and suggests future research directions to address them [43][44].
- It highlights the importance of developing adaptive strategies and hierarchical reinforcement learning approaches to improve the performance of visual-language-action agents [43][44].
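For reference, the two policy-optimization objectives named above have compact textbook forms; the sketch below shows GRPO's within-group advantage normalization and PPO's clipped surrogate loss. These are generic formulations, not any surveyed paper's code.

```python
# Textbook sketches of GRPO advantages and the PPO clipped surrogate (PyTorch).
import torch

def grpo_advantages(group_rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """GRPO: sample a group of responses per prompt, score them, and use
    within-group normalized rewards as advantages (no learned critic needed)."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

def ppo_clip_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                  advantages: torch.Tensor, clip_eps: float = 0.2) -> torch.Tensor:
    """PPO: clipped surrogate loss that penalizes moving the probability ratio
    more than clip_eps away from the old policy."""
    ratio = torch.exp(logp_new - logp_old)                       # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                 # ascend the surrogate
```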
The latest survey on visual reinforcement learning: a full-field overview (NUS, Zhejiang University & CUHK)
自动驾驶之心· 2025-08-16 00:03
Core Insights
- The article discusses the integration of reinforcement learning with computer vision, marking a paradigm shift in how AI interacts with visual data [3][4].
- It highlights the potential for AI not only to understand but also to create and optimize visual content based on human preferences, transforming AI from a passive observer into an active decision-maker [4].

Research Background and Overview
- The emergence of visual reinforcement learning (VRL) is driven by the successful application of reinforcement learning in large language models (LLMs) [7].
- The article identifies three core challenges in the field: stable policy optimization under complex reward signals, efficient processing of high-dimensional visual inputs, and scalable reward-function design for long-horizon decision-making [7][8].

Theoretical Foundations of Visual Reinforcement Learning
- The theoretical framework formalizes the problem as a Markov Decision Process (MDP), unifying the RL formulations for text and visual generation [15].
- Three main alignment paradigms are proposed: RL with human feedback (RLHF), Direct Preference Optimization (DPO), and Reinforcement Learning with Verifiable Rewards (RLVR); a DPO sketch follows this summary [16][18].

Core Applications of Visual Reinforcement Learning
- The article categorizes VRL research into four main areas: multimodal large language models (MLLM), visual generation, unified models, and visual-language-action (VLA) models [31].
- Each area is further divided into specific tasks, with representative works analyzed for their contributions [31][32].

Evaluation Metrics and Benchmarking
- A layered evaluation framework is proposed, detailing specific benchmarks for each area to ensure reproducibility and comparability in VRL research [44][48].
- The article emphasizes the need for metrics that align with human perception and can validate the performance of VRL systems [61].

Future Directions and Challenges
- The article outlines four key challenges for the future of VRL: balancing depth and efficiency in reasoning, addressing long-horizon RL in VLA tasks, designing reward models for visual generation, and improving data efficiency and generalization [50][52][54].
- It suggests that future research should integrate model-based planning, self-supervised visual pre-training, and adaptive curriculum learning to enhance the practical applications of VRL [57].
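Of the three alignment paradigms listed above, DPO has the most compact objective; the sketch below gives its standard form as a reference point, independent of any particular visual RL system.

```python
# Standard Direct Preference Optimization (DPO) loss (textbook form, PyTorch).
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor,
             ref_logp_chosen: torch.Tensor, ref_logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Push the policy to prefer the chosen response over the rejected one,
    measured relative to a frozen reference policy."""
    chosen_margin = logp_chosen - ref_logp_chosen        # log pi/pi_ref on the winner
    rejected_margin = logp_rejected - ref_logp_rejected  # log pi/pi_ref on the loser
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()
```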
USTC proposes a new action-value representation learning method, the first to fill in missing long-term decision information
量子位· 2025-03-31 04:35
Core Viewpoint
- The article introduces ROUSER, a novel robust action-value representation learning method that addresses the lack of long-term information in visual reinforcement learning by building on the information bottleneck framework [2][9].

Group 1: ROUSER Methodology
- ROUSER maximizes the mutual information between the representation and the action value to retain long-term information, while minimizing the mutual information between the representation and the state-action pair to filter out irrelevant features (a generic sketch follows this summary) [4][10].
- Because action values are unknown during learning, the method decomposes the robust representation of a state-action pair into a representation that includes the single-step reward and the robust representation of the next state-action pair [5][10].

Group 2: Experimental Results
- In 12 tasks with background and color interference, ROUSER outperformed various advanced methods on 11 of them, demonstrating its effectiveness at improving generalization [6][18].
- ROUSER is compatible with both continuous and discrete control tasks: experiments in the Procgen environment showed improved generalization when it was combined with value-based VRL methods [21][22].

Group 3: Theoretical Foundations
- Theoretical analysis shows that ROUSER can accurately estimate action values from the learned vectorized representations, thereby improving the robustness of various visual reinforcement learning algorithms [3][17].
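As a rough reference for the information-bottleneck idea attributed to ROUSER, the sketch below shows a generic variational-IB-style objective: a prediction term keeps the representation informative about the action value, and a KL term compresses away everything else. This is the standard variational recipe, not ROUSER's exact loss or its reward-based decomposition.

```python
# Generic variational information-bottleneck objective for a value-predictive
# representation (illustrative only; not ROUSER's published formulation).
import torch
import torch.nn.functional as F

def ib_value_loss(mu: torch.Tensor, logvar: torch.Tensor,
                  value_pred: torch.Tensor, value_target: torch.Tensor,
                  beta: float = 1e-3) -> torch.Tensor:
    """mu, logvar: the encoder's Gaussian over the representation z given (state, action).
    value_pred: the action value decoded from a sample of z.
    The MSE term keeps z informative about the value (a proxy for maximizing
    mutual information); the KL term compresses z toward a standard normal prior
    (a proxy for minimizing mutual information with the raw state-action input)."""
    prediction_loss = F.mse_loss(value_pred, value_target)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return prediction_loss + beta * kl
```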