Vision-Language Models (VLMs)

Kitchen-R: A Mobile Manipulation Robot Benchmark for Jointly Evaluating High-Level Task Planning and Low-Level Control
具身智能之心· 2025-08-25 00:04
Preface & Motivation
1) Why benchmarks matter - Benchmarks are widely used to track model progress in natural language processing (e.g., GLUE) and computer vision (e.g., Visual Genome). In robotics, simulator-based benchmarks (e.g., Behavior-1K) are likewise common; they serve both model evaluation and training, and must simulate low-level actions accurately enough for results to transfer to real robots.
2) The fragmentation of existing benchmarks - In recent years, large language models (LLMs) and vision-language models (VLMs) have been widely applied to robot task planning and instruction following, but existing benchmarks have clear shortcomings:
3) The core value of Kitchen-R - Benchmarks are the central tool for measuring progress in robotics and embodied AI, yet current benchmarks are notably split: benchmarks for high-level language-instruction following typically assume perfect low-level execution, while benchmarks for low-level robot control rely only on simple single-step instructions. This split makes it impossible to fully evaluate integrated systems in which both task planning and physical execution matter. To fill this gap, the authors propose the Kitchen-R benchmark, set in a simulated kitchen environment ...
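To make the joint "planning + execution" evaluation concrete, below is a minimal sketch of an episode loop that scores a high-level planner and a low-level controller together. Every name here (planner, controller, env.check_plan) is a hypothetical illustration under assumed interfaces, not Kitchen-R's actual API or metric.

```python
# Minimal sketch of a joint "plan + execute" evaluation loop.
# All interfaces here (planner, controller, env.check_plan) are hypothetical
# illustrations; Kitchen-R defines its own API and metrics.
from dataclasses import dataclass
from typing import List

@dataclass
class EpisodeResult:
    plan_correct: bool    # did the high-level plan pass the (hypothetical) plan check?
    steps_succeeded: int  # how many low-level steps executed successfully
    steps_total: int

def evaluate_episode(instruction: str, env, planner, controller) -> EpisodeResult:
    """Score one language instruction with planning and execution coupled."""
    plan = planner.plan(instruction, env.observe())   # high level: list of subtasks
    plan_correct = env.check_plan(plan)               # hypothetical oracle check
    succeeded = 0
    for subtask in plan:
        if not controller.execute(subtask, env):      # low-level control in simulation
            break                                     # an execution failure ends the episode
        succeeded += 1
    return EpisodeResult(plan_correct, succeeded, len(plan))

def integrated_success_rate(results: List[EpisodeResult]) -> float:
    """An episode counts only if the plan is correct AND every step executes."""
    full = [r for r in results if r.plan_correct and r.steps_succeeded == r.steps_total]
    return len(full) / max(len(results), 1)
```

The point of coupling the two scores is exactly the gap described above: a planner evaluated with perfect execution, or a controller evaluated on single-step commands, can both look strong while the integrated system fails.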
Institute of Automation, Chinese Academy of Sciences: A Survey of Multimodal Fusion and Vision-Language Models for Robot Vision
具身智能之心· 2025-08-04 01:59
Core Insights
- The article discusses the advancements in multimodal fusion and vision-language models (VLMs) as essential tools for enhancing robot vision technology, emphasizing their potential in complex reasoning and long-term task decision-making [4][10].

Multimodal Fusion and Robot Vision
- Multimodal fusion enhances semantic scene understanding by integrating various data sources, such as visual, linguistic, depth, and lidar information, addressing limitations faced by traditional unimodal methods [8][9].
- The rise of VLMs has propelled the development of multimodal fusion paradigms, showcasing capabilities in zero-shot understanding and instruction following [9][10].

Key Applications and Challenges
- The article identifies key applications of multimodal fusion in tasks like simultaneous localization and mapping (SLAM), 3D object detection, navigation, and robot manipulation [10][19].
- Challenges in multimodal fusion include cross-modal alignment, efficient training strategies, and real-time performance optimization [10][19].

Datasets and Benchmarking
- A comprehensive analysis of mainstream multimodal datasets used for robot tasks is provided, detailing their modality combinations, task coverage, and limitations [10][43].
- The importance of high-quality multimodal datasets is highlighted, as they are crucial for model training and performance evaluation [62].

Future Directions
- The article suggests future research directions to address challenges in multimodal fusion, such as improving cross-modal alignment techniques and enhancing real-time performance [10][63].
- Emphasis is placed on the need for standardized datasets and benchmarks to facilitate comparisons across different research efforts [66].
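As a deliberately simplified illustration of the fusion paradigm surveyed above, the sketch below encodes image, text, and LiDAR features separately and fuses them by projection and concatenation. The module names, encoders, and dimensions are placeholder assumptions, not any specific model from the survey.

```python
# Minimal late-fusion sketch (PyTorch): per-modality projections + concatenation.
# Feature dimensions are arbitrary placeholders; real systems would feed in
# features from pretrained vision/language/LiDAR backbones.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, lidar_dim=256, fused_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, fused_dim)
        self.txt_proj = nn.Linear(txt_dim, fused_dim)
        self.lidar_proj = nn.Linear(lidar_dim, fused_dim)
        self.head = nn.Sequential(
            nn.Linear(3 * fused_dim, fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, fused_dim),
        )

    def forward(self, img_feat, txt_feat, lidar_feat):
        # Project each modality to a shared width, concatenate, then mix.
        z = torch.cat(
            [self.img_proj(img_feat), self.txt_proj(txt_feat), self.lidar_proj(lidar_feat)],
            dim=-1,
        )
        return self.head(z)

# Usage with random tensors standing in for real encoder outputs:
fusion = LateFusion()
out = fusion(torch.randn(4, 512), torch.randn(4, 768), torch.randn(4, 256))
print(out.shape)  # torch.Size([4, 512])
```

This concatenation variant is only the simplest case; in practice fusion may also happen earlier in the network or through cross-attention, which is where the cross-modal alignment and real-time performance challenges noted above arise.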
Making VLMs Better Suited for Robots: Small VLMs Can Also Exhibit Strong Visual Planning Capabilities
具身智能之心· 2025-07-15 13:49
Core Insights
- The article discusses the potential of large language models (LLMs) in robotic program planning, highlighting their ability to generate coherent action sequences but also noting their limitations in providing the necessary sensory details for physical execution [3][4]
- It introduces a new framework called SelfReVision, which enhances the performance of small visual language models (VLMs) through self-distillation without external supervision, aiming to improve their planning capabilities in real-world scenarios [4][9]

Research Background
- LLMs show promise in generating action sequences but often lack the precision required for robotic tasks due to their reliance on human-centric training data [3]
- Visual language models (VLMs) can potentially address these limitations, but existing methods either require specialized simulation environments or are costly to train and deploy [3]

Methodology
- SelfReVision is proposed as a self-improvement framework that allows small VLMs to enhance their performance through iterative self-critique and revision [4][6]
- The framework operates in three stages: critique, revise, and verify, enabling models to generate and refine plans based on self-assessment [4][10]

Experimental Setup
- Two types of experiments were conducted to evaluate the planning capabilities of SelfReVision: image-based program planning and embodied-agent tasks [11]
- Evaluation metrics included coverage, ordering, completeness, overall quality, and a new metric called image groundedness [12]

Key Results
- SelfReVision significantly outperformed baseline models across various metrics, achieving an average win rate of 68% on the PLACES dataset and 72% on the SIMULATION dataset [13]
- Larger models benefited more from SelfReVision, with an average gain of 74% for models with 12 billion parameters or more [13]

Comparison with Other Methods
- SelfReVision demonstrated clear advantages over other methods like Best-of-N and PaliGemma, with improvements of 60% in most settings compared to modest gains from Best-of-N [17]
- When compared to GPT-4o, SelfReVision's plans had at least a 25% higher win rate for models with 12 billion parameters or more, indicating its effectiveness in enhancing smaller models [17]

Ablation Studies
- The complete Criticize-Revise-Verify (CRV) process showed the strongest performance, with average win rates of 68.3% on the PLACES dataset and 71.9% on the SIMULATION dataset [18]
- Variants of the process showed significant performance drops, emphasizing the importance of the verification step in filtering out suboptimal revisions [18]

Application in Embodied-Agent Tasks
- SelfReVision was tested in challenging scenarios, showing a 26% improvement for the Gemma 12B model and a 17% improvement for the Gemma 27B model in block manipulation tasks [21]
- In hierarchical tasks, SelfReVision plans led to a 70% success rate in generating trajectories, surpassing the 61% success rate of baseline models [21]
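A hedged sketch of the critique-revise-verify loop described above is given below. The prompts and the `vlm.generate(image, prompt)` interface are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of a critique-revise-verify self-improvement loop for a small VLM.
# `vlm.generate(image, prompt)` is an assumed text-generation interface,
# not a real library API; prompts are illustrative only.

def self_revise(vlm, image, task, max_rounds=3):
    """Iteratively improve a plan via self-critique; keep a revision only if verified."""
    plan = vlm.generate(image, f"Propose a step-by-step plan to: {task}")
    for _ in range(max_rounds):
        critique = vlm.generate(
            image, f"Task: {task}\nPlan:\n{plan}\nList concrete flaws in this plan."
        )
        revised = vlm.generate(
            image,
            f"Task: {task}\nPlan:\n{plan}\nCritique:\n{critique}\nWrite an improved plan.",
        )
        verdict = vlm.generate(
            image,
            f"Task: {task}\nPlan A:\n{plan}\nPlan B:\n{revised}\n"
            "Answer 'B' only if Plan B is strictly better and grounded in the image.",
        )
        if verdict.strip().upper().startswith("B"):
            plan = revised   # verification accepted the revision
        else:
            break            # verification filtered out a worse revision; stop early
    return plan
```

The final acceptance check mirrors the ablation finding above: keeping a revision only when the model judges it strictly better is what filters out suboptimal revisions.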
AI Lab's Latest InternSpatial: A VLM Spatial Reasoning Dataset That Significantly Improves Model Capability
具身智能之心· 2025-06-24 14:09
Core Insights
- The article discusses the limitations of current vision-language models (VLMs) in spatial reasoning tasks, highlighting the need for improved datasets and methodologies to enhance performance in various scenarios [3][12].

Dataset Limitations
- Existing spatial reasoning datasets have three main limitations:
  1. Limited scene diversity, focusing primarily on indoor and outdoor environments and lacking diverse contexts like driving and embodied navigation [3].
  2. Restricted instruction formats, supporting only natural language or region masks, which do not encompass the variety of queries found in real-world applications [3].
  3. Lack of multi-view supervision, with over 90% of data focusing on single-image reasoning, failing to model spatiotemporal relationships across views [3].

Evaluation Benchmark
- The InternSpatial-Bench evaluation benchmark includes 6,008 QA pairs across five tasks, assessing position comparison, size comparison, rotation estimation, object counting, and existence estimation [7].
- The benchmark also introduces 1,000 additional QA pairs for multi-view rotation angle prediction [7].

Data Engine Design
- The data engine employs a three-stage automated pipeline:
  1. Annotation generation using existing annotations or SAM2 for mask generation [9].
  2. View alignment to construct a standard 3D coordinate system [9].
  3. Template-based QA generation with predefined task templates [9].

Experimental Results
- Spatial reasoning performance has improved, with InternVL-Spatial-8B showing a 1.8% increase in position comparison accuracy and a 17% increase in object counting accuracy compared to its predecessor [10].
- The model's performance across various tasks demonstrates significant enhancements, particularly in multi-view tasks [10].

Instruction Format Robustness
- Current models exhibit a 23% accuracy drop when using the <box> format, while training with InternSpatial reduces the gap between different formats to within 5% [12].
- However, the automated QA generation struggles to replicate the complexity of natural language, indicating a need for further refinement [12].
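To illustrate the template-based QA generation stage of the data engine, here is a minimal sketch. The annotation fields (name, center, size) and the question templates are invented for illustration and do not reflect the dataset's actual schema or templates.

```python
# Sketch of template-based QA generation from per-object annotations.
# Annotation fields and templates are illustrative assumptions only.
import random

QA_TEMPLATES = {
    "position": "Is the {a} to the left of the {b}?",
    "size": "Which is larger, the {a} or the {b}?",
    "count": "How many instances of '{a}' are visible in the scene?",
}

def generate_qa(objects):
    """Turn object annotations into spatial-reasoning QA pairs via fixed templates."""
    qa_pairs = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            # Position comparison: answer derived from x-coordinates of object centers.
            q = QA_TEMPLATES["position"].format(a=a["name"], b=b["name"])
            ans = "yes" if a["center"][0] < b["center"][0] else "no"
            qa_pairs.append({"question": q, "answer": ans})
            # Size comparison: answer derived from annotated sizes.
            q = QA_TEMPLATES["size"].format(a=a["name"], b=b["name"])
            ans = a["name"] if a["size"] > b["size"] else b["name"]
            qa_pairs.append({"question": q, "answer": ans})
    # Object counting: answer derived from how many annotations share a name.
    names = [o["name"] for o in objects]
    target = random.choice(names)
    qa_pairs.append({
        "question": QA_TEMPLATES["count"].format(a=target),
        "answer": str(names.count(target)),
    })
    return qa_pairs

# Example annotations standing in for real pipeline output:
objs = [{"name": "mug", "center": (0.2, 0.5), "size": 0.1},
        {"name": "laptop", "center": (0.7, 0.4), "size": 0.6}]
print(generate_qa(objs)[:2])
```

Template-derived answers are exact by construction, which also explains the robustness finding above: such questions are easy to scale but hard to make as varied as natural language.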
FindingDory: A Benchmark for Evaluating Memory in Embodied Agents
具身智能之心· 2025-06-22 10:56
Group 1
- The core issue in embodied intelligence is the lack of long-term memory, which limits the ability to process multimodal observational data across time and space [3]
- Current visual language models (VLMs) excel in planning and control tasks but struggle with integrating historical experiences in embodied environments [3][5]
- Existing video QA benchmarks fail to adequately assess tasks requiring fine-grained reasoning, such as object manipulation and navigation [5]

Group 2
- The proposed benchmark includes a task architecture that allows for dynamic environment interaction and memory reasoning validation [4][6]
- A total of 60 task categories are designed to cover spatiotemporal semantic memory challenges, including spatial relations, temporal reasoning, attribute memory, and multi-target recall [7]
- Key technical innovations include a programmatic expansion of task complexity through increased interaction counts and a strict separation of experience collection from interaction phases [9][6]

Group 3
- Experimental results reveal three major bottlenecks in VLM memory capabilities across the 60 tasks: failures in long-sequence reasoning, weak spatial representation, and collapse in multi-target processing [13][14][16]
- The performance of native VLMs declines as the number of frames increases, indicating ineffective utilization of long contexts [20]
- Supervised fine-tuning models show improved performance by leveraging longer historical data, suggesting a direction for VLM refinement [25]

Group 4
- The benchmark represents the first photorealistic embodied memory evaluation framework, covering complex household environments and allowing for scalable assessment [26]
- Future directions include memory compression techniques, end-to-end joint training to address the split between high-level reasoning and low-level execution, and the development of long-term video understanding [26]
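The strict separation of experience collection from the interaction phase can be pictured with the two-phase sketch below. The environment, policy, and agent interfaces are assumptions for illustration rather than the benchmark's real API.

```python
# Sketch of a two-phase memory evaluation: collect experience first,
# then query the agent about it in a separate interaction phase.
# All interfaces (env, policy, agent.answer) are illustrative assumptions.

def collect_experience(env, policy, num_steps):
    """Phase 1: roll out a policy and log observations; no task queries are issued yet."""
    memory = []
    obs = env.reset()
    for _ in range(num_steps):
        action = policy(obs)
        obs = env.step(action)
        memory.append(obs)  # e.g., RGB frames plus agent poses
    return memory

def interaction_phase(agent, memory, tasks):
    """Phase 2: the agent answers memory-dependent tasks using only the logged history."""
    results = {}
    for task in tasks:
        # Longer histories stress long-context use; frame subsampling is a common mitigation.
        answer = agent.answer(frames=memory, question=task["question"])
        results[task["id"]] = (answer == task["ground_truth"])
    return results
```

Growing `num_steps` is one simple way to realize the "programmatic expansion of task complexity through increased interaction counts" described above, since more interactions mean longer histories that the agent must reason over.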