Multimodal Large Language Models (MLLMs)
Beyond NVIDIA's Describe Anything: the Chinese Academy of Sciences and ByteDance jointly propose "GAR", adding another building block to DeepSeek-OCR
36Kr · 2025-10-28 07:26
Core Insights
- DeepSeek-OCR introduced the concept of "Vision as Context Compression," using OCR capabilities to compress documents through images. A collaboration between the Chinese Academy of Sciences and ByteDance proposes "Grasp Any Region" (GAR) as a new approach to explore whether natural images can also serve as a form of text compression [1].

Group 1: GAR Capabilities
- GAR achieves precise region captioning, providing a potential pathway for constructing dense captions for natural images [2].
- GAR possesses three main capabilities: accurate description of user-specified regions, modeling relationships between multiple regions, and performing complex combinatorial reasoning [5][6].

Group 2: Comparison with Existing Models
- GAR demonstrates superior performance in accurately understanding user-specified regions compared to existing models like DAM, which often misidentify objects [9][40].
- GAR can accurately identify and describe very small objects, showcasing its detailed understanding capabilities [11][16].

Group 3: Technical Innovations
- The GAR model integrates fine-grained understanding of specified regions while retaining global context, achieved through a novel prompt encoding scheme and Region of Interest (RoI)-aligned feature replay (see the code sketch after this summary) [25][28].
- This design allows the model to focus on details without neglecting overall context, which is crucial for accurate reasoning about complex relationships between objects [27][30].

Group 4: Data and Training
- GAR was trained on a large-scale, high-quality dataset, including 456,000 fine-grained descriptions and 414,000 samples for relational understanding [30][35].
- The training process leveraged the Panoptic Scene Graph dataset to enhance multi-region relational reasoning capabilities [32].

Group 5: Benchmark Performance
- GAR-8B achieved a score of 59.9 on the GAR-Bench-VQA test set, outperforming advanced models like GPT-4o and approaching the performance of top reasoning models [39].
- On the GAR-Bench-Cap test set, GAR-1B and GAR-8B scored 57.5 and 62.2, respectively, indicating their leading position in generating detailed and accurate local descriptions [41].

Group 6: Applications and Future Potential
- GAR can serve as a data engine for training multimodal understanding models, enhancing instruction-following in text-to-image or text-to-video models, and providing precise descriptions for editing tasks [47].
- The model's open-source release and support for local deployment via Gradio make it accessible for a range of applications [48].
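The RoI-aligned feature replay mentioned under Group 3 can be pictured as pooling the prompted region from the global vision feature map and appending the pooled features as extra visual tokens. Below is a minimal, hypothetical sketch of that idea using torchvision's `roi_align`; the tensor shapes, box format, and token concatenation are assumptions for illustration, not GAR's actual implementation.

```python
# Hypothetical sketch of RoI-aligned "feature replay": region features are
# pooled from the global vision feature map and appended as extra tokens,
# so the LLM sees both global context and the prompted region in detail.
import torch
from torchvision.ops import roi_align

B, C, H, W = 1, 1024, 24, 24               # ViT patch grid reshaped to a 2D map
feature_map = torch.randn(B, C, H, W)      # stand-in for global image features

# One user-specified region per image, in feature-map coordinates:
# (batch_index, x1, y1, x2, y2)
boxes = torch.tensor([[0, 4.0, 6.0, 12.0, 14.0]])

# Replay the region at a fixed resolution so small objects keep detail.
region_feats = roi_align(feature_map, boxes, output_size=(7, 7), spatial_scale=1.0)

# Flatten pooled features into extra visual tokens for the language model.
global_tokens = feature_map.flatten(2).transpose(1, 2)    # [B, H*W, C]
region_tokens = region_feats.flatten(2).transpose(1, 2)   # [num_boxes, 49, C]
visual_tokens = torch.cat([global_tokens, region_tokens], dim=1)
print(visual_tokens.shape)  # torch.Size([1, 625, 1024]) = 576 global + 49 region tokens
```

In this framing, the language model receives both coarse global tokens and a high-resolution "replay" of the user-specified region, which is what lets very small objects remain describable.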
Are large models "failing" at embodied reasoning? 4,496 questions comprehensively reveal their shortcomings
机器之心· 2025-10-28 00:41
Core Insights
- The article focuses on evaluating multimodal large language models (MLLMs) on embodied-intelligence tasks, providing detailed failure analysis and proposing an agent algorithm for improvement [25].

Group 1: Embodied Intelligence and MLLMs
- Embodied intelligence refers to an agent completing a closed loop of perception, understanding, and decision-making in an environment, relying on a range of skills [2].
- Many strong works have deployed MLLMs in different embodied-intelligence applications, but evaluations have mainly focused on subfields such as pointing and spatial reasoning [2][4].

Group 2: BEAR Benchmark
- The BEAR benchmark was proposed by Northeastern University in collaboration with other institutions to systematically evaluate MLLMs across sub-capabilities, providing detailed error analysis and algorithmic enhancements [4].
- BEAR includes 4,469 image-video-text VQA tasks covering six major categories, five foundational and a sixth long-range reasoning category, broken down into 14 different skills [8][9].

Group 3: Evaluation Results
- The evaluation covered 20 different MLLMs and revealed that the best-performing model, GPT-5, achieved only a 52% success rate on the BEAR benchmark (a minimal per-skill accuracy sketch follows this summary) [11].
- Closed-source models generally performed better than open-source models, although some open-source models such as the InternVL series showed strong potential, outperforming models like GPT-4o and Claude [11].

Group 4: Error Analysis
- A fine-grained error analysis of GPT-4o revealed that the model's visual capabilities are a major bottleneck across multiple categories, particularly in language grounding and trajectory understanding [19].
- The analysis showed that 88% of errors in long-range reasoning were attributable to lower-level perception and spatial reasoning issues [19].

Group 5: BEAR-Agent Development
- The authors developed BEAR-Agent, a multimodal agent designed to enhance visual reasoning by providing tools and drawing auxiliary lines, significantly improving performance on the BEAR benchmark [17].
- Both the best open-source model (InternVL3-14B) and the closed-source model (GPT-5) improved significantly with the integration of BEAR-Agent [17].

Group 6: Simulation Testing
- Further experiments in a desktop manipulation environment demonstrated that BEAR-Agent improved the performance of MOKA by 20.17%, indicating its potential for embodied agents [21].
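To make the per-category success-rate figures concrete, here is a minimal sketch of how per-skill accuracy on a BEAR-style VQA benchmark could be aggregated. The sample schema, category names, and the `predict` stub are placeholders rather than the released evaluation code.

```python
# Minimal sketch of per-skill accuracy aggregation on a BEAR-style VQA benchmark.
from collections import defaultdict

def evaluate(samples, predict):
    """samples: iterable of dicts with 'category', 'question', 'answer'."""
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        pred = predict(s["question"])          # call the MLLM under test
        total[s["category"]] += 1
        correct[s["category"]] += int(pred.strip().lower() == s["answer"].strip().lower())
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_category, overall

# Toy usage with a trivial predictor standing in for an MLLM.
demo = [
    {"category": "pointing", "question": "Which object is red?", "answer": "cup"},
    {"category": "spatial reasoning", "question": "Is the cup left of the box?", "answer": "yes"},
]
print(evaluate(demo, lambda q: "cup"))
```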
NeurIPS 2025 | Breaking closed-source multimodal large models: a novel adversarial attack method based on optimal feature alignment
机器之心· 2025-10-17 04:09
Core Insights
- The article discusses the advancements and security vulnerabilities of multimodal large language models (MLLMs), particularly their susceptibility to adversarial attacks [2][8].
- It introduces a novel attack framework called FOA-Attack, which enhances the transferability of adversarial samples across models by optimizing feature alignment at both the global and local levels [3][11].

Group 1: Background and Motivation
- MLLMs such as GPT-4 and Claude-3 exhibit exceptional performance in tasks such as image understanding and visual question answering, but they inherit vulnerabilities from their visual encoders, making them prone to adversarial attacks [8][10].
- Adversarial attacks can be categorized as non-targeted (aiming to produce any incorrect output) or targeted (aiming for a specific output), with the latter being particularly challenging in black-box scenarios where model internals are inaccessible [10][11].

Group 2: FOA-Attack Framework
- FOA-Attack employs a dual-dimensional alignment strategy, using a cosine similarity loss on global [CLS] tokens and clustering plus optimal transport on local patch tokens to improve transferability (a simplified loss sketch follows this summary) [6][11].
- The framework includes a dynamic weight integration strategy that adapts the influence of multiple models during attack generation, enhancing the overall effectiveness of the attack [6][11].

Group 3: Experimental Results
- FOA-Attack significantly outperforms existing state-of-the-art methods on both open-source and closed-source MLLMs, achieving high success rates, particularly against commercial closed-source models like GPT-4 [4][19].
- In experiments, FOA-Attack achieved an attack success rate (ASR) of 75.1% against GPT-4, demonstrating its effectiveness in real-world settings [19][24].

Group 4: Conclusion and Future Directions
- The findings highlight the vulnerabilities of current MLLMs in the visual encoding stage and suggest new defensive strategies, particularly fortifying local-feature robustness [24][25].
- The authors have made the paper and code publicly available for further exploration and discussion [25][27].
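The dual-dimensional alignment in Group 2 can be sketched as a loss with a global term on [CLS] features and a local term on patch features. The snippet below is a simplified stand-in: the paper uses clustering plus optimal transport for the local term, whereas this sketch uses plain best-match cosine similarity, so treat it as an assumption-laden outline rather than FOA-Attack itself.

```python
# Simplified sketch of dual (global + local) feature alignment for a transfer attack.
import torch
import torch.nn.functional as F

def dual_alignment_loss(adv_feats, tgt_feats):
    """adv_feats/tgt_feats: dicts with 'cls' [B, D] and 'patch' [B, N, D]."""
    # Global alignment: pull the adversarial [CLS] feature toward the target's.
    global_loss = 1 - F.cosine_similarity(adv_feats["cls"], tgt_feats["cls"], dim=-1).mean()

    # Local alignment (simplified): match each adversarial patch token to its
    # most similar target patch token and maximize that similarity.
    a = F.normalize(adv_feats["patch"], dim=-1)
    t = F.normalize(tgt_feats["patch"], dim=-1)
    sim = torch.bmm(a, t.transpose(1, 2))          # [B, N, N] pairwise similarities
    local_loss = 1 - sim.max(dim=-1).values.mean()

    return global_loss + local_loss

# Toy usage: in a real attack these features would come from surrogate encoders
# and the loss would be backpropagated into the image perturbation.
rand_feats = lambda: {"cls": torch.randn(2, 512), "patch": torch.randn(2, 49, 512)}
print(dual_alignment_loss(rand_feats(), rand_feats()))
```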
The scene stays still while the viewer moves: how do MLLMs handle a real world where the view changes with every step? OST-Bench reveals the shortcomings of multimodal large models in online spatiotemporal understanding
36Kr · 2025-10-14 08:54
Core Insights
- The introduction of OST-Bench presents a new challenge for multimodal large language models (MLLMs) by focusing on dynamic, online scene understanding, in contrast to traditional offline benchmarks [1][3][12].
- OST-Bench emphasizes the need for models to perform real-time perception, memory maintenance, and spatiotemporal reasoning based on continuous local observations [3][4][12].

Benchmark Characteristics
- OST-Bench is designed to reflect real-world challenges more faithfully than previous benchmarks, with two main characteristics: an online setting requiring real-time processing, and cross-temporal understanding that integrates current and historical information [3][4][12].
- The benchmark categorizes dynamic scene understanding into three information types: agent spatial state, visible information, and agent-object spatial relationships, leading to 15 sub-tasks [7][12].

Experimental Results
- The performance of various models on OST-Bench reveals significant gaps between current MLLMs and human-level performance, particularly on complex spatiotemporal reasoning tasks [12][21].
- Models such as Claude-3.5-Sonnet and GPT-4.1 show varying degrees of success across tasks, with human performance significantly higher than that of the models [9][10][12].

Model Limitations
- Current MLLMs tend to take shortcuts in reasoning, often relying on limited information rather than comprehensive spatiotemporal integration, a behavior termed the "spatio-temporal reasoning shortcut" [15][18].
- The study finds that models struggle in long-sequence online settings, indicating a need for better mechanisms for complex spatial reasoning and long-term memory retrieval [12][21].

Future Directions
- The findings from OST-Bench suggest that enhancing complex spatial reasoning and long-term memory mechanisms will be crucial for the next generation of multimodal models to achieve real-world intelligence [22].
The scene stays still while the viewer moves: how do MLLMs handle a real world where the view changes with every step? OST-Bench reveals the shortcomings of multimodal large models in online spatiotemporal understanding
机器之心· 2025-10-14 06:33
Core Insights
- The article introduces OST-Bench, a new benchmark for evaluating multimodal large language models (MLLMs) in dynamic online environments, emphasizing the challenges of real-world embodied perception and reasoning [2][24].

Group 1: Benchmark Characteristics
- OST-Bench reflects the core challenges of embodied perception in real-world settings, contrasting with traditional offline benchmarks that do not account for dynamic scene exploration [2][7].
- The benchmark assesses models' abilities to perform real-time perception, memory maintenance, and spatiotemporal reasoning based on continuous local observations (an illustrative online-protocol sketch follows this summary) [7][10].
- It includes 15 sub-tasks categorized into judgment, estimation, counting, and temporal localization, with a dataset comprising 10,000 test samples and 50,000 training samples [8][10].

Group 2: Model Performance and Challenges
- Current mainstream MLLMs show significant performance gaps compared to human capabilities, particularly in cross-temporal information reasoning [17].
- Models struggle with complex spatiotemporal reasoning tasks, often resorting to "spatio-temporal reasoning shortcuts" and producing superficial answers without adequate reasoning [18][21].
- Fine-tuning experiments indicate that while models can improve their scores by over 10% with additional training data, they still fail to exceed 50% accuracy on complex reasoning tasks, highlighting the need for better model designs and training strategies [23][24].
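The online setting described above differs from offline benchmarks in that observations arrive as a stream and questions may reference anything seen so far. The sketch below illustrates only that protocol shape; the data structures and `answer_fn` are placeholders, not OST-Bench's actual interface.

```python
# Illustrative sketch of an online evaluation protocol: observations arrive
# one at a time, and the agent must answer questions that can reference both
# the current view and anything it has seen earlier.
from dataclasses import dataclass, field

@dataclass
class OnlineEpisode:
    history: list = field(default_factory=list)   # all observations seen so far

    def step(self, frame, questions, answer_fn):
        """Append the new observation, then answer questions that may need
        cross-temporal reasoning over self.history."""
        self.history.append(frame)
        return [answer_fn(self.history, q) for q in questions]

# Toy usage: the model only ever sees the stream up to the current step,
# unlike offline benchmarks that hand it the whole scene at once.
episode = OnlineEpisode()
answers = episode.step(
    frame={"rgb": "frame_000.png", "pose": (0.0, 0.0, 0.0)},
    questions=["How many chairs have you seen so far?"],
    answer_fn=lambda hist, q: f"(answered from {len(hist)} observations)",
)
print(answers)
```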
Captioning geometry images makes AI smarter: UIUC releases a high-quality, generalizable geometry dataset
机器之心· 2025-09-25 23:54
Core Viewpoint
- The article discusses advancements in multimodal large language models (MLLMs) and introduces a new framework called Geo-Image-Textualization, which addresses limitations in geometric reasoning by ensuring complete alignment between visual and textual information [1][21].

Group 1: Framework and Dataset
- A research team from UIUC has proposed a reinforcement learning-based data generation and optimization framework called Geo-Image-Textualization, along with the first fully aligned, high-quality geometric image-text dataset, GeoReasoning-10K, which contains 10,000 carefully constructed image-description pairs (a toy generation sketch follows this summary) [2][3].
- The GeoReasoning-10K dataset and related code have been made publicly available to promote community development [3][5].

Group 2: Innovations and Performance
- The core innovation of the framework is a generation process for image-caption-question/answer data, which improves the model's performance on geometric reasoning tasks [6][8].
- The trained model demonstrates strong generalization, performing well not only on geometric tasks but also on arithmetic, algebra, and numerical reasoning, even with non-geometric image inputs [8].
- Models trained on GeoReasoning outperform those trained on comparable datasets in downstream tasks and exhibit good scalability [8][12].

Group 3: Experimental Results
- On the authoritative mathematical reasoning benchmarks MathVista and MathVerse, GeoReasoning-10K achieved the best results among geometric captioning datasets, showcasing superior data quality and extensibility [12][14].
- The article presents specific examples from the MathVista benchmark, illustrating the model's ability to solve complex geometric problems effectively [16][21].

Group 4: Future Implications
- The Geo-Image-Textualization framework and the GeoReasoning-10K dataset offer a new approach to overcoming bottlenecks in geometric reasoning, enhancing the overall mathematical reasoning capabilities of AI models and paving the way for applications in education and scientific computation [21][22].
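The alignment-by-construction idea behind Geo-Image-Textualization can be illustrated by generating a figure and its caption from the same symbolic program, so the text can never drift from the image. The toy example below is an invented construction, not the GeoReasoning-10K pipeline.

```python
# Toy sketch: build a geometric figure and its caption from one symbolic
# construction, so image and text are aligned by design.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def make_triangle_sample(a=(0, 0), b=(4, 0), c=(1, 3), path="triangle.png"):
    xs, ys = zip(a, b, c, a)                 # closed triangle outline
    fig, ax = plt.subplots()
    ax.plot(xs, ys)
    ax.set_aspect("equal")
    for name, (x, y) in zip("ABC", (a, b, c)):
        ax.annotate(name, (x, y))            # label each vertex
    fig.savefig(path)
    plt.close(fig)
    caption = (f"Triangle ABC with A={a}, B={b}, C={c}; "
               f"AB lies on the x-axis and has length {b[0] - a[0]}.")
    return path, caption

print(make_triangle_sample()[1])
```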
Latest from XJTLU & HKUST: a survey of large foundation models for trajectory prediction
自动驾驶之心· 2025-09-24 23:33
Core Insights
- The article discusses the application of large language models (LLMs) and multimodal large language models (MLLMs) in the paradigm shift for autonomous-driving trajectory prediction, improving the understanding of complex traffic scenarios to enhance safety and efficiency [1][20].

Summary by Sections

Introduction and Overview
- Integrating LLMs into autonomous driving systems allows for a deeper understanding of traffic scenarios, transitioning from traditional methods to approaches based on large foundation models (LFMs) [1].
- Trajectory prediction is identified as a core technology in autonomous driving, using historical data and contextual information to infer the future movements of traffic participants [5].

Traditional Methods and Challenges
- Traditional vehicle trajectory prediction methods include physics-based approaches (e.g., Kalman filters) and machine learning methods (e.g., Gaussian processes), which struggle with complex interactions [8].
- Deep learning methods improve long-term prediction accuracy but face challenges such as high computational demands and poor interpretability [9].
- Reinforcement learning methods excel at modeling interactive scenes but are complex and unstable to train [9].

LLM-Based Vehicle Trajectory Prediction
- LFMs introduce a paradigm shift by discretizing continuous motion states into symbolic sequences, leveraging LLMs' semantic modeling capabilities [11].
- Key applications of LLMs include trajectory-language mapping, multimodal fusion, and constraint-based reasoning, enhancing interpretability and robustness in long-tail scenarios [11][13].

Evaluation Metrics and Datasets
- The article categorizes datasets for pedestrian and vehicle trajectory prediction, highlighting the importance of datasets such as Waymo and ETH/UCY for evaluating model performance [16].
- Evaluation metrics for vehicles include L2 distance and collision rate, while pedestrian metrics focus on minADE and minFDE (defined in the sketch after this summary) [17].

Performance Comparison
- A performance comparison of various models on the nuScenes dataset shows that LLM-based methods significantly reduce collision rates and improve long-term prediction accuracy [18].

Discussion and Future Directions
- The widespread application of LFMs indicates a shift from local pattern matching to global semantic understanding, enhancing safety and compliance in trajectory generation [20].
- Future research should focus on developing low-latency inference techniques, constructing motion-oriented foundation models, and advancing world-perception and causal-reasoning models [21].
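For reference, the pedestrian metrics minADE and minFDE named above are standard: minADE is the lowest average displacement error over K predicted trajectories, and minFDE is the lowest final-step error. The sketch below uses the common (K, T, 2) shape convention and is a generic definition, not code from any surveyed model.

```python
# Generic definitions of minADE and minFDE for multimodal trajectory prediction.
import numpy as np

def min_ade_fde(preds, gt):
    """preds: [K, T, 2] candidate trajectories; gt: [T, 2] ground truth."""
    errors = np.linalg.norm(preds - gt[None], axis=-1)   # [K, T] per-step L2 error
    min_ade = errors.mean(axis=1).min()                  # best average error over modes
    min_fde = errors[:, -1].min()                        # best final-step error over modes
    return min_ade, min_fde

preds = np.random.randn(6, 12, 2)      # 6 predicted modes, 12 future steps
gt = np.random.randn(12, 2)
print(min_ade_fde(preds, gt))
```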
ICCV 2025 | ECD: a high-quality synthetic chart dataset that improves open-source MLLMs' chart understanding
机器之心· 2025-08-21 13:08
Core Viewpoint
- The article discusses the development of the Effective Chart Dataset (ECD), a high-quality synthetic chart dataset aimed at improving chart understanding in multimodal large language models (MLLMs) [4][6][25].

Background and Motivation
- In fields such as scientific research and data analysis, charts are essential for conveying information. MLLMs must accurately identify and understand chart elements and perform deep reasoning over chart data, yet current MLLMs struggle with difficult scientific chart understanding, reaching only 30%-50% accuracy [4][6].

Dataset Highlights
- ECD is introduced as a large-scale, high-quality synthetic chart dataset with a modular data synthesis pipeline and a comprehensive evaluation benchmark called ECDBench [6][10].
- ECD includes over 10,500 charts covering 25 themes and 29 chart types, with 252 subplot combinations, making it the most extensive dataset in its category [12][10].

Quality and Diversity
- The dataset contains over 300,000 question-answer pairs generated by GPT-4o, with quality ensured through confidence filtering. Examples include descriptive and reasoning questions about the charts [10][11].
- ECD achieves the lowest Fréchet Inception Distance (FID) score, indicating high visual similarity to real scientific charts, and has a higher average pixel entropy than other synthetic datasets, suggesting greater complexity and information content (a sketch of the entropy statistic follows this summary) [13][10].

Data Synthesis Process
- The five-stage modular data synthesis pipeline comprises single-chart generation, multi-subplot combination, visual diversity enhancement, image quality filtering, and question-answer pair generation [15][16].

Model Performance Comparison
- ECD significantly improves the performance of various open-source MLLMs when they are fine-tuned on the dataset. For instance, LLaVA-Next-Llama3-8B showed substantial gains across multiple test sets after training with ECD [17][23].

Evaluation Benchmark
- ECDBench is established as a high-quality benchmark for assessing MLLM performance before and after fine-tuning with ECD, and it provides comprehensive statistics for model evaluation [21][25].

Conclusion
- ECD and ECDBench provide a solid foundation for advancing multimodal reasoning, scientific AI assistants, and automated chart generation, enhancing MLLMs' ability to understand complex chart data [25].
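The average-pixel-entropy statistic cited above is, in its usual form, the Shannon entropy of an image's grayscale histogram. The sketch below follows that standard definition and may differ in detail from the ECD paper's exact computation.

```python
# Shannon entropy of a grayscale image's pixel histogram, in bits.
import numpy as np

def pixel_entropy(gray_image, bins=256):
    """gray_image: 2D array of uint8 pixel values."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Toy usage: a flat image has near-zero entropy, a noisy one is close to 8 bits.
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(pixel_entropy(flat), pixel_entropy(noisy))
```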
X-SAM: from "segment anything" to "any segmentation": a unified multimodal large model for image segmentation, achieving SoTA on 20+ image segmentation datasets
机器之心· 2025-08-19 06:33
Core Viewpoint
- The article discusses the development of X-SAM, a unified multimodal large language model for image segmentation that extends existing models with pixel-level understanding and interaction through visual prompts [4][26].

Background and Motivation
- The Segment Anything Model (SAM) excels at dense segmentation mask generation but is limited by its reliance on a single input mode, hindering its applicability across varied segmentation tasks [4].
- Multimodal large language models (MLLMs) have shown promise in tasks such as image description and visual question answering, but they remain fundamentally restricted in handling pixel-level visual tasks, which limits the development of generalized models [4].

Method Design
- X-SAM introduces a unified framework that extends the segmentation paradigm from "segment anything" to "any segmentation" by incorporating visual grounded segmentation (VGS) tasks [4].
- The model employs a dual-projector architecture to enhance image understanding and a segmentation connector that provides rich multi-scale information for segmentation (a schematic sketch follows this summary) [11][12].
- X-SAM uses a three-stage progressive training strategy to optimize performance across diverse image segmentation tasks: segmentor fine-tuning, alignment pre-training, and mixed fine-tuning [16][22].

Experimental Results
- X-SAM has been evaluated on more than 20 segmentation datasets, achieving state-of-the-art performance across seven different image segmentation tasks [19].
- Its metrics show significant improvements over existing models on various segmentation tasks, demonstrating the model's versatility and effectiveness [20][21].

Summary and Outlook
- X-SAM represents a significant advance in image segmentation and establishes a foundation for future research on video segmentation and the integration of temporal information [26].
- Future directions include extending the model to video segmentation tasks, potentially advancing video understanding technology [26].
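The dual-projector design in Method Design can be pictured as one projection into the LLM's token space for understanding and a separate multi-scale path feeding the segmentation side. The schematic below is only a guess at the shape of that design; module names, dimensions, and the number of scales are assumptions, not X-SAM's released code.

```python
# Schematic sketch of a dual-projector layout: one path projects vision
# features into LLM token space, another projects multi-scale features for a
# downstream mask decoder.
import torch
import torch.nn as nn

class DualProjector(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096, seg_dim=256):
        super().__init__()
        self.to_llm = nn.Linear(vis_dim, llm_dim)              # understanding path
        self.to_seg = nn.ModuleList(                            # multi-scale segmentation path
            [nn.Linear(vis_dim, seg_dim) for _ in range(3)]
        )

    def forward(self, multi_scale_feats):
        """multi_scale_feats: list of 3 tensors, each [B, N_i, vis_dim]."""
        llm_tokens = self.to_llm(multi_scale_feats[-1])          # coarsest scale to the LLM
        seg_feats = [proj(f) for proj, f in zip(self.to_seg, multi_scale_feats)]
        return llm_tokens, seg_feats

feats = [torch.randn(1, n, 1024) for n in (4096, 1024, 256)]
llm_tokens, seg_feats = DualProjector()(feats)
print(llm_tokens.shape, [f.shape for f in seg_feats])
```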
Latest from Mu Yao's team: RoboTwin 2.0, a scalable data benchmark for robust bimanual manipulation
自动驾驶之心· 2025-06-24 12:41
Core Insights
- The article discusses RoboTwin 2.0, a scalable data generation framework designed to improve bimanual robotic manipulation through robust domain randomization and automated expert data generation [2][6][18].

Group 1: Motivation and Challenges
- Existing synthetic datasets for bimanual robotic manipulation are insufficient, facing challenges such as the lack of efficient data generation methods for new tasks and overly simplified simulation environments [2][5].
- RoboTwin 2.0 addresses these challenges with a scalable simulation framework that supports automatic, large-scale generation of diverse and realistic data [2][6].

Group 2: Key Components of RoboTwin 2.0
- RoboTwin 2.0 integrates three key components: an automated expert data generation pipeline, comprehensive domain randomization, and entity-aware adaptation for diverse robotic platforms [6][18].
- The automated expert data generation pipeline uses multimodal large language models (MLLMs) and simulation feedback to iteratively optimize task-execution code [10][12].

Group 3: Domain Randomization
- Domain randomization is applied across five dimensions: clutter, background texture, lighting conditions, desktop height, and diverse language instructions, enhancing the robustness of policies against environmental variability (see the sampler sketch after this summary) [12][13].
- The framework provides a large object library (RoboTwin-OD) with 731 instances across 147 categories, each annotated with semantic and operational labels [3][18].

Group 4: Data Collection and Benchmarking
- Over 100,000 dual-arm manipulation trajectories were collected across 50 tasks, supporting extensive benchmarking and evaluation of robotic policies [24][22].
- The framework allows flexible entity configurations, ensuring compatibility with diverse hardware setups and promoting scalability to future robotic platforms [20][22].

Group 5: Experimental Analysis
- Evaluations show that RoboTwin 2.0 significantly improves task success rates, particularly for low-degree-of-freedom platforms, with an average increase of 8.3% [29][31].
- The framework's data enhances the generalization of trained models, yielding substantial performance improvements in unseen scenarios [32][34].
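The five randomization axes listed under Group 3 can be sampled independently for each generated episode. The sampler below is purely illustrative; the field names and value ranges are invented for this sketch, and RoboTwin 2.0's own configuration schema may differ.

```python
# Illustrative per-episode sampler over five domain-randomization axes:
# clutter, background texture, lighting, table height, and instruction phrasing.
import random

def sample_domain_randomization(rng=random):
    return {
        "num_clutter_objects": rng.randint(0, 8),
        "background_texture": rng.choice(["wood", "marble", "fabric", "metal"]),
        "light_intensity": rng.uniform(0.4, 1.6),
        "table_height_m": rng.uniform(0.70, 0.85),
        "instruction": rng.choice([
            "stack the red block on the blue block",
            "place the red cube onto the blue one",
            "put the red block on top of the blue block",
        ]),
    }

# Each episode gets an independent draw, so learned policies are exposed to
# broad environmental variability during training.
print(sample_domain_randomization())
```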