How can human-like reasoning be injected into one-stage end-to-end driving? HKUST's OmniScene proposes a new paradigm...
自动驾驶之心· 2025-09-25 23:33
Core Insights
- The article discusses the limitations of current autonomous driving systems in achieving true scene understanding and proposes a new framework, OmniScene, which integrates human-like cognitive abilities into the driving process [11][13][14].

Group 1: OmniScene Framework
- OmniScene introduces a visual-language model (OmniVLM) that combines panoramic perception with temporal fusion for comprehensive 4D scene understanding (see the temporal-fusion sketch after this summary) [2][14].
- The framework employs a teacher-student architecture for knowledge distillation, embedding textual representations into 3D instance features to strengthen semantic supervision (see the distillation sketch after this summary) [2][15].
- A hierarchical fusion strategy (HFS) addresses the imbalanced contributions of different modalities during multi-modal fusion by adaptively calibrating geometric and semantic features (see the gated-fusion sketch after this summary) [2][16].

Group 2: Performance Evaluation
- OmniScene was evaluated on the nuScenes dataset, outperforming more than ten mainstream models and setting new benchmarks for perception, prediction, planning, and visual question answering (VQA) [3][16].
- Notably, OmniScene achieved a 21.40% improvement in VQA performance, demonstrating robust multi-modal reasoning [3][16].

Group 3: Human-like Scene Understanding
- The framework aims to replicate human visual processing by continuously converting sensory input into scene understanding and adjusting attention to the dynamics of the driving environment [11][14].
- OmniVLM is designed to process multi-view, multi-frame visual inputs, enabling comprehensive scene perception and attention reasoning [14][15].

Group 4: Multi-modal Learning
- The proposed HFS combines 3D instance representations with multi-view visual inputs and semantic attention derived from textual cues, improving the model's grasp of complex driving scenarios [16][19].
- Integrating the visual and textual modalities is intended to improve contextual awareness and decision-making in dynamic environments [19][20].

Group 5: Challenges and Solutions
- The article highlights challenges in integrating visual-language models (VLMs) into autonomous driving, such as the need for domain-specific knowledge and real-time safety requirements [20][21].
- Proposed solutions include designing driving-attention prompts and developing new end-to-end visual-language reasoning methods for safety-critical driving scenarios [22].
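As a rough illustration of the multi-view, multi-frame processing described above, the PyTorch sketch below fuses features from several surround-view cameras across time with cross-attention from the latest frame. The `TemporalViewFusion` module, its tensor shapes, and the query-from-last-frame choice are assumptions for illustration, not OmniVLM's actual architecture.

```python
# Minimal sketch (assumed, not OmniVLM's code): temporal fusion over multi-view,
# multi-frame image features that a backbone has already extracted.
import torch
import torch.nn as nn

class TemporalViewFusion(nn.Module):
    def __init__(self, feat_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """
        feats: (B, T, V, feat_dim) -- batch, frames, camera views, feature dim.
        The latest frame's per-view features attend over all frames and views.
        """
        B, T, V, D = feats.shape
        tokens = feats.reshape(B, T * V, D)   # flatten time and views into one sequence
        query = feats[:, -1]                  # (B, V, D): current frame as the query
        fused, _ = self.attn(query, tokens, tokens)
        return fused                          # (B, V, D) temporally fused per-view features

# Usage with random tensors standing in for backbone features:
model = TemporalViewFusion()
out = model(torch.randn(2, 4, 6, 256))  # 2 clips, 4 frames, 6 surround cameras
print(out.shape)                        # torch.Size([2, 6, 256])
```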
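The teacher-student distillation of textual representations into 3D instance features can, under one common formulation, be expressed as projecting instance queries into the teacher's text-embedding space and aligning them with a cosine loss. The sketch below shows that generic formulation only; `TextToInstanceDistiller`, the feature dimensions, and the loss choice are assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed): distilling frozen-teacher text embeddings into
# student 3D instance features via a learned projection and cosine alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToInstanceDistiller(nn.Module):
    def __init__(self, instance_dim: int = 256, text_dim: int = 768):
        super().__init__()
        # Project 3D instance features into the teacher's text embedding space.
        self.proj = nn.Linear(instance_dim, text_dim)

    def forward(self, instance_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        """
        instance_feats: (N, instance_dim) student 3D instance queries
        text_feats:     (N, text_dim) teacher text embeddings, one per matched instance
        Returns a scalar distillation loss: 1 - cosine similarity, averaged over instances.
        """
        student = F.normalize(self.proj(instance_feats), dim=-1)
        teacher = F.normalize(text_feats.detach(), dim=-1)  # teacher stays frozen
        return (1.0 - (student * teacher).sum(dim=-1)).mean()

# Usage with random tensors standing in for real features:
distiller = TextToInstanceDistiller()
loss = distiller(torch.randn(32, 256), torch.randn(32, 768))
loss.backward()
```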
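For the adaptive calibration of geometric and semantic contributions, a simple gated fusion is one way to make the idea concrete: a small network predicts per-instance weights for each modality before the two streams are combined. The sketch below is such a generic gated fusion; `GatedModalityFusion` and its layers are illustrative assumptions, not the paper's HFS.

```python
# Minimal sketch (assumed): gated fusion of geometric (3D instance) and
# semantic (text-derived) features with learned per-instance modality weights.
import torch
import torch.nn as nn

class GatedModalityFusion(nn.Module):
    def __init__(self, geo_dim: int = 256, sem_dim: int = 256, out_dim: int = 256):
        super().__init__()
        self.geo_proj = nn.Linear(geo_dim, out_dim)
        self.sem_proj = nn.Linear(sem_dim, out_dim)
        # Gate predicts two weights per instance, one for each modality.
        self.gate = nn.Sequential(
            nn.Linear(geo_dim + sem_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, 2),
            nn.Softmax(dim=-1),
        )

    def forward(self, geo: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        # geo: (N, geo_dim), sem: (N, sem_dim)
        w = self.gate(torch.cat([geo, sem], dim=-1))                         # (N, 2)
        return w[:, :1] * self.geo_proj(geo) + w[:, 1:] * self.sem_proj(sem) # (N, out_dim)

# Usage with random tensors standing in for instance and semantic features:
fusion = GatedModalityFusion()
out = fusion(torch.randn(16, 256), torch.randn(16, 256))
print(out.shape)  # torch.Size([16, 256])
```

The softmax gate keeps the two modality weights on a common scale, which is one straightforward way to counter the imbalance in modality contributions that the summary describes.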