Mamba Architecture

GMF-Drive: The First Mamba-Based End-to-End SOTA Solution, Comprehensively Surpassing DiffusionDrive
理想TOP2 · 2025-08-18 12:43
Core Insights
- The article discusses advancements in end-to-end autonomous driving, emphasizing the importance of multi-modal fusion architectures and introducing GMF-Drive, a new framework that improves upon existing methods [3][4][44].

Group 1: End-to-End Autonomous Driving
- End-to-end autonomous driving has gained widespread acceptance because it maps raw sensor inputs directly to driving actions, reducing reliance on intermediate representations and the information loss they incur [3].
- Recent models such as DiffusionDrive and GoalFlow demonstrate strong capability in generating diverse, high-quality driving trajectories [3].

Group 2: Multi-Modal Fusion Challenges
- A key bottleneck in current systems is the integration of heterogeneous inputs from different sensors; existing methods often rely on simple feature concatenation rather than structured information integration [4][6].
- Current multi-modal fusion architectures such as TransFuser show limited performance gains over single-modal architectures, indicating the need for more sophisticated integration methods [6].

Group 3: GMF-Drive Overview
- GMF-Drive, developed by teams from the University of Science and Technology of China and China University of Mining and Technology, comprises three modules aimed at enhancing multi-modal fusion for autonomous driving [7].
- The framework combines a gated Mamba fusion approach with a spatially aware BEV representation, addressing the limitations of traditional transformer-based methods [7][44].

Group 4: Innovations in Data Representation
- The article introduces a 14-dimensional pillar representation that retains critical 3D geometric features, enhancing the model's perception capability [16][19].
- This representation captures local surface geometry and height variation, allowing the model to differentiate between objects with similar point densities but different structures [19].
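The article does not enumerate the exact 14 channels of GMF-Drive's pillar representation, but the idea can be sketched: points are binned into BEV pillars, and each point is augmented with offsets to the pillar centroid and geometric center plus pillar-level height statistics that encode local surface geometry. The channel layout below (and the `pillar_features` helper name) is an illustrative assumption, not the paper's specification.

```python
import numpy as np

def pillar_features(points, pillar_size=0.5):
    """Group LiDAR points into BEV pillars and build a 14-D per-point
    feature vector. Hypothetical channel layout (not from the paper):
    raw (x, y, z, intensity), offsets to the pillar centroid, offsets
    to the pillar's geometric center, relative height, pillar height
    statistics (mean/std/max of z), and log point count.
    """
    # Assign each point to a pillar cell on the BEV grid.
    ij = np.floor(points[:, :2] / pillar_size).astype(np.int64)
    groups = {}
    for idx, key in enumerate(map(tuple, ij.tolist())):
        groups.setdefault(key, []).append(idx)

    feats = {}
    for key, idxs in groups.items():
        pts = points[idxs]                        # (n, 4): x, y, z, intensity
        centroid = pts[:, :3].mean(axis=0)        # arithmetic mean of points
        center_xy = (np.array(key) + 0.5) * pillar_size  # pillar center
        z = pts[:, 2]
        stats = np.array([z.mean(), z.std(), z.max()])   # height statistics
        feats[key] = np.concatenate([
            pts,                                  # 4 raw channels
            pts[:, :3] - centroid,                # 3 offsets to centroid
            pts[:, :2] - center_xy,               # 2 offsets to pillar center
            (z - z.mean())[:, None],              # 1 relative height
            np.broadcast_to(stats, (len(pts), 3)),  # 3 pillar height stats
            np.full((len(pts), 1), np.log(len(pts))),  # 1 density channel
        ], axis=1)                                # total: 14 channels
    return feats
```

The height-variance and relative-height channels are what let such a representation separate objects with similar point densities but different vertical structure, as the article describes.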
Group 5: GM-Fusion Module
- The GM-Fusion module integrates multi-modal features through gated channel attention, BEV-SSM, and hierarchical deformable cross-attention, achieving linear complexity while maintaining long-range dependency modeling [19][20].
- The module's design enables effective spatial dependency modeling and improved feature alignment between camera and LiDAR data [19][40].

Group 6: Experimental Results
- GMF-Drive achieved a PDMS score of 88.9 on the NAVSIM benchmark, outperforming the previous best model, DiffusionDrive, by 0.8 points and demonstrating the effectiveness of the GM-Fusion architecture [29][30].
- The framework also showed significant improvements on key sub-metrics such as driving-area compliance and vehicle progression rate, indicating enhanced safety and efficiency [30][31].

Group 7: Conclusion
- The article concludes that GMF-Drive represents a significant advance in autonomous-driving frameworks by effectively combining geometric representations with spatially aware fusion techniques, achieving new performance benchmarks [44].
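The gated channel attention mentioned in Group 5 can be sketched as follows: a global channel descriptor pooled from both modalities drives a sigmoid gate that decides, per channel, how much to trust the camera versus the LiDAR BEV features. This is a minimal single-layer sketch under assumed shapes; GM-Fusion's actual gating network, and the function and parameter names here, are not specified by the article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_channel_fusion(cam_bev, lidar_bev, w, b):
    """Fuse camera and LiDAR BEV feature maps with a channel-wise gate.

    cam_bev, lidar_bev: (C, H, W) feature maps.
    w: (C, 2C) projection, b: (C,) bias -- a hypothetical one-layer gate.

    A pooled descriptor from both modalities produces g in (0, 1)^C,
    and each channel is blended as: fused = g * cam + (1 - g) * lidar.
    """
    cam_desc = cam_bev.mean(axis=(1, 2))      # (C,) global average pool
    lidar_desc = lidar_bev.mean(axis=(1, 2))  # (C,)
    g = sigmoid(w @ np.concatenate([cam_desc, lidar_desc]) + b)  # (C,)
    g = g[:, None, None]                      # broadcast gate over H, W
    return g * cam_bev + (1.0 - g) * lidar_bev
```

Because the gate is computed once per channel from pooled descriptors, this step costs O(C^2) regardless of spatial resolution, which is consistent with the linear-complexity goal the article attributes to the overall module.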