Autonomous Driving World Model
SJTU's OmniNWM: An "Omniscient" World Model Pushing the Limits of 3D Driving Simulation
自动驾驶之心· 2025-10-24 16:03
Core Insights
- The article covers OmniNWM, a panoramic, multi-modal driving navigation world model that significantly surpasses existing state-of-the-art (SOTA) models in generation quality, control precision, and long-horizon stability, setting a new benchmark for simulation training and closed-loop evaluation in autonomous driving [2][58].
Group 1: OmniNWM Features
- OmniNWM integrates state generation, action control, and reward evaluation into a unified framework, addressing the limitations of existing models that rely on single-modality RGB video and sparse action encoding [10][11].
- The model uses a Panoramic Diffusion Transformer (PDiT) to jointly generate pixel-aligned outputs across four modalities: RGB, semantics, depth, and 3D occupancy [12][11].
- OmniNWM introduces a normalized Plücker ray-map for action control, enabling pixel-level guidance and improved generalization to out-of-distribution (OOD) trajectories [18][22] (see the ray-map sketch after this summary).
Group 2: Challenges and Solutions
- The article identifies three core challenges for current autonomous driving world models: limited state representations, ambiguous action control, and the lack of an integrated reward mechanism [8][10].
- OmniNWM's state generation overcomes these limitations by capturing the full geometric and semantic complexity of real-world driving scenes [10][11].
- The reward is computed from the generated 3D occupancy, yielding a dense, integrated reward function for evaluating driving behavior [35][36] (see the reward sketch after this summary).
Group 3: Performance Metrics
- OmniNWM generates long video sequences that remain stable beyond the ground-truth length, with demonstrated generation of over 321 frames [31][29].
- The model delivers significant gains in video generation quality, outperforming existing models on metrics such as FID and FVD [51][52].
- An integrated Vision-Language-Action (VLA) planner improves multi-modal scene understanding and outputs high-precision trajectories [43][50].
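As background for the ray-map bullet above, here is a minimal sketch of how a per-pixel Plücker ray-map can be built from camera intrinsics and a camera-to-world pose. The normalization by a scene scale, the pixel-center convention, and the helper name `plucker_ray_map` are illustrative assumptions, not OmniNWM's actual implementation.

```python
# Hypothetical sketch of a per-pixel Plücker ray-map, i.e. a dense 6-channel
# action/pose encoding: unit ray direction d plus moment m = o x d.
import numpy as np

def plucker_ray_map(K: np.ndarray, c2w: np.ndarray, H: int, W: int,
                    scene_scale: float = 1.0) -> np.ndarray:
    """Return an (H, W, 6) map of Plücker coordinates (direction, moment)."""
    # Pixel grid at pixel centers.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (H, W, 3)

    # Back-project to camera-space directions, then rotate into world space.
    dirs_cam = pix @ np.linalg.inv(K).T                        # (H, W, 3)
    R, t = c2w[:3, :3], c2w[:3, 3]
    dirs = dirs_cam @ R.T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)       # unit directions

    # Plücker moment m = o x d; the origin is scaled by an assumed scene extent
    # as a stand-in for the paper's normalization.
    origin = np.broadcast_to(t / scene_scale, dirs.shape)
    moment = np.cross(origin, dirs)
    return np.concatenate([dirs, moment], axis=-1)             # (H, W, 6)
```

Concatenating such 6-channel maps across the panoramic cameras yields a dense, pixel-aligned conditioning signal, which is the general idea behind ray-map-based action control.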
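Similarly, the occupancy-based reward bullet can be grounded with a toy example: a dense reward computed directly from a generated occupancy grid and a candidate trajectory. The bird's-eye grid layout, voxel size, and weighting below are stand-ins, not the paper's reward definition.

```python
# Illustrative occupancy-based reward: reward forward progress, penalize
# trajectory points that land in cells the world model predicts as occupied.
import numpy as np

def occupancy_reward(occ: np.ndarray, traj_xy: np.ndarray,
                     voxel_size: float = 0.5,
                     collision_weight: float = 10.0) -> float:
    """occ: (X, Y) bird's-eye occupancy in {0, 1}; traj_xy: (T, 2) in meters."""
    # Map metric trajectory points to grid indices (clipped to the grid).
    idx = np.clip((traj_xy / voxel_size).astype(int),
                  0, np.array(occ.shape) - 1)
    collisions = occ[idx[:, 0], idx[:, 1]].sum()         # occupied cells hit
    progress = np.linalg.norm(traj_xy[-1] - traj_xy[0])  # distance travelled
    return float(progress - collision_weight * collisions)
```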
Horizon Robotics & Tsinghua's Epona: An Autoregressive End-to-End World Model
自动驾驶之心· 2025-08-12 23:33
Core Viewpoint
- The article presents a unified framework for autonomous driving world models that generates long-horizon, high-resolution video while providing real-time trajectory planning, addressing limitations of existing methods [5][12].
Group 1: Existing Methods and Limitations
- Current diffusion models such as Vista can only generate fixed-length videos (≤15 seconds) and struggle with flexible long-horizon prediction (>2 minutes) and multi-modal trajectory control [7].
- GPT-style autoregressive models such as GAIA-1 can extend rollouts indefinitely but must discretize images into tokens, which degrades visual quality and leaves them unable to generate continuous action trajectories [7][13].
Group 2: Proposed Methodology
- The proposed world model takes a sequence of forward-camera observations and the corresponding driving trajectories and predicts future driving dynamics [10].
- The framework decouples spatiotemporal modeling: causal attention in a GPT-style transformer handles temporal dependencies, while a dual diffusion transformer handles spatial rendering and trajectory generation [12].
- An asynchronous multimodal generation mechanism produces the 3-second trajectory and the next frame in parallel, achieving 20 Hz real-time planning with a 90% reduction in inference compute [12].
Group 3: Model Structure and Training
- The Multimodal Spatiotemporal Transformer (MST) encodes past driving scenes and action sequences, with enhanced temporal position encoding for the implicit representation [16].
- The Trajectory Planning Diffusion Transformer (TrajDiT) and the Next-frame Prediction Diffusion Transformer (VisDiT) handle trajectory and image prediction, respectively, with an emphasis on action control [21].
- A chain-of-forward training strategy mitigates the drift problem of autoregressive inference by simulating prediction noise during training [24] (see the sketch after this summary).
Group 4: Performance Evaluation
- The model achieves strong video generation results, with an FID of 7.5 and an FVD of 82.8, outperforming several existing models [28].
- On trajectory control metrics, the proposed method achieves a high accuracy of 97.9% compared with other methods [34].
Group 5: Conclusion and Future Directions
- The framework integrates high-quality image generation with vehicle trajectory prediction, showing strong potential for closed-loop simulation and reinforcement learning [36].
- The current model is limited to single-camera input, leaving multi-camera consistency and point cloud generation as open challenges for autonomous driving [36].
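To make the chain-of-forward idea concrete, here is a minimal sketch of a training step that conditions later predictions on the model's own outputs rather than ground truth, so training is exposed to the same prediction noise that accumulates during autoregressive inference. The single-step interface `model(frame, action) -> next_frame`, the rollout depth, and the MSE loss are illustrative assumptions, not Epona's actual training code.

```python
# Sketch of a chain-of-forward training step: roll the model forward on its
# own (noisy) predictions and supervise every step against ground truth.
import torch
import torch.nn.functional as F

def chain_of_forward_step(model, frames, actions, rollout_steps: int = 3):
    """frames: (B, T, C, H, W) ground-truth clip; actions: (B, T, A)."""
    current = frames[:, 0]
    loss = 0.0
    for k in range(rollout_steps):
        pred = model(current, actions[:, k])               # predict frame k+1
        loss = loss + F.mse_loss(pred, frames[:, k + 1])
        # Feed the prediction back in, instead of the ground-truth frame,
        # so the model learns to correct its own drift.
        current = pred.detach()
    return loss / rollout_steps
```

In practice the rollout depth can be scheduled to grow over training, and whether gradients flow through the rolled-out predictions (they are detached here to keep memory bounded) is a design choice.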