End-to-End Autonomous Driving

The Starting Point of End-to-End VLA: A Look at Large Language Models and CLIP
自动驾驶之心· 2025-08-19 07:20
Core Viewpoint
- The article discusses the development and significance of end-to-end (E2E) algorithms in autonomous driving, emphasizing the integration of advanced technologies such as large language models (LLMs), diffusion models, and reinforcement learning (RL) in enhancing the capabilities of autonomous systems [21][31].

Summary by Sections

Section 1: Overview of End-to-End Autonomous Driving
- The first chapter provides a comprehensive overview of the evolution of end-to-end algorithms, explaining the transition from modular approaches to end-to-end solutions and discussing the advantages and challenges of different paradigms [40].

Section 2: Background Knowledge
- The second chapter focuses on the technical stack associated with end-to-end systems, detailing the importance of LLMs, diffusion models, and reinforcement learning, which are crucial for understanding the future job market in this field [41][42].

Section 3: Two-Stage End-to-End Systems
- The third chapter delves into two-stage end-to-end systems, exploring their emergence, advantages, and disadvantages, while also reviewing notable works in the field such as PLUTO and CarPlanner [42][43].

Section 4: One-Stage End-to-End and VLA
- The fourth chapter highlights one-stage end-to-end systems, discussing subfields including perception-based methods and the latest advancements in VLA (Vision-Language-Action) models, which are pivotal for achieving the ultimate goals of autonomous driving [44][50].

Section 5: Practical Application and RLHF Fine-Tuning
- The fifth chapter includes a major project focused on RLHF (Reinforcement Learning from Human Feedback) fine-tuning, providing practical insights into building pre-training and reinforcement learning modules applicable to VLA-related algorithms [52].

Course Structure and Learning Outcomes
- The course aims to equip participants with a solid understanding of end-to-end autonomous driving technologies, covering essential frameworks and methodologies and preparing them for roles in the industry [56][57].
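The RLHF module described in Section 5 pairs a pre-trained policy with a learned reward signal. A minimal, self-contained sketch of the idea (not the course's code; the candidate count, toy reward scores, and learning rate are all assumptions) is a softmax policy over a few candidate trajectories that is nudged toward the ones a frozen reward model prefers:

```python
import numpy as np

# Toy RLHF-style update: a softmax policy over K candidate trajectories
# is pushed toward the ones a frozen "reward model" scores highly.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.zeros(4)                      # policy over 4 candidates
rewards = np.array([0.1, 0.9, 0.3, 0.2])  # frozen reward-model scores

lr = 1.0
for _ in range(300):
    probs = softmax(logits)
    baseline = probs @ rewards            # variance-reducing baseline
    # Exact gradient of E[R] w.r.t. the logits: p_i * (r_i - baseline).
    logits += lr * probs * (rewards - baseline)

probs = softmax(logits)
print(probs.argmax())  # the highest-reward trajectory dominates
```

The update is the exact policy gradient for a softmax policy with discrete actions; real RLHF pipelines replace the table of rewards with a preference-trained reward model and add a KL penalty toward the pre-trained policy.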
Comprehensively Surpassing DiffusionDrive: GMF-Drive, the World's First Mamba End-to-End SOTA Solution
理想TOP2· 2025-08-18 12:43
Core Insights
- The article discusses advancements in end-to-end autonomous driving, emphasizing the importance of multi-modal fusion architectures and introducing GMF-Drive as a new framework that improves upon existing methods [3][4][44].

Group 1: End-to-End Autonomous Driving
- End-to-end autonomous driving has gained widespread acceptance as it directly maps raw sensor inputs to driving actions, reducing reliance on intermediate representations and information loss [3].
- Recent models like DiffusionDrive and GoalFlow demonstrate strong capabilities in generating diverse and high-quality driving trajectories [3].

Group 2: Multi-Modal Fusion Challenges
- A key bottleneck in current systems is the integration of heterogeneous inputs from different sensors, with existing methods often relying on simple feature concatenation rather than structured information integration [4][6].
- Current multi-modal fusion architectures, such as TransFuser, show limited performance improvements compared to single-modal architectures, indicating a need for more sophisticated integration methods [6].

Group 3: GMF-Drive Overview
- GMF-Drive, developed by teams from the University of Science and Technology of China and China University of Mining and Technology, includes three modules aimed at enhancing multi-modal fusion for autonomous driving [7].
- The framework combines a gated Mamba fusion approach with spatial-aware BEV representation, addressing the limitations of traditional transformer-based methods [7][44].

Group 4: Innovations in Data Representation
- The article introduces a 14-dimensional pillar representation that retains critical 3D geometric features, enhancing the model's perception capabilities [16][19].
- This representation captures local surface geometry and height variations, allowing the model to differentiate between objects with similar point densities but different structures [19].

Group 5: GM-Fusion Module
- The GM-Fusion module integrates multi-modal features through gated channel attention, BEV-SSM, and hierarchical deformable cross-attention, achieving linear complexity while maintaining long-range dependency modeling [19][20].
- The module's design allows for effective spatial dependency modeling and improved feature alignment between camera and LiDAR data [19][40].

Group 6: Experimental Results
- GMF-Drive achieved a PDMS score of 88.9 on the NAVSIM benchmark, outperforming the previous best model, DiffusionDrive, by 0.8 points and demonstrating the effectiveness of the GM-Fusion architecture [29][30].
- The framework also showed significant improvements in key sub-metrics, such as driving-area compliance and vehicle progression rate, indicating enhanced safety and efficiency [30][31].

Group 7: Conclusion
- GMF-Drive represents a significant advancement in autonomous driving frameworks by effectively combining geometric representations with spatially aware fusion techniques, achieving new performance benchmarks [44].
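The 14-dimensional pillar representation is described only at a high level in the summary. A simplified sketch of the idea (the exact feature list is an assumption; this illustrative version emits only 9 per-pillar statistics) shows how per-pillar statistics can retain the height variation and local surface geometry that plain occupancy grids discard:

```python
import numpy as np

# Sketch of a geometry-preserving pillar representation in the spirit
# of GMF-Drive. The paper's pillars are 14-dimensional; since the
# summary does not list the features, this version uses 9 common
# statistics per pillar (an assumption, for illustration only).
def pillar_features(points, grid=0.5):
    """points: (N, 4) array of x, y, z, intensity."""
    keys = np.floor(points[:, :2] / grid).astype(int)
    feats = {}
    for key in {tuple(map(int, k)) for k in keys}:
        p = points[np.all(keys == key, axis=1)]
        feats[key] = np.concatenate([
            p[:, :3].mean(0),                  # centroid (3)
            p[:, :3].std(0),                   # spread per axis (3)
            [p[:, 2].max() - p[:, 2].min()],   # height range (1)
            [p[:, 3].mean()],                  # mean intensity (1)
            [len(p)],                          # point count (1)
        ])
    return feats

pts = np.array([[0.1, 0.1, 0.0, 10.0],
                [0.2, 0.3, 1.5, 12.0],
                [1.1, 0.1, 0.2, 8.0]])
f = pillar_features(pts)
print(sorted(f))  # two occupied pillars: [(0, 0), (2, 0)]
```

The height-range and per-axis-spread features are what let two pillars with the same point count look different, which is the distinction the summary attributes to the representation.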
"Black Sheep" Jueying: How Does It Pave an AI Road for Automakers?
21 Shi Ji Jing Ji Bao Dao· 2025-08-15 10:50
Group 1
- The core viewpoint is that SenseTime's automotive division, Jueying, has the potential to succeed after addressing key challenges in the automotive industry, with plans to expand partnerships with car manufacturers by 2025 [1]
- SenseTime has invested seven years in developing AI technology, aiming to validate its value in the automotive sector [1]
- Jueying plans to develop advanced end-to-end solutions based on NVIDIA's Thor platform, indicating a strategic move towards higher-level AI applications in vehicles [1]

Group 2
- CEO Wang Xiaogang of Jueying was among the first to identify opportunities in the end-to-end field, having collaborated with Honda on an L4 autonomous driving project in 2017 [2]
- That project faced challenges due to computational limitations and limited industry awareness, which delayed its implementation [2]
- Following the release of Tesla's FSD V12, Jueying accelerated its efforts to catch up, showcasing its UniAD end-to-end deployment at the 2024 Beijing Auto Show [2]
- An end-to-end autonomous driving system jointly developed with Dongfeng Motor is set to be realized by the end of this year [2]
The Long-Short Battle Over Robotaxi: Cathie Wood Builds a Stake as Institutions Diverge
Di Yi Cai Jing· 2025-08-15 03:45
Bullish and bearish voices intertwine, pushing autonomous driving technology toward maturity.

Since the start of this year, Robotaxi (autonomous ride-hailing) has drawn broad attention from global capital markets, but skepticism has arrived right on schedule.

Recently, Cathie Wood's ARK funds spent roughly US$12.9 million buying shares of Pony.ai (NASDAQ: PONY), the first time her flagship funds have held a Chinese autonomous driving stock. Wood, dubbed the "female Buffett" on Wall Street, is known for favoring high-growth, high-risk, long-horizon positions.

Another leading Chinese Robotaxi company, WeRide (NASDAQ: WRD), saw its Robotaxi business grow 836.7% year-over-year in the second quarter; as early as May this year, the company disclosed that Uber had committed an additional US$100 million investment.

When this reporter recently tried Baidu's Apollo Go (Luobo Kuaipao) Robotaxi in Guangzhou, peak-hour waits stretched to as long as an hour with no car accepting the order. Asked how many vehicles were operating near the pickup point, Apollo Go customer service replied: "The number of serviceable vehicles in a city is not fixed; it is adjusted dynamically based on multiple factors." According to nearby residents and merchants, peak-hour waits for Apollo Go exceed 40 minutes.

It is undeniable that, at this stage, Robotaxi dispatch and waiting times are both longer than for human-driven ride-hailing, a problem the industry still needs to solve.

Han Xu said that when an autonomous driving company expands into a new city, ...
Which technical directions is autonomous driving focused on now, and how should one get started?
自动驾驶之心· 2025-08-14 23:33
Core Viewpoint
- The article emphasizes the establishment of a comprehensive community for autonomous driving, aiming to bridge communication between enterprises and academic institutions while providing resources and support for individuals interested in the field [1][12].

Group 1: Community and Resources
- The community has organized over 40 technical routes, offering resources for both beginners and advanced researchers in autonomous driving [1][13].
- Members include individuals from renowned universities and leading companies in the autonomous driving sector, fostering a collaborative environment for knowledge sharing [13][21].
- The community provides a complete entry-level technical stack and roadmap for newcomers, as well as valuable industry frameworks and project proposals for those already engaged in research [7][9].

Group 2: Learning and Development
- The community offers a variety of learning routes, including perception, simulation, and planning control, to facilitate quick onboarding for newcomers and further development for those already familiar with the field [13][31].
- Numerous open-source projects and datasets are available, covering areas such as 3D object detection, BEV perception, and world models, which are essential for practical applications in autonomous driving [27][29][35].

Group 3: Job Opportunities and Networking
- The community actively shares job postings and career opportunities, helping members connect with potential employers in the autonomous driving industry [11][18].
- Members can engage in discussions about career choices and research directions, receiving guidance from experienced professionals in the field [77][80].

Group 4: Technical Discussions and Innovations
- The community hosts discussions on cutting-edge topics such as end-to-end driving, multi-modal models, and the integration of various technologies in autonomous systems [20][39][42].
- Regular live sessions with industry leaders allow members to gain insights into the latest advancements and practical applications in autonomous driving [76][80].
Classes Are Officially Open! End-to-End and VLA Autonomous Driving Small-Group Course, Discount Ends Today
自动驾驶之心· 2025-08-13 23:33
Core Viewpoint
- The article emphasizes the significance of VLA (Vision-Language-Action) models as a new milestone in the mass production of autonomous driving technology, highlighting the progressive development from E2E (end-to-end) approaches to VLA and the growing interest from professionals in transitioning into this field [1][11].

Course Overview
- The course, "End-to-End and VLA Autonomous Driving Small-Group Class," aims to provide in-depth knowledge of E2E and VLA algorithms, addressing the challenges faced by individuals looking to transition into this area [1][12].
- The curriculum is designed to cover various aspects of autonomous driving technology, including foundational knowledge, advanced models, and practical applications [5][15].

Course Structure
- **Chapter 1**: Introduction to end-to-end algorithms, covering the historical development and the transition from modular to end-to-end approaches, including the advantages and challenges of each paradigm [17].
- **Chapter 2**: Background knowledge on E2E technology stacks, focusing on key areas such as VLA, diffusion models, and reinforcement learning, which are crucial for future job interviews [18].
- **Chapter 3**: Exploration of two-stage end-to-end methods, discussing notable algorithms and their advantages compared to one-stage methods [18].
- **Chapter 4**: In-depth analysis of one-stage end-to-end methods, including subfields like perception-based and world-model-based approaches, culminating in the latest VLA techniques [19].
- **Chapter 5**: Practical assignment focusing on RLHF (Reinforcement Learning from Human Feedback) fine-tuning, providing hands-on experience with pre-training and reinforcement learning modules [21].

Target Audience and Learning Outcomes
- The course is aimed at individuals with a foundational understanding of autonomous driving and related technologies, such as transformer models and reinforcement learning [28].
- Upon completion, participants are expected to reach a level equivalent to one year of experience as an end-to-end autonomous driving algorithm engineer, mastering various methodologies and being able to apply learned concepts to real-world projects [28].
Comprehensively Surpassing DiffusionDrive! USTC's GMF-Drive: The World's First Mamba End-to-End SOTA Solution
自动驾驶之心· 2025-08-13 23:33
Core Viewpoint
- The article discusses the GMF-Drive framework developed by the University of Science and Technology of China, which addresses the limitations of existing multi-modal fusion architectures in end-to-end autonomous driving by integrating gated Mamba fusion with spatial-aware BEV representation [2][7].

Summary by Sections

End-to-End Autonomous Driving
- End-to-end autonomous driving has gained recognition as a viable solution, directly mapping raw sensor inputs to driving actions and thus minimizing reliance on intermediate representations and information loss [2].
- Recent models like DiffusionDrive and GoalFlow have demonstrated strong capabilities in generating diverse and high-quality driving trajectories [2][8].

Multi-Modal Fusion Challenges
- A key bottleneck in current systems is the multi-modal fusion architecture, which struggles to effectively integrate heterogeneous inputs from different sensors [3].
- Existing methods, primarily in the TransFuser style, often yield limited performance improvements, indicating simple feature concatenation rather than structured information integration [5].

GMF-Drive Framework
- GMF-Drive consists of three modules: a data preprocessing module that enhances geometric information, a perception module utilizing a spatial-aware state space model (SSM), and a trajectory planning module employing a truncated diffusion strategy [7][13].
- The framework aims to retain critical 3D geometric features while improving computational efficiency compared to traditional transformer-based methods [11][16].

Experimental Results
- GMF-Drive achieved a PDMS score of 88.9 on the NAVSIM dataset, outperforming the previous best model, DiffusionDrive, by 0.8 points [32].
- The framework demonstrated significant improvements in key metrics, including a 1.1-point increase in the driving-area compliance (DAC) score and a top score of 83.3 in ego vehicle progression (EP) [32][34].

Component Analysis
- Ablation experiments assessed the contributions of individual components, confirming that the integration of geometric representations and the GM-Fusion architecture is crucial for optimal performance [39][40].
- The GM-Fusion module, which includes gated channel attention, BEV-SSM, and hierarchical deformable cross-attention, significantly enhances the model's ability to process multi-modal data effectively [22][44].

Conclusion
- GMF-Drive represents a novel end-to-end autonomous driving framework that effectively combines geometric-enhanced pillar representation with a spatial-aware fusion model, achieving superior performance compared to existing transformer-based architectures [51].
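The gated channel attention inside GM-Fusion is named but not specified in the summary. The generic gated-fusion pattern it suggests can be sketched as follows (the weight shapes, pooling choice, and toy inputs are assumptions for illustration, not the paper's architecture): a sigmoid gate computed from both modalities decides, per channel, how much camera versus LiDAR signal to keep.

```python
import numpy as np

# Generic gated fusion of camera and LiDAR BEV features, in the
# spirit of "gated channel attention". All shapes are assumptions.
rng = np.random.default_rng(1)
C, H, W = 8, 4, 4                       # channels and BEV grid size
cam   = rng.standard_normal((C, H, W))  # camera BEV features
lidar = rng.standard_normal((C, H, W))  # LiDAR BEV features

# Channel descriptor: global average pool of the concatenated stack.
desc = np.concatenate([cam, lidar]).mean(axis=(1, 2))   # (2C,)
Wg = rng.standard_normal((C, 2 * C)) * 0.1              # gate weights
gate = 1.0 / (1.0 + np.exp(-(Wg @ desc)))               # (C,) in (0, 1)

# Per-channel convex blend of the two modalities.
fused = gate[:, None, None] * cam + (1 - gate)[:, None, None] * lidar
print(fused.shape)  # (8, 4, 4)
```

Because the gate is a convex weight, the fused feature never amplifies either modality, which is one reason gated blending tends to be more stable than plain concatenation followed by a projection.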
A master's student from a non-elite university working on multi-sensor fusion: technical skills are weak and the degree limits algorithm-role options; seeking study advice...
自动驾驶之心· 2025-08-13 13:06
Core Viewpoint
- The article emphasizes the importance of building a supportive community for students and professionals in the autonomous driving field, highlighting the establishment of the "Autonomous Driving Heart Knowledge Planet" as a platform for knowledge sharing and collaboration [6][16][17].

Group 1: Community and Learning Resources
- The "Autonomous Driving Heart Knowledge Planet" aims to provide a comprehensive technical exchange platform for academic and engineering issues related to autonomous driving [17].
- The community has gathered members from renowned universities and leading companies in the autonomous driving sector, facilitating knowledge sharing and collaboration [17].
- The platform offers nearly 40 technical routes and access to over 60 autonomous driving datasets, significantly reducing the time needed for research and learning [17][31][33].

Group 2: Technical Learning Paths
- The community has organized learning paths for beginners, intermediate researchers, and advanced professionals, covering topics such as perception, simulation, and planning control in autonomous driving [11][13][16].
- Specific learning routes include end-to-end learning, multi-modal large models, and occupancy networks, catering to different levels of expertise [17].
- The platform also provides resources for practical implementation, including open-source projects and datasets, to help users get started quickly in the field [31][33].

Group 3: Industry Insights and Networking
- The community facilitates job sharing and career advice, helping members navigate the job market in the autonomous driving industry [15][19].
- Members can engage in discussions about industry trends, job opportunities, and technical challenges, fostering a collaborative environment for professional growth [18][81].
- The platform regularly invites industry experts for live sessions, providing members with insights into the latest advancements and applications in autonomous driving [80].
Traditional Perception Is Gradually Falling Out of Favor. Is VLA Already in Cars?!
自动驾驶之心· 2025-08-13 06:04
Core Viewpoint
- The article discusses the launch of the Li Auto i8, the first model equipped with the VLA driver model, highlighting its advancements in semantic understanding, reasoning, and human-like driving intuition [2][7].

Summary by Sections

VLA Driver Model Capabilities
- The VLA model enhances four core capabilities: spatial understanding, reasoning ability, communication and memory, and behavioral ability [2].
- It can comprehend natural language commands during driving, set specific speeds based on past memories, and navigate complex road conditions while avoiding obstacles [5].

Industry Trends and Educational Initiatives
- The VLA model represents a new milestone in the mass production of autonomous driving technology, prompting many professionals from traditional fields to seek a transition into VLA-related roles [7].
- The article introduces a new course, "End-to-End and VLA Autonomous Driving," designed to help individuals transition into this field by providing in-depth knowledge and practical skills [21][22].

Course Structure and Content
- The course covers various topics, including end-to-end background knowledge, large language models, BEV perception, diffusion model theory, and reinforcement learning [12][26].
- It aims to build a comprehensive understanding of the research landscape in autonomous driving, focusing on both theoretical and practical applications [22][23].

Job Market and Salary Insights
- Demand for VLA/VLM algorithm experts is high, with salaries for positions such as VLA model quantization deployment engineer and VLM algorithm engineer ranging from 40K to 120K [15].
- The course is tailored for individuals looking to enhance their skills or transition into the autonomous driving sector, emphasizing the importance of mastering multiple technical domains [19][41].
Closed-Loop Collision Rate Plunges 50%! DistillDrive: A New End-to-End Solution via Heterogeneous Multi-Modal Distillation
自动驾驶之心· 2025-08-11 23:33
Core Insights
- The article discusses the development of DistillDrive, an end-to-end autonomous driving model that reduces collision rates by 50% and improves closed-loop performance by 3 percentage points compared to baseline models [2][7].

Group 1: Model Overview
- DistillDrive utilizes a knowledge distillation framework to enhance multi-modal motion feature learning, addressing the limitations of existing models that focus excessively on ego-vehicle status [2][6].
- The model incorporates a structured scene representation as a teacher model, leveraging diverse planning instances for multi-objective learning [2][6].
- Reinforcement learning is introduced to optimize the mapping from states to decisions, while generative modeling is used to construct planning-oriented instances [2][6].

Group 2: Experimental Validation
- The model was validated on the nuScenes and NAVSIM datasets, demonstrating a 50% reduction in collision rates and a 3-point improvement in performance metrics [7][37].
- The nuScenes dataset consists of 1,000 driving scenes, while the NAVSIM dataset enhances perception evaluation with high-quality annotations and complex scenarios [33][36].

Group 3: Performance Metrics
- DistillDrive outperformed existing models, achieving lower collision rates and reduced L2 error compared to SparseDrive, indicating the effectiveness of diversified imitation learning [37][38].
- The teacher model exhibited superior performance, confirming the effectiveness of reinforcement learning in optimizing the state space [37][39].

Group 4: Future Directions
- Future work aims to integrate world models with language models to further enhance planning performance and to employ more effective reinforcement learning methods [54][55].
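DistillDrive's distillation objective is not spelled out in the summary. A common formulation consistent with the description (an assumption for illustration, not the paper's exact loss) combines a KL term that aligns the student's scores over candidate trajectories with the teacher's, plus an L2 imitation term on the planned trajectory itself:

```python
import numpy as np

# Toy distillation objective in the spirit of DistillDrive: match the
# teacher's distribution over K candidate trajectories (KL term) and
# imitate the planned trajectory (L2 term). The logits, trajectories,
# and 0.5 weighting are all assumptions for illustration.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

teacher_logits = np.array([2.0, 0.5, -1.0])
student_logits = np.array([1.0, 1.0, 0.0])
t, s = softmax(teacher_logits), softmax(student_logits)

kl = float(np.sum(t * np.log(t / s)))   # distribution matching

teacher_traj = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
student_traj = np.array([[0.0, 0.0], [0.9, 0.0], [1.8, 0.2]])
l2 = float(np.mean((teacher_traj - student_traj) ** 2))  # imitation

loss = kl + 0.5 * l2
print(round(loss, 4))
```

In training, both terms would be minimized over the student's parameters while the teacher (built from the structured scene representation) stays frozen; the KL term is what carries the teacher's multi-modal planning diversity to the student.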