Core Insights

- The article surveys the development and algorithms of Vision-Language-Action (VLA) models in autonomous driving over the past two years, providing a comprehensive overview of research papers and projects in the field [1]

Group 1: VLA Preceding Work

- Several key papers use large language models as interpreters for driving, including "DriveGPT4" and "TS-VLM", which focus on enhancing autonomous driving perception [3]
- Additional papers such as "DynRsl-VLM" are highlighted for their contributions to improving perception in autonomous driving [3]

Group 2: Modular VLA

- The article lists various end-to-end VLA models, such as "RAG-Driver" and "OpenDriveVLA", which aim to generalize driving explanations and enhance autonomous driving capabilities [4]
- Other notable models include "DriveMoE" and "LangCoop", which focus on collaborative driving and knowledge-enhanced safe driving [4]

Group 3: Enhanced Reasoning in VLA

- The article discusses models such as "ADriver-I" and "EMMA", which contribute to general world models and multimodal approaches for autonomous driving [6]
- Papers such as "DiffVLA" and "S4-Driver" are noted for their innovative approaches to planning and representation in autonomous driving [6]

Group 4: Community and Resources

- The article highlights a community for knowledge sharing in autonomous driving, featuring over 40 technical routes and inviting industry experts for discussions [7]
- It also notes the availability of job opportunities and a comprehensive entry-level technical stack for newcomers to the field [12][14]

Group 5: Educational Resources

- The article provides a structured learning roadmap covering various aspects of autonomous driving, including perception, simulation, and planning and control [15]
- It mentions the compilation of numerous datasets and open-source projects to facilitate learning and research in the autonomous driving sector [15].
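The article's title groups the surveyed work into three design families: modular, end-to-end, and reasoning-enhanced VLA. A minimal, purely illustrative sketch of that taxonomy follows; all class and function names here are hypothetical stand-ins, not APIs from any of the cited papers.

```python
# Illustrative sketch (all names hypothetical) contrasting the three VLA
# design families the article groups: modular pipelines with separate
# stages, end-to-end models, and reasoning-enhanced variants that also
# emit an intermediate natural-language rationale.
from dataclasses import dataclass


@dataclass
class Observation:
    camera: list        # stand-in for image features
    instruction: str    # natural-language driving prompt


@dataclass
class Action:
    steer: float
    throttle: float


def modular_vla(obs: Observation) -> Action:
    """Modular: explicit perception -> language reasoning -> planning stages."""
    scene = f"objects={len(obs.camera)}"          # perception stage
    plan = f"{obs.instruction} given {scene}"     # language-reasoning stage
    return Action(steer=0.0, throttle=0.1 if "slow" in plan else 0.3)


def end_to_end_vla(obs: Observation) -> Action:
    """End-to-end: one model maps observation + instruction directly to action."""
    cautious = "slow" in obs.instruction
    return Action(steer=0.0, throttle=0.1 if cautious else 0.3)


def reasoning_vla(obs: Observation) -> tuple:
    """Reasoning-enhanced: returns a rationale alongside the action."""
    rationale = f"Instruction '{obs.instruction}'; choosing a cautious speed."
    return rationale, end_to_end_vla(obs)
```

The sketch only illustrates where the stage boundaries sit in each family; real systems in the surveyed papers replace each stage with learned vision, language, and planning networks.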
Summary of Autonomous Driving VLA Work (Modular / End-to-End / Reasoning-Enhanced)
自动驾驶之心 · 2025-08-12 11:42