Autonomous Driving
Bosch's latest 41-page survey of trajectory planning for autonomous driving
自动驾驶之心· 2025-12-05 00:03
Core Insights
- The article discusses the advancements and applications of foundation models (FMs) in trajectory planning for autonomous driving, highlighting their potential to enhance understanding and decision-making in complex driving scenarios [4][5][11].

Background Overview
- Foundation models are large-scale models that learn representations from vast amounts of data and are applicable to various downstream tasks, including language and vision [4].
- The study emphasizes the importance of FMs in the autonomous driving sector, particularly in trajectory planning, which it treats as the core task of driving [8][11].

Research Contributions
- A classification system for methods utilizing FMs in autonomous driving trajectory planning is proposed, analyzing 37 existing methods to provide a structured understanding of the field [11][12].
- The research evaluates these methods in terms of code and data openness, offering practical references for reproducibility and reusability [12].

Methodological Insights
- The article categorizes methods into two main types: FMs customized for trajectory planning and FMs that guide trajectory planning [16][19].
- Customized FMs leverage pre-trained models and adapt them for specific driving tasks, while guiding FMs enhance existing trajectory planning models through knowledge transfer [19][20].

Application of Foundation Models
- FMs can enhance trajectory planning through various approaches, including fine-tuning existing models, chain-of-thought reasoning, and language-and-action interaction [9][19] (a minimal prompt-design sketch follows this summary).
- The study identifies 22 methods focused on customizing FMs for trajectory planning, detailing their functionalities and the importance of prompt design for model performance [20][32].

Challenges and Future Directions
- The article outlines key challenges in deploying FMs in autonomous driving, such as reasoning costs, model size, and the need for suitable datasets for fine-tuning [5][12].
- Future research directions include improving efficiency, robustness, and the transferability of models from simulation to real-world applications [12][14].

Comparative Analysis
- The study contrasts its findings with existing literature, noting that while previous reviews cover various aspects of autonomous driving, this research focuses specifically on the application of FMs in trajectory planning [13][14].

Data and Model Design
- The article discusses the importance of data curation for training FMs, emphasizing the need for structured datasets that pair sensor data with trajectories [24][28].
- It also highlights different model design strategies, including the use of existing vision-language models and the combination of visual encoders with large language models [27][29].

Language and Action Interaction
- The research explores models with language interaction capabilities, detailing how they utilize visual question-answering datasets to enhance driving performance [38][39].
- It emphasizes the significance of training datasets and evaluation metrics in assessing the effectiveness of language interaction in trajectory planning [39][41].
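To make the prompt-design idea concrete, here is a minimal, hypothetical sketch of how a foundation model might be asked to reason step by step and then emit a parseable trajectory. The prompt wording, the `call_foundation_model()` stub, and the JSON output format are illustrative assumptions, not the interface of any method covered by the survey.

```python
# Minimal sketch of prompt-based trajectory planning with a foundation model.
# Everything below (prompt text, stub, output format) is an illustrative
# assumption, not a surveyed method's actual interface.
import json

PROMPT_TEMPLATE = """You are the planner of an autonomous vehicle.
Scene description: {scene}
Ego state: speed={speed_mps} m/s, heading={heading_deg} deg.
First, reason step by step about hazards and the right maneuver.
Then output ONLY a JSON object: {{"waypoints": [[x, y], ...]}} with 8
waypoints in the ego frame, spaced 0.5 s apart."""


def call_foundation_model(prompt: str) -> str:
    """Stub standing in for a pretrained VLM/LLM; replace with a real model."""
    return ('{"waypoints": [[0.0, 0.0], [2.0, 0.0], [4.0, 0.1], [6.0, 0.3], '
            '[8.0, 0.6], [10.0, 1.0], [12.0, 1.4], [14.0, 1.8]]}')


def plan(scene: str, speed_mps: float, heading_deg: float):
    prompt = PROMPT_TEMPLATE.format(scene=scene, speed_mps=speed_mps,
                                    heading_deg=heading_deg)
    raw = call_foundation_model(prompt)
    return json.loads(raw)["waypoints"]


print(plan("two-lane road, slow truck ahead in my lane", 10.0, 0.0))
```

The fixed output schema is one common way prompt design constrains a free-form language model so its answer can be consumed by a downstream planner.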
Get hands-on with autonomous driving: the full-stack Black Warrior 001 mini vehicle is unbeatable value for money!
自动驾驶之心· 2025-12-05 00:03
Core Viewpoint
- The article introduces the "Black Warrior 001," a cost-effective and easy-to-use autonomous driving educational platform designed for research and teaching, priced at 36,999 yuan and bundled with courses and practical applications for students and researchers [3][5].

Group 1: Product Overview
- The Black Warrior 001 is a lightweight solution that supports perception, localization, fusion, navigation, and planning, making it suitable for undergraduate learning, graduate research, and training institutions [5].
- The product is designed to be user-friendly, allowing beginners to quickly engage in hands-on practice with autonomous driving systems [3][5].

Group 2: Performance Demonstration
- The platform has been tested in indoor, outdoor, and parking environments, showcasing its perception, localization, fusion, navigation, and planning capabilities [7].
- Specific tests include outdoor park driving, point-cloud 3D target detection, indoor 2D and 3D laser mapping, and outdoor night driving [9][11][13][15][21].

Group 3: Hardware Specifications
- Key sensors include a Livox Mid-360 3D LiDAR, a 2D LiDAR, and an Orbbec depth camera; the main compute module is an NVIDIA Orin NX with 16 GB of RAM [23][24].
- The vehicle weighs 30 kg, is rated at 50 W, runs on a 24 V supply, and has a maximum speed of 2 m/s [26][27].

Group 4: Software and Functionality
- The software framework is based on ROS with C++ and Python, supports one-click startup, and ships with a development environment [29] (a minimal ROS node sketch follows this summary).
- The platform supports 2D and 3D SLAM, target detection, navigation, and obstacle avoidance [30].

Group 5: After-Sales and Maintenance
- The company offers one year of after-sales support for non-human-caused damage, with free repairs during the warranty period for damage caused by user error [53].
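Since the article names a ROS/C++/Python stack but gives no code, here is a minimal sketch of the kind of ROS 1 node a beginner would write on such a platform: subscribe to a laser scan, stop if an obstacle is close. The topic names, threshold, and speeds are assumptions for illustration, not the Black Warrior 001's actual interfaces.

```python
#!/usr/bin/env python
# Minimal obstacle-stop node sketch for a ROS 1 stack. Topic names, the
# stop distance, and the cruise speed are illustrative assumptions, not
# the platform's documented interfaces.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class ObstacleStop:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)
        self.stop_distance = 0.5   # meters; assumed safety margin
        self.cruise_speed = 0.5    # m/s; well under the stated 2 m/s maximum

    def on_scan(self, scan):
        # Drop invalid returns before taking the minimum range.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        nearest = min(valid) if valid else float("inf")
        cmd = Twist()
        cmd.linear.x = 0.0 if nearest < self.stop_distance else self.cruise_speed
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("obstacle_stop")
    ObstacleStop()
    rospy.spin()
```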
Former Li Auto intelligent-driving executives join forces to start an embodied-intelligence robotics venture, with two funding rounds already closed!
Robot猎场备忘录· 2025-12-05 00:03
Core Insights
- The article discusses recent developments in embodied intelligence, focusing on the establishment of Hangzhou Zhijian Power Technology Co., Ltd. by former executives from Li Auto, which has attracted significant investment from top venture capital firms [2][4].

Financing and Investment
- Hangzhou Zhijian Power has completed two rounds of financing totaling approximately $50 million, with investors including Sequoia Capital and BlueRun Ventures [2][6].
- The company was founded in July 2025 and has quickly drawn attention from major investment firms, indicating strong interest in the embodied intelligence sector [4][6].

Company Background and Team
- The founding team includes Wang Kai, former CTO of Li Auto, and Jia Peng, former head of intelligent driving technology at Li Auto, both of whom have extensive experience in the automotive and technology sectors [8][4].
- The company is currently building its team and has posted job openings for deep reinforcement learning algorithms, motion control algorithms, and embedded hardware [5][6].

Industry Trends
- Many professionals from the autonomous driving sector are transitioning to embodied intelligence, leading to a surge in startup activity and investment [6][10].
- Fifteen well-known automotive companies have entered the humanoid robot sector, 11 of them based in China, indicating a competitive landscape [6][10].

Product Development
- Xiaopeng Motors has launched the new-generation IRON robot, described as the most humanoid robot to date, featuring a Turing AI chip and solid-state batteries [7][10].
- The article suggests that the overlap between autonomous driving and humanoid robot technology could lead to significant advancements, although commercial viability remains a challenge [10][11].
Nullmax's Xu Lei: vision capability will determine the ceiling of intelligent driving systems; he opposes treating LiDAR as a "crutch"
晚点LatePost· 2025-12-04 12:09
Core Viewpoint
- The ongoing debate in the autonomous driving field revolves around the merits of pure vision systems versus sensor-fusion approaches, with a strong emphasis on the superiority of camera-based systems in terms of information richness and processing frequency [5][6][11].

Group 1: Technical Insights
- Cameras provide higher frequency and richer information than LiDAR, with frame rates reaching 30 frames per second for cameras versus 10 frames per second for LiDAR [7][11] (a minimal rate-alignment sketch follows this summary).
- Reliance on LiDAR in some fusion systems may indicate a deficiency in those systems' visual processing capabilities [5][6].
- The performance ceiling of autonomous driving systems is significantly influenced by the choice of sensors, with pure vision systems having higher potential if algorithms and computational power are sufficiently advanced [8][11].

Group 2: Industry Perspectives
- Many domestic manufacturers currently achieve around 10 frames per second, while Tesla's systems reportedly exceed 20 frames per second, highlighting a gap in visual processing capability [17].
- The use of LiDAR is often seen as a shortcut to deploy systems quickly, but it may limit the long-term performance and development of autonomous driving technologies [6][19].
- Integrating multiple sensor types, including cameras and LiDAR, is viewed as beneficial, but the primary focus should remain on enhancing visual capabilities [14][19].

Group 3: Future Considerations
- The industry is moving towards data-driven systems that leverage AI to generate diverse driving scenarios, enhancing the training of autonomous systems without the high costs of extensive data collection [19].
- The evolution of sensor technology, such as increasing LiDAR line counts, aims to improve detection capability but also raises cost considerations [18].
- The debate over sensor reliance continues, with some manufacturers still favoring LiDAR due to perceived limitations in visual processing, indicating a need for further advancements in camera-based systems [17][19].
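The interview's central numeric point is the rate mismatch between roughly 30 Hz cameras and 10 Hz LiDAR, which any fusion stack must reconcile in time. Below is a minimal, dependency-free sketch of nearest-timestamp matching that illustrates the mismatch; the function, tolerance, and data layout are assumptions for illustration, not Nullmax's or any vendor's fusion method.

```python
# Nearest-timestamp matching between a 30 Hz camera stream and a 10 Hz
# LiDAR stream. Purely illustrative: timestamps are plain floats in seconds
# and the 50 ms tolerance is an assumed value.
from bisect import bisect_left


def match_nearest(camera_ts, lidar_ts, max_gap=0.05):
    """Pair each LiDAR timestamp with the closest camera timestamp.

    camera_ts, lidar_ts: sorted lists of timestamps in seconds.
    max_gap: reject pairs farther apart than this (seconds).
    """
    pairs = []
    for t in lidar_ts:
        i = bisect_left(camera_ts, t)
        candidates = camera_ts[max(0, i - 1):i + 1]  # neighbors on both sides
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= max_gap:
            pairs.append((t, best))
    return pairs


# One second of data: every 10 Hz LiDAR frame finds a nearby camera frame,
# while two thirds of the 30 Hz camera frames have no LiDAR counterpart.
lidar = [i / 10.0 for i in range(10)]
camera = [i / 30.0 for i in range(30)]
print(len(match_nearest(camera, lidar)))  # 10
```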
Outclassing the field: WeRide's one-stage end-to-end ADAS solution tops the China Intelligent Driving Competition
Gelonghui· 2025-12-04 07:10
Core Insights
- The Chery Star Era ES, powered by WeRide and Bosch, won the championship at the China Intelligent Driving Competition in Taizhou, delivering the event's only "zero takeover" run and the highest score, leading second place by nearly 10 points [1][4][11].

Group 1: Competition Overview
- The competition consisted of preliminary and final rounds, with the highest-scoring models from the various ADAS solutions selected for the finals, while other solutions had a designated main model enter the finals directly [4].
- The Taizhou event featured a record-long course of 35 km and introduced a strict scoring system in which every takeover was recorded and penalized, significantly increasing the difficulty of scoring [7][11].

Group 2: Performance and Technology
- The Chery Star Era ES scored 112.81 points, demonstrating stable performance under complex road conditions and a zero-tolerance takeover scoring mechanism [11].
- WeRide's WePilot 3.0 represents a significant advancement in AI model integration, showing highly human-like driving in multiple countries and earning praise from customers and media [11].
- WePilot 3.0 employs a single end-to-end AI model that integrates perception, prediction, and planning control, enhancing decision-making consistency and safety in dynamic environments [11][12] (a minimal sketch of the one-stage idea follows this summary).

Group 3: Future Developments
- WePilot 3.0 is designed to support mass development of L2 advanced driver-assistance systems; it is compatible with various computing platforms and multi-modal perception hardware, enabling rapid adaptation for different manufacturers [17].
- The technology is already in mass production in the Chery Star Era ES and ET models, with plans to expand to more models from Chery and GAC Group, providing consumers with reliable and efficient L2 assistance at accessible prices [17].
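To illustrate what "one-stage" (a single end-to-end model rather than separate perception, prediction, and planning modules) means at the interface level, here is a tiny, hypothetical PyTorch sketch: images and a route command map directly to planned waypoints. This is not WeRide's WePilot 3.0 architecture; every layer and dimension is an assumption chosen only to show the idea.

```python
# Hypothetical one-stage end-to-end planner: camera frame + route intent ->
# trajectory. Illustrative only; not WePilot 3.0.
import torch
import torch.nn as nn


class OneStagePlanner(nn.Module):
    def __init__(self, feat_dim=256, horizon=8, num_commands=3):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in for an image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.cmd_embed = nn.Embedding(num_commands, feat_dim)
        self.head = nn.Linear(feat_dim, horizon * 2)  # (x, y) waypoints
        self.horizon = horizon

    def forward(self, image, command):
        # image: (B, 3, H, W) front-camera frame; command: (B,) route intent id
        feat = self.backbone(image) + self.cmd_embed(command)
        return self.head(feat).view(-1, self.horizon, 2)


planner = OneStagePlanner()
waypoints = planner(torch.randn(1, 3, 224, 224), torch.tensor([1]))
print(waypoints.shape)  # torch.Size([1, 8, 2])
```

The appeal of this structure, as the article notes, is that one network makes the whole decision, so there is no hand-tuned interface between perception and planning where inconsistencies can creep in.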
Latest from Hongyang Li's team! SimScale: an end-to-end simulation framework that markedly improves hard scenarios, a new SOTA on NavSim
自动驾驶之心· 2025-12-04 03:03
Core Viewpoint
- The article discusses the limitations of current data-scaling methods in autonomous driving and introduces SimScale, a framework designed to generate critical driving scenarios through scalable 3D simulation, enhancing the performance of end-to-end driving models without requiring more real-world data [2][5][44].

Background Review
- Data scaling has been a fundamental principle in modern deep learning across fields including language and vision. In autonomous driving, end-to-end planning leverages large-scale driving data to build fully autonomous systems [5][44].

SimScale Framework
- SimScale is a simulation generation framework that uses high-fidelity neural rendering to create diverse reactive traffic scenarios and pseudo-expert demonstrations. It combines simulation and real-world data to improve the robustness and generalization of various end-to-end models [6][12][44].

Simulation Data Generation
- The framework employs a 3D Gaussian Splatting (3DGS) simulation data engine to control the states of the ego vehicle and other agents over time, rendering multi-view videos from the vehicle's perspective. The process perturbs vehicle trajectories to maximize state-space coverage and generates corresponding expert trajectories for comparison [13][15][19] (a minimal sketch of the perturbation and sim/real co-training ideas follows this summary).

Experimental Results
- Results on the navhard and navtest benchmarks show significant performance improvements across all models, with GTRS-Dense reaching 47.2 on navhard, a new state of the art. Adding simulation data improves model robustness in challenging and unseen scenarios [30][31][32][44].

Data Scaling Analysis
- The study analyzes the scaling behavior of different planners under fixed real-world data, finding that planner performance improves predictably as simulation data increases. Exploring pseudo-expert behaviors and interactive environments significantly enhances the effectiveness of simulation data [33][38][39][44].

Conclusion
- SimScale demonstrates how large-scale simulation can amplify the value of real-world datasets in end-to-end autonomous driving. Its pseudo-expert data generation and collaborative training approach lead to notable improvements in model performance, underscoring the importance of simulation in developing autonomous driving technologies [44].
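Two of the ideas the summary attributes to SimScale are easy to sketch in isolation: perturbing logged ego trajectories to widen state-space coverage, and mixing simulated with real samples during training. The snippet below is a minimal illustration under those assumptions; the offset ranges, mixing ratio, and function names are invented and are not the paper's actual recipe.

```python
# Illustrative sketches of (1) trajectory perturbation and (2) sim/real
# co-training batches. Not SimScale's implementation.
import random


def perturb_trajectory(waypoints, max_lateral=1.0, max_longitudinal=2.0):
    """Shift a logged (x, y) trajectory by a random offset to create an
    off-distribution state for the simulator to render and react to."""
    dx = random.uniform(-max_longitudinal, max_longitudinal)
    dy = random.uniform(-max_lateral, max_lateral)
    return [(x + dx, y + dy) for x, y in waypoints]


def mixed_batches(real_samples, sim_samples, sim_ratio=0.5, batch_size=32):
    """Yield training batches drawn from both real logs and rendered simulation."""
    n_sim = int(batch_size * sim_ratio)
    while True:
        batch = (random.sample(sim_samples, n_sim) +
                 random.sample(real_samples, batch_size - n_sim))
        random.shuffle(batch)
        yield batch
```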
Yushi Technology (UISEE) | Environmental Perception Algorithm Engineer opening (direct referral available)
自动驾驶之心· 2025-12-04 03:03
Core Viewpoint
- The article emphasizes the critical importance of environmental perception algorithms in ensuring the safety of autonomous driving, highlighting the need for skilled professionals in this field [5].

Group 1: Job Responsibilities
- The role involves accurately detecting and locating all objects in the surrounding environment, such as roads, pedestrians, vehicles, and bicycles, to ensure safe driving [5].
- Responsibilities include processing machine vision and LiDAR data for autonomous driving applications and implementing complex perception functions such as multi-target tracking and semantic understanding [5] (a minimal tracking-association sketch follows this summary).

Group 2: Qualifications
- A solid mathematical foundation is required, particularly in geometry and statistics [5].
- Proficiency in machine learning and deep learning, along with practical experience with cutting-edge techniques, is essential [5].
- Experience with algorithms for scene segmentation, object detection, recognition, and tracking based on vision or LiDAR is necessary [5].
- Strong engineering skills are required, with expertise in C/C++ and Python and familiarity with at least one other programming language [5].
- Knowledge of 3D imaging principles and methods, such as stereo vision and structured light, is important [5].
- A deep understanding of computer architecture is needed to develop high-performance, real-time software [5].
- A passion for innovation and for building technology that solves real-world problems is encouraged [5].
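As a concrete taste of the multi-target tracking work the posting mentions, here is a minimal sketch of greedy IoU-based association, one common building block for linking detections across frames. The box format, threshold, and greedy strategy are assumptions for illustration, not UISEE's stack.

```python
# Greedy IoU association between existing tracks and new detections.
# Boxes are (x1, y1, x2, y2) tuples; the 0.3 threshold is an assumed value.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, threshold=0.3):
    """Greedily match tracks to detections by descending IoU."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < threshold or ti in matched_t or di in matched_d:
            continue
        matches.append((ti, di))
        matched_t.add(ti)
        matched_d.add(di)
    return matches
```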
Nearly 800 million yuan in losses over three and a half years and strained cash flow: Yushi Technology makes another run at a Hong Kong listing to "replenish capital"
36Kr· 2025-12-04 00:35
Core Viewpoint
- Yushi Technology has submitted a second application for listing on the Hong Kong Stock Exchange, focusing on L4 autonomous driving solutions, particularly in closed and semi-closed scenarios such as airports and industrial parks [2][5][30].

Group 1: Company Overview
- Founded in 2016, Yushi Technology specializes in L4-level autonomous driving solutions and has developed 52 models suitable for various scenarios [3][5].
- The company was co-founded by Wu Gansha, a notable figure in the autonomous driving industry and former head of Intel's China Research Institute [3][9].

Group 2: Financial Performance
- Yushi Technology has raised approximately 1.751 billion RMB through six rounds of financing, with a post-financing valuation of 7.3 billion RMB [5][12].
- The company reported a significant revenue increase from 65.48 million RMB in 2022 to an estimated 265.5 million RMB in 2024, a compound annual growth rate of about 110% [22][24].
- Despite revenue growth, Yushi Technology remains unprofitable, with pre-tax losses of 249.7 million RMB in 2022 and 213.1 million RMB in 2023 [22][24].

Group 3: Business Segments
- Revenue is primarily derived from four segments: autonomous vehicle solutions, autonomous driving kits, software solutions, and vehicle leasing services [15][24].
- In 2024, autonomous vehicle solutions accounted for 55.2% of total revenue, while software solutions contributed 25.4% [24].

Group 4: Market Position
- Yushi Technology holds a 91.7% market share in the airport scenario market in Greater China and a 45.1% share in the industrial park scenario market [16][17].
- The company has partnerships with 17 airports in China and 3 overseas airports, indicating a strong market presence [16].

Group 5: Funding and Cash Flow
- As of June 30, 2025, Yushi Technology had cash and cash equivalents of 170 million RMB, down from 222 million RMB at the end of 2024, highlighting ongoing cash flow challenges [29][30].
- The company is under pressure to list because it continues to operate at a loss while investing heavily in research and development [30].
NVIDIA open-sources autonomous driving software: will Chinese automakers take it up?
汽车商业评论· 2025-12-03 23:07
Core Insights
- The article discusses the launch of the Alpamayo-R1 model by NVIDIA, described as the world's first open-source vision-language-action (VLA) model designed for autonomous driving scenarios, which enhances decision-making through "chain reasoning" [5][10][12].
- The model significantly improves safety in complex long-tail scenarios, achieving a 12% increase in planning accuracy, a 35% reduction in accident rates, and a 25% decrease in near-miss incidents [10][12].
- NVIDIA's strategy includes expanding its ecosystem influence by providing open-source technology, allowing automakers to quickly assemble autonomous driving systems [14][16].

Technical Advancements
- The Alpamayo-R1 model converts sensor data into natural-language descriptions, enabling step-by-step reasoning similar to that of human drivers [5][10] (a hypothetical sketch of this chain-reasoning pattern follows this summary).
- The model's low-latency response of 99 milliseconds enhances its effectiveness in real-time decision-making [10].
- The accompanying Cosmos developer toolchain offers resources for data construction, scene generation, and model evaluation, facilitating model fine-tuning and deployment [12].

Strategic Considerations
- NVIDIA's move to open-source its core algorithms is seen as a strategic effort to solidify its market position and drive demand for its hardware, such as the Orin/Thor automotive-grade chips [14][16].
- The initiative is expected to help establish industry standards for safety and evaluation, aligning with global regulatory demands for transparency in autonomous driving [19].
- The shift from closed to open-source models in the autonomous driving sector may trigger a new wave of open-source development, as decision-making algorithms become critical competitive factors [24].

Industry Impact and Opportunities
- NVIDIA's open-source approach intensifies competition between open-source and closed-source ecosystems in the autonomous driving industry [21][24].
- Chinese automakers, which rely heavily on NVIDIA's platforms, stand to benefit from the open-source tools for local algorithm development and scene tuning [26][27].
- However, the industry faces challenges, including a significant talent gap in autonomous driving engineering, with a projected shortfall of over one million professionals by 2025 [29][30].
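To show what the "chain reasoning" pattern looks like structurally, here is a hypothetical sketch of a VLA inference step: the model first describes the scene in natural language, then produces a reasoning trace, and only then commits to an action. The `VLAModel` class, its methods, and its hard-coded outputs are invented for illustration; they are not Alpamayo-R1's or Cosmos's actual API.

```python
# Hypothetical chain-reasoning loop for a vision-language-action model.
# All classes and outputs are placeholders, not NVIDIA's interfaces.
from dataclasses import dataclass
from typing import List


@dataclass
class DrivingAction:
    target_speed_mps: float
    steering_angle_rad: float


class VLAModel:
    """Placeholder standing in for a pretrained vision-language-action model."""

    def describe_scene(self, camera_frames) -> str:
        return "Pedestrian stepping off the curb 15 m ahead; wet road surface."

    def reason(self, scene_text: str) -> List[str]:
        return [
            "A pedestrian is entering my lane, so I must yield.",
            "The road is wet, so braking distance is longer than usual.",
            "Therefore I should begin slowing down now.",
        ]

    def act(self, reasoning: List[str]) -> DrivingAction:
        return DrivingAction(target_speed_mps=2.0, steering_angle_rad=0.0)


def plan_step(model: VLAModel, camera_frames) -> DrivingAction:
    scene = model.describe_scene(camera_frames)   # sensor data -> language
    trace = model.reason(scene)                   # step-by-step reasoning
    return model.act(trace)                       # reasoning -> control
```

The intermediate language and reasoning trace are what make such a model's decisions inspectable, which is the transparency benefit the article associates with this design.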
The driverless logistics vehicle industry is on the verge of a boom
Core Insights
- The company has made significant progress in the large-scale deployment of its autonomous logistics vehicles, securing a total of 500 orders for the Xiaozhu T5 model from Master Company, indicating growing market demand for autonomous logistics solutions [1][2].
- The company is optimistic about the future of autonomous logistics vehicles, anticipating a surge in demand and a potential breakout for the industry as it continues to iterate on technology and expand its application boundaries [1][3].
- The company aims to enhance its production capacity, expecting to deliver around 10,000 units in the coming year, reflecting a strategy of scaling operations gradually [3][4].

Order Acquisition and Market Expansion
- The company has recently secured additional orders, including 100 autonomous logistics vehicles, and has formed a strategic partnership with Hunan Xiangjiang Intelligent focused on deepening technology applications in the region [2][3].
- The Xiaozhu autonomous vehicles are beginning to penetrate the national market, with reports indicating that the company's L4 autonomous driving business has made significant breakthroughs [2][3].
- The company is collaborating with various partners to design, produce, and deliver 800 logistics vehicles that meet automotive standards, showcasing its commitment to expanding its product matrix [2][4].

Technological Advancements and Cost Reduction
- The company is focused on reducing costs through technological advancements and economies of scale, which are seen as key competitive advantages in the autonomous logistics vehicle market [4][5].
- The company plans to continue investing in technology iteration and research and development for its autonomous logistics vehicles, aiming to build a mutually beneficial ecosystem with partners [4][5].
- The company has announced plans to issue new H-shares to raise approximately HKD 204 million, with a significant portion allocated to the development of L4 autonomous logistics vehicles [4][5].

Strategic Collaborations and Product Development
- The company has achieved breakthroughs in the passenger vehicle sector, collaborating with a leading domestic automotive brand to provide advanced intelligent driving solutions for flagship SUV models [5].
- The company is expanding its global footprint by entering multiple export-vehicle supply chains and exploring diverse application scenarios, enhancing overall system cost-effectiveness and user experience [5].
- The company has established partnerships with 42 automotive manufacturers, indicating a broadening customer base and ongoing global expansion efforts [5].