Former Waymo CEO blasts Tesla: pure vision's weakness means distant objects may go unrecognized
Sou Hu Cai Jing · 2026-01-23 11:04
Author | 聂梦颖  Editor | 志豪

Chedongxi reported on January 21 that, according to Electrek, John Krafcik, former CEO of Waymo and a central figure in autonomous driving, has publicly criticized the pure-vision hardware approach Tesla insists on, calling it a physical bottleneck that is hard to break through. Krafcik's core point is that Tesla's vision-only perception system has a fundamental shortcoming in resolution.

1. Industry expert blasts Tesla's pure vision: distant-object recognition falls short

During CES 2026, Krafcik laid out his latest views to Automotive News. Unlike his earlier critiques of software and algorithms, this time he aimed directly at Tesla's physical hardware.

Citing concrete parameters, Krafcik argued that Tesla's FSD system is severely "nearsighted". Tesla mainly fits 5-megapixel wide-angle cameras; with a limited pixel budget, the wide-angle design spreads those pixels across a broader field of view, severely weakening recognition of distant objects.

By his estimate, the system's effective visual acuity is roughly 20/60 to 20/70. In other words, an object that normal vision can make out at 60 feet would need to be brought within 20 feet for the system to recognize it. That is below the minimum vision requirement some US state DMVs set for issuing a driver's license.

▲ Krafcik being interviewed by Automotive News

More critically, ...
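Krafcik's 20/60–20/70 figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is an illustration, not his actual calculation: it assumes a roughly 5 MP sensor (2592 pixels wide, an assumed value) behind an assumed 120° horizontal field of view, and compares the resulting pixels-per-degree against the ~60 pixels per degree that 20/20 human vision resolves (about 1 arcminute per feature).

```python
# Back-of-the-envelope Snellen acuity estimate for a wide-angle camera.
# All camera parameters are illustrative assumptions, not Tesla specs.

HUMAN_PPD = 60.0  # 20/20 vision resolves ~1 arcminute, i.e. ~60 "pixels" per degree

def snellen_denominator(h_pixels: float, h_fov_deg: float) -> float:
    """Return D in '20/D' for a camera with h_pixels across h_fov_deg."""
    camera_ppd = h_pixels / h_fov_deg      # pixels per degree of view
    return 20.0 * HUMAN_PPD / camera_ppd   # coarser sampling -> larger D

# Assumed ~5 MP sensor (2592 px wide) behind an assumed 120-degree lens
d = snellen_denominator(2592, 120)
print(f"estimated acuity: 20/{d:.0f}")  # lands near Krafcik's 20/60 range
```

A narrower lens over the same sensor raises pixels-per-degree and improves the estimate, which is why the criticism targets the wide-angle design rather than the pixel count alone.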
Nullmax's Xu Lei: visual capability will determine the ceiling of intelligent driving systems; he opposes using LiDAR as a "crutch"
晚点LatePost · 2025-12-04 12:09
Core Viewpoint

- The ongoing debate in the autonomous driving field revolves around the merits of pure vision systems versus sensor fusion approaches, with a strong emphasis on the superiority of camera-based systems in terms of information richness and processing frequency [5][6][11].

Group 1: Technical Insights

- Cameras provide higher-frequency and richer information than LiDAR, with frame rates reaching 30 frames per second for cameras versus 10 frames per second for LiDAR [7][11].
- The reliance on LiDAR in some fusion systems may indicate a deficiency in the visual processing capabilities of those systems [5][6].
- The performance ceiling of an autonomous driving system is significantly influenced by the choice of sensors, with pure vision systems having higher potential if algorithms and computational power are sufficiently advanced [8][11].

Group 2: Industry Perspectives

- Many domestic manufacturers currently achieve around 10 frames per second, while Tesla's systems reportedly exceed 20 frames per second, highlighting a gap in visual processing capabilities [17].
- LiDAR is often used as a shortcut to deploy systems quickly, but it may limit the long-term performance and development of autonomous driving technologies [6][19].
- Integrating multiple sensor types, including cameras and LiDAR, is viewed as beneficial, but the primary focus should remain on enhancing visual capabilities [14][19].

Group 3: Future Considerations

- The industry is moving toward data-driven systems that use AI to generate diverse driving scenarios, which can enhance the training of autonomous systems without the high costs of extensive data collection [19].
- The evolution of sensor technology, such as increasing LiDAR line counts, aims to improve detection capability, but it also raises cost considerations [18].
- The debate over sensor reliance continues, with some manufacturers still favoring LiDAR due to perceived limitations in visual processing, indicating a need for further advances in camera-based systems [17][19].
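The 30 fps vs 10 fps point can be made concrete with rough data-rate arithmetic. All figures below are illustrative assumptions rather than numbers from the article: an 8 MP camera at 30 fps against a LiDAR producing on the order of a million points per second.

```python
# Rough per-second data volume: camera vs LiDAR.
# All figures are illustrative assumptions, not specs quoted in the article.

def camera_bytes_per_s(megapixels: float, fps: float, bytes_per_px: int = 3) -> float:
    """Raw RGB throughput of an uncompressed camera stream."""
    return megapixels * 1e6 * bytes_per_px * fps

def lidar_bytes_per_s(points_per_s: float, bytes_per_pt: int = 16) -> float:
    """Point-cloud throughput: x, y, z, intensity as four 4-byte floats."""
    return points_per_s * bytes_per_pt

cam = camera_bytes_per_s(8, 30)    # assumed 8 MP camera at 30 fps
lidar = lidar_bytes_per_s(1.0e6)   # assumed ~1M points/s LiDAR
print(f"camera: {cam/1e6:.0f} MB/s, lidar: {lidar/1e6:.0f} MB/s, "
      f"ratio ~{cam/lidar:.0f}x")  # -> camera: 720 MB/s, lidar: 16 MB/s, ratio ~45x
```

The exact ratio depends heavily on the assumptions (compression, point format, line count), but under any reasonable choice the camera stream carries one to two orders of magnitude more raw data, which is the "information richness" claim in quantitative form.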
Nullmax's Xu Lei: visual capability will determine the ceiling of intelligent driving systems; he opposes using LiDAR as a "crutch"
晚点Auto · 2025-12-02 13:28
Core Viewpoint

- The ongoing debate in the autonomous driving field revolves around the merits of pure vision systems versus sensor fusion approaches, with a strong emphasis on the superiority of camera-based systems in terms of information richness and processing frequency [3][4][9].

Summary by Sections

Technical Insights

- Pure vision systems use cameras as the primary sensors, providing higher-frequency and richer data than LiDAR systems, which have lower frame rates and less detailed point-cloud information [4][9].
- The performance ceiling of an autonomous driving system is significantly influenced by the choice of sensors, with pure vision systems having higher potential if algorithms and computational power are sufficiently advanced [5][9].

Sensor Performance

- Cameras can capture images at 30 frames per second, while LiDAR typically operates at around 10 frames per second, leading to a disparity in the amount of information processed [4][9].
- The reliance on LiDAR in some fusion systems may indicate a lack of development in visual processing capabilities, which can hinder overall system performance in challenging scenarios [10][11].

Industry Perspectives

- Nullmax CEO Xu Lei advocates prioritizing visual capabilities in autonomous systems, suggesting that over-reliance on LiDAR may offer a short-term solution but limits long-term performance [4][10].
- Xu Lei emphasizes developing robust visual processing algorithms to fully leverage the data cameras capture, since its information density is significantly higher than LiDAR's [6][9].

Cost and Practical Considerations

- Integrating multiple sensor types must weigh cost and performance trade-offs, as adding LiDAR can increase system complexity and expense without necessarily improving performance [5][14].
- The industry is witnessing a trend in which companies prioritize rapid deployment, often opting for LiDAR to expedite the process despite potential performance limitations [11][16].

Future Directions

- Xu Lei expresses an open-minded approach toward various sensors, including future technologies, while maintaining that visual capability should remain the core focus of development [10][11].
- The evolution of sensor technologies such as 4D millimeter-wave radar is seen as complementary to camera systems, particularly in adverse weather, although the necessity of LiDAR remains debated [13][14].
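One practical consequence of the 30 Hz vs 10 Hz mismatch is that any fusion stack must align each LiDAR sweep with the camera frame closest in time. The article describes no specific implementation; this is a minimal sketch assuming sorted timestamps and nearest-neighbour matching.

```python
# Nearest-timestamp matching of 10 Hz LiDAR sweeps to 30 Hz camera frames.
# A minimal sketch of one fusion housekeeping step; not from the article.
from bisect import bisect_left

def match_nearest(cam_ts: list[float], lidar_t: float) -> int:
    """Index of the camera frame whose timestamp is closest to lidar_t."""
    i = bisect_left(cam_ts, lidar_t)
    if i == 0:
        return 0
    if i == len(cam_ts):
        return len(cam_ts) - 1
    # choose the closer of the two neighbouring frames
    return i if cam_ts[i] - lidar_t < lidar_t - cam_ts[i - 1] else i - 1

cam_ts = [k / 30 for k in range(30)]    # one second of 30 Hz camera frames
lidar_ts = [k / 10 for k in range(10)]  # one second of 10 Hz LiDAR sweeps
pairs = [(t, match_nearest(cam_ts, t)) for t in lidar_ts]
print(pairs[:3])
```

In a real stack one would also compensate for rolling shutter and ego-motion during the sweep; nearest-timestamp matching is only the first step, and it illustrates why two thirds of the camera frames never receive a matching sweep.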