Former Waymo CEO Slams Tesla: Pure-Vision Approach Falls Short, May Fail to Accurately Recognize Distant Objects

Core Viewpoint
- Krafcik criticizes Tesla's reliance on a pure-vision hardware approach for autonomous driving, highlighting its fundamental limitation in recognizing objects at a distance [1][5].

Group 1: Critique of Tesla's Vision System
- Krafcik points out that Tesla's Full Self-Driving (FSD) system suffers from a "nearsighted" problem: its 5-megapixel wide-angle cameras limit its ability to recognize distant objects [1][3].
- He estimates the system's effective visual acuity at roughly 20/60 to 20/70, meaning it must be within 20 feet of an object that normal 20/20 vision could recognize at 60 to 70 feet; this falls below the minimum vision requirement for a driver's license in some U.S. states [1][3].

Group 2: Comparison of Sensor Technologies
- The debate centers on whether autonomous driving should rely on software algorithms to simulate the world or on physical hardware to perceive it [3][5].
- Krafcik argues that Tesla's "compute-centric" approach, which depends solely on cameras and computational power, is flawed because cameras can fail under strong glare, blur, or extreme weather [3][5].
- In contrast, companies like Waymo use a sensor-fusion approach that combines LiDAR and radar, which actively measure distance and speed and so maintain safety even when visual signals are degraded [3][5].

Group 3: Implications for Tesla's Future
- The ongoing "pure vision" versus "sensor fusion" debate has significant implications for Tesla's Robotaxi ambitions, and Krafcik's earlier predictions that Tesla would depend on remote monitoring and safety drivers have proved accurate [5][6].
- If the physical limitations of the pure-vision approach are indeed insurmountable, Tesla vehicles equipped with Hardware 3 and 4 may remain at the L2+ driver-assistance level, never reaching the promised L4 autonomy through software updates [6][7].
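The "nearsighted" argument in Group 1 comes down to simple geometry: spreading a fixed pixel count over a wide field of view leaves few pixels on a distant object. The sketch below illustrates this trade-off with assumed, illustrative numbers (a 2592-pixel-wide ~5 MP sensor and a 120-degree lens), not Tesla's actual camera specifications.

```python
import math

def pixels_on_target(sensor_width_px, fov_deg, object_width_m, distance_m):
    """Approximate how many horizontal pixels an object covers in the image.

    Uses a simple pinhole-camera model: pixels per degree of field of view,
    multiplied by the object's angular width at the given distance.
    """
    px_per_deg = sensor_width_px / fov_deg
    angular_width_deg = math.degrees(2 * math.atan(object_width_m / (2 * distance_m)))
    return px_per_deg * angular_width_deg

# Illustrative assumptions (hypothetical, not actual FSD hardware specs):
# ~5 MP sensor, 2592 px wide, 120-degree wide-angle lens,
# pedestrian torso ~0.5 m wide, 100 m away.
px = pixels_on_target(2592, 120.0, 0.5, 100.0)
print(round(px))  # -> 6: only a handful of pixels to classify the object
```

With these assumptions, halving the field of view (a narrower lens) or quadrupling the pixel count would roughly double the pixels on target, which is the core of the hardware critique: wide-angle optics trade long-range recognition for situational coverage.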
