Core Viewpoint
- A pure-GPU setup can cover the basic functions of low-level autonomous driving, but its processing latency and energy consumption fall well short of what higher-level autonomous driving requires [4][40].

Group 1: GPU Limitations
- A pure GPU can handle some of the parallel computing tasks autonomous driving needs, such as sensor data fusion and image recognition, but it is designed primarily for graphics rendering, which imposes limits [4][6].
- Early autonomous driving tests built on pure-GPU solutions, such as the NVIDIA GTX 1080, showed a detection delay of roughly 80 milliseconds, a safety risk at high speed [5].
- An L4 autonomous vehicle generates about 5-10 GB of sensor data per second, forcing multiple GPUs to work together and driving power consumption up sharply [6][9].

Group 2: NPU and TPU Advantages
- An NPU is purpose-built for neural-network computation: it packs a large number of MAC (multiply-accumulate) units optimized for matrix multiplication and accumulation, the dominant operation in inference (a short sketch of this MAC pattern follows the summary) [10][19].
- Google's TPU uses a systolic-array architecture that maximizes data reuse and reduces external memory accesses, making it more efficient than a GPU on large matrix operations [12][19].
- For neural-network inference, NPU and TPU architectures are therefore more efficient overall, with the NPU in particular consuming markedly less energy than a GPU [36][40].

Group 3: Cost and Efficiency Comparison
- In energy efficiency, an NPU is roughly 2.5 to 5 times better than a GPU, delivering the same AI compute at lower power draw [36][40].
- NPU solutions are also far cheaper: NPU hardware costs only about 12.5% to 40% of an equivalent pure-GPU setup [37][40].
- For example, reaching 144 TOPS of AI compute with a pure-GPU solution requires multiple GPUs at a total cost of around $4,000, while an NPU-based solution costs about $500 (the arithmetic is worked out in the second sketch below) [37][40].

Group 4: Hybrid Solutions
- NVIDIA's Thor chip integrates both a GPU and an NPU to combine their strengths: tasks are divided efficiently between the two, and compatibility with existing software reduces development time and cost [33][40].
- Keeping the GPU and NPU on one chip lets them cooperate without frequent data transfers between separate chips, yielding roughly a 40% efficiency improvement [33][40].
- The expected trend is toward hybrid solutions that pair NPU and GPU capabilities, meeting the demands of high-level autonomous driving while remaining cost-effective [40].
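The MAC-unit point in Group 2 is easiest to see in code. Below is a minimal Python sketch of the multiply-accumulate loop that dominates neural-network inference; the function name and the toy matrices are illustrative, not from the article. A GPU runs this pattern on general-purpose cores, while an NPU's MAC array or a TPU's systolic array wires it directly into silicon and reuses each loaded operand across many MACs, which is where the memory-access savings come from.

```python
# Minimal sketch of the multiply-accumulate (MAC) pattern that NPU MAC arrays
# and the TPU's systolic array are built around. Pure Python, illustration only;
# real accelerators execute thousands of these MACs per clock cycle in hardware.

def matmul(a, b):
    """Naive matrix multiply: every iteration of the innermost loop is one MAC."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # multiply-accumulate: the NPU/TPU primitive
            out[i][j] = acc
    return out

if __name__ == "__main__":
    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]
    print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```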
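For the cost and energy claims in Group 3, the arithmetic is simple enough to write out. The sketch below only reproduces the summary's own figures (144 TOPS, roughly $4,000 vs. $500, 2.5-5x energy efficiency) as a sanity check; the 100 W baseline is an assumed example value, not a number from the article.

```python
# Back-of-envelope arithmetic behind the Group 3 comparison.
# Constants below are the figures quoted in the summary, except GPU_POWER_W,
# which is an assumed baseline for illustration.

TARGET_TOPS = 144      # AI compute target cited for the pure-GPU example
GPU_COST_USD = 4000    # quoted cost of the multi-GPU setup reaching that target
NPU_COST_USD = 500     # quoted cost of an NPU-based solution

print(f"NPU cost share: {NPU_COST_USD / GPU_COST_USD:.1%}")          # 12.5%, the low end of 12.5-40%
print(f"GPU: {TARGET_TOPS / GPU_COST_USD:.3f} TOPS per dollar")      # ~0.036
print(f"NPU: {TARGET_TOPS / NPU_COST_USD:.3f} TOPS per dollar")      # ~0.288

GPU_POWER_W = 100.0    # assumed GPU power draw for a given inference workload
for factor in (2.5, 5.0):
    print(f"{factor}x efficiency -> ~{GPU_POWER_W / factor:.0f} W for the same workload")
```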
Why does the Thor chip keep a GPU while also adding an NPU?