On AI Inference Chips, Musk's Idea Is Wild

Core Viewpoint
- The article discusses Elon Musk's proposal to use the computing power of idle Tesla vehicles for distributed AI inference workloads, potentially creating a massive distributed inference fleet that could reach roughly 100 gigawatts of capacity if the fleet grows to tens of millions, or even one hundred million, vehicles [2] (a back-of-envelope check of this figure follows at the end of this summary).

Group 1: AI and Vehicle Technology
- Tesla equips its electric vehicles with the AI accelerators required for various autonomous driving features, including Full Self-Driving (FSD) [2].
- Since 2019, Tesla has used its own chips, which reportedly outperformed the NVIDIA GPUs they replaced by a factor of 21; the first of these, HW3, is rated at 720 trillion operations per second [3].
- The latest HW4 chip, launched in January 2023, is built on a 7-nanometer process and delivers a 3x to 8x performance improvement over its predecessor, powering Tesla's AI4 architecture [3].

Group 2: In-Vehicle Computing Power
- Tesla's latest in-vehicle infotainment systems carry substantial computing power of their own, pairing AMD Ryzen processors with discrete AMD Navi 23 GPUs and reaching up to 10 TFLOPS, comparable to top gaming systems [4].
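To illustrate how the headline 100-gigawatt figure scales with fleet size, here is a minimal back-of-envelope sketch. The per-vehicle power draw of about 1 kW is an assumption chosen for illustration, not a number given in the article, and the fleet sizes are the hypothetical scales the article mentions.

```python
# Back-of-envelope estimate of a distributed inference fleet's capacity.
# Assumption (not from the article): each idle vehicle's AI computer
# draws roughly 1 kW while running inference workloads.

def fleet_inference_power_gw(vehicles: int, kw_per_vehicle: float = 1.0) -> float:
    """Total inference power in gigawatts for a fleet of idle vehicles."""
    return vehicles * kw_per_vehicle / 1e6  # 1 GW = 1e6 kW

for fleet in (10_000_000, 50_000_000, 100_000_000):
    print(f"{fleet:>11,} vehicles ~ {fleet_inference_power_gw(fleet):5.0f} GW")
```

Under this assumption, the estimate only reaches 100 GW at roughly one hundred million vehicles, which matches the upper-bound framing in the article's core viewpoint.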