Core Insights

- The second-generation VLA from XPeng Motors marks a significant departure from the traditional "vision-language-action" framework: it outputs action commands directly from visual signals, without an intermediate language-translation step [1]
- The model is XPeng's first mass-produced physical world model, serving as both an action-generation model and a physical-world understanding model, with applications spanning AI vehicles, humanoid robots, and flying cars [1][2]
- The second-generation VLA scales to billions of parameters, far exceeding the industry norm of tens of millions, and is trained on nearly 100 million video clips, equivalent to roughly 65,000 years of human driving experience [1]

Technical Advancements

- The second-generation VLA delivers breakthroughs in both computing power and model architecture, driving a significant evolution in XPeng's intelligent-driving capabilities [1]
- The newly introduced "Xiaolu NGP" extends the average mileage between takeovers on complex roads by a factor of 13, demonstrating the model's generalized learning and emergent intelligence [1]
- The industry-first "navigation-free automatic assisted driving" feature, Super LCC+ human-machine co-driving, can be activated anywhere in the world without relying on navigation data [1]

Future Plans

- XPeng will launch a pioneer co-creation experience for the second-generation VLA in December 2025, with a full rollout in the first quarter of 2026 alongside the Ultra model [2]
- Volkswagen has been announced as the strategic partner for the launch of the second-generation VLA, and Volkswagen has also selected XPeng's Turing AI chip [2]
Source: "XPeng's Second-Generation VLA Makes Its Major Debut, Bringing a New 'Physical World Model' Paradigm"