Core Insights
- Yuanrong Qixing launched its next-generation driving-assistance platform, DeepRoute IO 2.0, built on its self-developed VLA (Vision-Language-Action) model, which outperforms traditional end-to-end models in complex road conditions [1]
- The platform is designed for "multi-modal + multi-chip + multi-vehicle" adaptability, supports both LiDAR and pure-vision versions, and has secured five confirmed cooperation projects, with mass-produced vehicles set to enter the market soon [1][2]
- The VLA model integrates a vast knowledge base and strengthens generalization, allowing it to adapt better to complex real-world driving environments [1]

Technical Features
- The standout capability is spatial semantic understanding: the system can perceive potential risks in low-visibility environments and proactively make preventive judgments [2]
- Other capabilities include recognition of unstructured obstacles, comprehension of textual road signs, and voice control with memory for a personalized driving experience [2]
- The company has laid a solid foundation for large-scale commercialization, partnering on more than 10 vehicle models and delivering nearly 100,000 mass-produced vehicles equipped with urban navigation assistance systems [2]

Future Plans
- The company aims to expand the application boundaries of the VLA model, accelerating mass-production deployment in the passenger-vehicle market while advancing its Robotaxi business on mass-produced vehicle platforms [3]
- The VLA model is expected to extend to more mobile intelligent agents, gradually evolving from single-point functionality into a general intelligent system [3]
Yuanrong Qixing Releases DeepRoute IO 2.0 Platform and VLA Model, Breaking Through the Limitations of Traditional End-to-End Models
Zheng Quan Shi Bao Wang·2025-08-26 11:22