DeepRoute IO 2.0 Platform
Yuanrong Qixing Launches All-New Assisted Driving Platform
Shenzhen Shangbao · 2025-08-27 07:05
Core Viewpoint
- Yuanrong Qixing launched its next-generation assisted driving platform, DeepRoute IO 2.0, which features a self-developed VLA (Vision-Language-Action) model that significantly improves safety and comfort compared to traditional end-to-end models [1]

Group 1: Technology and Innovation
- The VLA model integrates visual perception, semantic understanding, and action decision-making, making it more adept at handling complex road conditions [1]
- DeepRoute IO 2.0 is designed around a "multi-modal + multi-chip + multi-vehicle" adaptation concept, supporting both LiDAR and pure-vision versions for customized deployment across various mainstream passenger car platforms [1]
- The VLA model addresses the "black box" issue of traditional models by linking and analyzing information to infer causal relationships; it is also natively integrated with a vast knowledge base, enhancing its generalization ability in dynamic real-world environments [1]

Group 2: Commercialization and Partnerships
- Yuanrong Qixing has established a solid foundation for mass production and commercialization, securing targeted collaborations with over 10 vehicle models [1]
Interview with Yuanrong Qixing CEO Zhou Guang: AI Chips Are the Main Cost of the VLA Model; Nearly 100,000 Assisted-Driving Solutions Already Delivered
Tai Mei Ti APP (TMTPost) · 2025-08-26 12:43
Core Viewpoint
- The launch of the DeepRoute IO 2.0 platform by Yuanrong Qixing marks a significant advancement in the field of autonomous driving, utilizing the innovative VLA (Vision-Language-Action) model to enhance safety and comfort in complex driving scenarios [2][6].

Group 1: Technology and Innovation
- The VLA model integrates visual perception, semantic understanding, and action decision-making, representing a breakthrough compared to traditional end-to-end models [2].
- DeepRoute IO 2.0 is designed for "multi-modal + multi-chip + multi-vehicle" adaptability, supporting both LiDAR and pure-vision versions across various mainstream passenger car platforms [2][7].
- The VLA model's architecture allows for better generalization and adaptability to real-world driving conditions, overcoming the limitations of traditional models [7].

Group 2: Commercialization and Market Position
- Yuanrong Qixing has secured over 10 model-specific collaborations and delivered nearly 100,000 vehicles equipped with urban navigation assistance systems, positioning itself in the industry's top tier [3][7].
- The company anticipates that by 2025, more than 200,000 vehicles featuring its combined assisted-driving solutions will enter the consumer market [7].
- The company has completed six rounds of financing, raising over $500 million (approximately 3.57 billion RMB), with significant investments from major players such as Alibaba and Fosun [7].

Group 3: Future Directions and Goals
- The company aims to expand the application of the VLA model beyond automotive to robotics, indicating a vision for general artificial intelligence (AGI) in the physical world [3][4].
- Future developments will focus on enhancing safety in autonomous driving, with a commitment to defensive driving principles [8].
- The company plans to adopt a large-model approach similar to Tesla's for developing L4 and L5 autonomous driving capabilities, emphasizing the need for a shift in the traditional definitions of autonomous driving [9].
Yuanrong Qixing Releases DeepRoute IO 2.0 Platform and VLA Model, Breaking Through the Limits of Traditional End-to-End Models
Core Insights
- Yuanrong Qixing launched its next-generation driving assistance platform, DeepRoute IO 2.0, featuring the self-developed VLA (Vision-Language-Action) model, which excels in complex road conditions compared to traditional end-to-end models [1]
- The platform is designed for "multi-modal + multi-chip + multi-vehicle" adaptability, supporting both LiDAR and pure-vision versions, and has secured five fixed cooperation projects, with mass-production vehicles set to enter the market soon [1][2]
- The VLA model integrates a vast knowledge base and enhances generalization capabilities, allowing it to better adapt to complex real-world driving environments [1]

Technical Features
- The standout feature is spatial semantic understanding, which can perceive potential risks in limited-visibility environments and proactively make preventive judgments [2]
- Other capabilities include recognition of non-structured obstacles, understanding of textual road signs, and memory-based voice control for personalized driving experiences [2]
- The company has established a solid foundation for mass commercialization, achieving partnerships with over 10 vehicle models and delivering nearly 100,000 mass-produced vehicles equipped with urban navigation assistance systems [2]

Future Plans
- The company aims to expand the application boundaries of the VLA model, accelerating mass-production deployment in the passenger vehicle market while advancing its Robotaxi business on mass-produced vehicle platforms [3]
- The VLA model is expected to extend to more mobile intelligent entities, gradually evolving from single-point functionality into a general intelligent system [3]
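The summaries above repeatedly describe the VLA approach as fusing visual perception, semantic (language) understanding, and action decision-making in one model. The toy sketch below illustrates that general fusion pattern only; it is not DeepRoute's actual architecture, and every dimension, weight, and action name here is a hypothetical stand-in for a trained network.

```python
import numpy as np

# Hypothetical toy VLA-style pipeline: project vision and language features
# into a shared embedding, then score a small set of driving actions.
rng = np.random.default_rng(0)

VISION_DIM, LANG_DIM, FUSED_DIM = 8, 4, 6
ACTIONS = ["keep_lane", "slow_down", "change_lane"]  # illustrative action set

# Random stand-ins for learned projection weights.
W_vision = rng.normal(size=(VISION_DIM, FUSED_DIM))
W_lang = rng.normal(size=(LANG_DIM, FUSED_DIM))
W_action = rng.normal(size=(FUSED_DIM, len(ACTIONS)))

def vla_decide(vision_feat, lang_feat):
    """Fuse vision + language features, then pick the highest-scoring action."""
    fused = np.tanh(vision_feat @ W_vision + lang_feat @ W_lang)  # joint embedding
    logits = fused @ W_action                                     # action scores
    return ACTIONS[int(np.argmax(logits))]

# Example call with random feature vectors standing in for encoder outputs.
action = vla_decide(rng.normal(size=VISION_DIM), rng.normal(size=LANG_DIM))
print(action)
```

In a real system the two feature vectors would come from trained image and text encoders, and the action head would output continuous controls or trajectories rather than a discrete label; the sketch only shows where the "vision + language → action" fusion sits in the data flow.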