Yuanrong Qixing CEO Zhou Guang: VLA-Based Intelligent Driving in Its Infancy Beats End-to-End at Its Peak
Jing Ji Guan Cha Wang · 2025-08-31 01:05
Core Insights
- Yuanrong Qixing launched its next-generation driver assistance platform, DeepRoute IO 2.0, which integrates a self-developed Vision-Language-Action (VLA) model combining visual perception, semantic understanding, and action decision-making capabilities [2][3]
- The shift toward VLA models is driven by the limitations of traditional end-to-end systems and the need for stronger semantic understanding in complex driving scenarios [3][4]

Group 1: Technological Advancements
- The VLA model uses reinforcement learning to evolve and to understand the reasoning behind actions, in contrast to the imitation learning of traditional end-to-end architectures [2][3]
- Yuanrong Qixing's CEO, Zhou Guang, emphasizes the urgency of transitioning to a large-model-driven company to avoid being outpaced by competitors [2][3]
- The VLA system aims to teach the AI "defensive driving," enabling it to make cautious decisions in uncertain situations [5][6]

Group 2: Market Dynamics
- Yuanrong Qixing has secured partnerships covering more than 10 vehicle models and has delivered nearly 100,000 vehicles equipped with its city navigation assistance system, indicating significant market penetration [3][4]
- Growing production scale brings new challenges, as any defect is magnified at higher delivery volumes [3][4]

Group 3: Competitive Landscape
- Zhou Guang critiques current mainstream technology routes, particularly end-to-end systems built on the BEV architecture, which struggle with occluded visual information [4][6]
- The industry is seeing a surge in VLA model development, with competitors such as Xiaopeng Motors and Li Auto exploring similar technologies [7][8]

Group 4: Future Prospects
- The VLA model is envisioned to extend beyond automotive applications, potentially benefiting robotics and autonomous systems in varied environments [7][8]
- Zhou Guang rates the current VLA model's performance at 6 out of 10, indicating room for improvement, with significant advances expected once next-generation chips become available [8][9]
A Conversation with Zhou Guang: For Autonomous Driving to Reach AGI, RoadAGI Is Faster Than L5 | GTC 2025
Liang Zi Wei (QbitAI) · 2025-03-21 06:37
Yifan, reporting from Aofei Si. QbitAI | Official account QbitAI

Autonomous driving has a new path to AGI in its vertical domain: not Robotaxi, but RoadAGI.

At NVIDIA GTC 2025, Yuanrong Qixing CEO Zhou Guang was invited to speak, proposing that RoadAGI can bring autonomous driving to large-scale commercial use faster and achieve AGI in vertical road scenarios. The implementation platform for RoadAGI is Yuanrong's newly unveiled AI Spark: without relying on high-definition maps, a single platform empowers smart vehicles, robots, and even electric scooters. In short, anything that can move will gain the awareness to move autonomously.

This is a new route to AGI through autonomous driving, and Yuanrong Qixing and CEO Zhou Guang, standing for AI companies and autonomous driving companies alike, are opening up this second possibility. So what exactly is RoadAGI?

Toward AGI via RoadAGI

Start with a scenario anyone can picture: your next food-delivery order might arrive like this. A cyber "delivery rider," using no high-definition maps at any point, automatically identifies the shop. After picking up the order, it trots to the intersection, recognizes the traffic light on its own, then stops, looks both ways, and crosses. It can also enter the building, pass through the turnstile, and call the elevator by itself; inside the elevator it presses the floor button, and on exiting it delivers straight to your company's front desk. Isn't the whole process just like a human's? You can also have it drop the order into a delivery locker.

This is what Yuanrong Qixing, at NVIDIA GTC 20 ...