Tesla's Latest Technical Share: FSD Core Architecture Revealed
36Kr · 2025-10-22 08:00
Core Insights
- Tesla publicly shared its FSD (Full Self-Driving) core architecture at the ICCV conference, a significant disclosure for its autonomous driving technology [1][4]
- The presentation by Ashok Elluswamy has sparked discussion about Tesla's potential use of VLA (Vision-Language-Action) models, amid the industry's ongoing debate between VLA and world models [1][7]

Technical Developments
- The FSD architecture centers on a large neural network that processes multimodal inputs, including camera video, navigation data, vehicle motion state, and sound, and produces outputs including panoptic segmentation, 3D occupancy networks, and language [6][10]
- The architecture's ability to output language suggests a shift toward a more advanced model capable of understanding and reasoning over long-horizon data [7][10]

Industry Context
- The debate between VLA and world models is prominent: VLA proponents argue that VLA can leverage vast internet data for knowledge accumulation and reasoning, while world-model advocates claim their approach addresses the core challenges of autonomous driving more directly [7][10]
- The industry is moving toward larger model parameter counts, and Tesla's upcoming smart-driving chip is expected to reach 2000 TOPS, a significant increase in computational power and model capacity [10][12]

Recent Updates
- The latest FSD update (V14.1.3) includes safety and personalization enhancements, improving obstacle avoidance and navigation [12]
- Tesla has reintroduced "Mad Max Mode," which allows a more aggressive driving style, showcasing the system's adaptability across driving scenarios [11][14]
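The input/output contract described above (multimodal inputs in, several heads out) can be sketched as a toy interface. All names here (`MultimodalInput`, `fsd_style_forward`) are illustrative stand-ins, not Tesla's actual code; the real system is a large learned network, while this stub only mirrors the shape of the contract:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MultimodalInput:
    """Bundles the input streams named in the talk (field names are illustrative)."""
    camera_video: List[List[float]]   # per-camera frame features
    navigation: List[float]           # route / map guidance
    kinematics: List[float]           # vehicle motion state
    audio: List[float]                # sound (e.g. sirens)

def fsd_style_forward(x: MultimodalInput) -> Dict[str, object]:
    """Toy stand-in for one network with several output heads.

    A real system would run a large neural network here; this stub only
    mirrors the contract: one fused representation feeding
    panoptic-segmentation, 3D-occupancy, and language heads.
    """
    # "Fusion": concatenate every stream into a single feature vector.
    fused = [v for cam in x.camera_video for v in cam]
    fused += x.navigation + x.kinematics + x.audio
    return {
        "panoptic_segmentation": [0] * len(x.camera_video),  # one mask id per camera
        "occupancy_3d": [[0.0] * 4 for _ in range(2)],       # placeholder voxel grid
        "language": "nudging left around a parked truck",    # natural-language rationale
        "fused_dim": len(fused),
    }
```

The notable part of the contract, as the summary stresses, is the language head sitting next to the geometric outputs.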
Stop Reinventing the Wheel! 原力灵机 Open-Sources Dexbotic: A One-Stop VLA Toolbox for Embodied Intelligence
具身智能之心· 2025-10-22 06:02
Core Insights
- The article discusses the rapid development of embodied VLA (Vision-Language-Action) models and the challenges individual developers and small research teams face in creating and maintaining a unified open-source framework for them [4][7][29]

Group 1: VLA Development Challenges
- The current VLA development landscape is fragmented, with teams using different deep learning frameworks and model architectures, making model comparison and performance evaluation inefficient [4][7]
- Existing VLA models often fail to leverage the capabilities of the latest LLMs (Large Language Models), limiting the potential of the "embodied brain" [4][7]
- There is a pressing need for a mature, unified open-source VLA framework, which led to the creation of Dexbotic [4][7]

Group 2: Dexbotic Framework Features
- Dexbotic integrates mainstream pre-trained models for manipulation and navigation policies, supports both cloud and local training, and is designed to be user-friendly and ready to use [2][4]
- The framework introduces the Dexdata format to unify data from different sources, significantly reducing storage costs and simplifying data preparation for developers [9][10]
- Dexbotic's architecture consists of three layers (data, model, and experiment), improving the efficiency of algorithm comparison and model iteration by over 50% [11][24]

Group 3: Performance Improvements
- Dexbotic's pre-trained models show significant gains across tasks, with DB-CogACT achieving an 18.2% increase in average success rate over the original CogACT model [21][22]
- The framework also performs strongly on real-world tasks, with a UR5e arm achieving a 100% success rate on specific tasks [29]

Group 4: Open Source and Community Engagement
- Dexbotic aims to foster collaboration and innovation in embodied intelligence through an open-source platform where developers can contribute and share their work [30][32]
- The initiative encourages participation from both academic and industrial partners to advance embodied intelligence technologies [30][32]
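A unified per-step record of the kind the Dexdata format is described as providing might look like the sketch below. The field names, the JSON-lines layout, and the `to_unified_record` helper are assumptions for illustration, not the actual Dexdata schema:

```python
import json

def to_unified_record(source: str, obs: dict, action: list, instruction: str) -> str:
    """Normalize one timestep from any source dataset into a single JSON line.

    Storing one compact JSON object per step, with images referenced by path
    rather than embedded, is one way a unified format can cut storage and
    data-preparation cost across heterogeneous robot datasets.
    """
    record = {
        "source": source,                              # which dataset/robot this came from
        "instruction": instruction,                    # language goal for the episode
        "observation": {
            "image_path": obs.get("image_path"),       # frames stay on disk
            "proprio": obs.get("proprio", []),         # joint states, gripper, etc.
        },
        "action": action,                              # unified action vector
    }
    return json.dumps(record, separators=(",", ":"))
```

With every source dataset mapped through one such converter, downstream training code only ever has to parse a single schema.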
The Autonomous Driving Industry's Infrastructure Is Complete; Graduating Students Should Explore It!
自动驾驶之心· 2025-10-17 00:03
Core Viewpoint
- The autonomous driving industry is maturing in both infrastructure and investment, making it a suitable field for students and professionals to explore and develop their skills in [1][16]

Group 1: Industry Insights
- The technology landscape in autonomous driving is consolidating, but many product forms remain to be refined, indicating ongoing opportunities for innovation [1]
- The industry is debating the technical routes of world models versus VLA, suggesting that while the theory may be converging, practical implementation remains a challenge [1]
- The focus on L2 functionality and regulatory progress toward L3 indicates a gradual evolution to higher levels of automation, with L4 still facing unresolved issues [1]

Group 2: Community and Learning Resources
- A community called "Autonomous Driving Heart Knowledge Sphere" has been established, integrating videos, articles, learning paths, and job exchanges to foster collaboration and knowledge sharing [4][5]
- The community has grown to over 4,000 members, with a goal of nearly 10,000 within the next two years, serving both beginners and advanced learners [5]
- It offers practical guidance on topics including entry points for end-to-end learning, multimodal large models, and data annotation practice [7][8]

Group 3: Career Opportunities
- The community actively shares job openings and connects members with companies in the autonomous driving sector, enhancing employment opportunities [12][21]
- There is a focus on comprehensive learning paths for newcomers, ensuring access to a well-rounded education in autonomous driving technologies [17][38]

Group 4: Technical Development
- The community has compiled over 40 technical routes and resources covering perception, simulation, planning, and control [17][34]
- Regular discussions and live sessions with industry experts explore trends, technical directions, and production challenges in autonomous driving [8][90]
How Are Industry and Academia Approaching End-to-End and VLA?
自动驾驶之心· 2025-10-17 00:03
Core Insights
- The article discusses the evolution of end-to-end algorithms in autonomous driving, tracing the transition from modular production algorithms to end-to-end systems and now to VLA (Vision-Language-Action) models [1][3]
- It emphasizes the rich technology stack behind end-to-end algorithms, including BEV perception, vision-language models (VLM), diffusion models, reinforcement learning, and world models [3]

Summary by Sections

End-to-End Algorithms
- End-to-end algorithms fall into two main paradigms, single-stage and two-stage, with UniAD representing the single-stage approach [1]
- The single-stage paradigm branches into various subfields, particularly VLA-based methods, which have seen a surge in publications and industrial adoption in recent years [1]

Courses Offered
- The article promotes two courses, "End-to-End and VLA Autonomous Driving Small Class" and "Practical Course on Autonomous Driving VLA and Large Models," aimed at helping individuals enter the field quickly and efficiently [3]
- The "Practical Course" focuses on VLA, covering topics from VLM as an autonomous-driving interpreter to modular and integrated VLA, along with detailed theoretical foundations [3][12]

Instructor Team
- The instructor team includes experts from both academia and industry, with backgrounds in multimodal perception, autonomous driving VLA, and large-model frameworks [8][11][14]
- Notable instructors have published numerous papers in top-tier conferences and have extensive research and practical experience in autonomous driving and large models [8][11][14]

Target Audience
- The courses are designed for individuals with a foundational understanding of autonomous driving who are familiar with its basic modules and have knowledge of transformer models, reinforcement learning, and BEV perception [15][17]
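The single-stage/two-stage distinction recurring in these summaries can be sketched with stub functions. `perceive`, `plan_from_perception`, `one_stage`, and `two_stage` are illustrative names, not UniAD's API; the point is only that two-stage exposes an explicit perception interface between modules, while one-stage learns the sensor-to-trajectory mapping as a whole:

```python
from typing import List

def perceive(sensors: List[float]) -> dict:
    """Two-stage, step 1: sensors -> intermediate scene description."""
    return {"obstacle_ahead": max(sensors) > 0.5}

def plan_from_perception(scene: dict) -> List[float]:
    """Two-stage, step 2: scene description -> trajectory (here, a 2-D offset)."""
    return [0.0, -1.0] if scene["obstacle_ahead"] else [0.0, 0.0]

def two_stage(sensors: List[float]) -> List[float]:
    scene = perceive(sensors)          # stage boundary: an explicit, inspectable interface
    return plan_from_perception(scene)

def one_stage(sensors: List[float]) -> List[float]:
    """One-stage in spirit: sensors -> trajectory directly.

    Here we just compose the stubs; a real one-stage model learns the whole
    mapping end to end without exposing the intermediate representation.
    """
    return plan_from_perception(perceive(sensors))
```

The trade-off the debate turns on is visible even in the toy: the two-stage interface is debuggable but lossy, while the one-stage mapping is jointly optimized but opaque.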
The Divide Between Academia and Mass Production, an Ongoing Contest of Technical Routes! A Decade of Intelligent Driving from the Perspective of Its Technical Leaders....
自动驾驶之心· 2025-10-14 23:33
Core Insights
- The article surveys the significant technological advances in autonomous driving over the past decade, highlighting key innovations such as Vision Transformers, BEV perception, multi-sensor fusion, end-to-end autonomous driving, large models, VLA, and world models [3][4]

Group 1: Technological Milestones
- The past ten years have seen remarkable technical development in autonomous driving, with various solutions emerging through the collision and fusion of different technologies [3]
- A roundtable discussion is planned to reflect on the industry's technological milestones, focusing on the debate between world models and VLA [4][13]

Group 2: Industry Perspectives
- The roundtable will feature top industry leaders discussing the evolution of autonomous driving technology and offering career advice for newcomers to the field [4][5]
- The discussion will also cover academic and industrial perspectives on L3 autonomous driving, emphasizing the convergence of research directions and practical engineering implementation [13]

Group 3: Future Directions
- The article raises questions about the future direction of autonomous driving technology, particularly whether end-to-end systems will remain a foundational element of intelligent driving [13]
- It highlights the ongoing contest between academic research and engineering practice, suggesting that new entrants need to adapt and innovate [13]
Opening Several Autonomous Driving Technical Exchange Groups (World Models / End-to-End / VLA)
自动驾驶之心· 2025-10-13 23:33
Group 1
- Technical exchange groups focused on autonomous driving have been established, covering world models, end-to-end systems, and VLA [1]
- Interested readers are invited to join the discussion by adding the designated assistant on WeChat and following the instructions for group entry [1]
Led by Industry Veterans! Master End-to-End Autonomous Driving in Three Months
自动驾驶之心· 2025-10-12 23:33
Core Viewpoint
- 2023 marked the start of end-to-end mass production, and 2024 is expected to be a landmark year for end-to-end deployment in the automotive industry, as leading new-force automakers and manufacturers have already reached end-to-end production [1][3]

Group 1: End-to-End Production Development
- End-to-end production is developing rapidly in both one-stage and two-stage paradigms, with one-stage methods like UniAD being prominent [1][3]
- Various one-stage methods have emerged, including perception-based, world-model-based, diffusion-model-based, and VLA-based approaches, indicating a strong push from both autonomous driving companies and vehicle manufacturers toward in-house development and mass production of end-to-end autonomous driving [3][5]

Group 2: Course Overview
- A course titled "End-to-End and VLA Autonomous Driving" has been launched, covering cutting-edge one-stage and two-stage end-to-end algorithms and aiming to bridge academic and industrial advances [5][15]
- The course is structured into chapters covering the history and evolution of end-to-end algorithms, background knowledge on VLA, and detailed treatments of two-stage and one-stage end-to-end methods [9][10][12]

Group 3: Key Technologies and Techniques
- The course emphasizes key technologies such as BEV perception, vision-language models (VLM), diffusion models, and reinforcement learning, which are essential for mastering the latest advances in autonomous driving [5][11]
- The second chapter is highlighted as crucial for understanding the technical keywords most frequently asked about in job interviews over the next two years [10]

Group 4: Practical Applications and Outcomes
- The course includes practical assignments, such as RLHF fine-tuning, allowing participants to apply their knowledge in realistic scenarios and learn to build and experiment with reinforcement learning modules [13][19]
- On completion, participants are expected to reach a level equivalent to one year of experience as an end-to-end autonomous driving algorithm engineer, with a comprehensive understanding of the main methodologies and their applications [19]
How Are Academia and Industry Researching End-to-End and VLA? Master End-to-End Autonomous Driving in Three Months!
自动驾驶之心· 2025-10-09 04:00
Core Viewpoint
- The article discusses the evolution and current state of end-to-end algorithms in autonomous driving, highlighting the emergence of various subfields, particularly those based on VLA (Vision-Language-Action) models, and the growing interest in these technologies in both academia and industry [1][3]

Summary by Sections

End-to-End Algorithms
- End-to-end algorithms are central to the current mass production of autonomous driving technology and involve a rich technology stack. There are two main paradigms: single-stage and two-stage. The single-stage approach, exemplified by UniAD, models vehicle trajectories directly from sensor inputs, while the two-stage approach outputs trajectories from perception results [1]

VLA and Related Technologies
- Development has progressed from modular production algorithms to end-to-end systems and now to VLA. Key technologies include BEV perception, vision-language models (VLM), diffusion models, reinforcement learning, and world models; understanding them is essential to grasping the cutting-edge directions in both academia and industry [3]

Courses Offered
- The article promotes two courses designed to help individuals learn end-to-end and VLA approaches quickly and efficiently, aimed at those new to large models and VLA and covering foundational theory and practical application [3][10]

Course Content
- The "VLA and Large Model Practical Course" focuses on VLA, starting from VLM as an interpreter for autonomous driving, and covers modular and integrated VLA as well as mainstream reasoning-enhanced VLA. It includes detailed theoretical foundations and practical assignments to build VLA models and datasets from scratch [3][10]

Instructor Team
- The courses are led by experienced instructors from both academia and industry, with backgrounds in multimodal perception, autonomous driving VLA, and large-model frameworks. They have published numerous papers in top conferences and have substantial practical experience in the field [7][9][10]

Target Audience
- The courses target individuals with a foundational understanding of autonomous driving who are familiar with its basic modules and have knowledge of transformer models, reinforcement learning, and BEV perception. A background in probability theory, linear algebra, and programming in Python and PyTorch is also recommended [13]
From Robotic Arms to Humanoids: How Can Cross-Embodiment VLA Break Through?
具身智能之心· 2025-10-09 00:04
Core Insights
- The article discusses two significant advances in embodied intelligence and VLA (Vision-Language-Action) models, highlighting their potential to overcome existing challenges in the field [3][7]

Group 1: VLA-Adapter
- VLA-Adapter aims to improve the direct mapping from VLM (Vision-Language Model) features to the action space without relying heavily on robotic data. The research team found that increasing parameter count and introducing pre-trained robotic data did not significantly improve model performance on general benchmarks [3]
- The team's new mapping scheme lets the model achieve superior performance even at a 0.5-billion-parameter scale, reducing training cost and lowering the entry barrier for VLA models [3]

Group 2: TrajBooster
- TrajBooster is the first whole-body humanoid manipulation VLA solution, addressing the data scarcity that hampers training VLA models for bipedal humanoid tasks; the scarcity stems from the high cost of teleoperation data and the difficulty of reusing existing heterogeneous robot data [7]
- By centering on trajectories, TrajBooster efficiently exploits cross-embodiment data, achieving whole-body operation on bipedal robots with just 10 minutes of real-robot teleoperation data for fine-tuning [7]

Group 3: Contributors
- Wang Yihao, a fourth-year PhD student at Beijing University of Posts and Telecommunications, works on the VLA-Adapter project and has contributed significantly to embodied intelligence and VLA models [13]
- Liu Jiacheng, a second-year PhD student at Zhejiang University and Westlake University, leads the TrajBooster project, the only fully open-source work covering humanoid data collection, cross-embodiment data augmentation, VLA model training, and hardware deployment [13]
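A generic adapter of the kind VLA-Adapter motivates, i.e. a small learned mapping from frozen VLM features to an action vector, can be sketched as a tiny MLP. This is an illustrative sketch with random, untrained weights and assumed dimensions (`make_adapter` is a hypothetical helper), not the actual VLA-Adapter architecture:

```python
import math
import random

random.seed(0)  # deterministic illustration

def make_adapter(vlm_dim: int, action_dim: int, hidden: int = 8):
    """Build a tiny 2-layer MLP mapping frozen VLM features to actions.

    The large VLM backbone stays fixed; only this lightweight head would be
    trained, which is what keeps the parameter cost and entry barrier low.
    """
    w1 = [[random.gauss(0, 1 / math.sqrt(vlm_dim)) for _ in range(vlm_dim)]
          for _ in range(hidden)]
    w2 = [[random.gauss(0, 1 / math.sqrt(hidden)) for _ in range(hidden)]
          for _ in range(action_dim)]

    def forward(features):
        # hidden layer: tanh of a dot product per hidden unit
        h = [math.tanh(sum(w * f for w, f in zip(row, features))) for row in w1]
        # output layer: linear map to the action vector
        return [sum(w * v for w, v in zip(row, h)) for row in w2]

    return forward
```

For example, `make_adapter(vlm_dim=1024, action_dim=7)` would map a 1024-dimensional VLM feature to a 7-DoF arm action; the design question the work probes is how much capacity such a head actually needs.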
Autonomous Driving Ask Me Anything Q&A Roundup! The VLA vs. WA Route Debate?
自动驾驶之心· 2025-10-08 23:33
Core Insights
- The article discusses the current state and future prospects of autonomous driving technology, emphasizing the role of AI and various modeling approaches in reaching higher levels of automation [4][6][9]

Group 1: Industry Development
- The autonomous driving industry is evolving rapidly, with significant advances expected in the next few years, particularly in AI and related fields [4]
- Companies like Waymo and Tesla are leading in Level 4 (L4) automation, while Level 5 (L5) may take at least five more years to realize [4][6]
- Integrating VLA (Vision-Language-Action) models is seen as key to enhancing decision-making in autonomous vehicles, addressing long-tail problems that pure end-to-end models may struggle with [6][9]

Group 2: Technical Approaches
- The article outlines different modeling approaches, including end-to-end models and the emerging VLA paradigm, which combines language processing with visual data to improve reasoning and decision-making [5][9]
- The effectiveness of current autonomous driving systems remains limited, with many challenges left in fully complying with traffic regulations and safety standards [10][14]
- Data and cloud computing capability are highlighted as crucial for narrowing the performance gap between domestic companies and leaders like Tesla [14][15]

Group 3: Talent and Education
- There is a recognized talent gap in the autonomous driving sector, with a strong recommendation that students pursue AI and computer science to prepare for future opportunities [4][6]
- Practical experience at larger autonomous driving companies may offer better training and growth than smaller robotics firms [16][20]