VLA

Huawei Executive: There Is No Such Thing as Free in This World
Di Yi Cai Jing Zi Xun· 2025-08-27 08:51
Core Insights
- Huawei's automotive business is rapidly expanding its assisted-driving solutions and collaborating with car manufacturers including Baojun, Leapmotor, and Hongqi, indicating a growing presence in the industry [2][3]
- The company emphasizes a diverse cooperation model with car manufacturers, ranging from component supply to full-stack solutions, supporting them from product definition through marketing [2][9]
- Huawei's approach to assisted-driving technology diverges from the prevalent Vision-Language-Action (VLA) model, focusing instead on a World and Action (WA) model that uses direct sensory inputs for vehicle control [3][10]

Cooperation Models
- Huawei's cooperation with car manufacturers spans multiple models: component supply, single intelligence (either smart cockpit or assisted driving), dual intelligence (both), and full-stack solutions [2][9]
- The collaboration is designed to deepen over time, with Huawei supporting car manufacturers across the entire product lifecycle, from design to marketing [2][9]

Technology Perspective
- Huawei does not endorse the VLA approach, believing it is not the ultimate solution for autonomous driving; it instead prioritizes the WA model, which aims for direct control from sensory inputs [3][10]
- The company acknowledges the rapid development of assisted-driving technology and anticipates a consolidation of market players, driven by data, computing power, and algorithms [11]

Commercial Strategy
- Huawei has no specific short-term profitability target for its automotive business, focusing instead on long-term, user-centered investment and sustainable growth [8]
- The company argues there is no such thing as a free service in the automotive sector: costs are simply hidden in vehicle pricing or future service fees [13]
Humanoid Robots Are Missing a Killer Consensus
创业邦· 2025-08-26 03:37
Core Viewpoint
- The article discusses the contrasting approaches of two leading companies in the humanoid-robotics industry, Starry Era and Yuzhu Technology, highlighting their differing philosophies on how to advance robot capabilities and their respective paths to commercialization [8][10][49]

Group 1: Company Strategies
- Starry Era pursues "soft and hard integration," emphasizing the combination of hardware and software into a cohesive system for humanoid robots [30][32]
- Yuzhu Technology adopts a "hardware-first" strategy, prioritizing the development of hardware capabilities before integrating software solutions [31][32]
- The two companies hold distinct views on the viability of the VLA (Vision-Language-Action) paradigm: Starry Era sees it as a broad framework for integrating various modalities, while Yuzhu is skeptical of its practical application [12][16]

Group 2: Technical Development
- Starry Era has developed an end-to-end VLA model, ERA-42, which integrates reinforcement learning and world models, reflecting its commitment to advancing robot intelligence [15][39]
- Yuzhu Technology is concentrating on building reusable data and model resources, focusing on the engineering of distributed computing to enhance its robots' capabilities [22][27]
- Both companies see a closed loop of perception, decision-making, and execution as necessary for effective humanoid-robot performance in complex environments [34][54]

Group 3: Market Positioning
- Starry Era is deploying its robots in B-end industrial scenarios, achieving over 70% efficiency in real-world applications, with plans to reach around 90% next year [23][36]
- Yuzhu Technology is focusing on entertainment and demonstration scenarios, acknowledging that its robots are not yet ready for complex tasks, and is entering the market gradually [26][27]
- Both companies anticipate a significant shift in the humanoid-robotics market, predicting a "ChatGPT moment" within the next few years, when robots will understand and execute complex instructions in unfamiliar environments [50][56]

Group 4: Future Outlook
- The industry is expected to see parallel advances along several technical paths, including end-to-end VLA and world models, with leading companies validating commercial viability in specific industrial applications [56]
- In the mid term, a unified technical standard may emerge, expanding applications from industry to logistics, healthcare, and retail [56]
- In the long term, humanoid robots may become household companions, which will require advances in safety, reliability, and natural interaction [56]
Don't Know Where to Start on a VLA Paper? Some Students Have Already Published at CCF-A Venues...
自动驾驶之心· 2025-08-22 12:00
Core Insights
- The article discusses the advancements of the Li Auto VLA driver model, highlighting its improved capabilities in semantic understanding, reasoning, and trajectory planning, which are crucial for autonomous driving [1][3][5]

Group 1: VLA Model Capabilities
- The VLA model demonstrates stronger semantic understanding through multimodal input, improved reasoning via chains of thought, and a closer approximation to human driving intuition through trajectory planning [1]
- Four core abilities of the VLA model are showcased: spatial understanding, reasoning, communication and memory, and behavioral ability [1][3]

Group 2: Research and Development Trends
- The VLA model evolved from VLM+E2E, integrating cutting-edge technologies such as end-to-end learning, trajectory prediction, visual language models, and reinforcement learning [5]
- While industry continues to optimize traditional perception and planning tasks, academia is increasingly shifting toward large models and VLA, leaving many subfields open for exploration [5]

Group 3: VLA Research Guidance Program
- A VLA research-paper guidance program has been launched to positive feedback, aiming to help participants systematically grasp key theoretical knowledge and develop their own research ideas [6]
- The program runs a structured 14-week curriculum, from traditional end-to-end autonomous driving through research-paper writing methodology [9][11][30]

Group 4: Course Structure and Requirements
- Each session is capped at 8 participants and targets individuals at various academic levels with a background in VLA and autonomous driving [12][15]
- Participants are expected to have a foundation in deep learning, Python programming, and PyTorch, with specific hardware recommended for optimal performance [21][22]

Group 5: Expected Outcomes
- Participants will study classic and cutting-edge research papers, build coding skills, and learn methodologies for writing and submitting research papers, culminating in a draft paper [20][34]
- The program aims to deepen participants' understanding of algorithms and their trade-offs and to stimulate research ideas through structured guidance [20][34]
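The summary above describes the VLA model as mapping multimodal input to planned trajectories. As a toy illustration of that final step (not Li Auto's actual architecture; the feature size, waypoint count, and the single linear layer are all assumptions for the sketch), a trajectory head can be reduced to a projection from a fused vision-language feature to K future (x, y) waypoints:

```python
import numpy as np

# Toy sketch of a VLA-style trajectory head (illustrative only; not
# Li Auto's actual model). A fused vision+language feature vector is
# mapped to K future (x, y) waypoints by a single linear layer.
rng = np.random.default_rng(0)

FEAT_DIM = 16   # size of the fused multimodal feature (assumed)
K = 5           # number of future waypoints to predict (assumed)

W = rng.normal(scale=0.1, size=(FEAT_DIM, K * 2))  # head weights
b = np.zeros(K * 2)                                 # head bias

def predict_trajectory(fused_feat: np.ndarray) -> np.ndarray:
    """Map one fused feature vector to K (x, y) waypoints."""
    out = fused_feat @ W + b
    return out.reshape(K, 2)

fused = rng.normal(size=FEAT_DIM)  # stand-in for ViT+LLM features
traj = predict_trajectory(fused)
print(traj.shape)  # (5, 2)
```

In a real system the fused feature would come from trained encoders and the head would be trained against expert trajectories; the sketch only shows the output structure.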
Traditional Perception Falls Out of Favor as VLA Becomes the Rising Star...
自动驾驶之心· 2025-08-20 09:15
Core Viewpoint
- The article discusses advances in Li Auto's VLA (Vision-Language-Action) driver model, highlighting its four core capabilities: spatial understanding, reasoning, communication and memory, and behavioral capability. It emphasizes VLA's significance in autonomous driving, indicating a shift in focus from traditional perception and planning tasks toward large models and VLA technologies [2][4]

Summary by Sections

VLA Model Capabilities
- The VLA model integrates dynamic targets, static elements, navigation maps, and spatial understanding, showing more human-like reasoning. This positions VLA as a leading focus in both academia and industry for autonomous driving [2]

Shift in Research Focus
- Traditional perception and planning tasks are fading from top conferences as academia shifts toward large models and VLA. Industry, however, continues to optimize traditional methods, so opportunities remain in both areas [4]

Educational Program
- An educational program is introduced to help students systematically grasp key VLA theory, strengthen practical coding skills, and develop their own research ideas. It comprises a structured 12-week online group research course, 2 weeks of paper guidance, and a 10-week maintenance period [5][34]

Course Structure
- The course spans 14 weeks, from introductory lessons to advanced VLA models and paper-writing methodology. Each week covers a different aspect of VLA and autonomous driving, culminating in a final project report and submission guidance [8][10][35]

Target Audience
- The program targets master's and doctoral students in VLA and autonomous driving, applicants strengthening their profiles for study abroad, and professionals in AI and autonomous driving seeking deeper algorithmic knowledge [14][24]

Course Requirements
- Participants should have a foundation in deep learning, basic Python programming skills, and familiarity with PyTorch. Access to high-performance computing resources is recommended [20][21]

Course Highlights
- The program features a "2+1" teaching model with experienced instructors, comprehensive support throughout, an emphasis on academic integrity, and a structured evaluation system [22][23]
The Starting Point of End-to-End VLA: A Chat About Large Language Models and CLIP
自动驾驶之心· 2025-08-19 07:20
Core Viewpoint
- The article discusses the development and significance of end-to-end (E2E) algorithms in autonomous driving, emphasizing the integration of large language models (LLMs), diffusion models, and reinforcement learning (RL) to enhance the capabilities of autonomous systems [21][31]

Summary by Sections

Section 1: Overview of End-to-End Autonomous Driving
- The first chapter gives a comprehensive overview of the evolution of end-to-end algorithms, explaining the transition from modular approaches to end-to-end solutions and the advantages and challenges of each paradigm [40]

Section 2: Background Knowledge
- The second chapter covers the technical stack of end-to-end systems, detailing LLMs, diffusion models, and reinforcement learning, which are crucial for understanding the field's future job market [41][42]

Section 3: Two-Stage End-to-End Systems
- The third chapter examines two-stage end-to-end systems: their emergence, advantages, and disadvantages, along with notable works such as PLUTO and CarPlanner [42][43]

Section 4: One-Stage End-to-End and VLA
- The fourth chapter highlights one-stage end-to-end systems, covering subfields from perception-based methods to the latest advances in VLA (Vision-Language-Action), which are pivotal to the ultimate goals of autonomous driving [44][50]

Section 5: Practical Application and RLHF Fine-Tuning
- The fifth chapter centers on a major project on RLHF (Reinforcement Learning from Human Feedback) fine-tuning, offering practical experience building pre-training and reinforcement-learning modules applicable to VLA-related algorithms [52]

Course Structure and Learning Outcomes
- The course aims to give participants a solid grounding in end-to-end autonomous-driving technologies, covering essential frameworks and methodologies and preparing them for industry roles [56][57]
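The article's title names CLIP as one starting point for end-to-end VLA. The core of CLIP-style pretraining is a shared embedding space in which images and captions are matched by cosine similarity. A minimal sketch of that matching step (illustrative only: real CLIP uses learned transformer encoders, while the "encoders" here are fixed random projections):

```python
import numpy as np

# Minimal sketch of CLIP-style image-text matching (illustrative; real
# CLIP uses trained encoders). The point is the shared embedding space
# and the cosine-similarity matching step, not the encoders themselves.
rng = np.random.default_rng(0)

D = 8  # shared embedding dimension (assumed)

def normalize(x: np.ndarray) -> np.ndarray:
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

image_embs = normalize(rng.normal(size=(3, D)))  # 3 stand-in "images"
text_embs = normalize(rng.normal(size=(3, D)))   # 3 stand-in "captions"

# Cosine similarity between every image and every caption.
sims = image_embs @ text_embs.T

# For each image, pick the best-matching caption.
best = sims.argmax(axis=1)
print(sims.shape, best.shape)  # (3, 3) (3,)
```

During training, CLIP pushes the diagonal of this similarity matrix up and the off-diagonal entries down with a contrastive loss; at inference the argmax alone gives zero-shot matching.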
Reinforcement Learning, VLA, Flow Matching, and Robot Control Algorithms: Method Paradigms and Application Scenarios
具身智能之心· 2025-08-19 01:54
Core Viewpoint
- The article surveys recent advances in reinforcement learning (RL) and its applications in robotics, focusing on VLA (Vision-Language-Action) models and diffusion policies and their potential to handle complex tasks that traditional RL struggles with [2][4][35]

Method Paradigms
- Traditional RL and imitation learning combined with Sim2Real techniques remain foundational approaches in robotics [3]
- VLA models differ fundamentally from traditional RL by using training-data distributions to describe task processes and goals, enabling the execution of more complex tasks [4][35]
- Diffusion Policy uses diffusion models to generate continuous action sequences, demonstrating stronger performance on complex tasks than traditional RL methods [4][5]

Application Scenarios
- Applications fall into two main types: basic motion control for humanoid and quadruped robots, and complex or long-horizon manipulation tasks [22][23]
- Basic motion control relies primarily on RL and Sim2Real, and current implementations still fall short of the fluid motion of humans or animals [22]
- For complex tasks, architectures typically pair a pre-trained Vision Transformer (ViT) encoder with a large language model (LLM), using diffusion or flow matching for action output [23][25]

Challenges and Future Directions
- Key challenges include the need for better simulation environments, effective domain randomization, and the integration of external goal conditions [35]
- Human intention matters for task definition, and current models struggle to learn complex tasks without extensive human demonstration data [35][40]
- Future advances may involve multi-modal input prediction of task goals and the integration of brain-machine interfaces to enhance human-robot interaction [35]
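The diffusion-policy idea described above can be reduced to its sampling loop: an action sequence is produced by iteratively denoising from Gaussian noise. The sketch below is a toy (the horizon, action dimension, and hand-written "denoiser" are assumptions; a trained policy predicts the noise with a network conditioned on observations):

```python
import numpy as np

# Toy sketch of diffusion-policy sampling (illustrative only). A real
# policy uses a trained noise-prediction network conditioned on robot
# observations; here the "denoiser" just pulls the sample toward a
# fixed target sequence so the loop structure is visible.
rng = np.random.default_rng(0)

HORIZON, ACT_DIM = 8, 2                # 8 steps of 2-D actions (assumed)
target = np.zeros((HORIZON, ACT_DIM))  # stand-in for the "true" actions
STEPS = 100                            # number of denoising iterations

actions = rng.normal(size=(HORIZON, ACT_DIM))  # start from pure noise
for _ in range(STEPS):
    # Stand-in denoiser: estimated noise is the offset from the target.
    eps_hat = actions - target
    actions = actions - 0.1 * eps_hat  # small denoising step

# After many steps the sequence has converged near the target.
print(np.abs(actions - target).max() < 1e-3)  # True
```

The key property the sketch preserves is that the output is a whole continuous action sequence refined jointly, rather than one action chosen per step as in a standard RL policy.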
An Autonomous Driving Autumn-Recruitment Group Has Been Established!
自动驾驶之心· 2025-08-18 23:32
Core Viewpoint
- The article emphasizes the convergence of autonomous-driving technology: a shift from many diverse approaches to unified models, which raises the technical barriers in the industry [1]

Group 1
- Directions that previously each required algorithm engineers are consolidating into unified models such as "one model," VLM, and VLA [1]
- The article encourages building a large community to support people in the industry, noting the limits of individual effort [1]
- A new job- and industry-focused community is being launched to host discussions of industry trends, company developments, product research, and job opportunities [1]
VLA Is Already in Production Cars, and You Still Don't Know Your Research Direction???
自动驾驶之心· 2025-08-16 16:04
Core Viewpoint
- The article discusses the advancements of the Li Auto VLA driver model, highlighting its enhanced capabilities in semantic understanding, reasoning, and trajectory planning, which are crucial for autonomous driving [1][3]

Summary by Sections

VLA Model Capabilities
- The VLA model has improved in three main areas: better semantic understanding through multimodal input, stronger reasoning via chains of thought, and closer alignment with human driving intuition through trajectory planning [1]
- Four core capabilities are showcased: spatial understanding, reasoning, communication and memory, and behavioral capability [1][3]

Development and Research Trends
- The VLA model evolved from VLM+E2E, incorporating cutting-edge technologies such as end-to-end learning, trajectory prediction, visual language models, and reinforcement learning [5]
- Industry continues to optimize traditional perception and planning, while academia shifts toward large models and VLA, leaving many subfields open for research [5]

VLA Research Guidance Program
- A VLA research-paper guidance program has been launched to positive feedback, with many students eager for a second session. It aims to help participants systematically grasp key theoretical knowledge and develop their own research ideas [6]
- The program runs a structured 14-week curriculum, from traditional end-to-end autonomous driving through research-paper writing methodology [9][11]

Enrollment and Course Structure
- Each session is limited to 6-8 participants and targets students at various academic levels interested in VLA and autonomous driving [12]
- Participants will study classic and cutting-edge papers, coding implementations, and methods for selecting research topics and writing papers [13][14]

Course Highlights
- The course emphasizes a comprehensive learning experience with a "2+1" teaching model: main instructors plus experienced research assistants supporting students throughout the program [22]
- Students receive guidance on coding, research ideas, and writing methodology, culminating in a research-paper draft [31][32]

Required Skills and Resources
- Participants are expected to have a foundation in deep learning, basic Python programming skills, and familiarity with PyTorch [19]
- High-performance computing resources, ideally multiple GPUs, are encouraged to facilitate research and experimentation [19]

Conclusion
- The VLA model represents a significant advance in autonomous-driving technology, with ongoing research and educational initiatives aimed at fostering innovation in the field [1][5][31]
The Second Session of VLA and Autonomous Driving Research-Paper Mentoring Is Here
自动驾驶之心· 2025-08-16 12:00
Core Insights
- The article discusses recent advancements in the Li Auto VLA driver model, highlighting its improved capabilities in semantic understanding, reasoning, and trajectory planning, which are crucial for autonomous driving [1][3]

Group 1: VLA Model Capabilities
- The VLA model's enhancements center on four core abilities: spatial understanding, reasoning, communication and memory, and behavioral capability [1]
- The reasoning and communication abilities derive from language models, with memory implemented via retrieval-augmented generation (RAG) [3]

Group 2: Research and Development Trends
- The VLA model evolved from VLM+E2E, incorporating cutting-edge technologies such as end-to-end learning, trajectory prediction, visual language models, and reinforcement learning [5]
- Industry continues to optimize traditional perception and planning tasks, while academia shifts toward large models and VLA, leaving many subfields open for research [5]

Group 3: VLA Research Guidance Program
- A VLA research-paper guidance program has been launched to help participants systematically grasp key theoretical knowledge and develop their own research ideas [6]
- The program comprises a structured 12-week online group research course, 2 weeks of paper guidance, and a 10-week maintenance period for paper development [14][34]

Group 4: Course Structure and Content
- Over 14 weeks the course covers traditional end-to-end autonomous driving, VLA end-to-end models, and research-paper writing methodology [9][11][35]
- Participants will study classic and cutting-edge papers, build coding skills, and learn how to write and submit research papers [20][34]

Group 5: Enrollment and Requirements
- Each session is limited to 6-8 participants and targets individuals with a deep-learning background and basic knowledge of autonomous-driving algorithms [12][15]
- Participants should have a foundational grasp of Python and PyTorch; access to high-performance computing resources is recommended [21]
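The summary above attributes the VLA model's memory to RAG. Stripped to its essentials, RAG stores past facts as vectors and retrieves the closest match to a query by similarity. A minimal sketch (illustrative only: the memory entries are invented examples, embeddings here are bag-of-words vectors, and production systems use learned embedders and vector databases):

```python
import numpy as np

# Minimal sketch of a RAG-style memory lookup (illustrative; not the
# actual Li Auto implementation). "Embeddings" are bag-of-words
# vectors; retrieval is cosine similarity over stored memory entries.
memory = [  # invented example entries
    "driver prefers gentle braking",
    "school zone on elm street weekdays",
    "parking garage entrance on the left",
]

vocab = sorted({w for m in memory for w in m.split()})

def embed(text: str) -> np.ndarray:
    """Bag-of-words embedding, L2-normalized when nonzero."""
    vec = np.array([text.split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(vec)
    return vec / n if n else vec

mem_vecs = np.stack([embed(m) for m in memory])

def retrieve(query: str) -> str:
    """Return the stored memory most similar to the query."""
    sims = mem_vecs @ embed(query)
    return memory[int(sims.argmax())]

print(retrieve("how hard should braking be"))  # driver prefers gentle braking
```

In a full RAG pipeline the retrieved entry would then be appended to the language model's prompt so the model can condition its reasoning on it.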
Paper-Mentoring Recruitment for VLA / Reinforcement Learning / VLN!
具身智能之心· 2025-08-14 12:00
Group 1
- The article announces 1v1 paper guidance in embodied intelligence, with three slots focused on VLA, reinforcement learning, and Sim2Real, primarily targeting A- and B-tier conferences [1]
- Major venues mentioned include CVPR, ICCV, ECCV, ICLR, CoRL, ICML, and ICRA, underscoring the guidance's relevance to prominent academic events [2]
- Interested readers are invited to add a designated WeChat contact or scan a QR code to inquire about the embodied-paper guidance [3]