π0 / π0.5 / A0, the models the tech community keeps debating, finally explained! A full analysis of functions, scenarios, and methodology~
自动驾驶之心· 2025-06-22 01:35
Core Insights
- The article explains the π0, π0.5, and A0 models, focusing on their architectures, advantages, and functions in robotic control and task execution [3][12][21].

π0 Model Structure
- π0 is built on a pre-trained Vision-Language Model (VLM) and Flow Matching, trained on more than 10,000 hours of data covering seven robot types and over 68 tasks [3]; a minimal Flow Matching sketch follows this summary.
- It combines a VLM backbone, an Action Expert, and cross-embodiment training to handle the action spaces of different robots [3].

π0 Advantages and Functions
- The model executes tasks directly from language prompts without additional fine-tuning, achieving 20%-30% higher task-execution accuracy than baseline models [4][6].
- It supports complex task decomposition and high-frequency precise manipulation, generating continuous actions at control frequencies of up to 50 Hz [4][6].

π0.5 Model Structure
- π0.5 uses a two-stage training framework and a hierarchical architecture to learn from diverse data sources and generalize to new environments [7][9].
- It integrates a Vision-Language-Action (VLA) model that encodes multi-modal inputs into a unified sequence for decision-making [9].

π0.5 Advantages and Functions
- π0.5 achieves a 25%-40% higher task success rate than π0 and trains roughly three times faster thanks to mixed discrete-continuous action training [12][13].
- It handles long-horizon tasks well and demonstrates zero-shot semantic understanding, recognizing and acting on previously unseen objects [13][16].

A0 Model Structure
- A0 features a layered architecture centered on affordance understanding and action execution, using a diffusion model to predict contact points and trajectories [21][25].
- It fuses multi-source data into a unified affordance representation, strengthening its ability to perform complex tasks [26].

A0 Advantages and Functions
- A0 generalizes across platforms, deploying on a variety of robots with high spatial-reasoning efficiency [26][27].
- It achieves an average task success rate of 62.5%, with specific tasks such as drawer opening reaching 75% [27].
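To make the Flow Matching action generation attributed to π0's Action Expert concrete, here is a minimal sketch of the inference-time idea: integrate a learned velocity field from Gaussian noise to an action chunk. The network shape, the names `VelocityField` and `sample_action_chunk`, the action dimension, and the step count are all illustrative assumptions, not Physical Intelligence's released implementation.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    # Stand-in for the Action Expert head: predicts d(action)/dt
    # conditioned on VLM context features and the flow time t.
    def __init__(self, act_dim=32, ctx_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim + ctx_dim + 1, 256), nn.GELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, action, ctx, t):
        return self.net(torch.cat([action, ctx, t], dim=-1))

@torch.no_grad()
def sample_action_chunk(model, ctx, act_dim=32, steps=10):
    # Euler integration of the flow ODE from noise (t=0) to an action
    # sample (t=1). High-rate control comes from emitting the resulting
    # action chunk, not from re-running the VLM at every control tick.
    a = torch.randn(ctx.shape[0], act_dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((ctx.shape[0], 1), i * dt)
        a = a + dt * model(a, ctx, t)
    return a

# Usage with random features standing in for VLM outputs:
model = VelocityField()
actions = sample_action_chunk(model, torch.randn(1, 512))
```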
Li Auto's latest DriveAction: a benchmark for exploring human-like driving decisions in VLA models~
自动驾驶之心· 2025-06-21 13:15
Today 自动驾驶之心 shares Li Auto's latest work, DriveAction: a benchmark for exploring human-like driving decisions in VLA models. Paper authors | Yuhan Hao et al. Editor | 自动驾驶之心

Research Background and Problem Statement

As autonomous driving technology advances, Vision-Language-Action (VLA) models, with their strong multi-modal processing capabilities, have brought new opportunities to autonomous driving systems. However, existing benchmark datasets fall clearly short in scene diversity, in the reliability of action-level annotations, and in evaluation protocols aligned with human preferences, which severely constrains the further development and practical application of VLA models.

Specifically, existing benchmark datasets mainly suffer from the following problems:

- Insufficient scene diversity: most benchmark datasets are built from open-source data with a single origin, and struggle to cover the varied, complex scenarios of real-world driving, such as road merges and exits ...

Core Innovations of the DriveAction Benchmark

To address these problems, the paper proposes the DriveAction benchmark, the first action-driven benchmark designed specifically for VLA models, with three core innovations:
MiniMax-M1: surpassing DeepSeek, with support for million-token context
自动驾驶之心· 2025-06-21 13:15
The following article comes from AIGC面面观, by 欠阿贝尔两块钱.

AIGC面面观: beginner notes on LLMs and AIGC | paper reading notes | interview experiences from top-tier companies | exploring AIGC deployment

Author | 欠阿贝尔两块钱 Source | AIGC面面观

Main Contributions

1. Efficient hybrid architecture: MiniMax-M1 combines an MoE architecture with Lightning Attention, supports a million-token context window (1M tokens), and at a generation length of 80K tokens needs only 25% of the FLOPs of a conventional-attention model.
2. CISPO, an algorithm that goes beyond DAPO: it improves RL efficiency by clipping the importance-sampling weights, achieving a 2x speedup over DAPO; unlike traditional methods such as PPO/GRPO, it avoids clipping away the updates of low-probability tokens, preserving their gradient contributions (see the sketch below).
3. Scalable context: supports extending the generation length from 40K to 80K tokens.

1. Hybrid Attention Architecture

Lightning Attention: adopts I/O-aware linear attention computation, using blockwise computation and memory optimization to bring long ...
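The clipped importance-sampling idea behind CISPO can be sketched in a few lines. The snippet below is a hedged illustration of the general mechanism (clip and detach the IS weight, keep the log-prob gradient), assuming a flat per-token tensor layout; the clipping thresholds here are placeholders, not the paper's actual hyperparameters.

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    # Token-level importance-sampling ratio between the current policy
    # and the policy that generated the rollout.
    ratio = torch.exp(logp_new - logp_old)
    # CISPO clips the IS *weight* and detaches it, instead of clipping
    # the policy update itself as PPO/GRPO do, so low-probability tokens
    # still contribute gradients through the log-prob term.
    weight = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
    # REINFORCE-style objective weighted by the clipped, frozen ratio.
    return -(weight * advantages * logp_new).mean()

# Usage with dummy tensors standing in for per-token log-probabilities:
logp_new = torch.randn(8, requires_grad=True)
logp_old = (logp_new + 0.1 * torch.randn(8)).detach()
advantages = torch.randn(8)
cispo_loss(logp_new, logp_old, advantages).backward()
```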
Production projects stuck on scene generalization: in urgent need of ten-million-scale auto-labeling?
自动驾驶之心· 2025-06-21 13:15
How should a ten-million-scale 4D annotation scheme be built?

Intelligent-driving algorithm development has entered deep water, and every player is investing heavily in production deployment. One of the most critical pieces is completing 4D data annotation efficiently, whether for 3D dynamic objects, occupancy (OCC), or static annotation.

Since end-to-end approaches and large language models (LLMs) burst onto the scene, large-scale unsupervised pre-training plus fine-tuning on high-quality task-specific datasets may well become the next direction of effort for production perception algorithms. Joint annotation of data has likewise become a practical necessity for everyone training models; the old paradigm of annotating each modality separately no longer fits the development needs of intelligent-driving algorithms. Today 自动驾驶之心 shares the 4D data annotation pipeline:

Compared with on-vehicle perception algorithms, an auto-labeling system is more like a system composed of different modules; only by fully exploiting offline compute and temporal information can better perception results be obtained ...

The most complex part is automatic annotation of dynamic obstacles, which involves four major modules:

- Offline 3D object detection
- Offline tracking
- Post-processing optimization
- Sensor-occlusion optimization

To push 3D detection performance as far as possible, the industry still mostly uses point-cloud 3D object detection or LiDAR-vision (LV) fusion methods. Once offline single-frame 3D detections are available, tracking must link the multi-frame results together, but tracking currently faces many practical problems; a minimal linking sketch follows below.
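As a toy illustration of the multi-frame linking step, here is a minimal greedy center-distance tracker over offline per-frame 3D detections. The function name, distance threshold, and track lifetime are illustrative assumptions; production auto-labeling stacks typically add Hungarian matching, motion models, and offline (bidirectional) smoothing.

```python
import numpy as np

def link_detections(frames, dist_thresh=2.0, max_age=3):
    # frames: list of (N_i, 3) arrays of 3D box centers, one per frame.
    # Returns one {track_id: detection_index} dict per frame.
    next_id = 0
    active = []            # (track_id, last_center, frames_since_seen)
    results = []
    for dets in frames:
        used = set()
        assignment = {}
        survivors = []
        # Greedily match each live track to its nearest unused detection.
        for tid, center, age in active:
            if len(dets) > 0:
                d = np.linalg.norm(dets - center, axis=1)
                d[list(used)] = np.inf
                j = int(np.argmin(d))
                if d[j] < dist_thresh:
                    assignment[tid] = j
                    used.add(j)
                    survivors.append((tid, dets[j], 0))
                    continue
            if age + 1 <= max_age:   # keep unmatched tracks alive briefly
                survivors.append((tid, center, age + 1))
        # Unmatched detections start new tracks.
        for j in range(len(dets)):
            if j not in used:
                assignment[next_id] = j
                survivors.append((next_id, dets[j], 0))
                next_id += 1
        active = survivors
        results.append(assignment)
    return results

# Usage on two toy frames of box centers:
frames = [np.array([[0.0, 0.0, 0.0]]), np.array([[0.5, 0.0, 0.0]])]
print(link_detections(frames))  # the detection keeps track id 0
```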
The head of world models at SenseTime Jueying has left the company...
自动驾驶之心· 2025-06-21 13:15
Core Viewpoint
- The article discusses the challenges and opportunities faced by SenseTime's autonomous driving division, particularly focusing on the competitive landscape and the importance of technological advancements in the industry.

Group 1: Company Developments
- The head of world-model development for SenseTime's autonomous driving division has left the company, raising concerns about the future of its cloud technology system and the R-UniAD generative driving solution [2][3].
- SenseTime's autonomous driving division has successfully delivered a mid-tier solution based on the J6M to GAC Trumpchi, but the mid-tier market is expected to undergo significant upgrades this year [4].

Group 2: Market Dynamics
- The mid-tier market will see a shift from highway NOA (Navigation on Autopilot) to full urban NOA, a major change in the competitive landscape [4].
- Leading companies are introducing lightweight urban NOA solutions based on high-tier algorithms, targeting chips with around 100 TOPS of computing power, which are already being demonstrated to OEM clients [4].

Group 3: High-Tier Strategy
- The key focus for SenseTime this year is the one-stage end-to-end solution, which has shown impressive performance and is a requirement in OEMs' high-tier project tenders [5].
- A collaboration with Dongfeng Motor targets mass production and delivery of the one-stage end-to-end UniAD solution by Q4 2025, a critical opportunity for SenseTime to establish a foothold in the high-tier market [5][6].

Group 4: Competitive Landscape
- SenseTime's ability to deliver a benchmark project in the high-tier segment is crucial for gaining credibility with OEMs and securing additional projects [6][7].
- The current window of opportunity in the high-tier market is limited, as many models capable of carrying high-tier software and hardware costs are being released this year [6][8].
A comprehensive survey of foundation models for autonomous driving (LLM / VLM / MLLM / diffusion models / world models)
自动驾驶之心· 2025-06-21 11:18
Core Insights
- The article discusses the critical role of foundation models in generating and analyzing complex driving scenarios for autonomous vehicles, emphasizing their ability to synthesize diverse and realistic high-risk safety scenarios [2][4].

Group 1: Foundation Models in Autonomous Driving
- Foundation models enable the processing of heterogeneous inputs such as natural language, sensor data, and high-definition maps, facilitating the generation and analysis of complex driving scenarios [2].
- A unified classification system is proposed, covering various model types including Large Language Models (LLMs), Vision-Language Models (VLMs), Multimodal Large Language Models (MLLMs), Diffusion Models (DMs), and World Models (WMs) [2][4].

Group 2: Methodologies and Tools
- The article reviews methodologies, open-source datasets, simulation platforms, and benchmark testing challenges relevant to scenario generation and analysis [2].
- Specific evaluation metrics for assessing scenario generation and analysis are discussed, highlighting the need for dedicated assessment standards in this field [2].

Group 3: Current Challenges and Future Directions
- The article identifies open challenges and research questions in the field of scenario generation and analysis, suggesting areas for future research and development [2].
A diverse, large-scale dataset! SceneSplat++: the first comprehensive benchmark based on 3DGS~
自动驾驶之心· 2025-06-20 14:06
Core Insights
- The article introduces SceneSplat-Bench, a comprehensive benchmark for evaluating visual-language scene understanding methods based on 3D Gaussian Splatting (3DGS) [11][30].
- It presents SceneSplat-49K, a large-scale dataset containing approximately 49,000 raw scenes and 46,000 filtered 3DGS scenes, the most extensive open-source dataset for complex, high-quality scene-level 3DGS reconstruction [9][30].
- The evaluation indicates that generalizable methods consistently outperform per-scene optimization methods, establishing a new paradigm for scalable scene understanding through pre-trained models [30].

Evaluation Protocols
- The benchmark evaluates methods on two key metrics in 3D space: foreground mean Intersection over Union (f-mIoU) and foreground mean accuracy (f-mAcc), addressing object-size imbalance and reducing viewpoint dependency compared to 2D evaluations [22][30]; a minimal f-mIoU sketch appears after this summary.
- The evaluation dataset includes ScanNet, ScanNet++, and Matterport3D for indoor scenes, and HoliCity for outdoor scenes, emphasizing the methods' capabilities across various object scales and complex environments [22][30].

Dataset Contributions
- SceneSplat-49K is compiled from multiple sources, including SceneSplat-7K, DL3DV-10K, HoliCity, and Aria Synthetic Environments, ensuring a diverse range of indoor and outdoor environments [9][10].
- Dataset preparation involved approximately 891 GPU days and extensive human effort, highlighting the significant resources invested in creating a high-quality dataset [7][9].

Methodological Insights
- The article categorizes methods into three types: per-scene optimization methods, per-scene optimization-free methods, and generalizable methods, with SceneSplat representing the latter [23][30].
- Generalizable methods eliminate the need for extensive single-scene computation during inference, allowing efficient processing of 3D scenes in a single forward pass [24][30].

Performance Results
- Results on SceneSplat-Bench demonstrate that SceneSplat excels in both performance and efficiency, often surpassing the pseudo-label methods used for its pre-training [24][30].
- Performance varies significantly with dataset complexity, indicating the importance of challenging benchmarks in revealing the limitations of competing methods [28][30].
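To make the f-mIoU metric concrete, here is a minimal sketch of foreground mean IoU over per-point (or per-Gaussian) semantic labels. The function name and the convention of label 0 as background are assumptions for illustration, not the benchmark's released evaluation code.

```python
import numpy as np

def foreground_miou(pred, gt, background_label=0):
    # pred, gt: (N,) integer semantic labels over 3D points/Gaussians.
    # Average IoU over foreground classes only, so large background
    # surfaces do not dominate the score (addressing size imbalance).
    classes = [c for c in np.unique(gt) if c != background_label]
    ious = []
    for c in classes:
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

pred = np.array([1, 1, 2, 0, 2])
gt = np.array([1, 2, 2, 0, 2])
print(foreground_miou(pred, gt))  # IoU(1)=0.50, IoU(2)=0.67 -> ~0.583
```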
Why define 2000 TOPS + VLA + VLM as L3-level compute?
自动驾驶之心· 2025-06-20 14:06
Core Viewpoint
- The article discusses the advancements in autonomous driving technology, particularly focusing on Xiaopeng Motors' recent paper presented at CVPR 2025, which validates the scaling laws in the context of autonomous driving and introduces new standards for computing power in Level 3 (L3) autonomous vehicles [4][6][22].

Group 1: Scaling Laws and Model Performance
- Xiaopeng Motors' paper systematically verifies the effectiveness of scaling laws in autonomous driving, indicating that larger model parameters lead to improved performance [4][6].
- The research establishes a clear power-law relationship between model performance, parameter scale, data scale, and computational power, as originally proposed by OpenAI [4][6]; a worked power-law fit appears after this summary.

Group 2: Computing Power Standards
- The paper introduces a new computing power standard of 2000 TOPS for L3 autonomous driving, highlighting the exponential increase in computational requirements as the driving level advances [8][20].
- For L2 systems, the required computing power ranges from 80 to 300 TOPS, while L3 systems necessitate thousands of TOPS due to the complexity of urban driving scenarios [8][20].

Group 3: VLA and VLM Model Architecture
- Xiaopeng's VLA (Vision-Language-Action) model architecture integrates visual understanding, reasoning, and action-generation capabilities, requiring substantial computational resources [10][12].
- The architecture's visual processing module alone demands hundreds of TOPS for real-time fusion of data from multiple sensors [10][12].

Group 4: Comparison of Onboard and Data Center Computing Power
- The article differentiates between onboard computing power, which focuses on real-time data processing for driving decisions, and data-center computing power, which is used for offline training and model optimization [12][15].
- Onboard systems must balance real-time performance against power consumption, while data centers can leverage far higher computational capability for complex model training [12][15].

Group 5: Market Dynamics and Competitive Landscape
- The market for autonomous-driving AI chips is dominated by a few key players, with NVIDIA holding a 36% market share, followed by Tesla and Huawei [20].
- The competitive landscape has shifted significantly since 2020, affecting the development of AI chips and their application in autonomous driving [17][20].
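As a worked illustration of the power-law scaling relationship cited above, the sketch below fits L = a * C^(-alpha) in log-log space. The (compute, loss) pairs are made-up numbers for illustration only, not results from the paper.

```python
import numpy as np

# Hypothetical (training compute, validation loss) measurements; real
# values would come from training runs at several scales.
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([3.2, 2.7, 2.3, 1.95])

# A power law L = a * C^(-alpha) is linear in log-log space:
# log L = log a - alpha * log C, so least squares recovers alpha.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)
print(f"fitted exponent alpha = {alpha:.3f}")  # larger alpha = faster gains
```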
[Large-Model Practice] Deep-learning lessons for the era when GPUs cost more than people
自动驾驶之心· 2025-06-20 14:06
Core Viewpoint
- The article emphasizes the importance of developing new methodologies for large-model experiments, focusing on key indicators, identifying true bottlenecks, balancing large and small experiments, and enhancing team collaboration [1].

Group 1: Key Indicators
- Identifying key indicators is crucial, as they should clearly differentiate state-of-the-art (SoTA) models from the rest and guide the direction of model iteration [4].
- Good indicators must objectively reflect performance levels and accurately indicate the direction for model improvements, avoiding the pitfall of focusing on misleading metrics [4].

Group 2: Experimentation Methodologies
- The cost of experiments has increased significantly, making it essential to conduct meaningful experiments rather than low-value ones [5].
- It is advised to conduct large experiments to identify significant issues while using small experiments to filter out incorrect ideas [6].

Group 3: Team Collaboration
- Given the complexity of large-model experiments, it is important for team members to understand their comparative advantages and roles within the team [8].
- Collaboration improves when teams find ways to observe and document experiments together, increasing communication frequency [8].
Building a 10,000-strong "Whampoa Academy" for autonomous driving: a place that grinds relentlessly on technology~
自动驾驶之心· 2025-06-20 14:06
Core Viewpoint
- The article describes the establishment of a comprehensive community for autonomous driving and embodied intelligence, aiming to gather industry professionals and facilitate rapid responses to challenges within the sector, with the goal of reaching 10,000 members within three years and connecting academia, products, and recruitment in the field [2][4].

Group 1: Community Development
- The community provides a platform for industry professionals to share the latest technological developments, engage in discussions, and access job opportunities [2][3].
- The initiative has already attracted notable figures from companies like Huawei and various leading researchers in the autonomous driving field [2].
- The community supports newcomers by offering structured learning paths and resources to quickly build their technical knowledge [2].

Group 2: Knowledge Sharing and Resources
- The "Autonomous Driving Heart Knowledge Planet" serves as a technical exchange platform, primarily for students and professionals looking to transition into the autonomous driving sector [4][11].
- The community has established recruitment connections with numerous companies, including well-known names like Xiaomi, NIO, and NVIDIA [4][11].
- Members have access to over 5,000 pieces of content, live sessions with industry experts, and discounts on paid courses [14][18].

Group 3: Technological Focus Areas
- Key technology areas to focus on through 2025 include visual large language models (VLM), end-to-end trajectory prediction, and 3D generative simulation techniques [6][10].
- Learning paths cover subfields such as perception, mapping, and AI model deployment, ensuring comprehensive coverage of the autonomous driving technology stack [11][16].
- Regular live sessions focus on cutting-edge topics like VLA, large models, and embodied intelligence, providing insights into practical applications and research advances [19][18].

Group 4: Engagement and Interaction
- The community encourages active participation, with weekly discussions and Q&A sessions to foster engagement among members [12][14].
- It aims to create a supportive environment for both beginners and advanced professionals, facilitating networking and collaboration opportunities [12][11].
- The platform is designed as a dynamic space where members can freely ask questions and share knowledge, enhancing the overall learning experience [12][11].