自动驾驶之心
Musk: if the money isn't right, this CEO won't last another day?
自动驾驶之心· 2025-10-24 16:03
Author | Jessica. Source | 智能车参考 (Smart Car Reference).

Musk has pushed his tough talk to the limit this time: "whoever wants the job can have it."

The trigger: to retain its CEO, Tesla has proposed a new compensation plan worth one trillion dollars. But a steady stream of outside skepticism and concern has left the plan shrouded in doubt ahead of its formal shareholder vote.

Since joining Tesla, this CEO's road to getting paid has been one of repeated twists and reversals. Under the previous plan, proposed in 2018, Musk hit every target ahead of schedule, yet to this day he has not received a cent of it.

This time, Musk is done waiting: he says that if he does not receive the high compensation, he will leave Tesla, or at the very least stop serving as CEO.

Musk: no pay, no work

The high compensation Musk refers to is the new pay "OKR" Tesla recently designed for him, intended to keep him at the helm as CEO for at least 10 years. To collect the full package, however, the performance targets are hell-level difficulty: Tesla's market cap alone must grow nearly 8x, to $8.5 trillion ...
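For scale, a quick back-of-envelope calculation (the "nearly 8x" multiple is approximate, so the result is only indicative) shows what current valuation that target implies:

```python
# Back-of-envelope: what current market cap does "nearly 8x to $8.5T" imply?
target_market_cap = 8.5e12   # USD, the top tranche's market-cap target
multiple = 8                 # "nearly 8x", approximate
implied_current = target_market_cap / multiple
print(f"Implied current market cap: ${implied_current / 1e12:.2f} trillion")
# → Implied current market cap: $1.06 trillion
```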
SJTU's OmniNWM: an "omniscient" world model that pushes the limits of 3D driving simulation
自动驾驶之心· 2025-10-24 16:03
Core Insights
- The article discusses the OmniNWM research, which proposes a panoramic, multi-modal driving navigation world model that significantly surpasses existing state-of-the-art (SOTA) models in terms of generation quality, control precision, and long-term stability, setting a new benchmark for simulation training and closed-loop evaluation in autonomous driving [2][58].

Group 1: OmniNWM Features
- OmniNWM integrates state generation, action control, and reward evaluation into a unified framework, addressing the limitations of existing models that rely on single-modal RGB video and sparse action encoding [10][11].
- The model utilizes a Panoramic Diffusion Transformer (PDiT) to jointly generate pixel-aligned outputs across four modalities: RGB, semantic, depth, and 3D occupancy [12][11].
- OmniNWM introduces a normalized Plücker ray-map for action control, allowing for pixel-level guidance and improved generalization across out-of-distribution (OOD) trajectories [18][22].

Group 2: Challenges and Solutions
- The article identifies three core challenges in current autonomous driving world models: limitations in state representation, ambiguity in action control, and lack of integrated reward mechanisms [8][10].
- OmniNWM's approach to state generation overcomes the limitations of existing models by capturing the full geometric and semantic complexity of real-world driving scenarios [10][11].
- The model's reward system is based on the generated 3D occupancy, providing a dense and integrated reward function that enhances the evaluation of driving behavior [35][36].

Group 3: Performance Metrics
- OmniNWM supports the generation of long video sequences with stable outputs exceeding the ground-truth length, demonstrating its capability to generate over 321 frames [31][29].
- The model achieves significant improvements in video generation quality, outperforming existing models on metrics such as FID and FVD [51][52].
- The integration of a Vision-Language-Action (VLA) planner enhances the model's ability to understand multi-modal environments and output high-precision trajectories [43][50].
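As a rough illustration of the ray-map idea only (this is not OmniNWM's implementation; the camera convention, normalization, and function name below are assumptions), a per-pixel Plücker ray map can be built from camera intrinsics and pose: each pixel's ray has a unit direction d and a moment m = c × d, where c is the camera center, yielding a 6-channel, pixel-aligned action encoding.

```python
import numpy as np

def plucker_ray_map(K, R, t, H, W):
    """6-channel Plücker ray map for an H x W image (illustrative sketch).

    Assumes the world-to-camera convention x_cam = R @ x_world + t.
    Channels 0-2: unit ray direction d; channels 3-5: moment m = c x d.
    """
    c = -R.T @ t                                      # camera center in world frame
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3) homogeneous pixels
    dirs = pix @ np.linalg.inv(K).T @ R               # back-project, rotate to world
    d = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
    m = np.cross(c, d)                                # moment, orthogonal to d
    return np.concatenate([d, m], axis=-1)            # (H, W, 6)

# Hypothetical pinhole camera two meters behind the world origin
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 24.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, -2.0])
rays = plucker_ray_map(K, R, t, H=48, W=64)
```

The (d, m) pair is invariant to where along the ray you sample, which is what makes it a clean per-pixel encoding of camera pose and trajectory.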
2025 global automotive Tier 1 supplier rankings
自动驾驶之心· 2025-10-24 16:03
Core Insights
- The article discusses the competitive landscape of global Tier 1 automotive suppliers, highlighting the rise of Chinese manufacturers in the electric and intelligent driving sectors while traditional players face challenges [2][4][5].

Group 1: Global Tier 1 Suppliers Ranking
- The top 20 global Tier 1 automotive suppliers for 2025 are led by Bosch, ZF Friedrichshafen, and Denso, with strengths in automotive electronics, powertrains, and autonomous driving [2].
- Notable Chinese suppliers like Desay SV and Foryoung are making significant strides in intelligent driving and automotive electronics, indicating a shift in market dynamics [2][5].

Group 2: Trends in Electrification and Intelligence
- The electrification trend is accelerating, with battery manufacturers like CATL and BYD increasing their market share, particularly in the context of rapid growth in new energy vehicles [3].
- Intelligent driving and smart cockpit technologies are emerging as core growth areas, with Chinese firms gaining market share in these domains [3].

Group 3: Market Competition Dynamics
- Traditional Tier 1 suppliers such as Bosch and ZF are experiencing revenue and profit declines in 2024, despite their established technological advantages [4].
- Chinese Tier 1 suppliers are breaking through barriers in the new energy and intelligent driving sectors, challenging the dominance of international players [5].

Group 4: Regional Market Changes
- The Chinese market is witnessing rapid growth in new energy vehicles, providing substantial opportunities for local Tier 1 suppliers [10].
- In contrast, the European and American markets are experiencing a slowdown in electrification but continue to demand advancements in autonomous driving and smart cockpit technologies [10].

Group 5: Technological Innovation and Collaboration
- Suppliers with comprehensive capabilities in hardware, software, and system integration are expected to capture larger market shares in the future [6].
- Traditional Tier 1 suppliers are investing in Chinese startups and developing localized products to regain their competitive edge [6].
自动驾驶之心 (Autonomous Driving Heart) is recruiting partners!
自动驾驶之心· 2025-10-24 16:03
Group 1
- The article announces the recruitment of 10 outstanding partners for the autonomous driving sector, focusing on course development, paper guidance, and hardware research [2]
- The main areas of expertise sought include large models, multimodal models, diffusion models, end-to-end systems, embodied interaction, joint prediction, SLAM, 3D object detection, world models, closed-loop simulation, and model deployment and quantization [3]
- Candidates are preferred from QS200 universities with a master's degree or higher, especially those with significant contributions to top conferences [4]

Group 2
- The compensation package includes resource sharing for job seeking, doctoral studies, and overseas study recommendations, along with substantial cash incentives and opportunities for entrepreneurial project collaboration [5]
- Interested parties are encouraged to add WeChat for consultation, specifying "organization/company + autonomous driving cooperation inquiry" [6]
Shen Shaojie's team's 2025 results at a glance: 9 top-journal and top-conference papers, a full engineering loop from algorithms to systems
自动驾驶之心· 2025-10-24 00:04
Core Viewpoint
- The article emphasizes the advancements and contributions of the Aerial Robotics Group (ARCLab) at Hong Kong University of Science and Technology (HKUST) in the fields of autonomous navigation, drone technology, sensor fusion, and 3D vision, highlighting their dual focus on academic influence and engineering implementation [2][3][23].

Summary by Sections

Team and Leadership
- The ARCLab is led by Professor Shen Shaojie, who has been instrumental in the development of intelligent driving technologies and has received numerous accolades for his research contributions [2][3].

Achievements and Recognition
- The team has received multiple prestigious awards, including IEEE T-RO Best Paper Awards and IROS Best Student Paper Awards, showcasing their high academic impact and engineering capabilities [3][4].

Research Focus and Innovations
- ARCLab's research focuses on five main areas: more stable state estimation and multi-source fusion, lightweight mapping and map alignment, reliable navigation in complex or extreme environments, comprehensive scene understanding and topology reasoning, and precise trajectory prediction and decision-making [23][24].

Productization and Engineering Execution
- The lab emphasizes a product-oriented approach with strong engineering execution, addressing real-world challenges and prioritizing solutions that are reproducible, deployable, and scalable [3][4].

Talent Development
- ARCLab has successfully nurtured a number of young scholars and technical leaders who are active in both academia and industry, contributing to the lab's sustained high output and influence [4].

Key Research Papers and Contributions
- The article outlines several key research papers from 2025, focusing on advancements in state estimation, mapping, navigation, scene understanding, and trajectory prediction, all aimed at enhancing the robustness and efficiency of autonomous systems [4][23].

Keywords for 2025
- The keywords for the year 2025 are stability, lightweight, practicality, universality, and interpretability, reflecting the lab's ongoing commitment to addressing real-world challenges in autonomous systems [24].
Optimus is headed for mass production: Tesla Q3 earnings call (251023)
自动驾驶之心· 2025-10-24 00:04
Core Viewpoint
- Tesla's Optimus humanoid robot is projected to become one of the largest products in history, with plans to establish a production line capable of manufacturing 1 million units annually, ultimately aiming for a total output of 10 million units, and potentially reaching 50 million to 100 million units in the long term [3][5][16].

Group 1: Production and Development Timeline
- The release of Optimus Gen3 is expected in the first quarter of 2026 or earlier, with the first-generation production line currently being installed for mass production [6].
- A production-intent Optimus prototype is set to be showcased in early 2026, with mass production planned to start by the end of next year [15].
- The production goal is to establish a line capable of producing 1 million units annually, with a long-term vision of reaching outputs of 10 million to 100 million units [16].

Group 2: Technological Advancements
- Tesla's Full Self-Driving (FSD) AI technology can be directly transferred to the Optimus robot, although it will require extensive imitation learning and video data for improved generalization capabilities [7][9].
- The Optimus robot is currently patrolling Tesla's headquarters, demonstrating autonomous navigation and interaction capabilities, which marks a significant advancement in its development [10].
- The design of the robot's dexterous hands and forearms presents challenges, with a focus on achieving high precision through a tendon-driven mechanism [11][17].

Group 3: Supply Chain and Manufacturing Challenges
- Tesla aims to build a humanoid robot supply chain from scratch, as no such supply chain currently exists, unlike those for cars and computers [13].
- The company must achieve vertical integration and design components in-house to successfully manufacture humanoid robots, a unique position compared to other robotics startups [14].

Group 4: Future Predictions and Features
- Predictions for the upcoming shareholder meeting suggest that Gen3 may be showcased in a static display or may not appear at all, with a higher likelihood of seeing demonstrations of Gen2.5 and new dexterous hands [17].
- The robot is expected to feature a tendon-driven hand design with a total of 31 actuators, allowing for a high degree of freedom and precision [17].
- Optimus will incorporate Grok for enhanced autonomous planning and dialogue capabilities [18].
FSD v14 is very likely a VLA! An analysis of Ashok's ICCV'25 technical talk...
自动驾驶之心· 2025-10-24 00:04
Core Insights
- Tesla's FSD V14 series has shown rapid evolution, with four updates in two weeks, indicating a new phase of accelerated development in autonomous driving technology [4][5]
- The transition to an end-to-end architecture from version 12 has sparked industry interest in similar technologies, emphasizing the importance of a unified neural network model for driving control [7][9]

Technical Advancements
- The end-to-end system reduces intermediate processing steps, allowing for seamless gradient backpropagation from output to perception, enhancing overall model optimization [7]
- Ashok highlighted the complexity of encoding human value judgments in autonomous driving scenarios, showcasing the system's ability to learn from human driving data to make nuanced decisions [9]
- Traditional modular systems face challenges in defining interfaces between perception and decision-making, while end-to-end models minimize information loss and improve decision-making in rare scenarios [11][13]

Data Utilization
- Tesla's data engine collects vast amounts of driving data, generating the equivalent of 500 years of driving data daily, which is crucial for training the FSD model [18][19]
- The company employs complex mechanisms to gather data from rare scenarios, ensuring the model can generalize effectively [19]

Model Structure and Challenges
- The ideal end-to-end model maps high-dimensional input data (e.g., seven streams of 5-megapixel camera video) to low-dimensional output control signals, which presents significant training challenges [16]
- The end-to-end system's architecture is designed to ensure interpretability and safety, avoiding the pitfalls of being a "black box" [20][22]

Evaluation Framework
- A robust evaluation framework is essential for end-to-end systems, focusing on closed-loop performance and the ability to assess diverse driving behaviors [32][34]
- Tesla's closed-loop simulation system plays a critical role in validating the correctness of the end-to-end policy and generating adversarial samples for model testing [36][38]

Future Implications
- The integration of Tesla's simulation capabilities into robotics suggests potential advancements in embodied AI, enhancing the versatility of AI applications across different domains [40][42]
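To make the "high-dimensional input, low-dimensional output" framing concrete, here is a deliberately tiny, hypothetical sketch (numpy only; the pooling, layer sizes, and weights are arbitrary and reflect nothing about Tesla's actual architecture): a stack of camera frames is collapsed into a small feature vector and mapped to two bounded control signals.

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_e2e_policy(frames, W1, W2):
    """Map a stack of camera frames directly to (steer, accel) controls.

    frames: (7, H, W, 3) array, one RGB frame per camera.
    A real system would use a deep network; this is a one-hidden-layer toy.
    """
    # Crude "perception": global average pool each camera into a 3-vector,
    # then flatten the 7 cameras into a single 21-dimensional feature.
    feats = frames.reshape(7, -1, 3).mean(axis=1).reshape(-1)
    h = np.tanh(feats @ W1)           # shared latent representation
    steer, accel = np.tanh(h @ W2)    # two bounded control outputs in (-1, 1)
    return steer, accel

H, W = 36, 64
frames = rng.random((7, H, W, 3))           # stand-in for camera video
W1 = rng.normal(size=(21, 16)) * 0.1        # randomly initialized weights
W2 = rng.normal(size=(16, 2)) * 0.1
steer, accel = tiny_e2e_policy(frames, W1, W2)
```

The point of the single computation graph is that a loss on `(steer, accel)` can backpropagate through `W2` and `W1` all the way to the pixel features, with no hand-defined interface in between.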
After hosting a few online exchange sessions, I've found that many people are still quite lost
自动驾驶之心· 2025-10-24 00:04
Core Viewpoint
- The article emphasizes the establishment of a comprehensive community called "Autonomous Driving Heart Knowledge Planet," aimed at providing a platform for knowledge sharing and networking in the autonomous driving industry, addressing the challenges faced by newcomers to the field [1][3][14].

Group 1: Community Development
- The community has grown to over 4,000 members and aims to reach nearly 10,000 within two years, providing a space for technical sharing and communication among beginners and advanced learners [3][14].
- The community integrates various resources including videos, articles, learning paths, Q&A, and job exchange, making it a comprehensive hub for autonomous driving enthusiasts [3][5].

Group 2: Learning Resources
- The community has organized over 40 technical learning paths, covering topics such as end-to-end autonomous driving, multi-modal large models, and data annotation practices, significantly reducing the time needed for research [5][14].
- Members can access a variety of video tutorials and courses tailored for beginners, covering essential topics in autonomous driving technology [9][15].

Group 3: Industry Insights
- The community regularly invites industry experts to discuss trends, technological advancements, and production challenges in autonomous driving, fostering a serious content-driven environment [6][14].
- Members are encouraged to engage with industry leaders for insights on job opportunities and career development within the autonomous driving sector [10][18].

Group 4: Networking Opportunities
- The community facilitates connections between members and various autonomous driving companies, offering resume forwarding services to help members secure job placements [10][12].
- Members can freely ask questions regarding career choices and research directions, receiving guidance from experienced professionals in the field [87][89].
JD.com enters the new energy vehicle race, and the name is officially announced...
自动驾驶之心· 2025-10-23 08:14
Core Viewpoint
- GAC Group, in collaboration with JD.com and CATL, has officially named its new vehicle "Aion UT Super," which is positioned as a "national good car" [1].

Group 1
- The Aion UT Super is the first model to feature "GAC Huawei Cloud Car Machine" technology [2].
- It is equipped with a large battery that offers a range of 500 km, a first in its class, and supports battery swapping in just 99 seconds, utilizing CATL's chocolate battery swapping technology [2].

Group 2
- The article mentions the establishment of nearly a hundred technical communication groups by "Autonomous Driving Heart," covering various advanced topics in autonomous driving technology [6].
- The community has around 4,000 members and includes over 300 autonomous driving companies and research institutions, focusing on more than 30 learning paths in autonomous driving technology [6].
Reconstruct point clouds online in real time with just a handheld LiDAR! An ultra-cost-effective 3D scanner is here
自动驾驶之心· 2025-10-23 00:04
Core Viewpoint
- The article introduces the GeoScan S1, a highly cost-effective 3D laser scanner designed for industrial and research applications, emphasizing its lightweight design, ease of use, and advanced features for real-time 3D scene reconstruction.

Group 1: Product Features
- The GeoScan S1 offers centimeter-level precision in 3D scene reconstruction using a multi-modal sensor fusion algorithm, generating point clouds at a rate of 200,000 points per second and covering distances up to 70 meters [1][29].
- It supports scanning areas exceeding 200,000 square meters and can be equipped with a 3D Gaussian data collection module for high-fidelity scene restoration [1][30].
- The device is designed for easy operation with a one-button start feature, allowing users to export scan results without complex setups [5][27].

Group 2: Technical Specifications
- The GeoScan S1 integrates various sensors, including RTK, IMU, and dual wide-angle cameras, and features a compact design with dimensions of 14.2 cm x 9.5 cm x 45 cm and a weight of 1.3 kg (without battery) [22][12].
- It operates on a power input of 13.8 V - 24 V with a power consumption of 25 W, and has a battery capacity of 88.8 Wh, providing approximately 3 to 4 hours of operational time [22][26].
- The system runs on Ubuntu 20.04 and supports various data export formats, including PCD and LAS [22][42].

Group 3: Market Positioning
- The GeoScan S1 is positioned as the most cost-effective handheld 3D laser scanner on the market, with a starting price of 19,800 yuan for the basic version [9][57].
- The product is backed by extensive research and validation from teams at Tongji University and Northwestern Polytechnical University, having been tested in over a hundred projects [9][38].
- The scanner is suitable for a wide range of applications, including urban planning, infrastructure monitoring, and complex scene mapping in environments such as industrial parks and tunnels [46][52].
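The quoted specs are internally consistent, as a quick check shows (simple arithmetic on the figures above; real-world runtime will vary with duty cycle and temperature): an 88.8 Wh battery at 25 W gives roughly 3.6 hours, matching the stated 3 to 4 hours, and at 200,000 points per second that is on the order of 2.6 billion points per charge.

```python
# Sanity-check the GeoScan S1 spec figures quoted above.
battery_wh = 88.8        # stated battery capacity
power_w = 25.0           # stated power consumption
points_per_s = 200_000   # stated point cloud generation rate

runtime_h = battery_wh / power_w                       # hours per full charge
points_per_charge = runtime_h * 3600 * points_per_s    # points per full charge
print(f"{runtime_h:.2f} h runtime, {points_per_charge / 1e9:.2f} billion points")
# → 3.55 h runtime, 2.56 billion points
```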