Vision-Language Models

Autonomous driving: piles of job openings on one side, yet no suitable hires on the other. How surreal......
自动驾驶之心· 2025-07-26 02:39
Core Viewpoint
- The autonomous driving industry is experiencing a paradox where job vacancies exist alongside a scarcity of suitable talent, leading to a cautious hiring environment as companies prioritize financial sustainability and effective business models over rapid expansion [2][3].

Group 1: Industry Challenges
- Many companies possess a seemingly complete technology stack (perception, control, prediction, mapping, data closed loop), yet they still face significant challenges in achieving large-scale, low-cost, and high-reliability commercialization [3].
- The gap between "laboratory results" and "real-world performance" remains substantial, indicating that practical application of the technology is still a work in progress [3].

Group 2: Talent Acquisition
- Companies are not necessarily unwilling to hire; rather, they have an unprecedented demand for "top talent" and "highly compatible talent" in the autonomous driving sector [4].
- The industry is shifting towards a more selective hiring process, focusing on candidates with strong technical skills and relevant experience in cutting-edge research and production [3][4].

Group 3: Community and Resources
- The "Autonomous Driving Heart Knowledge Planet" is the largest community for autonomous driving technology in China, established to provide industry insights and facilitate talent development [9].
- The community has nearly 4,000 members and includes over 100 experts in the autonomous driving field, offering various learning pathways and resources [7][9].

Group 4: Learning and Development
- The community emphasizes the importance of continuous learning and networking, providing a platform for newcomers to quickly gain knowledge and for experienced individuals to enhance their skills and connections [10].
- The platform includes comprehensive learning routes covering nearly all subfields of autonomous driving technology, such as perception, mapping, and AI model deployment [9][12].
ICCV'25 | HUST proposes HERMES: the first unified driving world model!
自动驾驶之心· 2025-07-25 10:47
Core Viewpoint
- The article introduces HERMES, a unified driving world model that integrates 3D scene understanding and future scene generation, significantly reducing generation errors by 32.4% compared to existing methods [4][17].

Group 1: Model Overview
- HERMES addresses the fragmentation in existing driving world models by combining scene generation and understanding capabilities [3].
- The model utilizes a BEV (Bird's Eye View) representation to integrate multi-view spatial information and introduces a "world query" mechanism to enhance scene generation with world knowledge [3][4].

Group 2: Challenges and Solutions
- The model overcomes the challenge of multi-view spatiality by employing a BEV-based world tokenizer, which compresses multi-view images into BEV features, thus preserving key spatial information while adhering to token length limitations [5].
- To address the integration of understanding and generation, HERMES introduces world queries that enhance the generated scenes with world knowledge, bridging the gap between understanding and generation [8].

Group 3: Performance Metrics
- HERMES demonstrates superior performance on the nuScenes and OmniDrive-nuScenes datasets, achieving an 8.0% improvement in the CIDEr metric for understanding tasks and significantly lower Chamfer distances in generation tasks [4][17].
- The model's world query mechanism contributes to a 10% reduction in Chamfer distance for 3-second point cloud predictions, showcasing its effectiveness in enhancing generation performance [20].

Group 4: Experimental Validation
- The experiments utilized datasets such as nuScenes, NuInteract, and OmniDrive-nuScenes, employing metrics like METEOR, CIDEr, ROUGE for understanding tasks, and Chamfer distance for generation tasks [19].
- Ablation studies confirm the importance of the interaction between understanding and generation, with the unified framework outperforming separate training methods [18].

Group 5: Qualitative Results
- HERMES is capable of accurately generating future point cloud evolutions and understanding complex scenes, although challenges remain in scenarios involving complex turns, occlusions, and nighttime conditions [24].
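As a concrete reference for the generation metric cited above, the following is a minimal sketch of the symmetric Chamfer distance between a predicted and a ground-truth point cloud; the NumPy implementation and the toy inputs are illustrative and not taken from the HERMES codebase.

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3).

    For every predicted point, take the squared distance to its nearest
    ground-truth point, and vice versa; the metric sums both directions.
    """
    # Pairwise squared distances, shape (N, M)
    diff = pred[:, None, :] - gt[None, :, :]
    d2 = np.einsum("nmc,nmc->nm", diff, diff)
    # Nearest-neighbour terms in both directions
    pred_to_gt = d2.min(axis=1).mean()
    gt_to_pred = d2.min(axis=0).mean()
    return float(pred_to_gt + gt_to_pred)

# Toy usage: a lower value means the generated future point cloud
# tracks the ground truth more closely.
pred = np.random.rand(1024, 3)
gt = np.random.rand(2048, 3)
print(chamfer_distance(pred, gt))
```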
How far is it from "thinking well" to "doing well"? Demystifying the road to embodied brain-cerebellum collaboration
具身智能之心· 2025-07-23 08:45
Core Viewpoint
- The article discusses the integration of "brain," "cerebellum," and "body" in embodied intelligent systems, emphasizing the need for improved collaboration and data acquisition for advancing artificial general intelligence (AGI) [2][3][4].

Group 1: Components of Embodied Intelligence
- The "brain" is responsible for perception, reasoning, and planning, utilizing large language models and visual language models [2].
- The "cerebellum" focuses on movement, employing motion control algorithms and feedback systems to enhance the naturalness and precision of robotic actions [2].
- The "body" serves as the physical entity that executes the plans generated by the "brain" and the movements coordinated by the "cerebellum," embodying the principle of "knowing and doing" [2].

Group 2: Challenges and Future Directions
- There is a need for the "brain" to enhance its reasoning capabilities, enabling it to infer task paths without explicit instructions or maps [3].
- The "cerebellum" should become more intuitive, allowing robots to react flexibly in complex environments and handle delicate objects with care [3].
- The collaboration between the "brain" and "cerebellum" requires improvement, as current communication is slow and responses are delayed, aiming for a seamless interaction system [3].

Group 3: Data Acquisition
- The article highlights the challenges in data collection, noting that it is often difficult, expensive, and noisy, which hinders the training of intelligent systems [3].
- There is a call for the development of a training repository that is realistic, diverse, and transferable to enhance data quality and accessibility [3].

Group 4: Expert Discussion
- A roundtable discussion is planned with experts from Beijing Academy of Artificial Intelligence and Zhiyuan Robotics to explore recent technological advancements and future pathways for embodied intelligence [4].
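A minimal sketch of the brain/cerebellum/body split described above, assuming a hypothetical interface in which a slow high-level planner emits subgoals that a fast low-level controller executes on a (mocked) robot body; all class and method names are illustrative and not taken from any of the systems discussed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subgoal:
    description: str          # e.g. "move gripper above the cup"
    target_pose: List[float]  # illustrative 6-DoF pose

class Brain:
    """High-level planner: perception + reasoning, runs slowly (e.g. ~1 Hz)."""
    def plan(self, instruction: str, observation: dict) -> List[Subgoal]:
        # A real system would query an LLM/VLM here; this stub returns a fixed plan.
        return [Subgoal("approach object", [0.4, 0.0, 0.3, 0, 0, 0]),
                Subgoal("grasp object",    [0.4, 0.0, 0.1, 0, 0, 0])]

class Cerebellum:
    """Low-level motion controller: tracks each subgoal at high rate (e.g. ~100 Hz)."""
    def execute(self, subgoal: Subgoal, robot) -> bool:
        for _ in range(100):                 # control ticks
            robot.step_towards(subgoal.target_pose)
        return True                          # report completion back to the brain

class Robot:
    """The 'body': the physical (here, mocked) actuator."""
    def step_towards(self, pose): pass

brain, cerebellum, robot = Brain(), Cerebellum(), Robot()
for sg in brain.plan("pick up the cup", observation={}):
    cerebellum.execute(sg, robot)
```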
Xiaomi proposes DriveMRP: synthesized hard-case data + visual prompts push accident recognition accuracy to 88%!
自动驾驶之心· 2025-07-22 12:46
Core Viewpoint
- The article discusses advancements in autonomous driving technology, specifically focusing on the DriveMRP framework, which synthesizes high-risk motion data to enhance the motion risk prediction capabilities of vision-language models (VLMs) [1][4].

Background and Core Objectives
- Autonomous driving technology has rapidly developed, but accurately predicting the safety of ego vehicle movements in rare high-risk scenarios remains a significant challenge. Existing trajectory evaluation methods often provide a single reward score, lacking risk type explanation and decision-making support [1].

Limitations of Existing Methods
- Rule-based methods rely heavily on external world models and are sensitive to perception errors, making them difficult to generalize to complex real-world scenarios, such as extreme weather conditions [2].

Core Innovative Solutions
- **DriveMRP-10K**: A synthetic high-risk motion dataset containing 10,000 high-risk scenarios, generated through a "human-in-the-loop" mechanism, enhancing the VLM's motion risk prediction capabilities [4].
- **DriveMRP-Agent**: A VLM framework that improves risk reasoning by using inputs like BEV layout and scene images [5].
- **DriveMRP-Metric**: Evaluation metrics that assess model performance through high-risk trajectory synthesis and automatic labeling of motion attributes [5].

Performance Improvement
- On the DriveMRP-10K dataset, the DriveMRP-Agent achieved a scene understanding metric (ROUGE-1-F1) of 69.08 and a motion risk prediction accuracy of 88.03%, significantly surpassing other VLMs. The accident identification accuracy improved from 27.13% to 88.03% [7][8].

Dataset Effectiveness
- The DriveMRP-10K dataset significantly enhances the performance of various general VLMs, demonstrating its "plug-and-play" enhancement capability [10].

Key Component Ablation Experiments
- The inclusion of global context in the model led to significant improvements in scene understanding and risk prediction metrics, highlighting the importance of global information for reasoning [12].
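To make the idea of synthesizing and auto-labeling high-risk motion data more concrete, here is a hedged toy sketch: a nominal ego trajectory is perturbed into a lane-departure hard case and labeled by a simple rule. The perturbation, thresholds, and label names are assumptions for illustration, not the actual DriveMRP-10K pipeline.

```python
import numpy as np

def synthesize_high_risk_trajectory(ego_traj: np.ndarray,
                                    lateral_shift: float = 2.5,
                                    onset: int = 10) -> np.ndarray:
    """Perturb a nominal ego trajectory of shape (T, 2) in (x, y) into a risky variant.

    From time step `onset` onward the trajectory drifts laterally, e.g. toward an
    adjacent lane, producing a hard-case motion that rarely appears in real logs.
    """
    risky = ego_traj.copy()
    t = np.arange(len(risky))
    ramp = np.clip((t - onset) / 10.0, 0.0, 1.0)   # gradual drift after onset
    risky[:, 1] += lateral_shift * ramp
    return risky

def auto_label(risky: np.ndarray, lane_half_width: float = 1.75) -> str:
    """Simple rule-based motion-attribute label for the synthetic sample."""
    return "lane_departure_risk" if np.abs(risky[:, 1]).max() > lane_half_width else "nominal"

ego = np.stack([np.linspace(0, 50, 25), np.zeros(25)], axis=1)  # straight driving
risky = synthesize_high_risk_trajectory(ego)
print(auto_label(risky))   # -> "lane_departure_risk"
```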
AIs can't count six fingers, and it's not that simple
Hu Xiu· 2025-07-11 02:54
Core Viewpoint
- The article discusses the limitations of AI models in accurately interpreting images, highlighting that these models rely on memory and biases rather than true visual observation [19][20][48].

Group 1: AI Model Limitations
- All tested AI models, including Grok4, OpenAI o3, and Gemini, consistently miscounted the number of fingers in an image, indicating a systemic issue in their underlying mechanisms [11][40].
- A recent paper titled "Vision Language Models are Biased" explains that large models do not genuinely "see" images but instead rely on prior knowledge and memory [14][19].
- The AI models demonstrated a strong tendency to adhere to preconceived notions, such as the belief that humans have five fingers, leading to incorrect outputs when faced with contradictory evidence [61][64].

Group 2: Experiment Findings
- Researchers conducted experiments where AI models were shown altered images, such as an Adidas shoe with an extra stripe, yet all models incorrectly identified the number of stripes [39][40].
- In another experiment, AI models struggled to accurately count legs on animals, achieving correct answers only 2 out of 100 times [45].
- The models' reliance on past experiences and biases resulted in significant inaccuracies, even when prompted to focus solely on the images [67].

Group 3: Implications for Real-World Applications
- The article raises concerns about the potential consequences of AI misjudgments in critical applications, such as quality control in manufacturing, where an AI might overlook defects due to its biases [72][76].
- The reliance on AI for visual assessments in safety-critical scenarios, like identifying tumors in medical imaging or assessing traffic situations, poses significant risks if the AI's biases lead to incorrect conclusions [77][78].
- The article emphasizes the need for human oversight in AI decision-making processes to mitigate the risks associated with AI's inherent biases and limitations [80][82].
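A minimal sketch of the kind of counting probe described in these experiments: each sample pairs an edited image with a ground-truth count, and a wrapper around whichever VLM is under test returns its integer answer. The stub model that always answers "five" is hypothetical and only illustrates the reported failure mode, not any specific model's API.

```python
from typing import Callable, List, Tuple

def count_accuracy(ask_model: Callable[[str, str], int],
                   samples: List[Tuple[str, str, int]]) -> float:
    """Fraction of counting questions a model answers correctly.

    `samples` holds (image_path, question, true_count) triples, e.g. a photo of a
    hand edited to show six fingers with the question "How many fingers?".
    `ask_model` wraps the VLM under test and returns its integer answer.
    """
    correct = sum(1 for img, q, truth in samples if ask_model(img, q) == truth)
    return correct / len(samples)

# A biased stub that answers from prior knowledge ("hands have five fingers"):
always_five = lambda img, q: 5
samples = [("hand_six_fingers.png", "How many fingers are on this hand?", 6)] * 10
print(count_accuracy(always_five, samples))   # -> 0.0, mirroring the failure mode
```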
AIs can't count six fingers, and it's not that simple.
数字生命卡兹克· 2025-07-10 20:40
Core Viewpoint
- The article discusses the inherent biases in AI visual models, emphasizing that these models do not truly "see" images but rely on memory and preconceived notions, leading to significant errors in judgment [8][24][38].

Group 1: AI Model Limitations
- All tested AI models consistently miscounted the number of fingers in an image, with the majority asserting there were five fingers, despite the image showing six [5][12][17].
- A study titled "Vision Language Models are Biased" reveals that AI models often rely on past experiences and associations rather than actual visual analysis [6][8][18].
- The models' reliance on prior knowledge leads to a failure to recognize discrepancies in images, as they prioritize established beliefs over new visual information [24][28][36].

Group 2: Implications of AI Bias
- The article highlights the potential dangers of AI biases in critical applications, such as quality control in manufacturing, where AI might overlook defects due to their rarity in the training data [30][34].
- The consequences of these biases can be severe, potentially leading to catastrophic failures in real-world scenarios, such as automotive safety [33][35].
- The article calls for a cautious approach to relying on AI for visual judgments, stressing the importance of human oversight and verification [34][39].
Learning by playing? Game-code-driven data synthesis boosts general reasoning in multimodal large models
机器之心· 2025-07-04 08:59
Core Insights
- The article presents a novel approach called Code2Logic, which utilizes game code to synthesize multimodal reasoning data, enhancing the reasoning capabilities of visual language models (VLMs) [47][48].
- The research indicates that training AI using game scenarios can significantly improve its performance in geometric and graphical reasoning tasks [1][24].

Data and Model
- The scarcity of high-quality multimodal reasoning data limits the advancement of VLMs' complex reasoning abilities, prompting the need for a cost-effective method to generate such data [4].
- The research team from Fudan University and ByteDance proposes leveraging game code to automatically synthesize visual reasoning data, capitalizing on the structured nature of games [12][13].

Methodology
- The Code2Logic method involves three core steps: generating game code using large language models (LLMs), designing question-answer templates from the game code, and constructing an automated data engine to generate Q&A instances [13][14][15].
- The GameQA dataset created through this method encompasses 30 games, 158 reasoning tasks, and 140,000 Q&A pairs, showcasing its scalability and diversity [18].

Training and Performance
- Training on GameQA data leads to significant performance improvements in both in-domain and out-of-domain tasks, demonstrating the generalization capabilities of models trained with this dataset [24][25].
- The study reveals that models trained with GameQA outperform those trained on traditional geometric reasoning datasets, indicating the cognitive diversity and reasoning complexity inherent in game data [28][29].

Scaling Effects
- The research identifies two scaling effects: increased game variety enhances out-of-domain generalization, and sample diversity correlates positively with generalization performance [37][38].
- These findings suggest that the diversity and scalability of GameQA contribute to stronger generalization in reasoning tasks [39].

Limitations and Challenges
- The analysis highlights key limitations in VLMs' reasoning capabilities, particularly in 3D spatial perception, pattern recognition, and strategic planning [42][45].
- The study emphasizes the need for further improvements in models' abilities to handle complex reasoning tasks effectively [46].
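A toy sketch of the template-driven data-engine idea behind Code2Logic: game logic samples a state, a question template is filled in, and the answer is derived programmatically. The Connect Four example, template wording, and data structures are assumptions for illustration, not the released GameQA code.

```python
import random
from dataclasses import dataclass

@dataclass
class QAPair:
    image_spec: dict   # what a renderer would draw (stands in for the actual image)
    question: str
    answer: str

QUESTION_TEMPLATE = "In this {game} position, which column should the next piece be dropped into to win?"

def generate_connect_four_sample(rng: random.Random) -> QAPair:
    """Toy data engine for one game: sample a state from game logic, fill a
    question template, and derive the answer programmatically."""
    winning_col = rng.randint(0, 6)
    board = {"game": "connect-four", "almost_complete_column": winning_col}
    return QAPair(
        image_spec=board,
        question=QUESTION_TEMPLATE.format(game="Connect Four"),
        answer=f"Column {winning_col}",
    )

rng = random.Random(0)
dataset = [generate_connect_four_sample(rng) for _ in range(3)]
for qa in dataset:
    print(qa.question, "->", qa.answer)
```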
What exactly is this year's red-hot goal-oriented navigation? What are the technical routes from goal search to goal reaching?
具身智能之心· 2025-06-26 14:19
Core Viewpoint
- Goal-Oriented Navigation empowers robots to autonomously complete navigation tasks based on goal descriptions, marking a significant shift from traditional visual language navigation systems [2][3].

Group 1: Technology Overview
- Embodied navigation is a core area of embodied intelligence, relying on three technical pillars: language understanding, environmental perception, and path planning [2].
- Goal-Oriented Navigation requires robots to explore and plan paths in unfamiliar 3D environments using only goal descriptions such as coordinates, images, or natural language [2].
- The technology has been industrialized in various verticals, including delivery, healthcare, and hospitality, enhancing service efficiency [3].

Group 2: Technological Evolution
- The evolution of Goal-Oriented Navigation can be categorized into three generations:
  - First Generation: End-to-end methods focusing on reinforcement learning and imitation learning, achieving breakthroughs in Point Navigation and closed-set image navigation tasks [5].
  - Second Generation: Modular methods that explicitly construct semantic maps, breaking tasks into exploration and goal localization [5].
  - Third Generation: Integration of large language models (LLMs) and visual language models (VLMs) to enhance knowledge reasoning and open vocabulary target matching [7].

Group 3: Challenges and Learning Path
- The complexity of embodied navigation, particularly Goal-Oriented Navigation, necessitates knowledge from multiple fields, making it challenging for newcomers to enter the domain [9].
- A new course has been developed to address these challenges, focusing on quick entry, building a research framework, and combining theory with practice [10][11][12].

Group 4: Course Structure
- The course will cover the theoretical foundations and technical lineage of Goal-Oriented Navigation, including task definitions and evaluation benchmarks [15].
- It will also delve into the Habitat simulation ecosystem, end-to-end navigation methodologies, modular navigation architectures, and LLM/VLM-driven navigation systems [16][18][20][22].
- A significant project will focus on the reproduction of VLFM algorithms and their deployment in real-world scenarios [24].
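As a rough illustration of the second-generation (modular) pipeline listed above, the following sketch explores until the goal object appears in a semantic map and then switches to point-goal navigation toward it; the environment, map, and exploration policy are mocked placeholders under stated assumptions, not any specific published system.

```python
from typing import Optional, Tuple

class SemanticMap:
    """Toy semantic map updated from each observation."""
    def __init__(self):
        self.objects = {}                      # label -> (x, y)
    def update(self, observation: dict):
        for label, pos in observation.get("detections", {}).items():
            self.objects[label] = pos
    def locate(self, goal_label: str) -> Optional[Tuple[float, float]]:
        return self.objects.get(goal_label)

def modular_object_nav(goal_label: str, env, max_steps: int = 100) -> bool:
    """Second-generation pipeline: explore until the goal is seen on the map,
    then switch to point-goal navigation toward its location."""
    semantic_map = SemanticMap()
    for _ in range(max_steps):
        semantic_map.update(env.observe())
        target = semantic_map.locate(goal_label)
        if target is None:
            env.step("explore")                # frontier-style exploration
        else:
            env.step(("goto", target))         # local planner toward the target
            if env.reached(target):
                return True
    return False

class MockEnv:
    """Minimal stand-in environment so the sketch runs end to end."""
    def __init__(self): self.t = 0
    def observe(self):
        self.t += 1
        return {"detections": {"chair": (3.0, 1.0)}} if self.t > 5 else {}
    def step(self, action): pass
    def reached(self, target): return self.t > 8

print(modular_object_nav("chair", MockEnv()))   # -> True
```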
Latest from Shanghai Jiao Tong University! DyNaVLM: a zero-shot, end-to-end navigation framework
具身智能之心· 2025-06-22 10:56
Core Viewpoint
- The article discusses the development of DyNaVLM, a zero-shot, end-to-end navigation framework that integrates vision-language models (VLM) to enhance navigation capabilities in dynamic environments, overcoming limitations of traditional methods [4][5].

Group 1: Introduction and Optimization Goals
- Navigation is a fundamental capability in autonomous agents, requiring spatial reasoning, real-time decision-making, and adaptability to dynamic environments. Traditional methods face challenges in generalization and scalability due to their modular design [4].
- The advancement of VLMs offers new possibilities for navigation by integrating perception and reasoning within a single framework, although their application in embodied navigation is limited by spatial granularity and contextual reasoning capabilities [4].

Group 2: Core Innovations of DyNaVLM
- **Dynamic Action Space Construction**: DyNaVLM introduces a dynamic action space that allows robots to determine navigation goals based on visual information and language instructions, enhancing movement flexibility in complex environments [6].
- **Collaborative Graph Memory Mechanism**: Inspired by retrieval-augmented generation (RAG), this mechanism enhances memory management for better navigation performance [8].
- **No-Training Deployment Mode**: DyNaVLM can be deployed without task-specific fine-tuning, reducing deployment costs and improving generalization across different environments and tasks [8].

Group 3: System Architecture and Methodology
- **Problem Formalization**: The system takes inputs such as target descriptions and RGB-D observations to determine appropriate actions, maintaining a memory function to extract spatial features [11].
- **Memory Manager**: This component connects the VLM and graph-structured memory, capturing spatial relationships and semantic object information [12].
- **Action Proposer and Selector**: The action proposer simplifies the continuous search space into discrete candidates, while the selector generates final navigation actions based on geometric candidates and contextual memory [14][15].

Group 4: Experimental Evaluation
- **Simulation Environment Evaluation**: DyNaVLM achieved a success rate (SR) of 45.0% and a path-length-weighted success rate (SPL) of 0.232 on the ObjectNav benchmark, outperforming previous VLM frameworks [19][22].
- **Real-World Evaluation**: DyNaVLM demonstrated superior performance in real-world settings, particularly in tasks requiring the identification of multiple targets, showcasing its robustness and efficiency in dynamic environments [27].
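A hedged sketch of the action proposer / selector loop described above, assuming candidate waypoints are proposed on a ring around the robot and a stand-in for the VLM selector picks the one closest to a goal location retrieved from memory; all names and the scoring rule are illustrative, not DyNaVLM's actual implementation.

```python
import math
from typing import List, Tuple

def propose_actions(robot_xy: Tuple[float, float],
                    n_candidates: int = 8,
                    radius: float = 1.0) -> List[Tuple[float, float]]:
    """Action proposer: reduce the continuous search space to a ring of
    discrete candidate waypoints around the robot."""
    return [(robot_xy[0] + radius * math.cos(2 * math.pi * k / n_candidates),
             robot_xy[1] + radius * math.sin(2 * math.pi * k / n_candidates))
            for k in range(n_candidates)]

def select_action(candidates: List[Tuple[float, float]],
                  goal_hint_xy: Tuple[float, float]) -> Tuple[float, float]:
    """Action selector: stand-in for the VLM, which would score candidates against
    the goal description and graph memory; here we simply pick the candidate
    closest to a remembered goal location."""
    return min(candidates,
               key=lambda c: (c[0] - goal_hint_xy[0]) ** 2 + (c[1] - goal_hint_xy[1]) ** 2)

robot = (0.0, 0.0)
goal_memory = (4.0, 3.0)       # e.g. retrieved from the graph memory
for step in range(5):
    robot = select_action(propose_actions(robot), goal_memory)
    print(f"step {step}: move to {robot}")
```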
Wanma Technology 20250612
2025-06-12 15:07
Summary
- Wanma Technology entered the connected-vehicle (telematics) field through its acquisition of Youfang Technology; connected-vehicle revenue grew from RMB 50 million in 2021 to RMB 260 million in 2024, with profit also rising significantly, and the company has built a complete data closed-loop toolchain and an intelligent-driving compute center.
- Domestic connected-vehicle penetration is roughly 80%, while overseas penetration is below 30%; as intelligent driving's demand for data grows, both domestic and overseas markets have substantial room to develop. Robotaxi in particular imposes higher requirements on real-time data monitoring and technology, significantly raising per-vehicle value.
- Youka Technology offers two major solutions, the "Blue Ocean" global vehicle-connectivity platform and a cloud-based autonomous-driving data closed loop, supporting 14 million vehicles; customers include Geely, SAIC, Dongfeng, and Li Auto, and it supports Robotaxi companies' business deployments worldwide.
- Robotaxi is regarded as the "crown jewel" of the connected-vehicle industry; Goldman Sachs forecasts a 96% annualized growth rate for China's Robotaxi market. Regular operations are already under way in Beijing, Wuhan, and Guangzhou, as well as Hong Kong and Dubai, and Tesla is about to launch a related service.
- Robotaxi operations place extremely high demands on network quality, covering operational safety, user interaction, compliance, autonomous-driving data collection, and operations and maintenance, and require HD maps, vehicle-road coordination, remote rescue, and massive data support.
... data-monitoring demand is high, and the requirements on technology and data volume are also higher; in terms of per-vehicle value ...