Six months in the making! Our small-group course on End-to-End and VLA Autonomous Driving is here (single-stage / two-stage / diffusion models / VLA, and more)
自动驾驶之心· 2025-07-09 12:02
Core Viewpoint
- End-to-End Autonomous Driving is the core algorithm for the next generation of mass-production intelligent driving, marking a significant industry shift toward more integrated and efficient systems [1][3].

Group 1: End-to-End Autonomous Driving Overview
- End-to-End approaches can be categorized as single-stage or two-stage; single-stage methods model vehicle planning and control directly from sensor data, avoiding the error accumulation seen in modular pipelines [1][4].
- The emergence of UniAD initiated a new wave of competition in the autonomous driving sector, with numerous algorithms developing rapidly in response to its success [1][3].

Group 2: Challenges in Learning and Development
- Rapid technological advances have made earlier educational resources outdated, creating a need for updated learning paths covering multi-modal large models, BEV perception, reinforcement learning, and more [3][5].
- Beginners face significant challenges because knowledge is fragmented across many fields, making it difficult to extract frameworks and understand development trends [3][6].

Group 3: Course Structure and Content
- The course on End-to-End and VLA Autonomous Driving addresses these challenges with a structured learning path combining practical applications and theoretical foundations [5][7].
- The curriculum covers the history and evolution of End-to-End algorithms, the background knowledge needed to understand current technologies, and practical applications of various models [8][9].

Group 4: Key Technologies and Innovations
- The course highlights significant advances in two-stage and single-stage End-to-End methods, including notable algorithms such as PLUTO and DiffusionDrive, which represent the forefront of research in the field [4][10][12].
- The integration of vision-language-action (VLA) large models into End-to-End systems is emphasized as a critical direction, with companies actively exploring next-generation mass-production solutions [13][14].

Group 5: Expected Outcomes and Skills Development
- On completing the course, participants are expected to reach a level equivalent to one year of experience as an End-to-End Autonomous Driving algorithm engineer, mastering the main methodologies and key technologies [22][23].
- The course aims to equip participants to apply what they learn to real-world projects, enhancing their employability in the autonomous driving sector [22][23].
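The single-stage idea above can be illustrated with a toy sketch. Everything here (feature dimensions, the linear map, the waypoint count) is a hypothetical stand-in, not the course's or any production system's design; the point is only that one learned map goes from fused sensor features directly to planned waypoints, with no hand-off through a separately supervised perception module where errors could accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

def e2e_planner(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map a fused sensor-feature vector to N planned (x, y) waypoints."""
    return (features @ weights).reshape(-1, 2)

features = rng.normal(size=64)            # stand-in for fused camera/LiDAR features
weights = rng.normal(size=(64, 8)) * 0.1  # would be learned end-to-end in practice
waypoints = e2e_planner(features, weights)
print(waypoints.shape)  # (4, 2): four planned (x, y) waypoints
```

In a real single-stage system the map is a deep network trained jointly from raw sensors to planning, which is what removes the modular hand-off.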
Now even "world models" are getting cold water poured on them? Eric Xing and colleagues expose five major flaws and propose a new paradigm
机器之心· 2025-07-09 07:10
机器之心 report. Editors: Zenan, +0. Today's world models deserve criticism.

We know that large language models (LLMs) produce output by predicting the next word of a conversation. The resulting conversational, reasoning, and even creative abilities are approaching human intelligence. For now, though, large models such as ChatGPT remain visibly short of true AGI. If we could perfectly simulate every possible future of an environment, could we create a powerful AI? Consider humans: unlike ChatGPT, human abilities are differentiated into concrete skills and deep, complex capabilities.

An example of simulated reasoning: a (possibly self-interested) person helps someone who is crying by mentally simulating multiple possible outcomes.

Humans can perform a wide range of complex tasks, all built on the same cognitive architecture of the human brain. Could a single AI system likewise accomplish all of them?

Paper: Critiques of World Models
Link: https://arxiv.org/abs/2507.05169

The researchers identify five key aspects of building and training world models: 1) identifying and preparing training data that contains information about the target world; 2) adopting a general representation space for latent world states, whose meaning may be richer than the directly observed data; 3) designing architectures that can reason effectively over these representations; 4) choosing objective functions that correctly guide model training; ...
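The first four of those aspects can be sketched as a minimal latent world model. Everything below (the linear encoder and transition maps, the dimensions, the random data) is an illustrative assumption, not the paper's construction; it only shows how data, representation, architecture, and objective slot together.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Data: observation sequences carrying information about the target world.
obs = rng.normal(size=(100, 8))          # T x obs_dim

# 2) Representation: a latent space that may be richer than raw observations.
W_enc = rng.normal(size=(8, 4)) * 0.1    # encoder: observation -> latent

# 3) Architecture: a transition model that reasons over the latents.
W_dyn = np.eye(4)                        # latent_t -> latent_{t+1}

# 4) Objective: predicted next latents should match encoded next observations.
z = obs @ W_enc                          # encode every step
pred = z[:-1] @ W_dyn                    # roll the dynamics forward one step
loss = float(np.mean((pred - z[1:]) ** 2))
print(loss >= 0.0)
```

In practice each linear map would be a learned network and the loss would be minimized by gradient descent; the fifth aspect (elided above) is outside this sketch.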
Embodied AI Paper Express | Reinforcement learning, VLA, VLN, world models, and more
具身智能之心· 2025-07-08 12:54
Core Insights
- The article discusses advances in Vision-Language-Action (VLA) models through reinforcement learning (RL), specifically the Proximal Policy Optimization (PPO) algorithm, which significantly enhances these models' generalization capabilities [2][4].

Group 1: VLA Model Enhancements
- Applying PPO led to a 42.6% increase in task success rates in out-of-distribution (OOD) scenarios [2].
- Semantic-understanding success rates improved from 61.5% to 75.0% on unseen objects [2].
- In dynamic-interference scenarios, success rates surged from 28.6% to 74.5% [2].

Group 2: Research Contributions
- A rigorous benchmark was established to evaluate how VLA fine-tuning methods affect generalization across visual, semantic, and execution dimensions [4].
- PPO was identified as superior to other RL algorithms such as GRPO and DPO for VLA fine-tuning, with discussion of how to adapt these algorithms to VLA's unique needs [4].
- An efficient PPO-based fine-tuning scheme was developed, using a shared actor-critic backbone network, VLA model preheating, and minimal PPO training iterations [4].
- The study showed that RL fine-tuning generalizes better than supervised fine-tuning (SFT) for semantic understanding and task execution in VLA, while maintaining comparable visual robustness [4].

Group 3: NavMorph Model
- NavMorph was introduced as a self-evolving world model for vision-and-language navigation in continuous environments, achieving a 47.9% success rate in unseen environments [13][15].
- The model combines a World-aware Navigator, which infers dynamic representations of the environment, with a Foresight Action Planner, which optimizes navigation strategies through predictive modeling [15].
- Experiments on mainstream VLN-CE benchmark datasets showed that NavMorph significantly improves leading models, validating its advantages in adaptability and generalization [15].
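The objective behind such PPO-based fine-tuning can be sketched in a few lines. This is the standard clipped surrogate, not the paper's exact implementation, and the toy log-probabilities and advantages are assumptions for illustration only.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Standard PPO clipped surrogate (to be maximized):
    mean over samples of min(r*A, clip(r, 1-eps, 1+eps)*A),
    where r = exp(logp_new - logp_old) is the policy probability ratio."""
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return float(np.mean(np.minimum(ratio * advantages, clipped * advantages)))

# With identical old and new policies the ratio is 1 everywhere,
# so the objective reduces to the mean advantage.
adv = np.array([1.0, 2.0, 3.0])
print(ppo_clip_objective(np.zeros(3), np.zeros(3), adv))  # 2.0
```

The clipping is what keeps each update close to the pre-trained VLA policy, which is one reason PPO-style fine-tuning can improve generalization without destroying the backbone's prior knowledge.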
A 20,000-word survey: Video future frame synthesis, from deterministic to generative methods
自动驾驶之心· 2025-07-08 12:45
Core Insights
- The article surveys Future Frame Synthesis (FFS), which aims to generate future frames from existing content, emphasizing the synthesis aspect and broadening the scope of video frame prediction [2][5].
- It highlights the transition from deterministic to generative methods in FFS, underscoring the growing importance of generative models for producing realistic and diverse predictions [5][10].

Group 1: Introduction to FFS
- FFS generates future frames from a series of historical frames, or even a single context frame, and its learning objective is seen as a core component of building world models [2][3].
- The key challenge is designing models that balance complex scene dynamics and temporal coherence while minimizing inference latency and resource consumption [2][3].

Group 2: Methodological Approaches
- Early FFS methods followed two main designs: pixel-based methods, which struggle with object appearance and disappearance, and methods that generate future frames from scratch but often lack high-level semantic context [3][4].
- The article categorizes FFS methods into deterministic, stochastic, and generative paradigms, each representing a different modeling approach [8][9].

Group 3: Challenges in FFS
- Long-term challenges include designing algorithms that balance low-level pixel fidelity with high-level scene understanding, and the lack of reliable metrics for perception and stochasticity [11][12].
- The scarcity of high-quality, high-resolution datasets limits current video synthesis models' ability to handle diverse and unseen scenarios [18][19].

Group 4: Datasets and Their Importance
- Video synthesis models depend heavily on the diversity, quality, and characteristics of their training datasets; higher-dimensional datasets provide greater variability and stronger generalization [21][22].
- The article summarizes widely used video synthesis datasets, highlighting their scale and available supervision signals [21][24].

Group 5: Evaluation Metrics
- Traditional low-level metrics such as PSNR and SSIM often reward blurry predictions, prompting researchers to explore alternatives that align better with human perception [12][14].
- Recent comprehensive evaluation suites such as VBench and FVMD assess video generation models along multiple axes, including perceptual quality and motion consistency [14][15].
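As a concrete reference for the low-level metrics mentioned above, PSNR can be computed in a few lines. This is a minimal sketch assuming frames normalized to [0, 1]; real evaluation toolkits handle color spaces, data ranges, and windowed SSIM far more carefully.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames in [0, max_val]."""
    mse = float(np.mean((pred - target) ** 2))
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

target = np.ones((4, 4))
blurry = np.full((4, 4), 0.5)  # a uniformly wrong "prediction"
print(round(psnr(blurry, target), 2))  # 6.02 dB: low fidelity
```

Because PSNR averages per-pixel error, a blurred mean over plausible futures can score higher than any single sharp future, which is exactly the failure mode that motivates perceptual and generative metrics.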
Exclusive interview with the CEO of Deep Robotics, one of Hangzhou's "Six Little Dragons": Humanoid robots doing household chores is still 10 years away
36氪· 2025-07-08 09:18
Core Viewpoint
- The article discusses the rise of embodied intelligence in robotics, highlighting the advances and future potential of companies like Deep Robotics (Yunshenchu Technology) in the AI-driven industrial revolution [1].

Group 1: Company Overview
- Deep Robotics, founded in 2017, recently completed a financing round of nearly 500 million RMB, led by various investment funds [4][5].
- The company is known for its quadruped robot "Jueying," which has gained attention for navigating complex terrain [4][5].
- Founder Zhu Qiuguo emphasizes a low-profile approach, focusing on technology development rather than personal publicity [4][5].

Group 2: Technological Advancements
- The company is shifting from hardware-focused development to software-driven application validation, a significant change in its operational strategy [11].
- Zhu attributes the breakthrough in robot stability around 2024 to advances in large AI models and reinforcement learning [5][15].
- The "world model" concept is being introduced to reduce robots' dependence on vast amounts of training data, letting them generalize better in unfamiliar environments [22][24].

Group 3: Market Applications
- Deep Robotics is exploring new markets for its robots, particularly logistics and delivery, aiming to improve "last mile" efficiency [7][41].
- The company plans to raise daily delivery capacity from 200 to 300 orders by pairing robots with human delivery personnel [7][41].
- Current applications of the quadruped robots include power inspection, emergency response, and security patrols, showcasing their versatility across sectors [35][39].

Group 4: Future Prospects
- The company plans to launch humanoid robots in the second half of 2025, though Zhu estimates it will take at least 10 years before humanoid robots can effectively perform household tasks [8][26].
- Zhu believes quadruped and humanoid robots will coexist, each serving different operational needs and environments [31][32].
- The future of robotics is expected to center on rising intelligence levels, shifting toward more autonomous and interactive capabilities across settings [44][45].
The Feeling Catcher
36Kr· 2025-07-08 09:04
Group 1
- The article discusses the importance of intuitive, embodied intelligence, emphasizing that true understanding comes from experience rather than abstract reasoning [1][39][84].
- It highlights the concept of "world models" in AI, which aim to let machines understand and interact with the physical world in a more human-like manner [23][76][84].
- It draws parallels between human cognitive processes and AI development, suggesting that both rely on a form of non-verbal, intuitive understanding [17][29][72].

Group 2
- The article notes current AI systems' limitations in understanding the physical world relative to humans, particularly in spatial reasoning and perception [18][22][25].
- It discusses the evolution of intelligence, noting that human cognitive abilities were shaped by millions of years of evolution, which AI is still trying to replicate [21][75].
- It concludes that as AI develops its own "taste" through embodied experience, it may reach a level of understanding that parallels human intuition [72][84][85].
A Topical Analysis of the AI Large Model Industry
2025-07-07 00:51
Summary of Key Points from the Conference Call

Industry Overview
- The call focuses on the AI large model industry, particularly developments at OpenAI, Google, and NVIDIA, as well as the competitive landscape in China [1][22].

Core Insights and Arguments
- **GPT-5 Release and Features**: GPT-5 is expected in the second half of 2025 or early 2026, with a parameter scale of 3-4 trillion, optimized reasoning chains, and general reasoning capabilities beyond STEM logic [1][2][5].
- **OpenAI's Strategy**: OpenAI plans to offer free basic features to widen its lead over domestic models while expanding its B2B business. Despite steady price increases, user traffic continues to grow [1][3][4].
- **Google's Veo Model**: Google's Veo video-generation model, released in May, integrates image generation, animation dubbing, and lip-syncing, simplifying video production, but adoption is limited by high pricing [1][11][12].
- **Domestic Competitors**: Chinese companies such as Alibaba and ByteDance are expected to ship products reaching 90% of Veo 3's performance within 3-6 months, though they face constraints in computational power [1][13][14].
- **NVIDIA's Cosmos Model**: NVIDIA's Cosmos world model is seen as a significant future direction, with a full-stack approach spanning chips, systems, and simulation engines [1][15][20].

Additional Important Content
- **Market Dynamics**: The AI large model market is advancing rapidly on the back of underlying technology upgrades, with a notable narrowing of the gap between domestic and international players [22][23].
- **Application Areas**: AI technology performs strongly in mobile application development, industrial visual inspection, productivity enhancement, and B2B scenarios, particularly software development, e-commerce customer service, financial management, and recruitment [3][31][32][33].
- **Pricing Trends**: OpenAI and other companies are adjusting prices dynamically, with a general downward trend as performance improves [7][8].
- **Challenges in Data and Computational Power**: Domestic firms have ample data sources but lag in computational resources compared to Google, which holds a significant advantage in this area [14][20].
- **Future of AI Models**: World models are crucial for connecting physical AI with relevant hardware, with NVIDIA leading in building a comprehensive ecosystem for data training and simulation [17][19].

This summary encapsulates the key points discussed in the conference call, highlighting the competitive landscape, technological advancements, and market dynamics within the AI large model industry.
"Striking back" at Musk, Altman says OpenAI has "much better" autonomous driving technology
36Kr· 2025-07-07 00:32
Group 1: Conflict Between OpenAI and Tesla
- The conflict between OpenAI CEO Sam Altman and Tesla CEO Elon Musk has become a hot topic in Silicon Valley, with Musk accusing Altman of deviating from OpenAI's original mission after its commercialization [1]
- Musk has sued Altman for allegedly breaching the founding agreement, while also establishing xAI to compete directly with OpenAI [1]
- Altman has countered by releasing emails suggesting that Musk attempted to take control of OpenAI and has obstructed its progress since being rebuffed [1]

Group 2: OpenAI's Autonomous Driving Technology
- Altman has hinted at new technology that could give standard cars self-driving capability, claiming it is significantly better than current approaches, including Tesla's Full Self-Driving (FSD) [3][4]
- Altman gave no details about this technology or a development timeline, indicating it is still at an early stage [5]
- The technology is believed to involve OpenAI's Sora video software and its robotics team, although OpenAI has not previously worked directly on autonomous driving [6][7]

Group 3: Sora and Its Implications for Autonomous Driving
- Sora, a video generation model released by OpenAI, creates high-fidelity videos from text input and is seen as a potential tool for simulating and training autonomous driving systems [10]
- While Sora's generated videos may not fully obey physical principles, they could still provide valuable training data, particularly for extreme scenarios [10][11]
- The concept of "world models" in autonomous driving aligns with Sora's capabilities, aiming to help AI systems understand the physical world and improve driving performance [11][21]

Group 4: OpenAI's Investments and Collaborations
- OpenAI has invested in autonomous driving companies, including a $5 million investment in Ghost Autonomy, which later failed, and a partnership with Applied Intuition to integrate AI technologies into modern vehicles [12][15]
- The Applied Intuition collaboration focuses on enhancing human-machine interaction rather than direct autonomous driving applications [15]
- OpenAI's shift toward multi-modal and world models signals a strategic expansion into spatial intelligence that could eventually benefit autonomous driving efforts [16][24]

Group 5: Industry Perspectives on AI and Autonomous Driving
- AI experts, including prominent figures like Fei-Fei Li and Yann LeCun, emphasize that AI needs a deeper understanding of the physical world to drive vehicles effectively [19][20]
- NVIDIA's introduction of the Cosmos world model highlights the industry's focus on high-quality training data for autonomous systems, which could complement OpenAI's efforts [22][24]
- The autonomous driving market is recognized as a multi-trillion-dollar opportunity, making it a critical battleground for companies like OpenAI and Tesla [24]
The "Whampoa Military Academy" of autonomous driving: a place obsessed with technology
自动驾驶之心· 2025-07-06 12:30
Core Viewpoint
- The article discusses the transition of autonomous driving from Level 2/3 (assisted driving) to Level 4/5 (fully autonomous driving), highlighting the industry's challenges and opportunities as well as the evolving skill requirements for professionals in the field [2].

Industry Trends
- The shift toward high-level autonomy is creating a competitive landscape in which traditional sensor-heavy approaches, such as LiDAR, are being challenged by cost-effective vision-based solutions like Tesla's [2].
- Demand for skills in reinforcement learning and advanced perception algorithms is rising, creating a sense of urgency among professionals to upgrade their capabilities [2].

Talent Market Dynamics
- Seasoned professionals face growing anxiety as they adapt to new technologies and methodologies, while newcomers struggle with the overwhelming number of career paths in the autonomous driving sector [2].
- Falling LiDAR costs, exemplified by Hesai Technology's price drop to $200 and BYD's 70% price reduction, signal a market shift that demands continuous learning and adaptation from industry professionals [2].

Community and Learning Resources
- The "Autonomous Driving Heart Knowledge Planet" community aims to provide comprehensive learning resources and networking opportunities to help professionals navigate the rapidly changing landscape of autonomous driving technology [7].
- The community has attracted nearly 4,000 members and over 100 industry experts, providing a platform for knowledge sharing and career advancement [7].

Technical Focus Areas
- The article outlines key technical areas in autonomous driving, including end-to-end driving systems, perception algorithms, and the integration of AI models for improved performance [10][11].
- It emphasizes the importance of subfields such as multi-sensor fusion, high-definition mapping, and AI model deployment, which are critical to the development of autonomous driving technologies [7].
Latest survey: Learning embodied intelligence from physical simulation and world models
自动驾驶之心· 2025-07-05 13:41
Core Viewpoint
- The article focuses on advances in embodied intelligence for robotics, emphasizing the integration of physical simulators and world models as crucial to developing robust embodied intelligence [3][5].

Group 1: Embodied Intelligence and Robotics
- Embodied intelligence is highlighted as a key research area, emphasizing the importance of physical interaction with the environment for perception, action, and cognition [5].
- The article argues for a scientific, well-reasoned grading system for robotic intelligence, especially in dynamic and uncertain environments [5][6].
- A proposed grading model for intelligent robots defines five progressive levels (IR-L0 to IR-L4), covering autonomy and task-handling capabilities [6][10].

Group 2: Grading System for Intelligent Robots
- The grading system categorizes robots by task execution capability, decision-making depth, interaction complexity, and ethical cognition [7][10].
- Key grading dimensions include autonomy, task-processing ability, environmental adaptability, and social cognition [11].

Group 3: Physical Simulators and World Models
- The article reviews the complementary roles of physical simulators and world models in enhancing robot autonomy, adaptability, and generalization [3][72].
- A resource repository is maintained to provide comprehensive insight into the development of embodied AI systems and future challenges [3].

Group 4: Key Technologies and Trends
- Advances in robotics include integrating model predictive control, reinforcement learning, and imitation learning to enhance robot capabilities [24][25].
- The article traces the evolution of world models, which simulate real-world dynamics and improve the robustness of robotic systems [45][60].

Group 5: Future Directions and Challenges
- Future directions include structured world models, multi-modal integration, and lightweight models for efficient inference [73][72].
- Industry challenges include high-dimensional perception, causal reasoning, and real-time processing requirements [71][73].