Diffusion Models
How Are Industry and Academia Tackling End-to-End and VLA?
自动驾驶之心· 2025-10-17 00:03
Core Insights
- The article discusses the evolution of end-to-end algorithms in autonomous driving, highlighting the transition from modular production algorithms to end-to-end models and now to Vision-Language-Action (VLA) models [1][3]
- It emphasizes the rich technology stack behind end-to-end algorithms, including BEV perception, vision-language models (VLM), diffusion models, reinforcement learning, and world models [3]

Summary by Sections

End-to-End Algorithms
- End-to-end algorithms fall into two main paradigms, single-stage and two-stage, with UniAD being a representative of the single-stage approach [1]
- The single-stage paradigm branches further into several subfields, particularly VLA-based methods, which have seen a surge in publications and industrial adoption in recent years [1]

Courses Offered
- The article promotes two courses, "End-to-End and VLA Autonomous Driving Small Class" and "Practical Course on Autonomous Driving VLA and Large Models," aimed at helping learners enter the field quickly and efficiently [3]
- The "Practical Course" focuses on VLA, covering topics from VLM as an autonomous-driving interpreter to modular and integrated VLA, along with detailed theoretical foundations [3][12]

Instructor Team
- The instructor team includes experts from both academia and industry, with backgrounds in multimodal perception, autonomous driving VLA, and large-model frameworks [8][11][14]
- Notable instructors have published numerous papers at top-tier conferences and have extensive research and production experience in autonomous driving and large models [8][11][14]

Target Audience
- The courses target individuals with a foundational understanding of autonomous driving who are familiar with its basic modules and have working knowledge of transformer models, reinforcement learning, and BEV perception [15][17]
The End of the VAE Era? Xie Saining's Team Debuts "RAE": Representation Autoencoders May Become the New Cornerstone of DiT Training
机器之心· 2025-10-14 08:24
Core Insights
- The article discusses the emergence of RAE (Representation Autoencoders) as a potential replacement for VAE (Variational Autoencoders) in generative modeling, highlighting advances made by the research team of Assistant Professor Xie Saining at New York University [1][2]

Group 1: RAE Development
- RAE pairs pre-trained representation encoders (such as DINO, SigLIP, and MAE) with trained decoders to replace the traditional VAE, achieving high-quality reconstruction and a semantically rich latent space (sketched after this summary) [2][6]
- The new model structure addresses the limitations of VAE, such as weak representation capability and the high computational cost of SD-VAE [4][13]

Group 2: Performance Metrics
- RAE demonstrates superior performance in image generation, achieving an FID of 1.51 at 256×256 resolution without guidance, and 1.13 with guidance at both 256×256 and 512×512 [5][6]
- RAE consistently outperforms SD-VAE in reconstruction quality, with rFID scores indicating better performance across various encoder configurations [18][20]

Group 3: Training and Architecture
- The research introduces DiT^DH, a new variant of the Diffusion Transformer (DiT) that adds a lightweight, wide head to improve efficiency without significantly increasing computational cost [3][34]
- The RAE decoder is trained with a frozen representation encoder and a ViT-based decoder, reaching reconstruction quality comparable to or better than SD-VAE [12][14]

Group 4: Scalability and Efficiency
- DiT^DH converges faster and is more computationally efficient than standard DiT, maintaining its advantage across different RAE scales [36][40]
- DiT^DH-XL sets a new state-of-the-art FID of 1.13 after 400 epochs, outperforming previous models while requiring significantly less compute [41][43]

Group 5: Noise Management Techniques
- The research proposes noise-enhanced decoding to make the decoder more robust to out-of-distribution inputs, improving overall performance [29][30]
- Adjusting the noise schedule to the effective data dimensionality of RAE significantly improves training outcomes, demonstrating the need for tailored noise strategies in high-dimensional latent spaces [28]
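The RAE recipe summarized above — a frozen pretrained encoder supplying a semantically rich latent, with only the decoder trained on reconstruction — is easy to sketch. Below is a minimal, hedged illustration in PyTorch; the toy linear encoder merely stands in for DINO/SigLIP/MAE, and all module names and sizes are illustrative assumptions, not the paper's code.

```python
# RAE-style training sketch: freeze the representation encoder, train only the
# decoder to reconstruct pixels from the encoder's latent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RAE(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int, image_dim: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze the pretrained encoder
            p.requires_grad = False
        # Placeholder MLP decoder; the article describes a ViT-based decoder.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.GELU(),
            nn.Linear(1024, image_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                 # fixed, semantic latent space
            z = self.encoder(x)
        return self.decoder(z)                # gradients reach only the decoder

# Toy usage on flattened 32x32 RGB "images".
encoder = nn.Linear(3 * 32 * 32, 768)         # stand-in for DINO/SigLIP/MAE
rae = RAE(encoder, latent_dim=768, image_dim=3 * 32 * 32)
x = torch.randn(4, 3 * 32 * 32)
loss = F.mse_loss(rae(x), x)                  # pure reconstruction objective
loss.backward()
```

A diffusion transformer such as DiT^DH would then be trained in the frozen encoder's latent space, with the decoder mapping generated latents back to pixels.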
Bugs Become Rewards: AI's Small Mistakes Reveal the Truth About Creativity
36Ke· 2025-10-13 00:31
Core Insights
- The article discusses the surprising creativity of AI models, particularly diffusion models, which generate seemingly novel images rather than mere copies, suggesting that their creativity is a byproduct of their architectural design [1][2][6]

Group 1: AI Creativity Mechanism
- Diffusion models are trained to reconstruct images from noise, yet they produce unique compositions by combining different elements, leading to unexpected and meaningful outputs [2][4]
- The phenomenon of AI generating images with oddities, such as extra fingers, is attributed to the models' inherent limitations, which force them to improvise rather than rely solely on memory [12][19]
- The research identifies two key principles in diffusion models: locality, where the model attends only to small pixel patches, and equivariance, which ensures that shifts in the input image produce corresponding shifts in the output (a toy check follows this summary) [8][9]

Group 2: Mathematical Validation
- The researchers developed the ELS (Equivariant Local Score) machine, a mathematical system that predicts how images will combine as noise is removed, achieving a remarkable 90% overlap with the outputs of real diffusion models [13][18]
- This finding suggests that AI creativity is not a mysterious phenomenon but a predictable consequence of the models' operating rules [18]

Group 3: Biological Parallels
- The study draws parallels between AI creativity and biological processes, particularly embryonic development, where local responses drive self-organization and sometimes produce anomalies such as extra fingers [19][21]
- It posits that human creativity may not be fundamentally different from AI creativity, as both stem from a limited view of the world and the ability to recombine experience into new forms [21][22]
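The two principles named above lend themselves to a quick numerical check. The sketch below is a toy assumption, not the authors' ELS machine: an untrained convolutional "denoiser" with circular padding, where 3×3 kernels enforce locality and circularly shifting the input shifts the output by the same amount.

```python
# Toy check of locality and translation equivariance in a conv "denoiser".
import torch
import torch.nn as nn

torch.manual_seed(0)
denoiser = nn.Sequential(    # 3x3 kernels => each layer sees a local patch
    nn.Conv2d(1, 8, kernel_size=3, padding=1, padding_mode="circular"),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1, padding_mode="circular"),
)

x = torch.randn(1, 1, 16, 16)                        # a noisy toy "image"
shift = lambda t: torch.roll(t, shifts=2, dims=-1)   # shift 2 pixels right

# Equivariance: denoise-then-shift equals shift-then-denoise (circular case).
print(torch.allclose(shift(denoiser(x)), denoiser(shift(x)), atol=1e-5))  # True
```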
Beihang Team Proposes a New Offline Hierarchical Diffusion Framework: Stable Offline Policy Learning via Structural Information Principles | NeurIPS 2025
AI前线· 2025-10-09 04:48
Core Insights
- The article discusses the potential of SIHD (Structural Information-based Hierarchical Diffusion), a new framework for offline reinforcement learning that adapts to diverse tasks by analyzing the structural information embedded in offline trajectories [2][3][23]

Research Background and Motivation
- Offline reinforcement learning aims to train effective policies from fixed historical datasets without new environment interaction; the introduction of diffusion models helps mitigate extrapolation errors caused by out-of-distribution states and actions [3][4]
- Current methods are limited by fixed hierarchical structures and single time scales, which hinder adaptation to varying task complexity and flexible decision-making [5][6]

SIHD Framework Core Design
- SIHD innovates in three areas: hierarchy construction, conditional diffusion, and regularized exploration [5]
- Hierarchy construction is adaptive, letting the data's inherent structure dictate the hierarchy rather than a fixed design [7][9]
- The conditional diffusion model uses structural information gain as its guidance signal, improving stability and robustness over methods that rely on sparse reward signals [10][11]
- A structural entropy regularizer encourages exploration and mitigates extrapolation errors, balancing exploration and exploitation in the training objective; see the sketch after this summary [12][13]

Experimental Results and Analysis
- SIHD was evaluated on the D4RL benchmark, demonstrating superior performance on standard offline RL tasks and long-horizon navigation tasks [14][15]
- On Gym-MuJoCo tasks, SIHD achieved optimal average returns across data-quality levels, outperforming advanced hierarchical baselines with average improvements of 3.8% and 3.9% on medium-quality datasets [16][17][18]
- On long-horizon navigation tasks, SIHD showed significant advantages, particularly under sparse rewards, with notable gains on Maze2D and AntMaze [19][20][22]
- Ablation studies confirmed the necessity of SIHD's components, especially the adaptive multi-scale hierarchy, which is crucial for long-horizon performance [21][22]

Conclusion
- The SIHD framework constructs an adaptive multi-scale hierarchical diffusion model that overcomes the rigidity of existing methods and significantly improves offline policy learning in performance, generalization, and robustness [23]
- Future research may explore finer-grained sub-goal conditioning strategies and extend SIHD's ideas to broader diffusion-based generative models [23]
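To make the shape of such a training objective concrete, here is a minimal sketch of a conditional diffusion loss with an entropy-style exploration bonus. It is an assumption-laden illustration: SIHD's regularizer is derived from structural information principles, whereas the proxy below is a generic batch-variance placeholder, and the linear noising and function names are ours, not the paper's.

```python
# Sketch: conditional diffusion loss + exploration regularizer (placeholder).
import torch
import torch.nn.functional as F

def sihd_style_loss(denoiser, actions, cond, lam=0.1):
    """actions: (B, T, A) trajectory segments from the offline dataset;
    cond: conditioning signal, e.g. structural information gain (assumed given)."""
    t = torch.rand(actions.size(0), 1, 1)           # diffusion time in [0, 1]
    noisy = (1 - t) * actions + t * torch.randn_like(actions)
    pred = denoiser(noisy, t, cond)                 # predict the clean actions
    diffusion_loss = F.mse_loss(pred, actions)
    # Entropy proxy: batch variance of predictions; maximizing it (minus sign)
    # rewards exploration, mimicking the role of the structural entropy term.
    entropy_proxy = pred.var(dim=0).mean()
    return diffusion_loss - lam * entropy_proxy

# Toy usage with a trivial denoiser that just echoes its noisy input.
denoiser = lambda x, t, c: x
actions, cond = torch.randn(8, 5, 3), torch.randn(8, 4)
print(sihd_style_loss(denoiser, actions, cond).item())
```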
自动驾驶之心 Is Recruiting Partners! Directions Include 4D Annotation / World Models / Model Deployment
自动驾驶之心· 2025-10-04 04:04
Group 1
- The article announces the recruitment of 10 outstanding partners for the autonomous driving sector, focused on course development, paper guidance, and hardware research [2]
- The areas of expertise sought include large models, multimodal models, diffusion models, end-to-end systems, embodied interaction, joint prediction, SLAM, 3D object detection, world models, closed-loop simulation, and model deployment and quantization [3]
- Preference is given to candidates from universities ranked in the QS Top 200 who hold a master's degree or higher, with priority for those with publications at major conferences [4]

Group 2
- The compensation package includes resource sharing for job seeking, doctoral applications, and overseas study recommendations, along with substantial cash incentives and opportunities to collaborate on entrepreneurial projects [5]
- Interested parties are encouraged to add the editors on WeChat for consultation, noting "organization/company + autonomous driving cooperation inquiry" [6]
First-Ever Synchronized Generation of First-Person Video and Human Motion! New Framework Overcomes Two Technical Barriers in Viewpoint-Action Alignment
量子位· 2025-10-01 01:12
Core Viewpoint
- The article discusses the development of EgoTwin, a framework that generates first-person-perspective video and human motion in a synchronized manner, overcoming significant challenges in viewpoint-action alignment and causal coupling

Group 1: Challenges in First-Person Perspective Generation
- The essence of first-person video is that human motion drives the visual recording: head movement determines camera position and orientation, while full-body motion affects body posture and the surrounding scene [9]
- Two main challenges are identified [10]:
  1. Viewpoint alignment: the camera trajectory in generated videos must precisely match the head trajectory derived from the human motion
  2. Causal interaction: each visual frame provides spatial context for human motion, and newly generated motion in turn alters subsequent visual frames

Group 2: Innovations of EgoTwin
- EgoTwin employs a diffusion Transformer architecture to create a "text-video-action" tri-modal joint generation framework, addressing the challenges above through three key innovations [12]
- First, a three-channel architecture that allows the action branch to cover only the lower half of the text and video branches, ensuring effective interaction [13]
- Second, a head-centered action representation that anchors actions directly to the head joint, achieving precise alignment with first-person observations [20]
- Third, an asynchronous diffusion training framework that balances efficiency and generation quality by accommodating the different sampling rates of the video and action modalities; a sketch of this idea follows the summary [22]

Group 3: Performance Evaluation
- EgoTwin's performance was validated through extensive testing on the Nymeria dataset, which contains 170,000 five-second "text-video-action" triplets [31]
- Evaluation used traditional video- and motion-quality metrics alongside newly proposed consistency metrics [32]
- Results show that EgoTwin significantly outperformed baseline models across nine metrics, with notable improvements in viewpoint-alignment error and hand-visibility consistency [32][34]
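The asynchronous training idea — drawing an independent diffusion timestep per modality so the higher-rate motion stream and the lower-rate video stream can be noised and denoised jointly — can be sketched as follows. The tensor shapes, the linear noising, and the identity stand-in model are assumptions for illustration, not EgoTwin's implementation.

```python
# Sketch of asynchronous diffusion over two modalities with different rates.
import torch
import torch.nn.functional as F

def add_noise(x, t):
    t = t.view(-1, *([1] * (x.dim() - 1)))   # broadcast time over trailing dims
    return (1 - t) * x + t * torch.randn_like(x)

def async_diffusion_step(model, video, action):
    B = video.size(0)
    t_vid = torch.rand(B)                    # timestep for the video stream
    t_act = torch.rand(B)                    # independent timestep for actions
    pred_vid, pred_act = model(add_noise(video, t_vid),
                               add_noise(action, t_act), t_vid, t_act)
    return F.mse_loss(pred_vid, video) + F.mse_loss(pred_act, action)

# Toy usage: 16 video frames vs. 80 motion samples per five-second clip,
# with an identity placeholder for the joint diffusion Transformer.
model = lambda v, a, tv, ta: (v, a)
video, action = torch.randn(2, 16, 64), torch.randn(2, 80, 22)
print(async_diffusion_step(model, video, action).item())
```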
Led by an Industry Heavyweight! Master End-to-End Autonomous Driving in Three Months
自动驾驶之心· 2025-09-29 08:45
Core Viewpoint
- 2023 is identified as the year end-to-end reached production, with 2024 expected to be a significant year for its development in the automotive industry, particularly in autonomous driving technology [1][3]

Group 1: End-to-End Production
- Leading "new force" automakers and established manufacturers have already achieved end-to-end production [1]
- The industry follows two main paradigms, one-stage and two-stage, with UniAD representing the one-stage approach [1]

Group 2: Development Trends
- Since last year, the one-stage end-to-end approach has evolved rapidly, spawning variants such as perception-based, world-model-based, diffusion-model-based, and VLA-based one-stage methods [3]
- Major autonomous driving companies are focusing on in-house development and mass production of end-to-end autonomous driving solutions [3]

Group 3: Course Offerings
- A course titled "End-to-End and VLA Autonomous Driving" has been launched, covering cutting-edge algorithms in both one-stage and two-stage end-to-end approaches [5]
- The course aims to provide insight into the latest technologies in the field, including BEV perception, vision-language models, diffusion models, and reinforcement learning [5]

Group 4: Course Structure
- The course opens with an introduction to end-to-end algorithms, followed by the background knowledge essential for understanding the technology stack [9][10]
- The second chapter focuses on the technical keywords most frequently asked about in job interviews over the next two years [10]
- Subsequent chapters cover two-stage end-to-end methods, one-stage end-to-end methods, and practical assignments involving RLHF fine-tuning [12][13]

Group 5: Learning Outcomes
- Upon completion, participants are expected to reach a level equivalent to one year of experience as an end-to-end autonomous driving algorithm engineer [19]
- The course aims to deepen understanding of key technologies such as BEV perception, multimodal large models, and reinforcement learning, enabling participants to apply what they learn to real projects [19]
Meta Just Poached Tsinghua Alumnus Yang Song from OpenAI
36氪· 2025-09-26 13:35
Core Viewpoint
- The recent hiring of Yang Song, a key figure in diffusion models and an early contributor to DALL·E 2, by Meta Superintelligence Labs (MSL) signals a strategic move in the AI competition, enhancing MSL's talent pool and research capabilities [2][3][11]

Group 1: Talent Acquisition and Team Structure
- Yang Song's arrival strengthens the team's "dual-core" structure, with one leader managing overall strategy and the other focusing on critical research paths [16]
- The team's composition is becoming clearer, with a more structured division of research responsibilities [17]
- Since the summer, more than 11 researchers from OpenAI, Google, and Anthropic have joined MSL, reflecting a high-frequency recruitment strategy [20]

Group 2: Industry Trends and Dynamics
- Rapid turnover of talent among top AI labs is becoming more common, reflecting a shift toward project fit and team dynamics as key factors in employment decisions [25]
- The relationship between researchers and labs is evolving into a "mutual pursuit," with both parties seeking alignment in goals and capabilities [47]
- Competition for AI talent is intensifying, with growing demands on researchers to understand cross-modal capabilities and complete data workflows [48]

Group 3: Research Focus and Strategic Alignment
- Yang Song's research on diffusion models aligns closely with MSL's strategic direction of building universal models that can understand diverse forms of data [28][30]
- His expertise is expected to help MSL build a comprehensive AI product system, accelerating a complete technical loop from modeling to execution [32][41]
- Meta is not only attracting top talent but also working to convert those capabilities into organizational and product-level assets [44]
AnchDrive: A New End-to-End Autonomous Driving Diffusion Policy (Shanghai University & Bosch)
自动驾驶之心· 2025-09-26 07:50
Core Insights
- The article introduces AnchDrive, an end-to-end framework for autonomous driving that effectively addresses the challenges of multimodal behavior and generalization in long-tail scenarios [1][10][38]
- AnchDrive uses a hybrid trajectory-anchor approach, combining dynamic and static anchors to improve trajectory quality and planning robustness [10][38]

Group 1: Introduction and Background
- End-to-end autonomous driving algorithms have attracted significant attention for their superior scalability and adaptability compared with traditional rule-based motion planning [4][12]
- These methods learn control signals directly from raw sensor data, reducing the complexity of modular design and minimizing accumulated perception errors [4][12]

Group 2: Methodology
- AnchDrive employs a multi-head trajectory decoder that dynamically generates a set of trajectory anchors capturing behavioral diversity under local environmental conditions [8][15]
- The framework also integrates a large-scale static anchor set derived from human driving data, providing cross-scenario behavioral priors [8][15]

Group 3: Experimental Results
- On the NAVSIM v2 simulation platform, AnchDrive achieved an Extended Predictive Driver Model Score (EPDMS) of 85.5, demonstrating robust and contextually appropriate behavior in complex driving scenarios [9][30][34]
- AnchDrive significantly outperformed existing methods, improving EPDMS by 8.9 points over VADv2 while reducing the number of trajectory anchors from 8192 to just 20 [34]

Group 4: Contributions
- The main contribution is the AnchDrive framework itself, which uses a truncated diffusion process initialized from a hybrid trajectory-anchor set, significantly improving initial trajectory quality and planning robustness; a sketch of truncated sampling follows below [10][38]
- A mixed perception model with dense and sparse branches enhances the planner's understanding of obstacles and road geometry [11][18]
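The truncated diffusion idea — starting the reverse process from partially noised trajectory anchors instead of pure Gaussian noise, so only the tail of the chain needs to run — can be sketched as below. The anchor tensors, the linear noise schedule, and the identity denoiser are illustrative assumptions, not AnchDrive's code.

```python
# Sketch: truncated diffusion sampling initialized from trajectory anchors.
import torch

def truncated_sample(denoiser, anchors, t_start=0.3, steps=10):
    """anchors: (N, T, 2) candidate xy-trajectories; t_start < 1 truncates the
    chain (a full chain would start from pure noise at t = 1)."""
    x = (1 - t_start) * anchors + t_start * torch.randn_like(anchors)
    ts = torch.linspace(t_start, 0.0, steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        x0_hat = denoiser(x, t.expand(x.size(0)))         # predicted clean traj
        x = (1 - t_next) * x0_hat + t_next * torch.randn_like(x0_hat)
    return x

# Toy usage: 20 hybrid anchors of 8 waypoints each (matching the article's
# anchor count), with an identity placeholder denoiser.
denoiser = lambda x, t: x
print(truncated_sample(denoiser, torch.randn(20, 8, 2)).shape)  # (20, 8, 2)
```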