Diffusion Models
UofT, UBC, MIT, Fudan, and Others Jointly Release a Comprehensive Survey on Diffusion Model-Driven Anomaly Detection and Generation
机器之心· 2025-06-30 23:48
Diffusion Models (DMs) have shown enormous potential in recent years, achieving remarkable progress in computer vision, natural language processing, and many other tasks. Anomaly Detection (AD), a key research task in artificial intelligence, plays an important role in numerous real-world scenarios such as industrial manufacturing, financial risk control, and medical diagnosis. Recently, researchers from the University of Toronto, the University of British Columbia, MIT, the University of Sydney, Cardiff University, Fudan University, and other well-known institutions jointly completed a long-form survey titled "Anomaly Detection and Generation with Diffusion Models: A Survey", the first to focus on the application of DMs to anomaly detection and generation. The survey systematically reviews the latest advances in image, video, time-series, tabular, and multimodal anomaly detection tasks, provides a comprehensive taxonomy from the diffusion-model perspective, and looks ahead to future trends and opportunities in light of research directions in generative AI, aiming to guide researchers and practitioners in the field.

Paper title: Anomaly Detection and Generation with Diffusion Models: A Survey
Paper link: https://arxiv.org/pdf/2506.09368 ...
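A common pattern among the diffusion-based detectors such surveys cover is reconstruction-based scoring: partially diffuse an input, reconstruct it with a model trained only on normal data, and flag inputs the model cannot reconstruct. The sketch below is a minimal illustration of that idea under invented assumptions; the `toy_denoiser`, the noise schedule, and the scoring rule are stand-ins, not anything from the paper.

```python
import numpy as np

def anomaly_score(x, denoise_fn, t=0.5, n_rounds=4, seed=0):
    """Reconstruction-based anomaly score with a diffusion model.

    Partially diffuse x to noise level t, reconstruct with the model's
    denoiser, and average the per-element reconstruction error over a
    few rounds. A model trained only on normal data reconstructs
    anomalies poorly, so a high score flags an anomaly.
    """
    rng = np.random.default_rng(seed)
    alpha = np.sqrt(1.0 - t)   # simple variance-preserving schedule
    sigma = np.sqrt(t)
    errs = []
    for _ in range(n_rounds):
        x_t = alpha * x + sigma * rng.standard_normal(x.shape)  # forward diffusion
        x_hat = denoise_fn(x_t, t)                              # learned reverse step
        errs.append(np.mean((x - x_hat) ** 2))
    return float(np.mean(errs))

# Stand-in "denoiser" that assumes normal data lies near zero mean:
# it simply shrinks the noisy input back toward 0 (purely illustrative).
toy_denoiser = lambda x_t, t: x_t * np.sqrt(1.0 - t)

normal = np.zeros((8, 8))
anomalous = np.ones((8, 8)) * 3.0
print(anomaly_score(normal, toy_denoiser) < anomaly_score(anomalous, toy_denoiser))  # True
```

In a real detector, `denoise_fn` would be the trained diffusion network and the score is often aggregated per pixel to localize the anomaly rather than averaged globally.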
ICML 2025 Spotlight | A New Theoretical Framework Unlocks Guided Generation for Flow Matching Models
机器之心· 2025-06-28 02:54
Core Viewpoint
- The article introduces a novel energy guidance theoretical framework for flow matching models, addressing the gap in energy guidance algorithms within this context and proposing various practical algorithms suitable for different tasks [2][3][27].

Summary by Sections

Research Background
- Energy guidance is a crucial technique in the application of generative models, ideally altering the distribution of generated samples to align with a specific energy function while maintaining adherence to the training set distribution [7][9].
- Existing energy guidance algorithms primarily focus on diffusion models, which differ fundamentally from flow matching models, necessitating a general energy guidance theoretical framework for flow matching [9].

Method Overview
- The authors derive a general flow matching energy guidance vector field from the foundational definitions of flow matching models, leading to the formulation of three categories of practical, training-free energy guidance algorithms [11][12].
- The guidance vector field is designed to direct the original vector field towards regions of lower energy function values [12].

Experimental Results
- Experiments were conducted on synthetic data, offline reinforcement learning, and image linear inverse problems, demonstrating the effectiveness of the proposed algorithms [20][22].
- In synthetic datasets, the Monte Carlo sampling-based guidance algorithm achieved results closest to the ground truth distribution, validating the correctness of the flow matching guidance framework [21].
- In offline reinforcement learning tasks, the Monte Carlo sampling guidance exhibited the best performance due to the need for stable guidance samples across different time steps [23].
- For image inverse problems, the Gaussian approximation guidance and GDM showed optimal performance, while the Monte Carlo sampling struggled due to high dimensionality [25].
Conclusion
- The work fills a significant gap in energy guidance algorithms for flow matching models, providing a new theoretical framework and several practical algorithms, along with theoretical analysis and experimental comparisons to guide real-world applications [27].
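The core idea of steering a flow toward low-energy regions can be sketched in a few lines. This is only the simplest gradient-based variant, not the paper's Monte Carlo or Gaussian-approximation estimators, and the toy velocity field and quadratic energy are invented for illustration:

```python
import numpy as np

def grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar energy function."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def guided_sample(v_theta, energy, x0, w=1.0, steps=100):
    """Euler-integrate a flow-matching ODE with energy guidance:
    the base velocity field is shifted down the energy gradient so
    samples drift toward regions of lower energy function values."""
    x, dt = x0.astype(float).copy(), 1.0 / steps
    for k in range(steps):
        x += dt * (v_theta(x, k * dt) - w * grad(energy, x))
    return x

# Illustrative setup: a toy velocity field pulling toward (2, 2),
# and an energy function that prefers points near the origin.
target = np.array([2.0, 2.0])
v_theta = lambda x, t: target - x
energy = lambda x: 0.5 * float(np.sum(x ** 2))

x_plain = guided_sample(v_theta, energy, np.zeros(2), w=0.0)
x_guided = guided_sample(v_theta, energy, np.zeros(2), w=1.0)
print(np.linalg.norm(x_guided) < np.linalg.norm(x_plain))  # True
```

The weight `w` trades off fidelity to the base distribution against the energy objective, which mirrors the role of the guidance strength in the framework described above.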
We're Hiring! 2025 Business Partner Recruitment, Plenty of Openings~
自动驾驶之心· 2025-06-27 09:34
Group 1
- The article discusses the recruitment of 10 outstanding partners for the "Autonomous Driving Heart" team, focusing on the development of autonomous driving-related courses, thesis guidance, and hardware development [2][3]
- The main areas of expertise sought include large models/multi-modal large models, diffusion models, VLA, end-to-end systems, embodied interaction, joint prediction, SLAM, 3D object detection, world models, closed-loop simulation 3DGS, and large model deployment and quantized perception reasoning [3]
- Candidates are preferred to have a master's degree or higher from universities ranked within the QS200, with priority given to those who have significant contributions in top conferences [4]

Group 2
- The company offers various benefits including resource sharing for job seeking, doctoral studies, and studying abroad recommendations, along with substantial cash incentives and opportunities for entrepreneurial project collaboration [5][6]
- Interested parties are encouraged to contact the company via WeChat for consultation regarding institutional or corporate collaboration in autonomous driving [7]
A New Breakthrough in Embodied World Models: Horizon Robotics & 极佳 Propose a Geometry-Consistent Video World Model to Enhance Robot Policy Learning
机器之心· 2025-06-26 04:35
In recent years, as artificial intelligence has evolved from perceptual intelligence toward decision-making intelligence, World Models have gradually become an important research direction in robotics. World models aim to let an agent model its environment and predict future states, enabling more efficient planning and decision-making.

Meanwhile, embodied data has attracted explosive attention, because current embodied algorithms rely heavily on large-scale real-robot demonstration data, whose collection is often costly and time-consuming, severely limiting scalability and generalization. Although simulation platforms offer a relatively low-cost way to generate data, significant visual and dynamical differences between simulation and the real world (the sim-to-real gap) make policies trained in simulation hard to transfer directly to real robots, limiting their practical effectiveness. How to efficiently acquire, generate, and utilize high-quality embodied data has therefore become one of the core challenges in robot learning.

Project page: https://horizonrobotics.github.io/robot_lab/robotransfer/

Imitation Learning has become one of the important approaches in robotic manipulation. By letting robots "imitate" expert demonstrations, effective policy models can be built quickly for complex tasks. However, such methods typically rely on large amounts of high-quality real robot ...
A Generative Perspective Reshapes Supervised Learning! Labels Are Not Just Answers but Learning Guides | ICML 2025
量子位· 2025-06-24 13:36
Core Viewpoint
- A new paradigm in supervised learning called Predictive Consistency Learning (PCL) is introduced, which redefines the role of labels as auxiliary references rather than just standard answers for comparison [1][5].

Group 1: Training Process Overview
- PCL aims to capture complex label representations by progressively decomposing label information, allowing the model to predict complete labels with partial label hints [5][6].
- The training process involves mapping noisy labels back to true labels, with noise levels controlled by time steps, ensuring predictions remain consistent across different noise levels [7][8].

Group 2: Noise Process
- The noise process for discrete labels is modeled using a categorical distribution, while continuous labels follow a Gaussian diffusion model, introducing noise progressively [9][11].
- In cases where labels are too complex, PCL introduces Gaussian noise directly into the latent embedding space, aligning with the continuous label noise process [11].

Group 3: Testing Process Overview
- After training, the model can efficiently predict by sampling from a random noise distribution, achieving results that surpass traditional supervised learning even without label hints [14][28].
- A multi-step inference strategy is employed to refine predictions, where previous predictions are perturbed with noise to serve as hints for subsequent predictions [14][28].

Group 4: Information Theory Perspective
- PCL proposes a structured learning process that gradually captures information, allowing the model to learn from noisy labels while minimizing dependency on them [15][18].
- The model's goal is to minimize noise condition dependence, ensuring predictions remain consistent across varying noise levels [19].
Group 5: Experimental Results
- PCL demonstrates significant improvements in prediction accuracy across various tasks, including image segmentation, graph-based predictions, and language modeling, compared to traditional supervised learning [20][25][30].
- In image segmentation, PCL outperforms traditional methods in single-step predictions and continues to improve with additional prediction steps [22][28].
- The results indicate that while more inference steps can enhance detail capture, they also risk error accumulation, necessitating a balance in the number of steps [26][28].
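The discrete-label noise process described above can be sketched with a simple categorical corruption: with probability `t`, each label is resampled, so `t` interpolates between clean labels and pure label noise. The uniform resampling rule here is an illustrative assumption; the paper specifies the exact categorical distribution.

```python
import numpy as np

def corrupt_labels(y, t, n_classes, rng):
    """Categorical forward-noise process for discrete labels: with
    probability t, a label is replaced by one drawn uniformly over
    the classes. t=0 keeps the clean labels; t=1 is pure noise."""
    mask = rng.random(y.shape) < t
    resampled = rng.integers(0, n_classes, size=y.shape)
    return np.where(mask, resampled, y)

rng = np.random.default_rng(0)
y = np.arange(10) % 4                    # clean labels over 4 classes
print(corrupt_labels(y, 0.0, 4, rng))    # t=0: labels unchanged
noisy = corrupt_labels(y, 0.5, 4, rng)   # t=0.5: partial label hints
print(noisy)                             # some labels randomly resampled
```

During training, the model receives the input together with such partially corrupted labels and the time step, and is supervised to reconstruct the clean labels consistently across noise levels.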
π0/π0.5/A0, Hotly Debated in Tech Circles, Finally Explained! A Full Breakdown of Functions, Scenarios, and Methodology~
具身智能之心· 2025-06-21 12:06
Core Insights
- The article discusses the π0, π0.5, and A0 models, focusing on their architectures, advantages, and functionalities in robotic control and task execution [3][11][29].

Group 1: π0 Model Structure and Functionality
- The π0 model is based on a pre-trained Vision-Language Model (VLM) and Flow Matching technology, integrating seven robots and over 68 tasks with more than 10,000 hours of data [3].
- It allows zero-shot task execution through language prompts, enabling direct control of robots without additional fine-tuning for covered tasks [4].
- The model supports complex task decomposition and multi-stage fine-tuning, enhancing the execution of intricate tasks like folding clothes [5].
- It achieves high-frequency precise operations, generating continuous action sequences at a control frequency of up to 50Hz [7].

Group 2: π0 Performance Analysis
- The π0 model shows a 20%-30% higher accuracy in following language instructions compared to baseline models in tasks like table clearing and grocery bagging [11].
- For similar pre-trained tasks, it requires only 1-5 hours of data fine-tuning to achieve high success rates, and it performs twice as well on new tasks compared to training from scratch [11].
- In multi-stage tasks, π0 achieves an average task completion rate of 60%-80% through a "pre-training + fine-tuning" process, outperforming models trained from scratch [11].

Group 3: π0.5 Model Structure and Advantages
- The π0.5 model employs a two-stage training framework and hierarchical architecture, enhancing its ability to generalize from diverse data sources [12][18].
- It demonstrates a 25%-40% higher success rate in tasks compared to π0, with a training speed improvement of three times due to mixed discrete-continuous action training [17].
- The model effectively handles long-duration tasks and can execute complex operations in unfamiliar environments, showcasing its adaptability [18][21].
Group 4: A0 Model Structure and Performance
- The A0 model features a layered architecture that integrates high-level affordance understanding and low-level action execution, enhancing its spatial reasoning capabilities [29].
- It shows continuous performance improvement with increased training environments, achieving success rates close to baseline models when trained on 104 locations [32].
- The model's performance is significantly impacted by the removal of cross-entity and web data, highlighting the importance of diverse data sources for generalization [32].

Group 5: Overall Implications and Future Directions
- The advancements in these models indicate a significant step towards practical applications of robotic systems in real-world environments, with potential expansions into service robotics and industrial automation [21][32].
- The integration of diverse data sources and innovative architectures positions these models to overcome traditional limitations in robotic task execution [18][32].
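The high-frequency action generation attributed to π0 (continuous action chunks at up to 50 Hz via flow matching) can be caricatured as integrating a learned velocity field from noise to an action trajectory. Everything below is a stand-in: the field `v_theta`, the 7-dimensional action space, and the one-second horizon are illustrative assumptions, not the actual model.

```python
import numpy as np

def sample_action_chunk(v_theta, horizon=50, dim=7, steps=10, seed=0):
    """Flow-matching-style action sampling: start a (horizon, dim)
    chunk from Gaussian noise and Euler-integrate a velocity field
    toward the action distribution. At 50 Hz control, horizon=50
    covers one second of motion."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((horizon, dim))
    dt = 1.0 / steps
    for k in range(steps):
        a += dt * v_theta(a, k * dt)
    return a

# Stand-in field transporting noise toward a smooth reference motion
# (purely illustrative, not the trained VLM-conditioned model).
ref = np.sin(np.linspace(0.0, np.pi, 50))[:, None] * np.ones((1, 7))
v_theta = lambda a, t: ref - a

chunk = sample_action_chunk(v_theta)
print(chunk.shape)  # (50, 7)
```

In the real system the velocity field is conditioned on the VLM's vision-language embedding, which is what lets a language prompt select the behavior the chunk encodes.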
Building a 10,000-Strong "Whampoa Academy" of Autonomous Driving: A Place Devoted to Hardcore Technology~
自动驾驶之心· 2025-06-20 14:06
Core Viewpoint
- The article emphasizes the establishment of a comprehensive community for autonomous driving and embodied intelligence, aiming to gather industry professionals and facilitate rapid responses to challenges within the sector. The goal is to create a community of 10,000 members within three years, focusing on academic, product, and recruitment connections in the field [2][4].

Group 1: Community Development
- The community aims to provide a platform for industry professionals to share the latest technological developments, engage in discussions, and access job opportunities [2][3].
- The initiative has already attracted notable figures from companies like Huawei and various leading researchers in the autonomous driving field [2].
- The community is designed to support newcomers by offering structured learning paths and resources to quickly build their technical knowledge [2].

Group 2: Knowledge Sharing and Resources
- The "Autonomous Driving Heart Knowledge Planet" serves as a technical exchange platform, primarily for students and professionals looking to transition into the autonomous driving sector [4][11].
- The community has established connections with numerous companies for recruitment purposes, including well-known names like Xiaomi, NIO, and NVIDIA [4][11].
- Members have access to a wealth of resources, including over 5,000 pieces of content, live sessions with industry experts, and discounts on paid courses [14][18].

Group 3: Technological Focus Areas
- The article outlines key technological areas to focus on by 2025, including visual large language models (VLM), end-to-end trajectory prediction, and 3D generative simulation techniques [6][10].
- The community has developed learning paths covering various subfields such as perception, mapping, and AI model deployment, ensuring comprehensive coverage of the autonomous driving technology stack [11][16].
- Regular live sessions will focus on cutting-edge topics like VLA, large models, and embodied intelligence, providing insights into practical applications and research advancements [19][18].

Group 4: Engagement and Interaction
- The community encourages active participation, with weekly discussions and Q&A sessions to foster engagement among members [12][14].
- It aims to create a supportive environment for both beginners and advanced professionals, facilitating networking and collaboration opportunities [12][11].
- The platform is designed to be a dynamic space where members can freely ask questions and share knowledge, enhancing the overall learning experience [12][11].
Learning End-to-End Large Models, but Still Not Quite Clear on the Difference Between VLM and VLA...
自动驾驶之心· 2025-06-19 11:54
Core Insights
- The article emphasizes the growing importance of large models (VLM) in the field of intelligent driving, highlighting their potential for practical applications and production [2][4].

Group 1: VLM and VLA
- VLM (Vision-Language Model) focuses on foundational capabilities such as detection, question answering, spatial understanding, and reasoning [4].
- VLA (Vision-Language Action) is more action-oriented, aimed at trajectory prediction in autonomous driving, requiring a deep understanding of human-like reasoning and perception [4].
- It is recommended to learn VLM first before expanding to VLA, as VLM can predict trajectories through diffusion models, enhancing action capabilities in uncertain environments [4].

Group 2: Community and Resources
- The article invites readers to join a knowledge-sharing community that offers comprehensive resources, including video courses, hardware, and coding materials related to autonomous driving [4].
- The community aims to build a network of professionals in intelligent driving and embodied intelligence, with a target of gathering 10,000 members in three years [4].

Group 3: Technical Directions
- The article outlines four cutting-edge technical directions in the industry: Visual Language Models, World Models, Diffusion Models, and End-to-End Autonomous Driving [5].
- It provides links to various resources and papers that cover advancements in these areas, indicating a robust framework for ongoing research and development [6][31].

Group 4: Datasets and Applications
- A variety of datasets are mentioned that are crucial for training and evaluating models in autonomous driving, including pedestrian detection, object tracking, and scene understanding [19][20].
- The article discusses the application of language-enhanced systems in autonomous driving, showcasing how natural language processing can improve vehicle navigation and interaction [20][21].
Group 5: Future Trends
- The article highlights the potential for large models to significantly impact the future of autonomous driving, particularly in enhancing decision-making and control systems [24][25].
- It suggests that the integration of language models with driving systems could lead to more intuitive and human-like vehicle behavior [24][25].
Kaiming He's Latest CVPR Lecture Slides Are Online: Toward End-to-End Generative Modeling
机器之心· 2025-06-19 09:30
Core Viewpoint
- The article discusses the evolution of generative models, particularly focusing on the transition from diffusion models to end-to-end generative modeling, highlighting the potential for generative models to replicate the historical advancements seen in recognition models [6][36][41].

Group 1: Workshop Insights
- The workshop led by Kaiming He at CVPR focused on the evolution of visual generative modeling beyond diffusion models [5][7].
- Diffusion models have become the dominant method in visual generative modeling, but they face limitations such as slow generation speed and challenges in simulating complex distributions [6][36].
- Kaiming He's presentation emphasized the need for end-to-end generative modeling, contrasting it with the historical layer-wise training methods prevalent before AlexNet [10][11][41].

Group 2: Recognition vs. Generation
- Recognition and generation can be viewed as two sides of the same coin, where recognition abstracts features from raw data, while generation concretizes abstract representations into detailed data [41][42].
- The article highlights the fundamental differences between recognition tasks, which have a clear mapping from data to labels, and generation tasks, which involve complex, non-linear mappings from simple distributions to intricate data distributions [58].

Group 3: Flow Matching and MeanFlow
- Flow Matching is presented as a promising approach to address the challenges in generative modeling by constructing ground-truth fields that are independent of specific neural network architectures [81].
- The MeanFlow framework introduced by Kaiming He aims to achieve single-step generation tasks by modeling average velocity rather than instantaneous velocity, providing a theoretical basis for network training [83][84].
- Experimental results show that MeanFlow significantly outperforms previous single-step diffusion and flow models, achieving an FID score of 3.43, which is over 50% better than the previous best [101][108].

Group 4: Future Directions
- The article concludes with a discussion on the ongoing research efforts in the field, including Consistency Models, Two-time-variable Models, and revisiting Normalizing Flows, indicating that the field is still in its early stages akin to the pre-AlexNet era in recognition models [110][113].
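The average-velocity idea behind MeanFlow can be written out compactly. The formulation below follows the MeanFlow paper's definitions as best recalled, not the slides themselves, so treat the argument order and sign conventions as an approximation:

```latex
% Average velocity over [r, t], in terms of the instantaneous velocity v:
u(z_t, r, t) \;\triangleq\; \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\,\mathrm{d}\tau
% Differentiating (t - r)\,u(z_t, r, t) with respect to t gives the
% MeanFlow identity, a training target for a network u_\theta that
% never requires evaluating the integral:
u(z_t, r, t) \;=\; v(z_t, t) \;-\; (t - r)\,\frac{\mathrm{d}}{\mathrm{d}t}\,u(z_t, r, t)
% Single-step generation then jumps from noise z_1 to data directly:
x \;=\; z_1 \;-\; u(z_1, 0, 1)
```

The key design choice is that the network models the average velocity over an interval rather than the instantaneous velocity at a point, which is what collapses the many-step ODE integration of standard flow matching into a single evaluation.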
A Single .md File Earns Over 400 Stars: This Survey Fully Analyzes 3D Scene Generation Across Four Paradigms
机器之心· 2025-06-10 08:41
Core Insights
- The article discusses the advancements in 3D scene generation, highlighting a comprehensive survey that categorizes existing methods into four main paradigms: procedural methods, neural network-based 3D representation generation, image-driven generation, and video-driven generation [2][4][7].

Summary by Sections

Overview of 3D Scene Generation
- A survey titled "3D Scene Generation: A Survey" reviews over 300 representative papers and outlines the rapid growth in the field since 2021, driven by the rise of generative models and new 3D representations [2][4][5].

Four Main Paradigms
- The four paradigms provide a clear technical roadmap for 3D scene generation, with performance metrics compared across dimensions such as realism, diversity, viewpoint consistency, semantic consistency, efficiency, controllability, and physical realism [7].

Procedural Generation
- Procedural generation methods automatically construct complex 3D environments using predefined rules and constraints, widely applied in gaming and graphics engines. This category can be further divided into neural network-based generation, rule-based generation, constraint optimization, and large language model-assisted generation [8].

Image-based and Video-based Generation
- Image-based generation leverages 2D image models to reconstruct 3D structures, while video-based generation treats 3D scenes as sequences of images, integrating spatial modeling with temporal consistency [9].

Challenges in 3D Scene Generation
- Despite significant progress, challenges remain in achieving controllable, high-fidelity, and physically realistic 3D modeling. Key issues include uneven generation capabilities, the need for improved 3D representations, high-quality data limitations, and a lack of unified evaluation standards [10][16].
Future Directions
- Future advancements should focus on higher fidelity generation, parameter control, holistic scene generation, and integrating physical constraints to ensure structural and semantic consistency. Additionally, supporting interactive scene generation and unifying perception and generation capabilities are crucial for the next generation of 3D modeling systems [12][18].
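Of the four paradigms, procedural generation is the easiest to make concrete: predefined rules plus a little randomness lay out a scene. The toy sketch below produces a city-block height map; the grid-road rule, lot probability, and height range are all invented for illustration and stand in for the far richer rule systems the survey covers.

```python
import numpy as np

def procedural_city(size=16, road_every=4, p_build=0.6, seed=0):
    """Tiny rule-based scene layout (the procedural paradigm):
    a fixed rule lays out a road grid, and buildings with random
    heights fill the remaining lots, subject to the constraint
    that nothing is placed on a road."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))          # 0 = road or empty lot
    for i in range(size):
        for j in range(size):
            on_road = (i % road_every == 0) or (j % road_every == 0)
            if not on_road and rng.random() < p_build:
                height[i, j] = rng.integers(1, 10)  # floors
    return height

scene = procedural_city()
print(scene.shape)            # (16, 16) height map
print(float(scene[0].sum()))  # row 0 lies on a road, so it is all zeros: 0.0
```

Rule-based pipelines like this are cheap and controllable but limited in realism and diversity, which is exactly the trade-off the survey's comparison across paradigms highlights.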