Diffusion Models

I Had Decided to Move into Embodied AI, but Now I'm Having Second Thoughts...
自动驾驶之心· 2025-07-05 09:12
Core Insights
- The article discusses the evolving landscape of embodied intelligence, highlighting its transition from a period of hype to a more measured approach as the technology matures but has not yet reached the productivity stage [2].

Group 1: Industry Trends
- Embodied intelligence has gained significant attention over the past few years, but the industry now recognizes that it is still in the early stages of development [2].
- There is growing demand for skills in multi-sensor fusion and robotics, particularly in areas like SLAM and ROS, which are crucial for engaging with embodied intelligence [3][4].
- Many companies in the robotics sector are developing rapidly, with numerous startups receiving substantial funding, indicating a positive outlook for the industry in the coming years [3][4].

Group 2: Job Market and Skills Development
- The job market for algorithm positions is competitive, with a focus on cutting-edge technologies such as end-to-end models, VLA, and reinforcement learning [3].
- Candidates with a background in robotics and a solid grasp of the latest technologies are likely to find opportunities, especially since traditional robotics remains a primary product line [4].
- The article encourages individuals to strengthen their technical skills in robotics and embodied intelligence to remain competitive in the job market [3][4].

Group 3: Community and Resources
- The article promotes a community platform that offers resources for learning about autonomous driving and embodied intelligence, including video courses and job postings [5].
- The community aims to gather professionals and students interested in smart driving and embodied intelligence, fostering collaboration and knowledge sharing [5].
- The platform provides access to the latest industry trends, technical discussions, and job opportunities, making it a valuable resource for those looking to enter or advance in the field [5].
ICCV 2025 | Cutting Spatio-Temporal Redundancy in Diffusion Models: SJTU's EEdit Achieves Training-Free Image Editing Acceleration
机器之心· 2025-07-05 02:46
Core Viewpoint
- The article discusses the latest research from Professor Zhang Linfeng's team at Shanghai Jiao Tong University, introducing EEdit, a novel framework designed to improve the efficiency of image editing by addressing spatial and temporal redundancy in diffusion models, achieving a speedup of over 2.4 times compared to previous methods [1][6][8].

Summary by Sections

Research Motivation
- The authors identified significant spatial and temporal redundancy in image editing tasks using diffusion models, leading to unnecessary computational overhead, particularly in non-edited areas [12][14].
- The study highlights that the inversion process incurs higher temporal redundancy, suggesting that reducing redundant time steps can significantly accelerate editing tasks [14].

Method Overview
- EEdit employs a training-free caching acceleration framework that reuses output features to compress the inversion time steps and controls the update frequency of region markings through region score rewards [15][17].
- The framework is designed to adapt to various input types for editing tasks, including reference images, prompt-based editing, and drag-region guidance [10][15].

Key Features of EEdit
- EEdit achieves over 2.4X acceleration in inference speed compared to the unaccelerated version and up to 10X speedup compared to other image editing methods [8][9].
- The framework addresses the computational waste caused by spatial and temporal redundancy, optimizing the editing process without compromising quality [9][10].
- EEdit supports multiple types of input guidance, enhancing its versatility in image editing tasks [10].

Experimental Results
- The performance of EEdit was evaluated on several benchmarks, demonstrating superior efficiency and quality compared to existing methods [26][27].
- EEdit outperformed other methods on PSNR, LPIPS, SSIM, and CLIP metrics, showcasing its competitive edge in both speed and quality [27][28].
- The spatial locality caching algorithm (SLoC) used in EEdit was found to be more effective than other caching methods, achieving better acceleration and foreground preservation [29].
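To make the caching idea concrete, below is a minimal, hypothetical sketch of region-aware feature reuse in a token-based diffusion backbone: background tokens reuse cached block outputs, while tokens with high region scores are recomputed. The class name, the refresh interval, and the top-20% "hot token" rule are assumptions for illustration; this is not the EEdit/SLoC implementation.

```python
import torch
import torch.nn as nn

class RegionAwareCache:
    """Toy feature cache in the spirit of training-free caching acceleration:
    recompute tokens near the edited region every step, and reuse cached
    outputs for background tokens most of the time."""

    def __init__(self, refresh_every: int = 4, hot_fraction: float = 0.2):
        self.refresh_every = refresh_every  # assumed full-refresh interval
        self.hot_fraction = hot_fraction    # assumed fraction of tokens recomputed per step
        self.cached = None

    def __call__(self, block, tokens, region_score, step):
        # tokens: (batch, num_tokens, dim); region_score: (num_tokens,),
        # higher means closer to the user-edited area.
        if self.cached is None or step % self.refresh_every == 0:
            self.cached = block(tokens)                    # periodic full recompute
            return self.cached
        hot = region_score >= region_score.quantile(1.0 - self.hot_fraction)
        out = self.cached.clone()
        out[:, hot] = block(tokens[:, hot])                # refresh only "hot" tokens
        self.cached = out
        return out

# usage sketch with a stand-in token mixer (not a real diffusion backbone)
block = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
cache = RegionAwareCache()
tokens = torch.randn(1, 256, 64)
region_score = torch.rand(256)
for step in range(8):
    tokens = cache(block, tokens, region_score, step)
```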
Physicists Draw on Biology to Uncover the Source of AI Creativity: It Turns Out to Stem from a "Technical Flaw"
量子位· 2025-07-04 04:40
Core Viewpoint
- The creativity exhibited by AI, particularly in diffusion models, is hypothesized to be a result of the model architecture itself, rather than a flaw or limitation [1][3][19].

Group 1: Background and Hypothesis
- AI systems, especially diffusion models like DALL·E and Stable Diffusion, are designed to replicate training data but often produce novel images instead [3][4].
- Researchers have been puzzled by the apparent creativity of these models, questioning how they generate new samples rather than merely memorizing data [8][6].
- The hypothesis presented by physicists Mason Kamb and Surya Ganguli suggests that the noise reduction process in diffusion models may lead to information loss, akin to a puzzle missing its instructions [8][9].

Group 2: Mechanisms of Creativity
- The study draws parallels between the self-assembly processes in biological systems and the functioning of diffusion models, particularly focusing on local interactions and symmetry [11][14].
- The concepts of locality and equivariance in diffusion models are seen as both limitations and sources of creativity, as they force the model to focus on smaller pixel groups without a complete picture [15][19].
- The researchers developed a system called the Equivariant Local Score Machine (ELS) to validate their hypothesis, which demonstrated a 90% accuracy in matching the outputs of trained diffusion models [18][19].

Group 3: Implications and Further Questions
- The findings suggest that the creativity of diffusion models may be an emergent property of their operational dynamics, rather than a separate, higher-level phenomenon [19][21].
- Questions remain regarding the creativity of other AI systems, such as large language models, which do not rely on the same mechanisms of locality and equivariance [21][22].
- The research indicates that both human and AI creativity may stem from an incomplete understanding of the world, leading to novel and valuable outputs [21][22].
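The locality and equivariance constraints discussed above can be made concrete with a small sketch: a purely convolutional score network sees each pixel only through a small receptive field and responds identically to shifted inputs. This is a toy illustration of the two properties, not the authors' ELS machine; the class name and layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class LocalEquivariantScore(nn.Module):
    """Toy score network that is (i) local - every output pixel depends only on a
    small receptive field - and (ii) translation-equivariant, because it is built
    purely from convolutions (circular padding keeps the equivariance exact)."""

    def __init__(self, channels: int = 3, hidden: int = 32, kernel: int = 3):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel, padding=pad, padding_mode="circular"),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, kernel, padding=pad, padding_mode="circular"),
        )

    def forward(self, x):
        return self.net(x)

# equivariance check: scoring a shifted image equals shifting the scored image
model = LocalEquivariantScore()
x = torch.randn(1, 3, 32, 32)
shift = lambda t: torch.roll(t, shifts=5, dims=-1)
assert torch.allclose(model(shift(x)), shift(model(x)), atol=1e-5)
```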
Wherever You Draw, It Moves! ByteDance Releases ATI, a "Magic Brush" for Video Generation, Now Open-Sourced!
机器之心· 2025-07-02 10:40
Core Viewpoint
- The article discusses the development of ATI, a new controllable video generation framework by ByteDance, which allows users to create dynamic videos by drawing trajectories on static images, transforming user input into explicit control signals for object and camera movements [2][4].

Group 1: Introduction to ATI
- Angtian Wang, a researcher at ByteDance, focuses on video generation and 3D vision, highlighting the advancements in video generation tasks due to diffusion models and transformer architectures [1].
- The current mainstream methods face a significant bottleneck in providing effective and intuitive motion control for users, limiting creative expression and practical application [2].

Group 2: Methodology of ATI
- ATI accepts two basic inputs: a static image and a set of user-drawn trajectories, which can be any shape, including lines and curves [6].
- The Gaussian Motion Injector encodes these trajectories into motion vectors in latent space, guiding the video generation process frame by frame [6][14].
- The model uses Gaussian weights to ensure that it can "see" the drawn trajectories and understand their relation to the generated video [8][14].

Group 3: Features and Capabilities
- Users can draw trajectories for key actions like running or jumping, with ATI accurately sampling and encoding joint movements to generate natural motion sequences [19].
- ATI can handle up to 8 independent trajectories simultaneously, ensuring that object identities remain distinct during complex interactions [21].
- The system allows for synchronized camera movements, enabling users to create dynamic videos with cinematic techniques like panning and tilting [23][25].

Group 4: Performance and Applications
- ATI demonstrates strong cross-domain generalization, supporting various artistic styles such as realistic films, cartoons, and watercolor renderings [28].
- Users can create non-realistic motion effects, such as flying or stretching, providing creative possibilities for sci-fi or fantasy scenes [29].
- The high-precision model based on Wan2.1-I2V-14B can generate videos comparable to real footage, while a lightweight version is available for real-time interaction in resource-constrained environments [30].

Group 5: Open Source and Community
- The Wan2.1-I2V-14B model version of ATI has been open-sourced on Hugging Face, facilitating high-quality, controllable video generation for researchers and developers [32].
- Community support is growing, with tools like ComfyUI-WanVideoWrapper available to optimize model performance on consumer-grade GPUs [32].
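As a rough sketch of the trajectory-to-condition step, the snippet below rasterizes a drawn trajectory into per-frame Gaussian weight maps that could be used to condition video latents. The function name, the one-point-per-frame trajectory format, and the sigma value are illustrative assumptions, not details of ByteDance's Gaussian Motion Injector.

```python
import torch

def gaussian_motion_maps(trajectory, frames, height, width, sigma=4.0):
    """Rasterize a user-drawn trajectory into per-frame Gaussian weight maps.

    trajectory: (frames, 2) tensor of (x, y) pixel coordinates, one point per frame.
    Returns: (frames, height, width) tensor; each frame holds a soft Gaussian bump
    centered on that frame's trajectory point, usable as a spatial conditioning signal.
    """
    ys = torch.arange(height).view(1, height, 1).float()
    xs = torch.arange(width).view(1, 1, width).float()
    cx = trajectory[:, 0].view(frames, 1, 1)
    cy = trajectory[:, 1].view(frames, 1, 1)
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

# usage sketch: a straight horizontal drag across a 64x64 canvas over 16 frames
traj = torch.stack([torch.linspace(8, 56, 16), torch.full((16,), 32.0)], dim=1)
maps = gaussian_motion_maps(traj, frames=16, height=64, width=64)
print(maps.shape)  # torch.Size([16, 64, 64])
```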
Free Dinner! Join the 机器之心 Talent Dinner at ICML 2025 in Canada
机器之心· 2025-07-01 09:34
Core Viewpoint
- The AI field continues to develop rapidly in 2025, with significant breakthroughs in image and video generation technologies, particularly through diffusion models that enhance image synthesis quality and enable synchronized audio generation in video content [1][2].

Group 1: AI Technology Advancements
- The use of diffusion models has led to unprecedented improvements in image synthesis quality, enhancing resolution, style control, and semantic understanding [2].
- Video generation technology has evolved, exemplified by Google's Veo 3, which achieves native audio synchronization, marking a significant advancement in video generation capabilities [2].

Group 2: Academic Collaboration and Events
- The ICML conference, a leading academic event in the AI field, will take place from July 13 to July 19, 2025, in Vancouver, Canada, showcasing top research achievements [4].
- The "Yunfan・ICML 2025 AI Talent Meetup" is organized to facilitate informal discussions among professionals, focusing on cutting-edge technologies and talent dialogue [5][7].

Group 3: Event Details
- The meetup will feature engaging activities including talks by young scholars, talent showcases, interactive experiences, institutional presentations, and networking dinners, aimed at fostering discussions on key issues in technology and application [7][8].
- The event is scheduled for July 15, 2025, from 16:00 to 20:30, with a capacity of 200 participants [8].
UofT, UBC, MIT, Fudan and Other Institutions Jointly Release a Comprehensive Survey of Diffusion-Model-Driven Anomaly Detection and Generation
机器之心· 2025-06-30 23:48
Diffusion Models (DMs) have shown enormous potential in recent years, making notable progress across many tasks in computer vision and natural language processing, while Anomaly Detection (AD), a key research task in artificial intelligence, plays an important role in numerous real-world scenarios such as industrial manufacturing, financial risk control, and medical diagnosis. Recently, researchers from renowned institutions including the University of Toronto, the University of British Columbia, MIT, the University of Sydney, Cardiff University, and Fudan University jointly completed a long-form survey titled "Anomaly Detection and Generation with Diffusion Models: A Survey", the first to focus on the application of DMs to anomaly detection and generation. The survey systematically reviews the latest progress on image, video, time-series, tabular, and multimodal anomaly detection tasks, provides a comprehensive taxonomy from the diffusion-model perspective, and looks ahead to future trends and opportunities in light of ongoing generative AI research, aiming to guide researchers and practitioners in the field.

Paper title: Anomaly Detection and Generation with Diffusion Models: A Survey

Paper link: https://arxiv.org/pdf/2506.09368 ...
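One recipe that surveys of this kind typically cover is reconstruction-based anomaly detection: partially noise the input, let a diffusion model trained on normal data denoise it, and score anomalies by the reconstruction error. The sketch below assumes a pretrained `denoise(x_t, t)` predictor and its noise schedule; it illustrates the general recipe rather than any specific method from the survey.

```python
import torch

def diffusion_anomaly_score(x, denoise, alphas_cumprod, t=250):
    """Reconstruction-based anomaly score with a pretrained diffusion model.

    x: (B, C, H, W) input images.
    denoise: assumed callable denoise(x_t, t) -> x0_hat, a model trained on
             normal data that predicts the clean image from a noised one.
    alphas_cumprod: (T,) cumulative noise schedule of that model.
    t: how far to noise the input; larger t erases more of the anomaly.
    """
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x)
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise   # forward noising
    x0_hat = denoise(x_t, t)                               # model "repairs" toward normal data
    # anomalous regions are poorly reconstructed -> large per-pixel error
    err = (x - x0_hat).abs().mean(dim=1)                   # (B, H, W) anomaly map
    return err, err.flatten(1).mean(dim=1)                 # pixel-level map + image-level score
```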
ICML 2025 Spotlight | A New Theoretical Framework Unlocks Guided Generation for Flow Matching Models
机器之心· 2025-06-28 02:54
Core Viewpoint
- The article introduces a novel energy guidance theoretical framework for flow matching models, addressing the gap in energy guidance algorithms within this context and proposing various practical algorithms suitable for different tasks [2][3][27].

Summary by Sections

Research Background
- Energy guidance is a crucial technique in the application of generative models, ideally altering the distribution of generated samples to align with a specific energy function while maintaining adherence to the training set distribution [7][9].
- Existing energy guidance algorithms primarily focus on diffusion models, which differ fundamentally from flow matching models, necessitating a general energy guidance theoretical framework for flow matching [9].

Method Overview
- The authors derive a general flow matching energy guidance vector field from the foundational definitions of flow matching models, leading to three categories of practical, training-free energy guidance algorithms [11][12].
- The guidance vector field is designed to direct the original vector field towards regions of lower energy function values [12].

Experimental Results
- Experiments were conducted on synthetic data, offline reinforcement learning, and image linear inverse problems, demonstrating the effectiveness of the proposed algorithms [20][22].
- On synthetic datasets, the Monte Carlo sampling-based guidance algorithm achieved results closest to the ground-truth distribution, validating the correctness of the flow matching guidance framework [21].
- In offline reinforcement learning tasks, the Monte Carlo sampling guidance exhibited the best performance due to the need for stable guidance samples across different time steps [23].
- For image inverse problems, the Gaussian approximation guidance and GDM showed optimal performance, while the Monte Carlo sampling struggled due to high dimensionality [25].

Conclusion
- The work fills a significant gap in energy guidance algorithms for flow matching models, providing a new theoretical framework and several practical algorithms, along with theoretical analysis and experimental comparisons to guide real-world applications [27].
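The core idea of steering a flow matching model with an energy function can be sketched as adding a guidance term to the learned velocity field during ODE integration. The simple -scale * ∇E term below is one illustrative, training-free choice under assumed `velocity(x, t)` and `energy(x)` callables; the paper derives its guidance vector field differently and proposes several variants (e.g. Monte Carlo sampling and Gaussian approximation based).

```python
import torch

def guided_flow_sampling(velocity, energy, x0, steps=50, guidance_scale=1.0):
    """Euler integration of a flow matching ODE with a gradient-based energy
    guidance term. `velocity(x, t)` is the learned vector field; `energy(x)` is
    a differentiable scalar energy the samples should minimize. The additive
    -scale * grad E form is an illustrative choice, not the paper's exact
    guidance vector field.
    """
    x = x0.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        with torch.enable_grad():
            x_req = x.detach().requires_grad_(True)
            grad_e = torch.autograd.grad(energy(x_req).sum(), x_req)[0]
        v = velocity(x, t) - guidance_scale * grad_e   # steer toward low energy
        x = x + dt * v                                 # Euler step from t to t + dt
    return x
```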
自动驾驶之心 Is Urgently Hiring! 2025 Business Partner Recruitment, Plenty of Openings~
自动驾驶之心· 2025-06-27 09:34
Group 1
- The article discusses the recruitment of 10 outstanding partners for the "Autonomous Driving Heart" team, focusing on the development of autonomous-driving-related courses, thesis guidance, and hardware development [2][3]
- The main areas of expertise sought include large models/multi-modal large models, diffusion models, VLA, end-to-end systems, embodied interaction, joint prediction, SLAM, 3D object detection, world models, closed-loop simulation with 3DGS, and large model deployment with quantization-aware inference [3]
- Candidates are preferred to hold a master's degree or higher from universities ranked within the QS top 200, with priority given to those with significant contributions at top conferences [4]

Group 2
- The company offers various benefits including resource sharing for job seeking, doctoral studies, and study-abroad recommendations, along with substantial cash incentives and opportunities for entrepreneurial project collaboration [5][6]
- Interested parties are encouraged to contact the company via WeChat to discuss institutional or corporate collaboration in autonomous driving [7]
A New Breakthrough in Embodied World Models: Horizon Robotics & 极佳 Propose a Geometry-Consistent Video World Model to Enhance Robot Policy Learning
机器之心· 2025-06-26 04:35
In recent years, as artificial intelligence has evolved from perceptual intelligence toward decision-making intelligence, world models have gradually become an important research direction in robotics. World models aim to let an agent model its environment and predict future states, enabling more efficient planning and decision-making.

At the same time, embodied data has attracted explosive attention, because current embodied algorithms rely heavily on large-scale real-robot demonstration data, whose collection is costly and time-consuming, severely limiting scalability and generalization. Although simulation platforms offer a relatively low-cost way to generate data, significant visual and dynamical differences between simulation and the real world (the sim-to-real gap) make policies trained in simulation hard to transfer directly to real robots, limiting their practical effectiveness. How to efficiently acquire, generate, and exploit high-quality embodied data has therefore become one of the core challenges in robot learning today.

Project page: https://horizonrobotics.github.io/robot_lab/robotransfer/

Imitation learning has become one of the key methods in robotic manipulation. By having a robot "imitate" expert demonstrations, effective policy models can be built quickly for complex tasks. However, such methods typically rely on large amounts of high-quality real robot ...
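Since the passage above refers to imitation learning from expert demonstrations, here is a minimal behavior-cloning sketch of that setting: a policy network regressed onto expert actions. The observation/action dimensions and the MLP policy are illustrative assumptions unrelated to the actual RoboTransfer models; the point is only that such training consumes (real or generated) demonstration data.

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning sketch: regress expert actions from observations.
obs_dim, act_dim = 32, 7          # e.g. proprioceptive features -> 7-DoF arm command (assumed)
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def bc_update(expert_obs, expert_act):
    """One behavior-cloning step: minimize MSE to the expert's action."""
    pred = policy(expert_obs)
    loss = nn.functional.mse_loss(pred, expert_act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# usage: iterate over (obs, action) pairs drawn from expert demonstrations
loss = bc_update(torch.randn(64, obs_dim), torch.randn(64, act_dim))
```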
A Generative Perspective Reshapes Supervised Learning! Labels Are Not Just Answers but Learning Guides | ICML 2025
量子位· 2025-06-24 13:36
Core Viewpoint
- A new paradigm in supervised learning called Predictive Consistency Learning (PCL) is introduced, which redefines the role of labels as auxiliary references rather than just standard answers for comparison [1][5].

Group 1: Training Process Overview
- PCL aims to capture complex label representations by progressively decomposing label information, allowing the model to predict complete labels from partial label hints [5][6].
- The training process involves mapping noisy labels back to true labels, with noise levels controlled by time steps, ensuring predictions remain consistent across different noise levels [7][8].

Group 2: Noise Process
- The noise process for discrete labels is modeled with a categorical distribution, while continuous labels follow a Gaussian diffusion model, introducing noise progressively [9][11].
- When labels are too complex, PCL introduces Gaussian noise directly into the latent embedding space, aligning with the continuous-label noise process [11].

Group 3: Testing Process Overview
- After training, the model can predict efficiently by sampling from a random noise distribution, achieving results that surpass traditional supervised learning even without label hints [14][28].
- A multi-step inference strategy is employed to refine predictions, where previous predictions are perturbed with noise to serve as hints for subsequent predictions [14][28].

Group 4: Information Theory Perspective
- PCL proposes a structured learning process that gradually captures information, allowing the model to learn from noisy labels while minimizing dependency on them [15][18].
- The model's goal is to minimize dependence on the noise condition, ensuring predictions remain consistent across varying noise levels [19].

Group 5: Experimental Results
- PCL demonstrates significant improvements in prediction accuracy across various tasks, including image segmentation, graph-based prediction, and language modeling, compared to traditional supervised learning [20][25][30].
- In image segmentation, PCL outperforms traditional methods in single-step prediction and continues to improve with additional prediction steps [22][28].
- The results indicate that while more inference steps can enhance detail capture, they also risk error accumulation, necessitating a balance in the number of steps [26][28].
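A hedged sketch of the training idea described above, for continuous labels: corrupt the label according to a sampled noise level, give the model the input plus the noisy label hint and the noise level, and train it to predict the clean label. The linear-interpolation corruption and the `model(x, y_noisy, noise_level)` signature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def pcl_train_step(model, x, y, optimizer, T: int = 1000):
    """One illustrative Predictive Consistency Learning step for continuous labels:
    the model sees the input, a noised version of the label, and the noise level,
    and must reconstruct the complete clean label."""
    t = torch.randint(1, T + 1, (x.shape[0],), device=x.device)
    noise_level = (t.float() / T).view(-1, 1)               # ~0 = clean hint, 1 = pure noise
    y_noisy = (1 - noise_level) * y + noise_level * torch.randn_like(y)
    y_pred = model(x, y_noisy, noise_level)                 # partial label hint as extra input
    loss = nn.functional.mse_loss(y_pred, y)                # predict the clean label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# at test time: start from y_noisy ~ N(0, I) (no hint) and optionally refine over
# several steps by re-noising the previous prediction and feeding it back as the hint.
```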