Diffusion Model

100+ Autonomous Driving Datasets: Here Are 5 You Should Know
自动驾驶之心· 2025-06-22 01:35
Core Viewpoint - The article emphasizes the growing importance of autonomous driving technology and highlights the availability of over 100 high-quality datasets for developers and researchers in the field. It introduces five key datasets covering tasks from perception to visual odometry, providing valuable resources for both beginners and experienced engineers [2]

Dataset Summaries
1. KITTI Dataset - KITTI is one of the most classic and widely used benchmark datasets in autonomous driving. It was collected in Karlsruhe, Germany, using high-precision sensors including stereo color/grayscale cameras, a Velodyne 3D LiDAR, and GPS/IMU. The dataset includes annotations for a range of perception tasks, including stereo vision, optical flow, visual odometry, and 3D object detection and tracking, making it a standard for evaluating vehicle vision algorithms [3]
2. nuScenes Dataset - nuScenes is a large-scale multi-sensor dataset released by Motional, covering 1,000 continuous driving scenes in Boston and Singapore, totaling roughly 15 hours of data. It includes a full sensor suite: six cameras, five millimeter-wave radars, one top-mounted LiDAR, and IMU/GPS. The dataset provides around 1.4 million high-resolution camera images and 390,000 LiDAR scans, annotated with 3D bounding boxes for 23 object categories, making it well suited to research on complex urban road scenarios [5][7]
3. Waymo Open Dataset - The Waymo Open Dataset, released by Waymo (an Alphabet subsidiary), is one of the largest open data resources for autonomous driving. It consists of two main parts: a perception dataset with 2,030 scenes of high-resolution camera and LiDAR data, and a motion dataset with 103,354 vehicle trajectories and corresponding 3D map information. This extensive multi-sensor dataset covers varied times of day, weather conditions, and urban environments, serving as a benchmark for object detection, tracking, and trajectory prediction research [10][12]
4. PathTrack Dataset - PathTrack focuses on person tracking, containing over 15,000 trajectories across 720 sequences. The authors re-trained an existing person-matching network on it, significantly reducing the classification error rate. The dataset is suitable for 2D/3D object detection, tracking, and trajectory prediction tasks [13][14][15]
5. ApolloScape Dataset - ApolloScape, released by Baidu Apollo, is a massive autonomous driving dataset characterized by large volume and high annotation accuracy. It reportedly exceeds similar datasets in size by more than ten times, containing hundreds of thousands of high-resolution images with pixel-level semantic segmentation annotations. ApolloScape defines 26 semantic categories and includes complex road scenarios, making it applicable to perception, map construction, and simulation training [17][19]
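KITTI's 3D object annotations ship as plain-text label files with 15 whitespace-separated fields per object, documented in the public KITTI object development kit. A minimal parsing sketch (the example line and field values are illustrative, not taken from the dataset):

```python
# Minimal sketch of parsing one KITTI 3D-object label line. Field names and
# order follow the KITTI object development kit readme; the example line
# below is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class KittiLabel:
    obj_type: str       # e.g. 'Car', 'Pedestrian', 'Cyclist'
    truncated: float    # 0 (fully visible) .. 1 (fully truncated)
    occluded: int       # 0..3 occlusion level
    alpha: float        # observation angle in [-pi, pi]
    bbox: tuple         # 2D box (left, top, right, bottom) in pixels
    dimensions: tuple   # 3D size (height, width, length) in metres
    location: tuple     # 3D centre (x, y, z) in camera coordinates
    rotation_y: float   # yaw around the camera Y axis

def parse_kitti_label(line: str) -> KittiLabel:
    f = line.split()
    return KittiLabel(
        obj_type=f[0],
        truncated=float(f[1]),
        occluded=int(f[2]),
        alpha=float(f[3]),
        bbox=tuple(map(float, f[4:8])),
        dimensions=tuple(map(float, f[8:11])),
        location=tuple(map(float, f[11:14])),
        rotation_y=float(f[14]),
    )

# Hypothetical example line in the devkit's documented field order:
example = "Car 0.00 0 -1.57 614.24 181.78 727.31 284.77 1.57 1.73 4.15 1.00 1.75 13.22 -1.62"
label = parse_kitti_label(example)
print(label.obj_type, label.location)
```

The same 15-field layout underlies most KITTI 3D detection tooling, which is why simple parsers like this are usually all that loaders need.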
Over 1,000x Less Data and $500 to Train a First-Class Video Model: Pusa Arrives from CityU Hong Kong and Huawei
机器之心· 2025-06-19 02:28
Core Viewpoint - The article discusses revolutionary advances in video generation brought by the Frame-aware Video Diffusion Model (FVDM) and its practical application in the Pusa project, which sharply reduces training costs while enhancing video generation capabilities [2][3][37]

Group 1: FVDM and Pusa Project
- FVDM introduces a vectorized timestep variable (VTV) that gives each frame an independent temporal evolution path, addressing the limitations of traditional scalar timesteps in video generation [2][18]
- The Pusa project, developed in collaboration with Huawei's Hong Kong Research Institute, serves as a direct application and validation of FVDM, exploring a low-cost way to fine-tune large-scale pre-trained video models [3][37]
- Pusa achieves results superior to the official Wan I2V model while cutting training costs by more than 200x (from at least $100,000 to $500) and data requirements by more than 2,500x [5][37]

Group 2: Technical Innovations
- Pusa applies non-destructive fine-tuning to pre-trained models such as Wan-T2V 14B, enabling effective video generation without compromising the original model's capabilities [5][29]
- The probabilistic timestep sampling training strategy (PTSS) introduced in FVDM speeds up convergence and improves performance over the original model [30][31]
- Pusa's VTV mechanism enables diverse video generation tasks by giving different frames distinct noise-perturbation controls, allowing more nuanced generation [35][36]

Group 3: Community Engagement and Future Prospects
- The complete codebase, training datasets, and training code for Pusa have been open-sourced to encourage community contributions and collaboration, aiming to improve performance and explore new possibilities in video generation [17][37]
- The article emphasizes the potential of Pusa to lead the video generation field into a new era characterized by low costs and high flexibility [36][37].
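The core VTV idea can be sketched with a standard DDPM-style forward process in which the timestep is a vector over frames rather than a scalar. The noise schedule, shapes, and values below are illustrative assumptions, not the paper's exact configuration:

```python
# Hedged sketch: a scalar-timestep diffusion forward process noises all video
# frames equally, while a vectorized timestep variable (VTV), as described for
# FVDM/Pusa, gives each frame its own noise level. The linear beta schedule
# and toy video shape are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)          # standard DDPM cumulative product

def forward_noise(x0, t_per_frame):
    """x0: (frames, H, W); t_per_frame: (frames,) integer timestep per frame."""
    ab = alpha_bar[t_per_frame][:, None, None]        # broadcast over pixels
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

video = rng.standard_normal((4, 8, 8))
scalar_t = np.full(4, 500)                   # classic: one t for every frame
vector_t = np.array([0, 300, 600, 999])      # VTV: each frame on its own path
noisy_scalar = forward_noise(video, scalar_t)
noisy_vtv = forward_noise(video, vector_t)
# With VTV, frame 0 (t=0) stays close to the clean frame while frame 3
# (t=999) is almost pure noise -- distinct noise levels in one forward pass.
print(noisy_vtv.shape)
```

This per-frame noise control is what lets a single model cover tasks like image-to-video (clean first frame, noisy rest) or frame interpolation without architectural changes.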
Challenging next token prediction: Is the Diffusion LLM Up to It?
机器之心· 2025-06-08 02:11
Group 1
- The article discusses the potential of Diffusion LLMs, particularly Gemini Diffusion, as a significant breakthrough in AI that challenges traditional autoregressive models [3][4][5]
- Gemini Diffusion demonstrates high generation efficiency, averaging 1,479 TPS in sampling and reaching up to 2,000 TPS in coding tasks, outperforming Gemini 2.0 Flash-Lite by 4-5x [4][6]
- The diffusion architecture's parallel generation mechanism allows efficient processing, which could reduce computational costs relative to autoregressive models [6][7]

Group 2
- Mary Meeker emphasizes that AI is developing faster than the internet era did, highlighting the cost disparity between AI model training and inference [1][2]
- The rise of open-source models in China may affect the global supply chain, indicating a shift in competitive dynamics within the industry [1][2]
- As AI inference costs decline, the balance between computational investment and commercial returns becomes crucial for enterprises [1][2]
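The parallel-generation mechanism behind diffusion LLMs can be illustrated with a toy decoding loop: instead of emitting one token per step, every masked position is predicted at each step and the most confident predictions are committed. The "model" below is a hypothetical random predictor, purely to show the loop structure; a real Diffusion LLM learns these per-position distributions:

```python
# Toy sketch of parallel masked-diffusion decoding. All names and the random
# stand-in predictor are illustrative assumptions; only the loop structure
# (predict all masked positions, commit the top-k in parallel) reflects how
# diffusion LLMs decode faster than one-token-at-a-time autoregression.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "on", "mat", "[MASK]"]
MASK = 5

def toy_model(tokens):
    """Stand-in predictor: (predicted id, confidence) for every position."""
    logits = rng.random((len(tokens), len(vocab) - 1))   # never predicts MASK
    probs = logits / logits.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)

def diffusion_decode(length, commits_per_step=2):
    tokens = [MASK] * length
    steps = 0
    while MASK in tokens:
        preds, conf = toy_model(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # Commit the k most confident masked positions in parallel.
        for i in sorted(masked, key=lambda i: -conf[i])[:commits_per_step]:
            tokens[i] = int(preds[i])
        steps += 1
    return tokens, steps

tokens, steps = diffusion_decode(length=6)
print(steps)  # 6 positions, 2 commits per step -> 3 parallel steps, not 6
```

Committing several positions per step is the source of the throughput advantage the article cites; the trade-off is that each step must run the full model over the whole sequence.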
Three Top AI Technologists in a Rare Joint Appearance Discuss the AI Industry's Biggest "Rashomon"
36Ke· 2025-05-28 11:59
Core Insights
- The AI industry is in a significant debate over whether pre-training has reached its limits, with notable figures such as OpenAI co-founder Ilya Sutskever suggesting that it has [1][2]
- A shift from a consensus-driven approach toward non-consensus methods is evident, as companies and researchers seek innovative solutions in AI [6][7]

Group 1: Industry Trends
- The AI landscape is transitioning from a focus on pre-training to exploring alternative methodologies, with companies like Sand.AI and NLP LAB leading the charge in applying multi-modal architectures to language and video models [3][4]
- The emergence of new models such as Dream 7B demonstrates the potential of applying diffusion models to language tasks, outperforming larger models like DeepSeek V3 [3][4]
- The consensus around pre-training is being challenged; some experts argue it is not yet over, as untapped data remains that could further improve model performance [38][39]

Group 2: Company Perspectives
- Alibaba's Qwen team, led by Lin Junyang, has faced criticism for being conservative, yet it emphasizes that extensive experimentation has yielded valuable insights, ultimately reaffirming the effectiveness of the Transformer architecture [5][15]
- Exploration of Mixture of Experts (MoE) models is ongoing, with the team recognizing the potential for scalability while addressing the challenges of training stability [16][20]
- The industry is increasingly focused on optimizing model efficiency and effectiveness, with particular interest in balancing model size against performance [19][22]

Group 3: Technical Innovations
- The integration of different model architectures, such as using diffusion models for language generation, reflects a broader trend of innovation in AI [3][4]
- Training models on long sequences and finding effective optimization strategies are critical areas of focus for researchers [21][22]
- Future breakthroughs may come from leveraging increased computational power to revisit previously unviable techniques, suggesting a cycle of innovation driven by hardware advances [40][41]
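The MoE scalability idea mentioned above rests on sparse routing: a gating network scores experts per token and only the top-k run, so parameter count grows without proportional compute. A hedged sketch (sizes, the softmax gate, and the dense-matrix "experts" are illustrative assumptions, not any specific model's design):

```python
# Hedged sketch of top-k Mixture-of-Experts routing. Dimensions, random
# weights, and the per-token loop are illustrative; real MoE layers batch
# this and add load-balancing losses for the training stability noted above.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2
W_gate = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (tokens, d_model). Route each token to its top-k experts only."""
    scores = x @ W_gate                                  # (tokens, n_experts)
    top = np.argsort(-scores, axis=1)[:, :top_k]         # chosen expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, top[t]]
        gates = np.exp(sel - sel.max())
        gates /= gates.sum()                             # softmax over the k
        for g, e in zip(gates, top[t]):
            out[t] += g * (x[t] @ experts[e])            # weighted expert mix
    return out

y = moe_layer(rng.standard_normal((3, d_model)))
print(y.shape)  # each token activated only 2 of the 4 experts
```

Because only k of the experts execute per token, total parameters can scale with n_experts while per-token FLOPs stay roughly constant, which is the scalability/stability trade-off the Qwen discussion refers to.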
ICML 2025 Spotlight | Examining Adversarial Image Perturbations via Fourier Decomposition, Code Open-Sourced
机器之心· 2025-05-18 04:25
The authors of this paper are from the University of Chinese Academy of Sciences (UCAS) and the Institute of Computing Technology, Chinese Academy of Sciences. First author Pei Gaozheng is a second-year PhD student at UCAS; the co-corresponding authors are Associate Professor Ma Ke and Professor Huang Qingming of UCAS.

Adversarial purification aims to restore an adversarial image to its original clean version at test time. Existing diffusion-based purification strategies try to drown the adversarial perturbation in isotropic noise through the forward process and then recover the clean image through the reverse process. However, these strategies cannot decouple clean pixels from adversarial perturbations in pixel space, so destroying the perturbation inevitably damages the semantic information of the original clean image as well.

This paper therefore moves from pixel space to the frequency domain. Specifically, it uses Fourier decomposition to split an image into amplitude and phase spectra and examines how adversarial perturbations are distributed: the results show that perturbations preferentially corrupt the high-frequency amplitude and phase spectra. Based on this observation, the paper proposes injecting low-frequency information from the original sample as a prior during the diffusion model's reverse process to guide generation of the clean sample. This approach not only removes adversarial perturbations effectively but also largely preserves the semantic content and structural information of the original image, keeping the purified image as semantically close to the clean sample as possible.

Paper title: Diffusion-based Adversarial Purification from the Perspective of the F ...
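The frequency-domain view can be sketched directly with the 2D FFT: decompose an image into amplitude and phase spectra, then copy the low-frequency band of a reference image into a perturbed one as a prior. Everything below (the cutoff radius, the Gaussian stand-in for an adversarial perturbation) is an illustrative assumption, not the paper's actual pipeline, which does this inside a diffusion reverse process:

```python
# Sketch of the amplitude/phase decomposition and low-frequency prior
# injection described above. The circular cutoff radius and the Gaussian
# "perturbation" are assumptions for illustration.
import numpy as np

def amp_phase(img):
    """Return the amplitude and phase spectra of a 2D image."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(spec), np.angle(spec)

def inject_low_freq(perturbed, clean, radius=4):
    """Copy the clean image's low-frequency spectrum into the perturbed one."""
    h, w = perturbed.shape
    yy, xx = np.mgrid[:h, :w]
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    spec_p = np.fft.fftshift(np.fft.fft2(perturbed))
    spec_c = np.fft.fftshift(np.fft.fft2(clean))
    spec_p[low] = spec_c[low]                 # low-frequency prior injection
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec_p)))

rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 32))
adv = clean + 0.3 * rng.standard_normal((32, 32))  # stand-in "perturbation"
amp, phase = amp_phase(adv)
purified = inject_low_freq(adv, clean)
print(purified.shape)
```

After injection, the purified image agrees with the clean one on the low band while high-frequency content (where, per the paper, perturbations concentrate) is left for the diffusion reverse process to regenerate.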
CVPR 2025 Oral | DiffFNO: Fourier Neural Operators Empower Diffusion, Opening a New Chapter in Arbitrary-Scale Super-Resolution
机器之心· 2025-05-04 04:57
This work was completed jointly by Washington University in St. Louis and Peking University. The first author is Liu Xiaoyi of Washington University in St. Louis, who carried out the research while visiting Peking University; the corresponding author is Tang Hao, assistant professor/researcher at Peking University's School of Computer Science.

Recovering a high-resolution (HR) image from a single low-resolution (LR) image, known as super-resolution (SR), has become a major challenge in computer vision. In recent years, as applications such as medical imaging, satellite remote sensing, video surveillance, and game rendering demand ever more image detail, the technique has seen increasingly wide use. Traditional deep-learning SR models (e.g., SRCNN, EDSR) perform well at fixed magnifications, but they either cannot support arbitrary scales or tend to blur details and produce artifacts at large factors and on complex textures. Diffusion models recover high-frequency detail effectively, but their many iterative denoising steps make inference slow and hard to fit into real-time applications. To break the trade-off between high-quality reconstruction and fast inference, the computational efficiency and resolution invariance of operator learning bring new opportunities to the field.

DiffFNO (Diffusion Fourier Neural Operator), proposed by the Washington University in St. Louis and Peking University team, empowers a diffusion architecture with neural operators. The method supports high-quality, efficient SR at any continuous scale (e.g., 2.1x, 11.5x). Its strong performance stems from three components: [1] weighted Fou ...
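The resolution invariance that makes arbitrary-scale SR possible comes from the Fourier neural operator's spectral layer: the learned weights live on a fixed set of frequency modes, so the same layer applies to inputs of any resolution. A minimal 1D sketch (mode count, random weights, and the 1D setting are illustrative assumptions; DiffFNO operates on 2D images with additional components):

```python
# Minimal sketch of an FNO-style spectral convolution: transform to frequency
# space, linearly mix a truncated set of low modes with learned weights, and
# transform back. Weights are random stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(0)
n_modes = 8                                    # keep only the lowest 8 modes
W = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

def spectral_conv_1d(x):
    """One spectral layer on a real 1D signal of any length >= 2 * n_modes."""
    spec = np.fft.rfft(x)
    out = np.zeros_like(spec)
    out[:n_modes] = W * spec[:n_modes]         # learned mixing of low modes
    return np.fft.irfft(out, n=len(x))

# Resolution invariance: the identical weights W act on both a coarse and a
# fine sampling of the same underlying function.
coarse = np.sin(2 * np.pi * np.arange(32) / 32)
fine = np.sin(2 * np.pi * np.arange(128) / 128)
print(spectral_conv_1d(coarse).shape, spectral_conv_1d(fine).shape)
```

Because the weights parameterize frequency modes rather than pixel neighborhoods, the output can be queried at any resolution, which is the property a continuous-scale SR method builds on.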