VAE
Ditching the VAE: Can Pretrained Semantic Encoders Take Diffusion Further?
机器之心· 2025-11-02 01:30
Group 1
- The article discusses the limitations of Variational Autoencoders (VAE) in the diffusion model paradigm and explores whether pretrained semantic encoders can push diffusion further [1][7][8]
- The shift from VAE to pretrained semantic encoders such as DINO and MAE aims to address semantic entanglement, computational inefficiency, and the disconnect between generative and perceptual tasks [9][10][11]
- RAE and SVG are two approaches that prioritize semantic representation over compression, leveraging the strong priors of pretrained visual models to improve efficiency and generative quality [10][11]

Group 2
- The article highlights the shift from static image generation toward more complex multimodal content, arguing that the traditional VAE + diffusion framework is becoming a bottleneck for next-generation generative models [8][9]
- The computational burden of the VAE is substantial: the VAE encoder in Stable Diffusion 2.1 requires 135.59 GFLOPs, exceeding the 86.37 GFLOPs of the core diffusion U-Net itself (a FLOP-counting sketch follows this summary) [8][9]
- The discussion also covers the "lazy and rich" business principle in the AI era, suggesting a shift in value from knowledge storage to "anti-consensus" thinking among human experts [3]
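The encoder-versus-U-Net comparison above is easy to sanity-check. Below is a minimal sketch, assuming the diffusers AutoencoderKL weights for Stable Diffusion 2.1 and fvcore's FLOP counter; the model ID, the 768×768 input, and the encoder-only scope are my assumptions, and fvcore counts multiply-accumulates, so absolute numbers may not match the article's figures exactly.

```python
# Sketch: count FLOPs of the Stable Diffusion 2.1 VAE encoder.
# Assumptions (not from the article): diffusers + fvcore installed,
# model id "stabilityai/stable-diffusion-2-1", one 768x768 RGB input.
import torch
from diffusers import AutoencoderKL
from fvcore.nn import FlopCountAnalysis

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="vae"
).eval()

x = torch.randn(1, 3, 768, 768)  # one image at SD 2.1's native resolution

with torch.no_grad():
    flops = FlopCountAnalysis(vae.encoder, x)  # encoder path only
    print(f"VAE encoder: {flops.total() / 1e9:.2f} GFLOPs")
```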
Mathematical Principles of Diffusion/VAE/RL
自动驾驶之心· 2025-07-29 00:52
Core Viewpoint
- The article walks through the principles and applications of Diffusion Models and Variational Autoencoders (VAE) in machine learning, focusing on their mathematical foundations and training methodologies.

Group 1: Diffusion Models
- The network's training objective is to fit the mean and variance of two Gaussian distributions during the denoising process (a minimal training-step sketch follows this summary) [7]
- A KL divergence term matches the theoretical posterior at each denoising step against the distribution the network predicts [9]
- The mapping from the unknown sample \(x_0\) to the injected noise \(\epsilon\) is inverted by iteratively predicting that noise [15]

Group 2: Variational Autoencoders (VAE)
- VAE assumes the latent distribution is Gaussian, which is essential for its generative capabilities [19]
- VAE training combines a reconstruction loss with a KL divergence constraint that keeps the latent space from degenerating into a sharp, near-deterministic distribution (the ELBO is written out below) [26]
- Minimizing this combined loss corresponds to maximizing the Evidence Lower Bound (ELBO) [27]

Group 3: Reinforcement Learning (RL)
- The Markov Decision Process (MDP) framework is used, modeling states and actions sequentially [35]
- The semantic representation is pushed toward a sharp, pulse-like distribution, while the generated representation is expected to follow a Gaussian distribution [36]
- Policy gradient methods are employed so the network learns which action to take in a given state (a REINFORCE sketch follows below) [42]
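To make Group 1's objective concrete, here is a minimal sketch of one DDPM-style training step: the network is trained to regress the noise \(\epsilon\) injected at a random timestep. The 1000-step linear beta schedule and the `model(x_t, t)` signature are illustrative assumptions, not taken from the article.

```python
# Sketch of one DDPM training step: noise a clean sample x0 at a random
# timestep t, then fit the network's output to the injected noise.
import torch
import torch.nn.functional as F

T = 1000                                         # assumed schedule length
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear schedule
alphas_bar = torch.cumprod(1 - betas, dim=0)     # cumulative \bar{alpha}_t

def training_step(model, x0):
    t = torch.randint(0, T, (x0.shape[0],))      # one timestep per sample
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)                   # the regression target
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps   # forward diffusion
    return F.mse_loss(model(x_t, t), eps)        # fit the noise
```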
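The loss in Group 2 has the standard ELBO form. In my notation (not the article's), with a Gaussian encoder \(q_\phi(z \mid x)\) and prior \(\mathcal{N}(0, I)\):

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\big\|\,\mathcal{N}(0, I)\big)}_{\text{KL constraint}},
\qquad
D_{\mathrm{KL}} = \tfrac{1}{2}\sum_{j}\big(\mu_j^2 + \sigma_j^2 - \log\sigma_j^2 - 1\big).
```

Training minimizes reconstruction loss plus this KL term; the KL term is what stops \(\sigma\) from collapsing toward zero, which is exactly the "sharp distribution" degeneration the summary warns about.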
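For Group 3's policy-gradient claim, here is a minimal REINFORCE sketch. The discrete action space, the gymnasium-style `env` API, and the single-episode update are simplifying assumptions on my part.

```python
# Sketch of REINFORCE: raise the log-probability of each action in
# proportion to the discounted return that followed it.
import torch

def reinforce_episode(policy, env, optimizer, gamma=0.99):
    log_probs, rewards = [], []
    state, _ = env.reset()                        # gymnasium-style API (assumed)
    done = False
    while not done:
        logits = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    returns, G = [], 0.0                          # discounted returns G_t
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(list(reversed(returns)))

    loss = -(torch.stack(log_probs) * returns).sum()  # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```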
LeCun Releases His Latest World Model: First to Achieve 16-Second Coherent Scene Prediction, with Embodied Agents Mastering the First-Person View! And, Contradicting His Own Past Criticism, It Uses a VAE
量子位· 2025-06-30 06:38
Core Viewpoint
- Yann LeCun, a prominent figure in AI and deep learning, is focusing on a new model called PEVA, which gives embodied agents predictive capabilities, allowing them to anticipate the outcomes of actions much as humans do [2][10].

Group 1: PEVA Model Development
- The PEVA model lets embodied agents learn predictive abilities, achieving coherent scene predictions for up to 16 seconds [2][6].
- The model combines a structured action representation, built from 48-dimensional kinematic data of human joints, with a conditional diffusion Transformer (a hypothetical conditioning sketch follows this summary) [3][20].
- PEVA takes first-person video and full-body pose trajectories as inputs, moving away from abstract control signals [4][12].

Group 2: Technical Innovations
- The model addresses computational efficiency and latency in long-sequence action prediction through random time skips and cross-historical-frame attention [5][24].
- PEVA captures both overall body movement and fine joint movements using high-dimensional structured data that traditional models fail to represent accurately [16][18].
- The architecture employs a hierarchical tree structure for motion encoding, ensuring translation and rotation invariance [25].

Group 3: Performance Metrics
- PEVA outperforms baseline models across tasks, with lower LPIPS and FID values indicating higher visual similarity and better generation quality (an LPIPS usage sketch also follows) [33][35].
- In single-step prediction, PEVA reaches an LPIPS of 0.303 and an FID of 62.29, demonstrating its advantage over the CDiT baseline [33][35].
- The model's ability to predict visual changes within 2 seconds and to generate coherent video for up to 16 seconds marks a significant advance in embodied AI [40].

Group 4: Practical Applications
- PEVA can plan intelligently by evaluating multiple candidate action sequences and selecting the most appropriate one, mimicking human trial-and-error planning [42].
- These capabilities could enable more efficient robotic systems, such as vacuum cleaners that anticipate obstacles and navigate more effectively [51].
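The summary says PEVA conditions a diffusion Transformer on a 48-dimensional kinematic action vector but does not give the mechanism, so the sketch below is purely hypothetical: it shows one common way to do such conditioning, adaLN-style modulation, with layer sizes and module names I invented for illustration.

```python
# Hypothetical sketch: one Transformer block whose normalization is
# modulated by a 48-dim action vector (adaLN-style conditioning).
# Only the 48-dim pose input comes from the article; the rest is assumed.
import torch
import torch.nn as nn

class ActionConditionedBlock(nn.Module):
    def __init__(self, dim: int = 512, action_dim: int = 48, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # Map the kinematic action to per-channel scale and shift.
        self.to_mod = nn.Linear(action_dim, 2 * dim)

    def forward(self, tokens: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_mod(action).unsqueeze(1).chunk(2, dim=-1)
        h = self.norm(tokens) * (1 + scale) + shift   # action-modulated norm
        h = tokens + self.attn(h, h, h, need_weights=False)[0]
        return h + self.mlp(self.norm(h) * (1 + scale) + shift)

block = ActionConditionedBlock()
frames = torch.randn(2, 64, 512)   # 64 latent tokens per frame (assumed)
pose = torch.randn(2, 48)          # full-body kinematic action vector
out = block(frames, pose)          # same shape, now action-conditioned
```

Modulating the normalization lets every block see the intended movement without lengthening the token sequence; concatenating pose tokens would be an equally plausible alternative.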
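The LPIPS numbers in Group 3 can be reproduced in a few lines with the reference `lpips` package (lower is better). The AlexNet backbone, tensor shapes, and value range are assumptions; the random tensors are stand-ins, not PEVA outputs.

```python
# Sketch: scoring predicted vs. ground-truth frames with LPIPS.
# Assumptions: `pip install lpips`, frames as float tensors in [-1, 1].
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')            # AlexNet features, package default

pred = torch.rand(4, 3, 256, 256) * 2 - 1    # stand-in predicted frames
truth = torch.rand(4, 3, 256, 256) * 2 - 1   # stand-in ground-truth frames

with torch.no_grad():
    d = loss_fn(pred, truth)                 # per-image perceptual distances
print(d.mean().item())                       # average LPIPS over the batch
```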