Image Generation
The ultimate form of RAE? Peking University and Alibaba propose UniLIP, extending CLIP to reconstruction, generation, and editing
机器之心· 2025-11-02 08:01
Core Insights
- The article discusses the UniLIP model, which addresses the trade-off between semantic understanding and pixel-detail retention in unified multimodal models [2][4][32]
- UniLIP achieves state-of-the-art (SOTA) performance across benchmarks while maintaining or slightly improving understanding capability compared to larger models [5][26]
Methodology
- UniLIP employs a two-stage training framework with a self-distillation loss to add image-reconstruction capability without sacrificing the original understanding performance; a minimal sketch of this recipe follows the summary below [4][11]
- The first stage aligns the decoder while freezing the CLIP model, learning to reconstruct images from fixed CLIP features [9][11]
- The second stage jointly trains CLIP and applies self-distillation to keep features consistent while injecting pixel details [11][12]
Performance Metrics
- UniLIP models (1B and 3B parameters) achieved SOTA results on benchmarks such as GenEval (0.90), WISE (0.63), and ImgEdit (3.94) [5][26][27]
- In image reconstruction, UniLIP outperformed previous quantization-based methods and showed a clear advantage in generation efficiency [22][24]
Architectural Design
- The architecture of UniLIP integrates InternVL3 and SANA, using InternViT as the CLIP encoder and the pixel decoder from DC-AE [20]
- The model uses a connector structure that stays consistent with large language models (LLMs) [20]
Training Data
- UniLIP's training data includes 38 million pre-training samples and 60,000 instruction fine-tuning samples for generation, along with 1.5 million editing samples [21]
Image Generation and Editing
- UniLIP excels in both image generation and editing tasks, achieving high benchmark scores due to its rich feature representation and precise semantic alignment [26][27][30]
- The dual-condition architecture connects the MLLM with diffusion models, ensuring high fidelity and consistency in generated and edited images [18][32]
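The digest does not give UniLIP's exact losses, so the following is a minimal conceptual sketch of the two-stage recipe it describes: stage one trains only the pixel decoder against frozen CLIP features, and stage two fine-tunes the encoder with a self-distillation term that keeps its features close to a frozen copy. The module names, loss weights, and tiny architectures below are illustrative stand-ins, not the authors' implementation (InternViT and the DC-AE decoder are far larger in practice).

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

class CLIPEncoder(nn.Module):          # stand-in for the InternViT/CLIP encoder
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)
    def forward(self, x):              # (B, 3, 256, 256) -> (B, dim, 16, 16)
        return self.proj(x)

class PixelDecoder(nn.Module):         # stand-in for the DC-AE pixel decoder
    def __init__(self, dim=768):
        super().__init__()
        self.up = nn.ConvTranspose2d(dim, 3, kernel_size=16, stride=16)
    def forward(self, z):
        return self.up(z)

encoder, decoder = CLIPEncoder(), PixelDecoder()
frozen_teacher = copy.deepcopy(encoder).eval()      # frozen reference for stage 2
for p in frozen_teacher.parameters():
    p.requires_grad_(False)

def stage1_loss(images):
    """Stage 1: align the decoder while CLIP stays frozen."""
    with torch.no_grad():
        z = encoder(images)
    return F.mse_loss(decoder(z), images)

def stage2_loss(images, distill_weight=1.0):
    """Stage 2: joint tuning with a self-distillation (feature consistency) term."""
    z = encoder(images)
    recon = F.mse_loss(decoder(z), images)
    with torch.no_grad():
        z_ref = frozen_teacher(images)
    distill = F.mse_loss(z, z_ref)      # keep features close to the original CLIP
    return recon + distill_weight * distill

images = torch.randn(2, 3, 256, 256)
print(stage1_loss(images).item(), stage2_loss(images).item())
```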
Another blow to VAE: Tsinghua and Kuaishou unveil the SVG diffusion model, with a 6,200% gain in training efficiency and a 3,500% speedup in generation
量子位· 2025-10-28 05:12
Core Viewpoint
- The article discusses the transition from Variational Autoencoders (VAE) to new models like SVG, developed by Tsinghua University and Kuaishou, highlighting significant improvements in training efficiency and generation speed as well as addressing VAE's semantic-entanglement limitation [1][4][10]
Group 1: VAE Limitations and New Approaches
- VAE is being abandoned due to its semantic entanglement issue, where adjusting one feature affects others and complicates the generation process [4][8]
- The SVG model achieves a 62-fold improvement in training efficiency and a 35-fold increase in generation speed compared to traditional methods [3][10]
- The RAE approach focuses solely on enhancing generation performance by reusing pre-trained encoders, while SVG aims for multi-task versatility by constructing a feature space that integrates semantics and details [11][12]
Group 2: SVG Model Details
- SVG utilizes the DINOv3 pre-trained model for semantic extraction, effectively distinguishing features of different categories such as cats and dogs and thus resolving semantic entanglement [14]
- A lightweight residual encoder is added to capture high-frequency details that DINOv3 may overlook, ensuring a comprehensive feature representation (see the sketch below) [14]
- The distribution-alignment mechanism is crucial for preserving the semantic structure while integrating detail features, as evidenced by a significant increase in FID when the mechanism is removed [15][16]
Group 3: Performance Metrics
- In experiments, SVG outperformed traditional VAE-based models across metrics, achieving an FID of 6.57 on ImageNet after 80 epochs, compared to 22.58 for the VAE-based SiT-XL [18]
- Efficiency is further demonstrated by the FID dropping to 1.92 after 1400 epochs, nearing the performance of top-tier generative models [18]
- SVG's feature space is versatile, allowing direct use in tasks such as image classification and semantic segmentation without fine-tuning, achieving 81.8% Top-1 accuracy on ImageNet-1K [22]
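To make the feature construction above concrete, here is a small sketch under stated assumptions: a frozen semantic backbone standing in for DINOv3, a lightweight trainable residual encoder for high-frequency detail, and a simple statistics-matching step standing in for the distribution-alignment mechanism. None of this is the authors' code; the alignment rule in particular is a guess at the idea, not the published method.

```python
import torch
from torch import nn

class FrozenSemanticBackbone(nn.Module):      # stand-in for DINOv3
    def __init__(self, dim=384):
        super().__init__()
        self.patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        for p in self.parameters():
            p.requires_grad_(False)
    def forward(self, x):
        return self.patch(x)

class ResidualDetailEncoder(nn.Module):       # lightweight, trainable
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, stride=16, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=1),
        )
    def forward(self, x):
        return self.net(x)

def align_to(reference, features, eps=1e-6):
    """Match the detail features' global mean/std to the semantic features',
    so adding detail does not distort the semantic distribution (illustrative)."""
    normed = (features - features.mean()) / (features.std() + eps)
    return normed * reference.std() + reference.mean()

backbone, detail = FrozenSemanticBackbone(), ResidualDetailEncoder()
x = torch.randn(2, 3, 256, 256)
sem = backbone(x)                              # (2, 384, 16, 16) semantic features
det = align_to(sem, detail(x))                 # (2, 64, 16, 16) aligned detail features
latent = torch.cat([sem, det], dim=1)          # combined SVG-style latent
print(latent.shape)
```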
Saining Xie's new work: VAE retires, RAE takes its place
量子位· 2025-10-14 08:16
Core Viewpoint
- The era of Variational Autoencoders (VAE) is coming to an end, with Representation Autoencoders (RAE) set to take over in the field of diffusion models [1][3]
Summary by Sections
RAE Introduction
- RAE is a new type of autoencoder designed for training diffusion Transformers (DiT), pairing pre-trained representation encoders (such as DINO, SigLIP, and MAE) with lightweight decoders to replace the traditional VAE; a minimal sketch follows below [3][9]
Advantages of RAE
- RAE provides high-quality reconstruction and a semantically rich latent space, supports scalable Transformer-based architectures, and converges faster without additional representation-alignment losses [4][10]
Performance Metrics
- At a resolution of 256×256, the FID without guidance is 1.51; with guidance it is 1.13 at both 256×256 and 512×512 [6]
Limitations of VAE
- VAE's backbone networks are outdated, leading to overly complex architectures: 450 GFLOPs versus only 22 GFLOPs for a simple ViT-B encoder [7]
- VAE's compressed latent space (only 4 channels) severely limits information capacity, leaving minimal room to improve its information-carrying ability [7]
- VAE's representation capability is weak because it relies solely on reconstruction training, leading to low feature quality, slower convergence, and degraded generation quality [7]
RAE's Design and Training
- RAE combines pre-trained representation encoders with trained decoders without requiring extra training or alignment phases, and it introduces no auxiliary loss functions [9]
- Despite its simplicity, RAE outperforms SD-VAE in reconstruction quality [10]
Model Comparisons
- RAE variants built on DINOv2-B, SigLIP2-B, and MAE-B show significant improvements in rFID and Top-1 accuracy compared to SD-VAE [11]
Adjustments for Diffusion Models
- RAE needs only simple adjustments to work well in high-dimensional latent spaces: a wide DiT design, adapted noise scheduling, and noise injection during decoder training [13][17]
- The DiT-XL model trained with RAE surpasses REPA without any auxiliary losses or additional training phases, converging up to 16 times faster than REPA based on SD-VAE [18][19]
Scalability and Efficiency
- The new architecture improves DiT scalability in both training compute and model size, outperforming both standard DiT built on RAE and traditional VAE-based methods [24]
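A minimal sketch of the RAE recipe as summarized: a frozen pre-trained representation encoder feeds a small trainable decoder, and noise is injected into the latents while training the decoder so that it tolerates the slightly imperfect latents a diffusion model will later produce. The decoder shape, noise level, and placeholder encoder are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F
from torch import nn

class LightweightDecoder(nn.Module):
    """Maps a (B, N, D) token grid back to pixels; far simpler than a VAE decoder."""
    def __init__(self, dim=768, patch=16):
        super().__init__()
        self.patch = patch
        self.to_pixels = nn.Linear(dim, 3 * patch * patch)
    def forward(self, tokens, grid=16):
        x = self.to_pixels(tokens)                                   # (B, N, 3*p*p)
        x = x.view(x.shape[0], grid, grid, 3, self.patch, self.patch)
        return x.permute(0, 3, 1, 4, 2, 5).reshape(
            x.shape[0], 3, grid * self.patch, grid * self.patch)

encoder = nn.Identity()   # placeholder: load a frozen DINOv2/SigLIP/MAE encoder here
decoder = LightweightDecoder()

def decoder_loss(tokens, images, noise_std=0.1):
    """Train only the decoder; additive noise on the latents makes it robust to the
    slightly off-manifold latents a diffusion model will later generate."""
    noisy = tokens + noise_std * torch.randn_like(tokens)
    return F.mse_loss(decoder(noisy), images)

tokens = encoder(torch.randn(2, 256, 768))     # pretend frozen-encoder features
images = torch.randn(2, 3, 256, 256)
print(decoder_loss(tokens, images).item())
```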
ByteDance open-sources an all-rounder for image generation: one model handles character, subject, and style preservation
量子位· 2025-09-04 04:41
Core Viewpoint
- ByteDance's UXO team has developed and open-sourced a unified framework called USO, which addresses the multi-indicator consistency problem in image generation, enabling simultaneous style transfer and subject retention across various tasks [1][19]
Group 1: Model Capabilities
- USO can manage subject, character, or style retention with a single model and just one reference image [7]
- The framework supports diverse applications, such as placing a cartoon character in different scenarios (driving a car, reading in a café) while maintaining image quality comparable to commercial models [8][10][12][14]
- USO has been evaluated with the newly designed USO-Bench, which covers subject-driven, style-driven, and mixed generation tasks, and it outperforms several contemporary models [17][19]
Group 2: Performance Metrics
- In the comparison, USO achieved a subject-driven generation score of 0.623 and a style-driven generation score of 0.557, placing it at the top among the evaluated models [18]
- User studies gave USO high ratings across all evaluation dimensions, particularly subject consistency, style consistency, and image quality [19]
Group 3: Innovative Techniques
- USO adopts a "cross-task self-decoupling" paradigm, enhancing the model's ability to learn the features relevant to each task type [21]
- The architecture builds on the open-source model FLUX.1 dev and incorporates style-alignment training and content-style decoupling training [22]
- A Style Reward Learning (SRL) algorithm designed for flow matching further promotes the decoupling of content and style through a mathematically mapped reward function; an illustrative sketch follows below [24][25]
Group 4: Data Framework
- The team built a cross-task data synthesis framework, innovatively constructing triplet data that covers both layout-changing and layout-preserving cases [30]
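The digest does not spell out USO's SRL formulation, so the sketch below only illustrates the general idea of coupling a flow-matching objective with a style reward signal: a standard rectified-flow velocity loss is re-weighted per sample by a hypothetical style score. The reward model, the weighting scheme, and all names are assumptions, not ByteDance's implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Toy velocity network over 512-dim latents, conditioned on the timestep t.
velocity_net = nn.Sequential(nn.Linear(512 + 1, 512), nn.SiLU(), nn.Linear(512, 512))

def flow_matching_loss(x0, x1, t):
    """Standard rectified-flow target: predict the straight-line velocity x1 - x0."""
    xt = (1 - t) * x0 + t * x1
    pred = velocity_net(torch.cat([xt, t], dim=-1))
    return F.mse_loss(pred, x1 - x0, reduction="none").mean(dim=-1)

def style_reward(latent):
    """Stand-in reward: a real system would score generated images with a
    style-consistency model; here it is just a bounded function of the latent."""
    return torch.sigmoid(latent.mean(dim=-1))

def srl_style_weighted_loss(x0, x1, t):
    per_sample = flow_matching_loss(x0, x1, t)
    weight = 1.0 + style_reward(x1).detach()     # up-weight well-styled targets
    return (weight * per_sample).mean()

x0 = torch.randn(4, 512)          # noise samples
x1 = torch.randn(4, 512)          # "data" latents
t = torch.rand(4, 1)
print(srl_style_weighted_loss(x0, x1, t).item())
```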
Official Nano Banana prompts are here, with complete code examples
量子位· 2025-09-03 05:49
Core Viewpoint
- The article discusses the rising popularity of the Nano-banana tool, highlighting its features and the official guidelines released by Google to help users make effective use of it [1][8]
Group 1: Features of Nano-banana
- Nano-banana lets users generate high-quality images from text descriptions, edit existing images with text prompts, and compose new scenes from multiple images [15]
- The tool supports iterative refinement, so users can adjust an image step by step until it matches the desired outcome [15]
- It can render text in images accurately, making it suitable for logos, charts, and posters [15]
Group 2: Guidelines for Effective Use
- Google emphasizes providing detailed scene descriptions rather than just listing keywords in order to generate better, more coherent images [9][10]
- Users are encouraged to think like photographers, considering camera angles, lighting, and fine details to achieve realistic images [19][20]
- The article provides specific prompt structures for various image types, including photorealistic shots, stylized illustrations, product photography, and comic panels [20][24][35][43]
Group 3: Examples and Applications
- The article showcases images generated by Nano-banana, such as a cat dining in a luxurious restaurant under a starry sky, demonstrating the tool's ability to create detailed, imaginative scenes [14][17]
- It also includes code snippets for developers to integrate the image-generation capability into their applications (a hedged example in the same spirit follows below), highlighting the accessibility of the technology [21][29][35]
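In the spirit of the code examples the article mentions, here is a hedged sketch of calling the image model commonly referred to as Nano Banana through Google's google-genai Python SDK. The model identifier and response handling reflect the public Gemini API as generally documented and are assumptions here; check Google's current documentation for the exact names.

```python
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY / GOOGLE_API_KEY from the environment

prompt = (
    "A photorealistic close-up of a cat dining in a luxurious rooftop restaurant "
    "under a starry sky, warm candlelight, shallow depth of field, 85mm lens."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",   # assumed identifier for the image model
    contents=prompt,
)

# Image bytes come back as inline-data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("cat_dinner.png", "wb") as f:
            f.write(part.inline_data.data)
    elif getattr(part, "text", None):
        print(part.text)
```

Note how the prompt follows the article's advice: a full scene description with camera and lighting details rather than a bare list of keywords.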
Optical AI image generator cuts energy consumption to the millijoule level
Ke Ji Ri Bao· 2025-08-29 00:32
Core Insights
- A research team from the University of California, Los Angeles has developed a new type of image generator that uses light beams instead of traditional computing hardware, cutting energy consumption to one hundred-thousandth of standard AI tools and requiring only a few millijoules [1][2]
Group 1: Technology Overview
- Traditional digital diffusion models need hundreds to thousands of iterations to generate an image, whereas the new system needs only an initial encoding and no further computation (a toy numerical analogy follows below) [2]
- The system uses a digital encoder, trained on publicly available image datasets, to create static encodings that can be converted into images [2]
- The encoding is physically imprinted onto a laser beam by a spatial light modulator (SLM); when the laser passes through a second SLM, the image appears instantly [2]
Group 2: Performance and Applications
- In tests, the system generated simple images and Van Gogh-style paintings with results comparable to traditional image generators [2]
- Generating a Van Gogh-style image consumed roughly a few millijoules, versus hundreds to thousands of joules for traditional diffusion models [2]
- The system's low power draw makes it particularly suitable for wearable devices such as AI glasses [2]
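As a purely numerical analogy for the single-pass property described above (and emphatically not a model of the UCLA optics), the toy below trains a digital encoder so that a fixed, untrainable linear transform of its static code reproduces the image in one shot, with no iterative sampling at generation time.

```python
import torch
import torch.nn.functional as F
from torch import nn

img_dim, code_dim = 28 * 28, 256
optics = torch.randn(code_dim, img_dim) / code_dim**0.5   # fixed "optical" transform
encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, code_dim))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

images = torch.rand(64, img_dim)          # stand-in training images
for step in range(200):                   # only the digital encoder is trained
    code = encoder(images)
    recon = code @ optics                 # the "optical" decode is a single pass
    loss = F.mse_loss(recon, images)
    opt.zero_grad(); loss.backward(); opt.step()

# At "generation" time the work is producing the static code once; decoding is a
# single pass through the fixed transform, with no diffusion-style iterations.
with torch.no_grad():
    one_shot = encoder(images[:1]) @ optics
print(loss.item(), one_shot.shape)
```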
Tencent applies for an image-generation patent enabling step-by-step guidance and robust control of image generation
Jin Rong Jie· 2025-08-16 09:19
Core Insights
- Tencent Technology (Shenzhen) Co., Ltd. has applied for a patent titled "Image Generation Method, Device, Equipment, Medium, and Product," publication number CN120495475A, filed in May 2025 [1]
- The patent describes a method for generating images from object input text, covering the denoising of random-noise images and the enhancement of text prompts to produce target images [1]
Company Overview
- Tencent Technology (Shenzhen) Co., Ltd. was established in 2000, is located in Shenzhen, and is primarily engaged in software and information technology services [1]
- The company has a registered capital of 2 million USD, has invested in 15 enterprises, participated in 264 bidding projects, and holds 5,000 trademark and patent records along with 534 administrative licenses [1]
Lumina-mGPT 2.0: a striking revival of autoregressive models that rivals top diffusion models
机器之心· 2025-08-12 00:15
Core Viewpoint
- Lumina-mGPT 2.0 is a stand-alone autoregressive image model that unifies tasks such as text-to-image generation, subject-driven generation, and controllable generation, marking a significant advance in image-generation technology [5][9][21]
Group 1: Core Technology and Breakthroughs
- Lumina-mGPT 2.0 uses a fully independent, from-scratch training setup with a decoder-only Transformer, comes in two parameter sizes (2 billion and 7 billion), and avoids biases inherited from pre-trained models [4][5]
- The model adopts a high-quality image tokenizer, SBER-MoVQGAN, selected for its optimal reconstruction quality on the MS-COCO dataset [7]
- A unified multi-task processing framework seamlessly supports text-to-image generation, image editing, and related tasks (see the sketch below for the underlying autoregressive loop) [9]
Group 2: Efficient Inference Strategies
- Two optimizations improve generation speed while preserving quality: quantizing the model to 4-bit integers and a sampling method that reduces GPU memory consumption by 60% [11][13]
- The optimizations also enable parallel decoding, significantly accelerating generation [13]
Group 3: Experimental Results
- On text-to-image benchmarks, Lumina-mGPT 2.0 achieved a GenEval score of 0.80, placing it among the top generative models and excelling in the "two objects" and "color attributes" tests [14][15]
- The model also performed strongly on the Graph200K multi-task benchmark, confirming that a pure autoregressive model is viable for multi-modal generation tasks [17]
Group 4: Future Directions
- Despite the optimizations, sampling time remains a bottleneck for user experience, indicating the need for further improvement [21]
- The focus will expand from multi-modal generation to multi-modal understanding, aiming to improve overall functionality and performance [21]
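To illustrate what stand-alone autoregressive image generation means in practice, the toy loop below samples image tokens one at a time from a causal decoder-only Transformer conditioned on text tokens; a real system would then pass the resulting tokens to the image tokenizer's decoder (SBER-MoVQGAN in the paper). All sizes, modules, and token counts here are illustrative, not Lumina-mGPT 2.0's.

```python
import torch
from torch import nn

vocab, dim, n_img_tokens = 4096, 256, 64     # toy sizes
embed = nn.Embedding(vocab, dim)
block = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
head = nn.Linear(dim, vocab)

def sample_image_tokens(text_tokens, temperature=1.0):
    """Greedy-free sampling: append one image token per step, conditioned on all
    previous text and image tokens via a causal attention mask."""
    seq = text_tokens.clone()
    for _ in range(n_img_tokens):
        h = embed(seq)
        mask = nn.Transformer.generate_square_subsequent_mask(seq.shape[1])
        h = block(h, src_mask=mask)                       # causal self-attention
        logits = head(h[:, -1]) / temperature             # next-token distribution
        next_tok = torch.multinomial(logits.softmax(-1), 1)
        seq = torch.cat([seq, next_tok], dim=1)
    return seq[:, text_tokens.shape[1]:]                  # generated image tokens only

text = torch.randint(0, vocab, (1, 12))                   # pretend-tokenized prompt
img_tokens = sample_image_tokens(text)
print(img_tokens.shape)                                   # (1, 64) -> fed to the VQ decoder
```

The token-by-token loop is also why sampling time is the remaining bottleneck the summary mentions, and why 4-bit quantization and parallel decoding matter for inference speed.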
Qwen's new open-source release smashes the text-rendering SOTA in AI image generation
量子位· 2025-08-05 01:40
Core Viewpoint
- The article covers the release of Qwen-Image, a 20-billion-parameter image generation model that excels at complex text rendering and image editing [3][28]
Group 1: Model Features
- Qwen-Image is the first foundational image-generation model in the Tongyi Qianwen series and uses the MMDiT architecture [4][3]
- It performs exceptionally well on complex text rendering, supporting multi-line layouts and fine-grained detail in both English and Chinese [28][32]
- The model also offers consistent image-editing capabilities, including style transfer, modifications, detail enhancement, text editing, and pose adjustment [27][28]
Group 2: Performance Evaluation
- Qwen-Image achieves state-of-the-art (SOTA) performance across public benchmarks, including GenEval, DPG, and OneIG-Bench for image generation and GEdit, ImgEdit, and GSO for image editing [29][30]
- It is notably superior to existing advanced models in Chinese text rendering [33]
Group 3: Training Strategy
- The model uses a progressive training strategy that moves from non-text to text rendering and from simple to complex text inputs, strengthening its native text-rendering capability [34]
Group 4: Practical Applications
- The article demonstrates Qwen-Image in practice, generating illustrations, PPTs, and promotional images and showing its ability to integrate text with visuals accurately (a minimal usage sketch follows below) [11][21][24]
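A minimal usage sketch for trying Qwen-Image's text rendering, assuming the released checkpoint is published on the Hugging Face Hub as Qwen/Qwen-Image and is loadable through diffusers' generic DiffusionPipeline; the generation arguments below are assumptions, so consult the model card for the recommended settings.

```python
import torch
from diffusers import DiffusionPipeline

# Load the open-source checkpoint (assumed repo id) and move it to the GPU.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# A prompt that exercises the model's bilingual, multi-line text rendering.
prompt = (
    "A storefront poster with the headline '通义千问 Qwen-Image' in bold calligraphy, "
    "and the English subtitle 'Complex text rendering, multi-line layout' below it."
)
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
image.save("qwen_image_poster.png")
```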
Open-sourced: Tongyi Qianwen launches Qwen-Image, the first foundational image-generation model in the series
Hua Er Jie Jian Wen· 2025-08-04 21:09
Core Insights
- The article covers the launch of Qwen-Image, a 20-billion-parameter MMDiT model and the first foundational image-generation model in the Tongyi Qianwen series, with significant advances in complex text rendering and precise image editing [1]
Group 1
- Qwen-Image is a foundational model designed specifically for image generation [1]
- The model makes notable progress in rendering complex text and editing images accurately [1]