ICCV 2025 | Xiaohongshu's AIGC Team Proposes DynamicFace, a New Algorithm for Image and Video Face Swapping
机器之心·2025-08-12 03:10

Core Viewpoint
- The article presents DynamicFace, a method for high-quality, temporally consistent face swapping in images and videos that combines diffusion models with composable 3D facial priors to improve identity and motion consistency in generated content [6][7][21].

Group 1: Technology Overview
- DynamicFace integrates diffusion models with composable 3D facial priors to achieve high-quality face swapping, addressing the challenge of preserving both identity and motion consistency [7][9].
- The method explicitly decouples facial conditions into five independent representations (identity, pose, expression, lighting, and background), improving the accuracy of generated images and videos [9][10]; a hedged sketch of this decoupling appears after this summary.
- A dual-stream injection mechanism ensures high-fidelity identity retention, using a Face Former for global identity consistency and a ReferenceNet for fine-grained texture transfer [10][11]; see the second sketch below.

Group 2: Industry Applications
- In film, directors can use a single still image of an actor to create real-time digital doubles with complex expressions and lighting adjustments, reducing the need for costly reshoots [6].
- In gaming, players can upload selfies to generate customizable 3D avatars with realistic expressions and movements, enabling personalized character creation [6].
- In social media and e-commerce, content creators can produce varied promotional videos from a single brand image, and virtual influencers can keep a consistent appearance during live streams [6].

Group 3: Performance Comparison
- DynamicFace outperforms existing state-of-the-art methods in both identity consistency and motion consistency, achieving a 99.20% ID retrieval rate and markedly lower pose and expression discrepancies than competing methods [23][24]; the third sketch below shows how such an ID-retrieval score is typically computed.
- Quantitative experiments on the FaceForensics++ and FFHQ datasets show that DynamicFace maintains high-quality facial generation while preserving motion accuracy [24].

Group 4: Future Implications
- The article suggests that DynamicFace's fine-grained decoupling approach could inspire future work on controllable generation, with potential advances across applications in digital content creation [28].
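To make the five-way condition decoupling concrete, below is a minimal sketch, not the authors' code: it assumes each non-identity condition (pose, expression, lighting, background) arrives as a spatial map rendered from 3D facial priors, encodes each map separately, and fuses them into one conditioning signal for a diffusion U-Net. All class and function names (ConditionEncoder, compose_conditions) are hypothetical placeholders.

```python
# A minimal sketch (not the DynamicFace implementation) of encoding four
# decoupled spatial conditions and fusing them for a diffusion U-Net.
# Identity is deliberately excluded here: per the dual-stream design, it is
# injected through a separate, non-spatial pathway.
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Encodes one decoupled facial condition (e.g. a pose map) into features."""
    def __init__(self, in_ch: int, out_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def compose_conditions(cond_maps: dict, encoders: dict) -> torch.Tensor:
    """Encode each spatial condition with its own encoder, then fuse by
    summation so the denoiser sees a single spatial conditioning tensor.
    Keeping the encoders separate is what preserves the decoupling."""
    feats = [encoders[name](cond_maps[name]) for name in encoders]
    return torch.stack(feats, dim=0).sum(dim=0)

# Usage: four (batch, channels, H, W) condition maps, e.g. at latent resolution.
names = ["pose", "expression", "lighting", "background"]
encoders = {n: ConditionEncoder(in_ch=3) for n in names}
cond_maps = {n: torch.randn(1, 3, 64, 64) for n in names}
fused = compose_conditions(cond_maps, encoders)
print(fused.shape)  # torch.Size([1, 64, 64, 64])
```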
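The second sketch illustrates the dual-stream injection idea under stated assumptions: identity tokens (the Face Former's role per the summary) enter through cross-attention, while reference features (the ReferenceNet's role) are appended to the keys and values of a second attention pass to transfer fine-grained texture. The exact layer shapes and fusion points in DynamicFace are not given in the article, so this is an illustrative approximation, not the paper's architecture.

```python
# A hedged sketch of dual-stream identity injection at one U-Net resolution.
# Stream 1 attends to global identity tokens; stream 2 extends self-attention
# keys/values with reference features for fine-grained texture transfer.
import torch
import torch.nn as nn

class DualStreamAttention(nn.Module):
    def __init__(self, dim: int = 320, n_heads: int = 8):
        super().__init__()
        # Stream 1: cross-attention over identity tokens (Face Former's role).
        self.id_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Stream 2: attention whose keys/values include reference features
        # (ReferenceNet's role), carrying fine-grained texture detail.
        self.ref_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x, id_tokens, ref_feats):
        # x: (B, N, dim) latent tokens; id_tokens: (B, T, dim); ref_feats: (B, M, dim)
        x = x + self.id_attn(x, id_tokens, id_tokens)[0]  # global identity
        kv = torch.cat([x, ref_feats], dim=1)             # append texture features
        x = x + self.ref_attn(x, kv, kv)[0]
        return x

# Usage with toy tensors standing in for real encoder outputs.
block = DualStreamAttention()
x = torch.randn(1, 256, 320)          # latent tokens at one resolution
id_tokens = torch.randn(1, 4, 320)    # global identity tokens
ref_feats = torch.randn(1, 256, 320)  # reference features, same resolution
print(block(x, id_tokens, ref_feats).shape)  # torch.Size([1, 256, 320])
```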
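Finally, the third sketch shows how an ID-retrieval score like the 99.20% reported above is typically computed: embed every swapped face with a face-recognition model (an ArcFace-style encoder is the common choice; the article does not specify which one DynamicFace's evaluation uses), then count the fraction of swaps whose nearest gallery identity, by cosine similarity, is the intended source identity.

```python
# A minimal sketch of the standard ID-retrieval metric for face swapping.
# Real use would replace the random vectors with embeddings from a
# face-recognition encoder; the metric itself is just nearest-neighbor
# retrieval under cosine similarity.
import numpy as np

def id_retrieval_rate(swapped_embs: np.ndarray,
                      gallery_embs: np.ndarray,
                      true_ids: np.ndarray) -> float:
    """swapped_embs: (N, D) embeddings of swapped faces.
    gallery_embs: (K, D) one embedding per source identity.
    true_ids: (N,) gallery index of each swap's intended identity."""
    # L2-normalize so the dot product equals cosine similarity.
    s = swapped_embs / np.linalg.norm(swapped_embs, axis=1, keepdims=True)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    nearest = (s @ g.T).argmax(axis=1)           # nearest identity per swap
    return float((nearest == true_ids).mean())   # fraction retrieved correctly

# Toy usage: swaps are noisy copies of their source identity's embedding.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 512))
ids = rng.integers(0, 10, size=100)
swaps = gallery[ids] + 0.1 * rng.normal(size=(100, 512))
print(f"ID retrieval: {id_retrieval_rate(swaps, gallery, ids):.2%}")
```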