Core Viewpoint
- The article introduces MixGRPO, a new framework that mixes Stochastic Differential Equation (SDE) and Ordinary Differential Equation (ODE) sampling to improve the efficiency and performance of image generation training [1][81].

Group 1: MixGRPO Framework
- MixGRPO simplifies optimization of the Markov Decision Process (MDP) underlying the denoising trajectory by using a mixed sampling strategy, improving both efficiency and performance [1][17].
- The framework shows significant improvements in human preference alignment across multiple dimensions, outperforming DanceGRPO while cutting training time by nearly 50% [2][60].
- MixGRPO-Flash, a faster variant of MixGRPO, reduces training time by 71% while maintaining comparable performance [2][60].

Group 2: Performance Metrics
- In comparative studies, MixGRPO achieved a Unified Reward score of 3.418 versus DanceGRPO's 3.397, indicating better alignment with human preferences [60].
- MixGRPO-Flash averaged 112.372 seconds per training iteration, well below DanceGRPO's 291.284 seconds [60].

Group 3: Sampling Strategy
- MixGRPO employs a hybrid sampling scheme: SDE sampling is used within a defined interval of the denoising process, while ODE sampling is applied outside this interval (see the sampling sketch below) [14][20].
- This reduces computational overhead and optimization difficulty while keeping the sampled trajectory consistent with the marginal distributions shared by the SDE and ODE formulations [30][81].

Group 4: Sliding Window Strategy
- A sliding window strategy moves the SDE interval across the denoising steps during training, so optimization focuses on specific timesteps at a time (see the window-schedule sketch below) [32][35].
- The research team identified the window size and its movement interval as key hyperparameters that significantly affect performance [34][70].

Group 5: High-Order ODE Solvers
- Integrating high-order ODE solvers, such as DPM-Solver++, speeds up sampling during GRPO training, balancing computational cost against performance (see the midpoint-step sketch below) [45][76].
- Experiments indicated that a second-order midpoint method was the best choice among the high-order solver settings [76].

Group 6: Experimental Validation
- The experiments used the HPDv2 dataset, whose diverse prompts show that MixGRPO can achieve effective human preference alignment with a limited number of training prompts [49][50].
- Results across multiple reward models confirmed the robustness of MixGRPO, with superior performance in both single-reward and multi-reward settings [56][82].
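The hybrid SDE/ODE idea from Group 3 can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation assuming a flow-matching velocity model; `velocity_model`, `sde_window`, and `noise_scale` are illustrative names and values, not the paper's exact interface.

```python
import torch

def mixgrpo_sample(velocity_model, x, timesteps, sde_window, noise_scale=0.7):
    """Hybrid SDE/ODE denoising loop (illustrative sketch).

    Steps whose index falls inside sde_window use a stochastic update, so their
    transitions can later be optimized with GRPO; all other steps use the
    deterministic Euler (ODE) update and only need a forward pass.
    """
    sde_states = []  # states produced by SDE steps, kept for policy optimization
    for i in range(len(timesteps) - 1):
        t, t_next = timesteps[i], timesteps[i + 1]
        dt = t_next - t
        v = velocity_model(x, t)  # predicted flow-matching velocity
        if sde_window[0] <= i < sde_window[1]:
            # SDE step: deterministic drift plus injected Gaussian noise.
            noise = torch.randn_like(x)
            x = x + v * dt + noise_scale * abs(dt) ** 0.5 * noise
            sde_states.append((i, x))
        else:
            # ODE step: plain Euler update, no stochasticity.
            x = x + v * dt
    return x, sde_states
```

Only the SDE steps produce stochastic transitions, so only those steps contribute trajectories (and gradients) to the GRPO objective; the ODE steps outside the window need no gradient tracking, which is where the overhead reduction comes from.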
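The sliding window from Group 4 can be expressed as a small scheduling helper. The function below is a hypothetical schedule; the step count, window size, and shift interval are illustrative defaults rather than the paper's reported settings.

```python
def sde_window_at(train_iter, num_steps=25, window_size=4, shift_every=25):
    """Return the (start, end) indices of the SDE window for a training iteration.

    Every `shift_every` iterations the window advances by one denoising step,
    so optimization gradually moves from early (noisy) to later timesteps.
    Window size and shift interval are the key hyperparameters to tune.
    """
    start = min(train_iter // shift_every, num_steps - window_size)
    return start, start + window_size
```

Combined with the sampler above, the tuple returned here would be passed as `sde_window`, so which timesteps receive stochastic optimization changes as training progresses.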
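For the high-order solver in Group 5, a second-order midpoint step illustrates how fewer but more accurate steps can cut sampling cost. This is a generic explicit midpoint update written for a flow-matching velocity field, not the exact DPM-Solver++ formulation used in the paper.

```python
def midpoint_ode_step(velocity_model, x, t, t_next):
    """One explicit midpoint (second-order) ODE step.

    Two velocity evaluations per step, but the extra accuracy lets the ODE
    segment of the trajectory use far fewer steps overall, which is the kind
    of cost/performance trade-off MixGRPO-Flash exploits.
    """
    dt = t_next - t
    v1 = velocity_model(x, t)               # velocity at the current state
    x_mid = x + v1 * (dt / 2)               # half step to the midpoint
    v2 = velocity_model(x_mid, t + dt / 2)  # velocity re-evaluated at the midpoint
    return x + v2 * dt                      # full step using the midpoint velocity
```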
Training time halved, performance up rather than down: Tencent Hunyuan open-sources MixGRPO, an efficient reinforcement learning scheme for image generation