DiT Suddenly Under Fire, Xie Saining Calmly Responds
QbitAI (量子位) · 2025-08-20 07:48
Core Viewpoint

- The article covers recent criticisms of DiT (Diffusion Transformers), a cornerstone model in the diffusion field, and highlights the importance of scientific scrutiny and empirical validation in research [3][10].

Group 1: Criticism of DiT

- A user has raised multiple objections to DiT, claiming it is flawed both mathematically and structurally, and even questioning whether DiT contains genuine Transformer elements [4][12].
- The criticisms draw on the paper "TREAD: Token Routing for Efficient Architecture-agnostic Diffusion Training," which introduces a strategy that routes early-layer tokens directly to deeper layers without modifying the architecture or adding parameters [12][14].
- The critic argues that the rapid drop in FID (Fréchet Inception Distance) early in training indicates that DiT's architecture has inherent properties that let it learn the dataset too easily [15].
- TREAD reportedly trains 14 times faster than DiT at 400,000 iterations and 37 times faster at its best performance after 7 million iterations; the critic takes such a large speedup as evidence that the original method is undermined [16][17].
- The critic also points out that disabling parts of a network during training would ordinarily be expected to render it ineffective [19].
- Yet the more network units in DiT that are replaced with identity mappings during training, the better the model's evaluation results become [20].
- DiT's architecture is said to require logarithmic scaling to represent the signal-to-noise-ratio differences across the diffusion process, suggesting potential issues with its output dynamics [23].
- Concerns are also raised about the Adaptive Layer Normalization (adaLN) mechanism: DiT processes conditional inputs through a standard MLP (multi-layer perceptron), which the critic argues shows no clear Transformer characteristics [25][26].
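The token-routing strategy attributed to TREAD above can be sketched roughly as follows. This is a hypothetical simplification in plain NumPy (the toy `block`, the function name `tread_forward`, and all dimensions are invented for illustration), not TREAD's actual implementation: a random subset of tokens bypasses a range of middle blocks and is merged back afterwards, so those blocks process fewer tokens and training gets cheaper.

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, w):
    """A toy stand-in for a transformer block: one nonlinearity plus a residual."""
    return x + np.tanh(x @ w)

def tread_forward(x, weights, route_start, route_end, keep_ratio=0.5):
    """Route a random subset of tokens around blocks [route_start, route_end):
    'kept' tokens go through every block; the rest skip the middle blocks
    entirely and rejoin the sequence unchanged afterwards."""
    n_tokens = x.shape[0]
    n_keep = int(n_tokens * keep_ratio)
    perm = rng.permutation(n_tokens)
    kept = perm[:n_keep]                      # tokens processed by the middle blocks
    for i, w in enumerate(weights):
        if route_start <= i < route_end:
            x[kept] = block(x[kept], w)       # middle blocks see only the kept tokens
        else:
            x = block(x, w)                   # outer blocks see all tokens
    return x

tokens = rng.standard_normal((16, 8))         # 16 tokens, embedding dim 8
weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(6)]
out = tread_forward(tokens.copy(), weights, route_start=1, route_end=5)
print(out.shape)  # (16, 8)
```

Because the bypassed tokens are written back in place, the output keeps the full token count and ordering, which is why no architectural change or extra parameters are needed.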
Group 2: Response from Xie Saining

- Xie Saining, a co-author of DiT, responded to the criticisms, asserting that TREAD's findings do not invalidate DiT [27].
- He acknowledges TREAD's contributions but emphasizes that its effectiveness comes from regularization that enhances feature robustness, not from DiT being incorrect [28].
- Xie notes that Lightning DiT, an upgraded version of DiT, remains a powerful option and should be prioritized when conditions allow [29].
- He also states that there is no evidence that the post-layer normalization in DiT causes problems [30].
- Xie summarizes improvements made over the past year, focusing on internal representation learning and various methods for enhancing model training [32].
- He points out that the sd-vae (Stable Diffusion's variational autoencoder) is the real concern for DiT, particularly its high computational cost for processing images at 256×256 resolution [34].
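Xie's regularization argument is reminiscent of stochastic depth, a well-known technique in which residual blocks are randomly replaced by the identity during training so the network learns features that stay useful when parts of it are disabled. The sketch below is a generic illustration of that mechanism (plain NumPy, toy blocks, invented dimensions), not DiT's or TREAD's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(2)

def block(x, w):
    """Toy residual block."""
    return x + np.tanh(x @ w)

def forward(x, weights, drop_prob=0.0, train=True):
    """Stochastic-depth-style forward pass: during training each residual
    block is skipped (acts as the identity) with probability drop_prob;
    at evaluation time every block is applied."""
    for w in weights:
        if train and rng.random() < drop_prob:
            continue                         # block replaced by identity mapping
        x = block(x, w)
    return x

x = rng.standard_normal((4, 8))
weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(6)]
out_train = forward(x.copy(), weights, drop_prob=0.5, train=True)
out_eval = forward(x.copy(), weights, train=False)
print(out_train.shape, out_eval.shape)  # (4, 8) (4, 8)
```

On this reading, identity-replacement improving evaluation results is evidence that the regularization works, not that the underlying architecture is broken.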