DDT: Decoupled Diffusion Transformer

Introduction

We decouple the diffusion transformer into an encoder-decoder design and, surprisingly, find that a more substantial encoder yields performance improvements as model size increases.

  • We achieve 1.26 FID on the ImageNet 256x256 benchmark with DDT-XL/2(22en6de).
  • We achieve 1.28 FID on the ImageNet 512x512 benchmark with DDT-XL/2(22en6de).
  • As a byproduct, our DDT can reuse encoder outputs across adjacent denoising steps to accelerate inference (see the sketch below).
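
The following is a minimal sketch of the decoupled sampling loop with encoder reuse, assuming a large condition encoder and a lightweight velocity decoder; the function names, signatures, and fixed reuse interval are illustrative assumptions, not this repository's actual API.

import torch

@torch.no_grad()
def sample_with_encoder_reuse(encoder, decoder, x, timesteps, y, reuse_every=2):
    """Euler sampling loop that re-runs the large encoder only every
    `reuse_every` steps and reuses the cached features in between."""
    z = None  # cached encoder features
    for i in range(len(timesteps) - 1):
        t = timesteps[i]
        if z is None or i % reuse_every == 0:
            z = encoder(x, t, y)   # expensive: e.g. the 22-block encoder
        v = decoder(x, z, t)       # cheap: e.g. the 6-block decoder
        x = x + v * (timesteps[i + 1] - t)  # Euler step on the flow ODE
    return x

Because the decoder consumes only the cached features z, skipping encoder calls on adjacent steps saves most of the per-step compute at some cost in accuracy.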

Visualizations

Checkpoints

We use an off-the-shelf VAE to encode images into latent space and train DDT on the resulting latents.

Dataset      Model               Params  FID   HuggingFace
ImageNet256  DDT-XL/2(22en6de)   675M    1.26  🤗
ImageNet512  DDT-XL/2(22en6de)   675M    1.28  🤗
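
As a hedged sketch of the latent-space setup: the specific VAE checkpoint (sd-vae-ft-ema) and the 0.18215 scaling factor below follow common DiT/SiT practice and are assumptions, not necessarily this repository's exact configuration.

import torch
from diffusers.models import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema").eval()

@torch.no_grad()
def encode_to_latent(images):            # images: (B, 3, 256, 256) in [-1, 1]
    posterior = vae.encode(images).latent_dist
    return posterior.sample() * 0.18215  # (B, 4, 32, 32) latents

@torch.no_grad()
def decode_from_latent(latents):         # inverse path for visualization
    return vae.decode(latents / 0.18215).sample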

Online Demos

We provide online demos for DDT-XL/2(22en6de) on HuggingFace Spaces.

HuggingFace Spaces: https://huggingface.co/spaces/MCG-NJU/DDT

Usages

We use the ADM evaluation suite to report FID.

# for installation
pip install -r requirements.txt

By default, main.py uses all available GPUs. You can select specific GPUs with CUDA_VISIBLE_DEVICES, or set the devices in the trainer config as follows:

# in configs/repa_improved_ddt_xlen22de6_256.yaml
trainer:
  default_root_dir: universal_flow_workdirs
  accelerator: auto
  strategy: auto
  devices: auto
  # devices: 0,
  # devices: 0,1
  num_nodes: 1
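
For example, to restrict training to the first two GPUs from the shell:

# roughly equivalent to setting devices: 0,1 in the trainer config
CUDA_VISIBLE_DEVICES=0,1 python main.py fit -c configs/repa_improved_ddt_xlen22de6_256.yaml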

By default, SaveImagesHook saves only the first 100 images and an .npz file (used to compute FID with the ADM suite). You can change the number of saved images as follows:

# in configs/repa_improved_ddt_xlen22de6_256.yaml
callbacks:
- class_path: src.callbacks.model_checkpoint.CheckpointHook
  init_args:
    every_n_train_steps: 10000
    save_top_k: -1
    save_last: true
- class_path: src.callbacks.save_images.SaveImagesHook
  init_args:
    save_dir: val
    max_save_num: 0
    # max_save_num: 100

# for inference
python main.py predict -c configs/repa_improved_ddt_xlen22de6_256.yaml --ckpt_path=XXX.ckpt
# for training
# extract image latent (optional)
python3 tools/cache_imlatent4.py
# train
python main.py fit -c configs/repa_improved_ddt_xlen22de6_256.yaml
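
To then compute FID with the ADM evaluation suite (openai/guided-diffusion), a hedged sketch; the reference batch comes from the ADM repo and the sample path below is a placeholder for the .npz written by SaveImagesHook:

# evaluator.py is part of the ADM (guided-diffusion) evaluation suite
python evaluations/evaluator.py VIRTUAL_imagenet256_labeled.npz path/to/samples.npz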

Reference

@article{wang2025ddt,
  title={DDT: Decoupled Diffusion Transformer},
  author={Wang, Shuai and Tian, Zhi and Huang, Weilin and Wang, Limin},
  journal={arXiv preprint arXiv:2504.05741},
  year={2025}
}

Acknowledgement

The code is mainly built upon FlowDCN; we also borrow ideas from REPA, MAR, and SiT.
