docs: update README.md
surpresingly -> surprisingly
committed by GitHub
parent: 45d5e8db5b
commit: 39a5a47961
@@ -8,7 +8,7 @@
[](https://paperswithcode.com/sota/image-generation-on-imagenet-256x256?p=ddt-decoupled-diffusion-transformer-1)
## Introduction
- We decouple diffusion transformer into encoder-decoder design, and surpresingly that a **more substantial encoder yields performance improvements as model size increases**.
+ We decouple the diffusion transformer into an encoder-decoder design, and surprisingly find that a **more substantial encoder yields performance improvements as model size increases**.

* We achieve **1.26 FID** on the ImageNet 256x256 benchmark with DDT-XL/2 (22en6de).
* We achieve **1.28 FID** on the ImageNet 512x512 benchmark with DDT-XL/2 (22en6de).
@@ -122,4 +122,4 @@ python main.py fit -c configs/repa_improved_ddt_xlen22de6_256.yaml
```
## Acknowledgement
- The code is mainly built upon [FlowDCN](https://github.com/MCG-NJU/FlowDCN), we also borrow ideas from the [REPA](https://github.com/sihyun-yu/REPA), [MAR](https://github.com/LTH14/mar) and [SiT](https://github.com/willisma/SiT).
+ The code is mainly built upon [FlowDCN](https://github.com/MCG-NJU/FlowDCN); we also borrow ideas from [REPA](https://github.com/sihyun-yu/REPA), [MAR](https://github.com/LTH14/mar), and [SiT](https://github.com/willisma/SiT).