update README
## Introduction
We decouple the diffusion transformer into an encoder-decoder design, and surprisingly find that a **more substantial encoder yields performance improvements as model size increases**.

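To make the split concrete, here is a minimal, hypothetical PyTorch sketch of the decoupled design: a condition encoder extracts a representation from the noisy tokens, timestep, and class label, and a lightweight velocity decoder conditions on it. The class and method names (`DecoupledDiT`, `encode`, `decode`) and all sizes are illustrative assumptions, not this repository's actual API.

```python
# Hypothetical sketch of the encoder-decoder split; not the repository's actual code.
import torch
import torch.nn as nn


class Block(nn.Module):
    """Plain pre-norm transformer block shared by both halves."""

    def __init__(self, dim: int, heads: int = 6):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class DecoupledDiT(nn.Module):
    """Encoder-heavy / decoder-light split (e.g. 22 encoder + 6 decoder blocks for XL)."""

    def __init__(self, dim=384, n_enc=8, n_dec=2, n_classes=1000):
        super().__init__()
        self.x_embed = nn.Linear(dim, dim)              # stand-in for a real patch embedding
        self.t_embed = nn.Linear(1, dim)                # stand-in for a timestep embedding
        self.y_embed = nn.Embedding(n_classes, dim)     # class-label embedding
        self.encoder = nn.ModuleList([Block(dim) for _ in range(n_enc)])
        self.decoder = nn.ModuleList([Block(dim) for _ in range(n_dec)])
        self.out = nn.Linear(dim, dim)

    def encode(self, x_t, t, y):
        """Condition encoder: build the representation z from noisy tokens, t, and class."""
        cond = self.t_embed(t[:, None]) + self.y_embed(y)    # (B, D)
        h = self.x_embed(x_t) + cond[:, None, :]             # broadcast condition over tokens
        for blk in self.encoder:
            h = blk(h)
        return h

    def decode(self, x_t, z):
        """Velocity decoder: small stack conditioned on the encoder output z."""
        h = self.x_embed(x_t) + z
        for blk in self.decoder:
            h = blk(h)
        return self.out(h)

    def forward(self, x_t, t, y):
        return self.decode(x_t, self.encode(x_t, t, y))


x_t = torch.randn(2, 256, 384)                  # (batch, tokens, dim) noisy latent tokens
t = torch.rand(2)                               # timesteps in [0, 1]
y = torch.randint(0, 1000, (2,))                # class labels
model = DecoupledDiT()
print(model(x_t, t, y).shape)                   # torch.Size([2, 256, 384])
```
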
* We achieve **1.26 FID** on the ImageNet 256x256 benchmark with DDT-XL/2 (22en6de).
* We achieve **1.28 FID** on the ImageNet 512x512 benchmark with DDT-XL/2 (22en6de).
* As a byproduct, our DDT can reuse encoder outputs across adjacent denoising steps to accelerate inference (see the sketch below).
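The encoder reuse can be pictured as a sampling loop that refreshes the cached encoder representation only every few steps, so the intermediate steps pay only the cost of the small decoder. This is a hedged illustration built on the `DecoupledDiT` sketch above, not the repository's sampler; the Euler update and the `reuse_every` schedule are assumptions.

```python
# Hypothetical sampling loop illustrating encoder reuse; assumes DecoupledDiT
# from the sketch above is in scope.
import torch


@torch.no_grad()
def sample(model, y, steps=50, reuse_every=2, n_tokens=256, dim=384):
    """Refresh the encoder output only every `reuse_every` steps; other steps
    run only the small decoder on the cached representation z."""
    x = torch.randn(y.shape[0], n_tokens, dim)       # start from Gaussian noise
    dt = 1.0 / steps
    z = None
    for i in range(steps):
        t = torch.full((y.shape[0],), 1.0 - i * dt)  # t goes from 1 (noise) toward 0
        if z is None or i % reuse_every == 0:
            z = model.encode(x, t, y)                # recompute and cache the representation
        v = model.decode(x, z)                       # cheap decoder-only evaluation
        x = x - v * dt                               # simple Euler update; the exact rule
                                                     # depends on the training parameterization
    return x


y = torch.randint(0, 1000, (2,))
samples = sample(DecoupledDiT(), y)
print(samples.shape)                                 # torch.Size([2, 256, 384])
```
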
## Visualizations
## Usage