Improving Sample Quality of Diffusion Models Using
Self-Attention Guidance (ICCV 2023)
Susung Hong
Gyuseong Lee
Wooseok Jang
Seungryong Kim
Korea University, Seoul, Korea

[Paper]
[Code]
[SD Demo🤗]

Qualitative comparisons between unguided (top row) and self-attention-guided (ours, bottom row) samples. Without any further training, image labels, or additional modules, self-attention guidance can steer pretrained diffusion models to generate higher-quality images. It is based on the finding that the self-attention maps of pretrained diffusion models are highly related to the quality of the generated images. For a detailed explanation, please refer to our Paper and Supplementary Material.

Abstract

Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.
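The core loop described above can be sketched in a few lines: aggregate a self-attention map, binarize it, blur only the attended regions, and guide the noise prediction away from the degraded sample. The snippet below is a minimal, framework-free sketch, not the paper's implementation: `eps_model`, the mean-thresholding rule, the box blur (standing in for the paper's Gaussian blur), and the exact guidance formula are all simplifying assumptions, and it blurs the noisy sample directly rather than the predicted clean image.

```python
import numpy as np

def attention_mask(attn_map, threshold=1.0):
    """Binarize a self-attention map: 1 where attention exceeds
    threshold * mean, 0 elsewhere (hypothetical thresholding rule)."""
    return (attn_map > threshold * attn_map.mean()).astype(np.float64)

def box_blur(x, k=3):
    """Simple box blur as a stand-in for the Gaussian blur in the paper."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def sag_step(eps_model, x_t, attn_map, scale=0.5):
    """One self-attention-guided noise prediction (sketch).

    eps_model: callable mapping a sample to a noise prediction (assumed).
    x_t:       current noisy sample (H x W array).
    attn_map:  aggregated self-attention map (H x W array).
    """
    mask = attention_mask(attn_map)
    # Adversarially blur only the regions the model attends to.
    x_bar = mask * box_blur(x_t) + (1.0 - mask) * x_t
    eps_plain = eps_model(x_t)
    eps_blur = eps_model(x_bar)
    # Guide away from the prediction on the degraded sample
    # (a common extrapolation form; the paper's exact scaling may differ).
    return eps_blur + (1.0 + scale) * (eps_plain - eps_blur)
```

With `scale=0` the step reduces to the unguided prediction, so the guidance strength interpolates smoothly from the baseline sampler.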


Animated Results

Uncurated qualitative comparison of our method with unguided ADM (Dhariwal and Nichol, 2021).

Uncurated qualitative comparison of our method with unguided Stable Diffusion (Rombach et al., 2022).

For detailed methods, results, analyses, and ablation studies,
please refer to our Paper and Supplementary Material!


Paper and Supplementary Material

S. Hong, G. Lee,
W. Jang, S. Kim.
Improving Sample Quality of Diffusion Models Using Self-Attention Guidance
(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.