Diffusion Model for Dense Matching

Jisu Nam, Gyuseong Lee, Sunwoo Kim, Hyeonsu Kim, Hyoungwon Cho, Seyeon Kim, Seungryong Kim
Korea University

Visualization of the reverse diffusion process for dense correspondence: (from left to right) source and target images, source images warped by the correspondences estimated at successive time steps, and the ground truth. The source image is progressively warped into the target image through the iterative denoising process.
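For reference, warping a source image with a dense correspondence field, as in the figure above, is a simple backward warp. Below is a minimal PyTorch sketch of this operation; the function name and tensor conventions are our own illustrative choices, not the authors' code.

import torch
import torch.nn.functional as F

def warp_with_flow(src: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a source image with a dense flow field.

    src:  (B, C, H, W) source image.
    flow: (B, 2, H, W) per-pixel displacement in pixels, ordered (dx, dy).
    """
    B, _, H, W = src.shape
    # Base sampling grid of integer pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=src.device, dtype=src.dtype),
        torch.arange(W, device=src.device, dtype=src.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # (B, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid_x / (W - 1) - 1.0
    grid_y = 2.0 * grid_y / (H - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)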


Visualizing the effectiveness of the proposed DiffMatch: (a) source images, (b) target images, and source images warped using the correspondences estimated by (c)-(d) state-of-the-art approaches, (e) our DiffMatch, and (f) ground truth. Compared to previous methods that estimate correspondences discriminatively, our diffusion-based generative framework effectively learns the matching field manifold, yielding more accurate correspondences, particularly in textureless regions, repetitive patterns, and large displacements.

Abstract

The objective of establishing dense correspondence between paired images comprises two terms: a data term and a prior term. While conventional techniques focused on defining hand-designed prior terms, which are difficult to formulate, recent approaches have focused on learning the data term with deep neural networks without explicitly modeling the prior, assuming that the model itself has the capacity to learn an optimal prior from a large-scale dataset. The performance improvement was obvious; however, these approaches often fail to address inherent ambiguities of matching, such as textureless regions, repetitive patterns, large displacements, or noise. To address this, we propose DiffMatch, a novel conditional diffusion-based framework designed to explicitly model both the data and prior terms for dense matching. This is accomplished by leveraging a conditional denoising diffusion model that explicitly takes the matching cost as input and injects the prior within the generative process. However, the limited resolution of the diffusion model is a major hindrance. We address this with a cascaded pipeline that starts with a low-resolution model, followed by a super-resolution model that successively upsamples the matching field and incorporates finer details. Our experimental results demonstrate significant performance improvements of our method over existing approaches, and our ablation studies validate the design choices and the effectiveness of each component.
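To make the two-term objective concrete, one standard way to write it (notation chosen here for illustration, not necessarily the paper's) is as maximum a posteriori estimation over the matching field F, where Bayes' rule factors the posterior into the two terms:

F^* = \arg\max_F \, p(F \mid I_{src}, I_{tgt})
    = \arg\max_F \, \underbrace{p(I_{src}, I_{tgt} \mid F)}_{\text{data term}} \, \underbrace{p(F)}_{\text{prior term}}

Hand-designed pipelines specify the prior p(F) explicitly (e.g., smoothness constraints), discriminative networks leave it implicit in their learned weights, and the generative formulation sketched above is what allows DiffMatch to model both factors within a single diffusion process.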


Framework


Overall network architecture of DiffMatch. Given source and target images, our conditional diffusion-based network estimates the dense correspondence between them. We leverage two conditions: an initial correspondence and a local matching cost, which respectively provide long-range matching and embed local pixel-wise interactions.
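As a rough illustration of this conditioning scheme, the sketch below performs one reverse-diffusion step on a noisy matching field, conditioned on the initial correspondence and the local cost volume via channel concatenation. All names, shapes, and the DDIM-style update are hypothetical stand-ins following common diffusion conventions, not the authors' implementation.

import torch

@torch.no_grad()
def reverse_step(denoiser, flow_t, t, init_flow, cost_volume, alphas_cumprod):
    """One deterministic (DDIM-style) reverse step at timestep t.

    flow_t:      (B, 2, H, W) noisy matching field at step t.
    init_flow:   (B, 2, H, W) initial correspondence (long-range cue).
    cost_volume: (B, K, H, W) local matching costs (pixel-wise cue).
    """
    # Condition the denoiser by concatenating the cues along channels.
    x = torch.cat([flow_t, init_flow, cost_volume], dim=1)
    eps = denoiser(x, t)  # predicted noise

    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    # Recover an estimate of the clean field, then step to t-1.
    flow0 = (flow_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * flow0 + (1 - a_prev).sqrt() * eps

In the cascaded pipeline described in the abstract, such a sampler would first run at low resolution, after which the super-resolution stage would refine the upsampled field with the same kind of conditioning.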


Matching Results

Qualitative results on HPatches: the source images are warped to the target images using predicted correspondences.

Qualitative results on ETH3D: the source images are warped to the target images using predicted correspondences.

Qualitative results on HPatches under ImageNet-C corruptions: the source images are warped to the target images using predicted correspondences.

Qualitative results on ETH3D under ImageNet-C corruptions: the source images are warped to the target images using predicted correspondences.



BibTeX

@misc{nam2023diffmatch,
      title={DiffMatch: Diffusion Model for Dense Matching}, 
      author={Jisu Nam and Gyuseong Lee and Sunwoo Kim and Hyeonsu Kim and Hyoungwon Cho and Seyeon Kim and Seungryong Kim},
      year={2023},
      eprint={2305.19094},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}