Do inpainting yourself: Generative facial inpainting guided by exemplars

Wanglong Lu, Hanli Zhao, Xianta Jiang, Xiaogang Jin, Yong Liang Yang, Kaijie Shi

Research output: Contribution to journal › Article › peer-review

Abstract

We present EXE-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach not only preserves the quality of the input facial image but also completes the image with exemplar-like facial attributes. We achieve this by simultaneously leveraging the global style of the input image, the stochastic style generated from a random latent code, and the exemplar style of the exemplar image. We introduce a novel attribute similarity metric that encourages the networks to learn the style of facial attributes from the exemplar in a self-supervised way. To guarantee a natural transition across the boundaries of inpainted regions, we introduce a novel spatial variant gradient backpropagation technique that adjusts the loss gradients based on spatial location. We extensively evaluate EXE-GAN on the public CelebA-HQ and FFHQ datasets and in practical applications, demonstrating the superior visual quality of its facial inpainting. The source code is available at https://github.com/LonglongaaaGo/EXE-GAN.
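To illustrate the idea of mixing a global style, a stochastic style, and an exemplar style, the following is a minimal, hypothetical PyTorch sketch assuming a StyleGAN-like modulated generator. The function name `mix_styles`, the coarse/fine layer split, and the blend ratio are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of combining three style sources for a modulated
# generator, assuming a StyleGAN-like per-layer latent space.
# All names and ratios here are hypothetical.
import torch


def mix_styles(w_global, w_random, w_exemplar, num_layers=14, split=7):
    """Build a per-layer style stack: coarse layers follow the global
    style of the input image, fine layers follow the exemplar style,
    and a stochastic latent adds diversity at every layer."""
    styles = []
    for layer in range(num_layers):
        base = w_global if layer < split else w_exemplar
        styles.append(0.9 * base + 0.1 * w_random)  # assumed blend ratio
    return torch.stack(styles, dim=1)  # (batch, num_layers, w_dim)


w_dim = 512
w_global = torch.randn(2, w_dim)    # mapped from the masked input image
w_random = torch.randn(2, w_dim)    # mapped from a random latent code
w_exemplar = torch.randn(2, w_dim)  # mapped from the exemplar image

styles = mix_styles(w_global, w_random, w_exemplar)
print(styles.shape)  # torch.Size([2, 14, 512])
```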
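The spatial variant gradient backpropagation technique can likewise be sketched in PyTorch: an identity forward pass whose backward pass rescales the loss gradient per pixel with a weight map derived from the inpainting mask. The weight schedule below (a blurred mask giving a soft transition across hole boundaries) is an illustrative assumption, not the paper's exact formulation.

```python
# Hypothetical sketch of spatially variant gradient backpropagation:
# identity forward, per-pixel gradient rescaling backward.
import torch
import torch.nn.functional as F


class SpatialVariantGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, image, weight_map):
        ctx.save_for_backward(weight_map)
        return image  # forward pass is the identity

    @staticmethod
    def backward(ctx, grad_output):
        (weight_map,) = ctx.saved_tensors
        # Scale the gradient at each spatial location; the weight map
        # itself receives no gradient.
        return grad_output * weight_map, None


def boundary_weight_map(mask, kernel_size=15):
    # Blur the binary mask so weights transition smoothly across the
    # boundary of the inpainted region (1 inside, fading to 0 outside).
    pad = kernel_size // 2
    kernel = torch.ones(1, 1, kernel_size, kernel_size) / kernel_size**2
    return F.conv2d(mask, kernel, padding=pad).clamp(0.0, 1.0)


# Usage: wrap the generator output before computing the loss, so loss
# gradients are emphasized inside the hole and attenuated outside it.
image = torch.randn(1, 3, 256, 256, requires_grad=True)
mask = torch.zeros(1, 1, 256, 256)
mask[..., 96:160, 96:160] = 1.0  # hole region

out = SpatialVariantGrad.apply(image, boundary_weight_map(mask))
loss = out.pow(2).mean()
loss.backward()  # image.grad is now spatially reweighted
```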

Original language: English
Article number: 128996
Journal: Neurocomputing
Volume: 617
Early online date: 28 Nov 2024
DOIs
Publication status: Published - 7 Feb 2025

Bibliographical note

All data involved in this study are publicly available and can be downloaded through public channels.

Keywords

  • Facial image inpainting
  • Generative adversarial networks
  • Image generation
  • Image inpainting

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
