GRIG: Data-efficient generative residual image inpainting

Wanglong Lu, Xiaogang Jin, Xianta Jiang, Yongliang Yang, Minglun Gong, Kaijie Shi, Tao Wang, Hanli Zhao

Research output: Contribution to journal › Article › peer-review

Abstract

Image inpainting is the task of filling in missing or masked regions of an image with semantically meaningful content. Recent methods have shown significant improvement in dealing with large missing regions. However, these methods usually require large training datasets to achieve satisfactory results, and there has been limited research into training such models on a small number of samples. To address this, we present a novel data-efficient generative residual image inpainting method that produces high-quality inpainting results. The core idea is to use an iterative residual reasoning method that incorporates convolutional neural networks (CNNs) for feature extraction and transformers for global reasoning within generative adversarial networks, along with image-level and patch-level discriminators. We also propose a novel forged patch adversarial training strategy to create faithful textures and detailed appearances. Extensive evaluation shows that our method outperforms previous methods on the data-efficient image inpainting task, both quantitatively and qualitatively.
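To make the iterative residual reasoning idea concrete, here is a minimal NumPy sketch of the generic residual-refinement loop the abstract describes: at each iteration a predictor estimates a residual for the missing region, which is added to the current estimate while known pixels are re-imposed. The predictor below is a hypothetical stand-in (it simply pulls masked pixels toward the mean of the known pixels); in GRIG this role is played by the learned CNN-plus-transformer generator, and the function names here are illustrative, not the paper's API.

```python
import numpy as np

def dummy_residual_predictor(x, mask):
    # Hypothetical stand-in for the learned CNN + transformer module:
    # pull every pixel toward the mean of the known (mask == 1) pixels.
    # The real model would predict semantically meaningful content.
    target = x[mask == 1].mean()
    return (target - x) * 0.5

def iterative_residual_inpaint(image, mask, num_iters=5):
    """Schematic residual reasoning loop: repeatedly add a predicted
    residual to the missing region, keeping known pixels fixed."""
    x = image * mask  # zero out the missing region
    for _ in range(num_iters):
        residual = dummy_residual_predictor(x, mask)
        x = x + residual * (1 - mask)          # update only missing pixels
        x = image * mask + x * (1 - mask)      # re-impose known pixels
    return x

rng = np.random.default_rng(0)
img = rng.uniform(0.4, 0.6, size=(8, 8))
m = np.ones((8, 8))
m[2:6, 2:6] = 0  # 0 marks the missing (masked-out) region
out = iterative_residual_inpaint(img, m)
```

The loop converges geometrically here because the toy predictor halves the gap each step; the key structural point, shared with the paper's method, is that each iteration refines the previous estimate via a residual rather than regenerating the whole image.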
Original language: English
Journal: Computational Visual Media
Publication status: Acceptance date - 3 Feb 2024

Data Availability Statement

The data involved in this study are all public data, which can be downloaded through public channels.
