Datasets
Standard Dataset
DGNet
- Submitted by:
- Jia Zhang
- Last updated:
- Wed, 11/22/2023 - 08:54
- DOI:
- 10.21227/ssne-a129
Abstract
Image inpainting remains challenging when large missing regions must be reconstructed with realistic textures and semantically consistent structures. Popular structure-prior-guided methods rely mainly on structural features, which directly accumulate and propagate random noise, causing inconsistent contextual semantics within the filled regions and poor network robustness. To address this issue, this paper presents a dual generative network (DGNet) guided by specific semantic structures, consisting of an auxiliary network Ns and an inpainting network Ninp. Here, Ns provides structural prior information to Ninp for reconstructing the texture details of images. Additionally, we propose a spatial perceptual attention (SPA) module that constructs spatial dependencies between global and local features, eliminating semantic error margins in the filled regions. Experiments demonstrate that DGNet significantly outperforms other state-of-the-art approaches on three real inpainting datasets and achieves good inpainting results on the Mogao Grottoes Mural dataset.
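The abstract does not spell out the SPA module's internals, but the idea of building spatial dependencies between all positions of a feature map is commonly realized as non-local (self-)attention over flattened spatial locations. The sketch below is a minimal NumPy illustration of that general pattern, not the paper's actual SPA implementation; the projection matrices `w_q`, `w_k`, `w_v` and all shapes are illustrative assumptions.

```python
import numpy as np

def spatial_attention(feat, w_q, w_k, w_v):
    """Minimal non-local spatial attention over a feature map (illustrative sketch).

    feat: (C, H, W) feature map; w_q, w_k, w_v: (C', C) projection matrices.
    Each spatial position aggregates values from all positions, weighted by
    query-key similarity, modeling global-to-local spatial dependencies.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                 # flatten spatial dims: (C, N)
    q = w_q @ x                                # queries (C', N)
    k = w_k @ x                                # keys    (C', N)
    v = w_v @ x                                # values  (C', N)
    scores = q.T @ k / np.sqrt(q.shape[0])     # pairwise position similarity (N, N)
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over all spatial positions
    out = v @ attn.T                           # weighted aggregation (C', N)
    return out.reshape(-1, h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))          # toy 8-channel 4x4 feature map
w_q, w_k, w_v = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = spatial_attention(feat, w_q, w_k, w_v)
print(out.shape)  # (8, 4, 4)
```

In an inpainting network, an output like this is typically fused back into the local features (e.g. by residual addition), letting valid-region context inform the filled region.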
This dataset contains all experimental data for DGNet.