Title
MSE-Net: generative image inpainting with multi-scale encoder
Abstract
Image inpainting methods based on deep convolutional neural networks (DCNNs), especially generative adversarial networks (GANs), have made tremendous progress owing to their powerful representation capabilities. These methods can generate visually plausible content and textures; however, existing deep models built on a single type of receptive field not only tend to produce image artifacts and content mismatches but also ignore the correlation between the hole region and distant spatial locations in the image. To address these problems, we propose a new GAN-based generative model composed of a two-stage encoder–decoder with a Multi-Scale Encoder Network (MSE-Net) and a new Contextual Attention Model based on the Absolute Value (CAM-AV). The former encodes features with convolution kernels of different sizes, improving the characterization of abstract features; the latter uses a new search algorithm to enhance feature matching in the network. Our network is fully convolutional and can complete holes of arbitrary size, number, and spatial location in the image. Experiments on regular and irregular inpainting over datasets including CelebA and Places2 demonstrate that the proposed method achieves higher-quality inpainting results with more reasonable content than most existing state-of-the-art methods.
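The multi-scale encoding idea described above can be sketched as parallel filters of different kernel sizes whose responses are stacked as feature channels. This is a minimal NumPy illustration, not the paper's implementation: the kernel sizes (3, 5, 7), the averaging kernels, and channel concatenation as the fusion step are all assumptions made for clarity.

```python
import numpy as np

def filter2d_same(x, k):
    """2D cross-correlation of a single-channel image with zero
    'same' padding, so the output keeps the input's spatial size."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def multi_scale_encode(x, kernel_sizes=(3, 5, 7)):
    """Apply parallel filters with different receptive-field sizes and
    stack their responses as feature channels (concatenation fusion is
    an assumption for illustration; the kernels here are placeholder
    averaging kernels, not learned weights)."""
    maps = []
    for ks in kernel_sizes:
        k = np.full((ks, ks), 1.0 / (ks * ks))
        maps.append(filter2d_same(x, k))
    return np.stack(maps, axis=0)  # shape: (num_scales, H, W)

feats = multi_scale_encode(np.ones((8, 8)))
print(feats.shape)  # (3, 8, 8): one feature map per kernel size
```

Because every scale uses 'same' padding, the per-scale maps align spatially and can be concatenated directly, which is what lets one encoder mix small- and large-receptive-field evidence at each pixel.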
Year
2022
DOI
10.1007/s00371-021-02143-0
Venue
The Visual Computer
Keywords
Image inpainting, Generative adversarial network, Encoder–decoder, Contextual Attention
DocType
Journal
Volume
38
Issue
8
ISSN
0178-2789
Citations
0
PageRank
0.34
References
18
Authors
7
Name             Order  Citations  PageRank
Yizhong Yang     1      0          0.68
Zhihang Cheng    2      0          0.34
Haotian Yu       3      0          0.34
Yongqiang Zhang  4      46         11.22
Xin Cheng        5      1          7.17
Zhang Zhang      6      5          8.09
Guangjun Xie     7      8          7.32