Title
Generative image inpainting with neural features.
Abstract
In this paper, we propose an image inpainting approach based on generative adversarial networks (GANs). The model consists of an inpainting network, two discriminative networks (one local and one global), and a novel neural feature network. The inpainting network uses an encoder-decoder to generate image content that regresses the missing regions. The two discriminative networks jointly guide the synthesized content to be consistent both locally and globally. The neural feature network, which constrains feature smoothness using the feature maps of the lower layers of a deep neural network, serves as an effective extra regularization term for the inpainting network, ensuring that the generated images preserve structural consistency. Through extensive experiments and comparisons with traditional patch-matching approaches, we qualitatively and quantitatively demonstrate that our approach makes good use of the feature information in images and performs efficient and realistic inpainting.
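The composite objective the abstract describes (encoder-decoder reconstruction, local and global adversarial guidance, and a feature-smoothness regularizer on lower-layer feature maps) can be sketched as below. This is a minimal illustration, not the paper's implementation: the specific loss forms, the total-variation-style smoothness term, and the weights `w_adv` and `w_feat` are assumptions.

```python
import numpy as np

def reconstruction_loss(pred, target):
    # L1 regression of the generated content toward the ground truth
    return np.mean(np.abs(pred - target))

def adversarial_loss(d_score):
    # non-saturating generator loss given a discriminator
    # probability in (0, 1); used for both local and global critics
    return -np.mean(np.log(d_score + 1e-8))

def feature_smoothness(feat):
    # penalize abrupt changes in a (C, H, W) feature map; a
    # total-variation-style stand-in for the paper's constraint on
    # lower-layer neural features (assumed form)
    dh = np.abs(feat[:, 1:, :] - feat[:, :-1, :]).mean()
    dw = np.abs(feat[:, :, 1:] - feat[:, :, :-1]).mean()
    return dh + dw

def inpainting_objective(pred, target, d_local, d_global, feat,
                         w_adv=0.001, w_feat=0.05):
    # hypothetical weights; the record does not report the paper's values
    return (reconstruction_loss(pred, target)
            + w_adv * (adversarial_loss(d_local) + adversarial_loss(d_global))
            + w_feat * feature_smoothness(feat))
```

In this reading, the two discriminators contribute separate adversarial terms (local and global consistency), while the smoothness term acts as the extra regularizer on the inpainting network.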
Year: 2018
Venue: ICIMCS
Field: Computer vision, Pattern recognition, Computer science, Image content, Inpainting, Regularization (mathematics), Artificial intelligence, Generative grammar, Smoothness, Artificial neural network, Discriminative model
DocType: Conference
Citations: 0
PageRank: 0.34
References: 26
Authors: 5
Name           Order  Citations  PageRank
Haolin Liu     1      26         7.19
Chenyu Li      2      14         1.74
Shiming Ge     3      106        24.60
Shengwei Zhao  4      0          0.68
Xin Jin        5      162        17.20