Title
Pixel-Based Facial Expression Synthesis
Abstract
Facial expression synthesis has achieved remarkable advances with the advent of Generative Adversarial Networks (GANs). However, GAN-based approaches generate photo-realistic results only as long as the testing data distribution is close to the training data distribution; their quality degrades significantly when testing images come from even a slightly different distribution. Moreover, recent work has shown that facial expressions can be synthesized by changing only localized face regions. In this work, we propose a pixel-based facial expression synthesis method in which each output pixel observes only one input pixel. The proposed method achieves good generalization by leveraging only a few hundred training images. Experimental results demonstrate that it performs comparably to state-of-the-art GANs on in-dataset images and significantly better on out-of-dataset images. In addition, the proposed model is two orders of magnitude smaller, which makes it suitable for deployment on resource-constrained devices.
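The central idea of the abstract, regressing each output pixel from a single input pixel, can be illustrated with a minimal sketch. The per-pixel least-squares fit below is only an assumption for illustration and is not the authors' method or code; the image resolution, training-set size, and random stand-in data are hypothetical.

```python
# Minimal sketch (not the authors' code): each output pixel is predicted from
# only the corresponding input pixel via an independent per-pixel affine
# regression. Data, resolution, and fitting procedure are illustrative.
import numpy as np

H, W_IMG = 128, 128                              # assumed face-image resolution
rng = np.random.default_rng(0)

n_train = 200                                    # "a few hundred training images"
X = rng.random((n_train, H, W_IMG))              # neutral faces (stand-in data)
Y = rng.random((n_train, H, W_IMG))              # expressive faces (stand-in data)

# Fit y_p ~ w_p * x_p + b_p independently at every pixel location p,
# so each output pixel observes only one input pixel.
x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
cov = ((X - x_mean) * (Y - y_mean)).mean(axis=0)
var = ((X - x_mean) ** 2).mean(axis=0) + 1e-8
W_map = cov / var                                # per-pixel slope map
b_map = y_mean - W_map * x_mean                  # per-pixel bias map

def synthesize(face):
    """Apply the learned per-pixel mapping to a new face image."""
    return np.clip(W_map * face + b_map, 0.0, 1.0)

out = synthesize(rng.random((H, W_IMG)))
print(out.shape)  # (128, 128)
```

Under this toy formulation the entire model is just two H x W parameter maps, which is consistent with the abstract's point that a pixel-based model can be far smaller than a GAN generator.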
Year
2020
DOI
10.1109/ICPR48806.2021.9413065
Venue
2020 25th International Conference on Pattern Recognition (ICPR)
Keywords
Facial expression synthesis, Image-to-image translation, Generative Adversarial Network, Pixel-based, Regression, Kernel Regression
DocType
Conference
ISSN
1051-4651
Citations
0
PageRank
0.34
References
0
Authors
2
Name            Order   Citations   PageRank
Arbish Akram    1       0           0.34
Nazar Khan      2       15          6.38