Title
A Study of Gender Bias in Face Presentation Attack and Its Mitigation
Abstract
In biometric systems, the process of identifying or verifying people using facial data must be highly accurate to ensure a high level of security and credibility. Many researchers have investigated the fairness of face recognition systems and reported demographic bias. However, face presentation attack detection (PAD) technology has received little study with respect to bias. This research sheds light on bias in face spoofing detection through two phases. First, two CNN (convolutional neural network)-based presentation attack detection models, ResNet50 and VGG16, were used to evaluate the fairness of detecting impostor attacks with respect to gender. In addition, training and testing subsets of different sizes were drawn from the Spoof in the Wild (SiW) dataset in the first phase to study the effect of gender distribution on the models' performance. Second, the debiasing variational autoencoder (DB-VAE) (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) was applied in combination with VGG16 to assess its ability to mitigate bias in presentation attack detection. Our experiments exposed minor gender bias in the CNN-based presentation attack detection methods. They also showed that imbalance in training and testing data does not necessarily lead to gender bias in a model's performance. The results indicate that the DB-VAE approach succeeded in mitigating bias in detecting spoofed faces.
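The core of the mitigation phase is the DB-VAE's adaptive resampling: a VAE branch learns the latent structure of the face images, and training samples that fall in sparsely populated latent regions (for example, an underrepresented gender) are drawn more often. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the small convolutional encoder stands in for the VGG16 backbone used in the paper, and the latent size, histogram bin count, KL weight, and smoothing constant alpha are illustrative assumptions rather than the paper's settings.

# A minimal sketch of the DB-VAE idea from Amini et al.; hyperparameters
# and network sizes here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

LATENT_DIM = 32  # assumed latent dimensionality

class DBVAE(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.latent_dim = latent_dim
        # Encoder outputs a spoof/bona-fide logit plus the latent mu and
        # logvar. Small stand-in for a VGG16 backbone; assumes 3x64x64 crops.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1 + 2 * latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        logit = h[:, 0]
        mu = h[:, 1:1 + self.latent_dim]
        logvar = h[:, 1 + self.latent_dim:]
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return logit, mu, logvar, self.decoder(z)

def dbvae_loss(x, y, logit, mu, logvar, x_hat, kl_weight=5e-4):
    # Supervised PAD loss on every sample; the reconstruction and KL terms
    # train the VAE branch whose latent density drives the resampling below.
    cls = F.binary_cross_entropy_with_logits(logit, y.float())
    rec = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return cls + rec + kl_weight * kl

def resampling_probs(mu_all, bins=10, alpha=0.01):
    # Approximate the latent density with per-dimension histograms and give
    # each sample a weight proportional to the inverse of its density, so
    # rare latent regions are drawn more often. Combining dimensions with
    # np.maximum follows one common variant of the DB-VAE resampling rule.
    mu_all = mu_all.detach().cpu().numpy()
    w = np.zeros(mu_all.shape[0])
    for d in range(mu_all.shape[1]):
        hist, edges = np.histogram(mu_all[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(mu_all[:, d], edges[:-1]) - 1, 0, bins - 1)
        w = np.maximum(w, 1.0 / (hist[idx] + alpha))
    return w / w.sum()  # per-sample sampling probabilities

The returned probabilities could feed a torch.utils.data.WeightedRandomSampler so that each epoch re-draws the training batches according to the current latent density estimate.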
Year
2021
DOI
10.3390/fi13090234
Venue
FUTURE INTERNET
Keywords
gender bias, presentation attack detection, debiasing variational autoencoder, convolutional neural network
DocType
Journal
Volume
13
Issue
9
Citations
0
PageRank
0.34
References
0
Authors
4
Name             Order  Citations  PageRank
Norah Alshareef  1      0          0.34
Xiaohong Yuan    2      169        26.72
Kaushik Roy      3      3          1.46
Mustafa Atay     4      85         9.85