Title
Adversarial Removal of Gender from Deep Image Representations
Abstract
In this work, we analyze visual recognition tasks such as object and action recognition and demonstrate the extent to which these tasks are correlated with features corresponding to a protected variable such as gender. We introduce the concept of natural leakage to measure the intrinsic reliance of a task on a protected variable, and show that machine learning models trained for these tasks tend to exacerbate this reliance on gender features. To address this, we use adversarial training to remove unwanted features corresponding to protected variables from the intermediate representations of a deep neural network. Experiments on two datasets, COCO (objects) and imSitu (actions), show reductions in the extent to which models rely on gender features while maintaining most of the accuracy of the original models. Our approach even surpasses a strong baseline that blurs or removes people from images using ground-truth annotations. Moreover, an autoencoder-augmented model provides interpretable visual evidence that the approach performs semantically meaningful removal of gender features, and thus can also be used to remove gender attributes directly from images.
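The adversarial removal the abstract describes can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming a gradient-reversal-style adversary attached to an intermediate representation; all module names, dimensions, and the lambd weight are illustrative assumptions, not the authors' actual architecture or training configuration.

```python
# Hedged sketch of adversarial removal of a protected attribute from an
# intermediate representation via a gradient reversal layer. Shapes and
# hyperparameters are illustrative, not the paper's actual settings.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) the gradient on
    the backward pass, so the encoder is trained to hurt the adversary."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the encoder; no grad for lambd.
        return -ctx.lambd * grad_output, None


class DebiasedRecognizer(nn.Module):
    def __init__(self, feat_dim=512, num_classes=80, lambd=1.0):
        super().__init__()
        # Encoder producing the intermediate representation to be debiased
        # (a real model would use a CNN backbone; 2048-dim pooled features
        # are assumed here for illustration).
        self.encoder = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        # Main task head, e.g., multi-label object recognition.
        self.task_head = nn.Linear(feat_dim, num_classes)
        # Adversary trying to predict gender from the representation.
        self.adv_head = nn.Linear(feat_dim, 2)
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        # The adversary sees z through the reversal layer: it learns to
        # predict gender, while the reversed gradient pushes the encoder
        # to discard the gender cues the adversary relies on.
        adv_logits = self.adv_head(GradientReversal.apply(z, self.lambd))
        return task_logits, adv_logits


if __name__ == "__main__":
    # One illustrative training step with dummy data.
    model = DebiasedRecognizer()
    x = torch.randn(4, 2048)
    task_logits, adv_logits = model(x)
    loss = (nn.BCEWithLogitsLoss()(task_logits, torch.zeros(4, 80))
            + nn.CrossEntropyLoss()(adv_logits, torch.randint(0, 2, (4,))))
    loss.backward()  # reversed gradients flow into the encoder
```

In each step, the task loss and the adversary's cross-entropy loss are summed; because the gradient is reversed between the encoder and the adversary, minimizing the combined loss simultaneously trains the adversary to predict gender and the encoder to remove the features it exploits.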
Year
2018
Venue
arXiv: Computer Vision and Pattern Recognition
DocType
Journal
Volume
abs/1811.08489
Citations
0
PageRank
0.34
References
0
Authors
5
Name             Order  Citations  PageRank
Tianlu Wang      1      38         4.68
Jieyu Zhao       2      50         5.89
Mark Yatskar     3      176        11.14
Kai-Wei Chang    4      4735       276.81
Vicente Ordonez  5      1418       69.65