Title
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
Abstract
Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias.
Year
2020
DOI
10.1109/CVPR42600.2020.00894
Venue
CVPR
DocType
Conference
Citations
0
PageRank
0.34
References
37
Authors
7
Name              Order  Citations  PageRank
Zeyu Wang         1      3          3.76
Klint Qinami      2      0          0.34
Yannis Karakozis  3      0          0.34
Kyle Genova       4      29         3.83
Prem Nair         5      0          0.34
Kenji Hata        6      0          0.34
Olga Russakovsky  7      5337       237.13