Enhancing fairness in face detection in computer vision systems by demographic bias mitigation
2022
Fairness has become an important agenda in computer vision and artificial intelligence. Recent studies have shown that many computer vision models and datasets exhibit demographic biases and have proposed mitigation strategies. These works address accuracy disparities, spurious correlations, or unbalanced representations in datasets, in tasks such as face recognition, face verification, and facial expression and attribute classification. All of these tasks, however, require face detection as a first preprocessing step, and surprisingly, there has been little effort to identify or mitigate biases in face detection. Biased face detectors themselves pose a threat to fair and ethical AI systems, and their biases may be passed on to subsequent downstream tasks, such as face recognition, in a computer vision pipeline. This paper therefore investigates the problem of bias in face detection, focusing on the accuracy disparity of detectors between demographic groups defined by gender, age group, and skin tone. We collect perceived demographic attributes for a popular face detection benchmark dataset, WIDER FACE, report its skewed demographic distributions, and compare detection performance between groups. To mitigate the biases, we apply three mitigation methods introduced in the recent literature and also propose two novel methods. Experimental results show that these methods are effective in reducing demographic biases. We also discuss how their effectiveness varies with the demographic attribute, the detection difficulty of a face, and the choice of detector, which sheds light on this new topic of addressing face detection bias.
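The central measurement the abstract describes, accuracy disparity between demographic groups, can be illustrated with a minimal sketch. It assumes each ground-truth face carries a perceived demographic label (as collected for WIDER FACE) and a flag indicating whether a detector matched it at a fixed IoU threshold; the `FaceRecord` structure, all function names, and the toy numbers below are hypothetical illustrations, not the paper's code or results.

```python
# Hypothetical sketch (not the paper's released code): per-group detection
# recall and a max-min disparity gap, given ground-truth faces annotated with
# perceived demographic attributes and a detector-match flag.
from collections import defaultdict
from typing import Iterable, NamedTuple

class FaceRecord(NamedTuple):
    group: str       # e.g., a gender, age-group, or skin-tone label
    detected: bool   # True if a detector box matched this face (IoU >= 0.5)

def group_recall(records: Iterable[FaceRecord]) -> dict[str, float]:
    """Detection recall (true-positive rate) for each demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r.group] += 1
        hits[r.group] += int(r.detected)
    return {g: hits[g] / totals[g] for g in totals}

def disparity(recalls: dict[str, float]) -> float:
    """Max-min recall gap across groups: 0.0 means equal performance."""
    return max(recalls.values()) - min(recalls.values())

# Toy usage with made-up numbers, purely illustrative:
records = (
    [FaceRecord("group_a", True)] * 95 + [FaceRecord("group_a", False)] * 5
    + [FaceRecord("group_b", True)] * 88 + [FaceRecord("group_b", False)] * 12
)
recalls = group_recall(records)
print(recalls)                        # {'group_a': 0.95, 'group_b': 0.88}
print(f"gap = {disparity(recalls):.2f}")  # gap = 0.07
```

A mitigation method can then be judged by whether it shrinks this gap without collapsing overall recall, which matches how the abstract frames effectiveness.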