A Fair Model is Not Fair in a Biased Environment

Yuya Sato, Soshi Maeda, Muku Akasaka, Masakatsu Nishigaki, Tetsushi Ohki
APSIPA ASC: Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Nov. 2022.
[ Paper ]

Abstract

Facial images contain sensitive attributes such as skin color, and eliminating them from the input to face recognition is not easy. Moreover, the input data reflects the environment in which the system is actually deployed, so interactions between sensitive attributes and the environment can make it inherently difficult for the facial feature extractor to extract facial features. Studies on the fairness of face recognition should therefore take environmental factors into account. Common datasets used to evaluate the fairness of face recognition include a variety of environmental factors, and the fairness evaluated on these datasets is typically fairness under a typical shooting environment. However, a dataset containing only extremely biased environmental factors can result in less equity among attributes. We construct a dataset with pseudo-biased environmental factors by dynamically changing environmental factors such as brightness in the test data. Our results show that biased environmental factors degrade inter-attribute fairness. We also show that which attributes stand out in terms of fairness in a biased environment varies with the model architecture and the training dataset.
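As an illustration of the dataset-construction step, the sketch below shows one way to synthesize a brightness-biased test set from an existing one. It uses PIL's ImageEnhance.Brightness as a stand-in for whatever perturbation pipeline the paper actually applies; the function name, directory layout, and factor values are all hypothetical, not the paper's exact protocol.

```python
# Minimal sketch: render a test set under a fixed brightness bias.
# Assumptions (not from the paper): JPEG images in a flat directory,
# a single multiplicative brightness factor per simulated environment.
from pathlib import Path
from PIL import Image, ImageEnhance

def apply_brightness_bias(src_dir: str, dst_dir: str, factor: float) -> None:
    """Re-save every image in src_dir with brightness scaled by `factor`.

    factor < 1.0 darkens, factor > 1.0 brightens, 1.0 is unchanged.
    """
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        biased = ImageEnhance.Brightness(img).enhance(factor)
        biased.save(out / path.name)

# Sweep factors from very dark to very bright to simulate a range of
# pseudo-biased shooting environments for fairness evaluation.
for factor in (0.25, 0.5, 1.0, 1.5, 2.0):
    apply_brightness_bias("test_images", f"test_images_b{factor}", factor)
```

Evaluating the same face recognition model on each perturbed copy then isolates how a single environmental factor shifts per-attribute error rates.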
