
Basic Article Information

  • Title: A Study of Gender Bias in Face Presentation Attack and Its Mitigation
  • Authors: Norah Alshareef; Xiaohong Yuan; Kaushik Roy
  • Journal: Future Internet
  • Electronic ISSN: 1999-5903
  • Year: 2021
  • Volume: 13
  • Issue: 9
  • Pages: 234
  • DOI: 10.3390/fi13090234
  • Language: English
  • Publisher: MDPI
  • Abstract: In biometric systems, identifying or verifying people from facial data must be highly accurate to ensure security and credibility. Many researchers have investigated the fairness of face recognition systems and reported demographic bias, but face presentation attack detection (PAD) technology has received little scrutiny in this respect. This research sheds light on bias in face spoofing detection through two phases. First, two CNN (convolutional neural network)-based presentation attack detection models, ResNet50 and VGG16, were used to evaluate the fairness of detecting impostor attacks with respect to gender. In addition, different sizes of Spoof in the Wild (SiW) training and testing data were used in the first phase to study the effect of gender distribution on the models' performance. Second, the debiasing variational autoencoder (DB-VAE) (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) was applied in combination with VGG16 to assess its ability to mitigate bias in presentation attack detection. Our experiments exposed minor gender bias in CNN-based presentation attack detection methods. They also showed that imbalance in training and testing data does not necessarily lead to gender bias in a model's performance, and that the DB-VAE approach succeeded in mitigating bias in detecting spoofed faces.
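The DB-VAE mitigation referenced in the abstract works by learning latent structure over the training faces and then re-weighting samples inversely to their frequency in that latent space, so under-represented feature combinations are drawn more often during training. The following is a minimal numpy sketch of that adaptive-resampling idea only; the latent codes, histogram bins, and smoothing constant are illustrative assumptions, not values from the paper, and the actual DB-VAE learns its latent space with a VAE rather than receiving it precomputed.

```python
import numpy as np

def debias_sampling_weights(latents, bins=10, alpha=0.01):
    """Sketch of DB-VAE-style adaptive resampling: estimate each
    sample's density in latent space via per-dimension histograms,
    then weight samples inversely to that density so rare samples
    (e.g., an under-represented demographic group) are oversampled."""
    n, d = latents.shape
    density = np.ones(n)
    for j in range(d):
        # density=True normalizes the histogram; alpha smooths empty bins
        hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, bins - 1)
        density *= hist[idx] + alpha
    weights = 1.0 / density
    return weights / weights.sum()  # normalized sampling probabilities

# Toy latent codes: a dense majority cluster plus a small minority cluster.
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0.0, 0.1, (95, 2)),   # majority group
               rng.normal(3.0, 0.1, (5, 2))])   # minority group
w = debias_sampling_weights(z)
print(w[95:].mean() > w[:95].mean())  # minority samples get larger weights
```

In the paper's setting, the weights would drive minibatch sampling for the VGG16-based PAD model, boosting the share of whichever gender's spoof/live faces the latent model finds rare.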