Polarized Image Translation from Nonpolarized Cameras for Multimodal Face Anti-Spoofing

Yu Tian, Yalin Huang, Kunbo Zhang*, Yue Liu*, Zhenan Sun

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In face anti-spoofing, it is desirable to have multimodal images that reveal liveness cues from various perspectives. However, in most face recognition scenarios, only a single modality, namely visible light (VIS) facial images, is available. This paper first investigates the possibility of generating polarized (Polar) images from VIS cameras, without changing the existing recognition devices, to improve the accuracy and robustness of Presentation Attack Detection (PAD) in face biometrics. A novel multimodal face anti-spoofing framework is proposed based on the learned relationship between VIS and Polar images of genuine faces. Specifically, a dual-modal central difference convolutional network (CDCN) is developed to capture the inherent spoofing features between the VIS and the generated Polar modalities. Quantitative and qualitative experimental results show that our proposed framework not only generates realistic Polar face images but also improves the state-of-the-art face anti-spoofing results on the VIS-modality database (i.e., CASIA-SURF). Moreover, a Polar face database, CASIA-Polar, has been constructed and will be shared with the public at https://biometrics.idealtest.org to inspire future applications within the biometric anti-spoofing field.
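
For readers unfamiliar with the building block named in the abstract, the sketch below illustrates a central difference convolution layer in the style popularized by CDCN (Yu et al., CVPR 2020) together with a minimal two-branch fusion of VIS and generated Polar feature maps. This is an illustrative PyTorch sketch under assumed layer shapes, class names, and blending factor theta; it is not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """3x3 convolution blended with a central-difference term.

    Assumption: theta=0.7, a value commonly used in CDCN-style networks;
    the paper's exact hyperparameters may differ.
    """
    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)  # ordinary convolution response
        if self.theta == 0:
            return out
        # Central-difference term: equivalent to convolving with the kernel
        # weights summed into a 1x1 filter applied at each pixel's centre.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, stride=self.conv.stride)
        return out - self.theta * out_diff

class DualModalHead(nn.Module):
    """Hypothetical two-branch head: VIS and generated Polar images are
    processed separately, concatenated, and mapped to a live/spoof score."""
    def __init__(self):
        super().__init__()
        self.vis_branch = nn.Sequential(CentralDifferenceConv2d(3, 16), nn.ReLU())
        self.polar_branch = nn.Sequential(CentralDifferenceConv2d(3, 16), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, vis, polar):
        feats = torch.cat([self.vis_branch(vis), self.polar_branch(polar)], dim=1)
        return self.classifier(feats)  # logit for the live/spoof decision

# Usage with random tensors standing in for a VIS image and its translated Polar image.
vis = torch.randn(1, 3, 256, 256)
polar = torch.randn(1, 3, 256, 256)
print(DualModalHead()(vis, polar).shape)  # torch.Size([1, 1])

The dual-modal CDCN described in the paper is of course deeper than this toy head; the sketch only shows how the central-difference operator and the two-branch VIS/Polar fusion fit together.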

Original language: English
Pages (from-to): 5651-5664
Number of pages: 14
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOIs
Publication status: Published - 2023

Keywords

  • Face antispoofing
  • image translation
  • multimodal
  • polarization
