Abstract
In face anti-spoofing, it is desirable to have multimodal images that reveal liveness cues from various perspectives. However, in most face recognition scenarios, only a single modality, namely visible light (VIS) facial images, is available. This paper first investigates the possibility of generating polarized (Polar) images from VIS cameras, without changing the existing recognition devices, to improve the accuracy and robustness of Presentation Attack Detection (PAD) in face biometrics. A novel multimodal face anti-spoofing framework is proposed based on a learned relationship between VIS and Polar images of genuine faces. Specifically, a dual-modal central difference convolutional network (CDCN) is developed to capture the inherent spoofing features between the VIS and the generated Polar modalities. Quantitative and qualitative experimental results show that our proposed framework not only generates realistic Polar face images but also improves the state-of-the-art face anti-spoofing results on the VIS-modality database (i.e., CASIA-SURF). Moreover, a Polar face database, CASIA-Polar, has been constructed and will be shared with the public at https://biometrics.idealtest.org to inspire future applications within the biometric anti-spoofing field.
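The central difference convolution that gives CDC networks their name blends a vanilla convolution with a central-difference term, which emphasises local intensity-gradient cues useful for spotting spoofing artifacts. A minimal NumPy sketch of one such operator on a single-channel image is shown below; the function name, the `theta` blending parameter, and the zero-padding choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """Sketch of a central difference convolution (CDC) layer:
    out = theta * central-difference term + (1 - theta) * vanilla term.
    x: (H, W) input, w: (k, k) kernel; 'same' zero padding, stride 1."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad, mode="constant")  # zero padding (an assumption)
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            vanilla = np.sum(patch * w)              # standard convolution response
            central = np.sum((patch - x[i, j]) * w)  # response to local differences
            out[i, j] = theta * central + (1 - theta) * vanilla
    return out
```

With `theta = 0` this reduces to an ordinary convolution; with `theta = 1` a constant region produces zero response, so only texture and edge structure survives. In practice such a layer is implemented as a modified `Conv2d` inside a deep network rather than an explicit loop.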
| Original language | English |
| --- | --- |
| Pages (from-to) | 5651-5664 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Information Forensics and Security |
| Volume | 18 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- Face anti-spoofing
- image translation
- multimodal
- polarization