Polarized Image Translation from Nonpolarized Cameras for Multimodal Face Anti-Spoofing

Yu Tian, Yalin Huang, Kunbo Zhang*, Yue Liu*, Zhenan Sun

*Corresponding authors of this work

Research output: Contribution to journal › Article › peer-review

Abstract

In face anti-spoofing, it is desirable to have multimodal images that reveal liveness cues from various perspectives. However, in most face recognition scenarios only a single modality, namely visible-light (VIS) facial images, is available. This paper first investigates the possibility of generating polarized (Polar) images from VIS cameras, without changing the existing recognition devices, to improve the accuracy and robustness of Presentation Attack Detection (PAD) in face biometrics. A novel multimodal face anti-spoofing framework is proposed based on a machine-learned mapping between VIS and Polar images of genuine faces. Specifically, a dual-modal central difference convolutional network (CDCN) is developed to capture the inherent spoofing features across the VIS and the generated Polar modalities. Quantitative and qualitative experimental results show that the proposed framework not only generates realistic Polar face images but also improves on state-of-the-art face anti-spoofing results on the VIS-modality database (i.e., CASIA-SURF). Moreover, a polarized face database, CASIA-Polar, has been constructed and will be shared with the public at https://biometrics.idealtest.org to inspire future applications within the biometric anti-spoofing field.
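The central difference convolution (CDC) underlying CDCN-style networks blends a vanilla convolution with a central-difference term weighted by a factor theta, which emphasizes fine-grained gradient cues useful for spoof detection. Below is a minimal PyTorch sketch of this operator based on the commonly published CDC formulation; it does not reproduce the paper's dual-modal architecture or VIS-to-Polar generator, and the hyperparameters (kernel size 3, theta = 0.7) and the toy dual-stream fusion at the end are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDifferenceConv2d(nn.Module):
    """Central difference convolution (CDC): the output equals the vanilla
    convolution minus theta times (sum of kernel weights) * centre pixel.
    Setting theta = 0 recovers a plain convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out_vanilla = self.conv(x)
        if self.theta == 0.0:
            return out_vanilla
        # Collapse each kernel to its spatial sum -> a 1x1 kernel, so that
        # F.conv2d(x, kernel_diff) gives x(p0) * sum_k w_k at every location.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, stride=self.conv.stride, padding=0)
        return out_vanilla - self.theta * out_diff


if __name__ == "__main__":
    # Hypothetical dual-stream use: one CDC stem per modality (VIS and a
    # generated Polar image), features concatenated before a classifier head.
    vis = torch.randn(2, 3, 64, 64)      # VIS face crops
    polar = torch.randn(2, 3, 64, 64)    # generated Polar face crops
    stem_vis = CentralDifferenceConv2d(3, 16)
    stem_polar = CentralDifferenceConv2d(3, 16)
    fused = torch.cat([stem_vis(vis), stem_polar(polar)], dim=1)
    print(fused.shape)  # torch.Size([2, 32, 64, 64])
```

The subtraction form follows from expanding theta * sum_k w_k * (x(p0 + p_k) - x(p0)) + (1 - theta) * sum_k w_k * x(p0 + p_k), which simplifies to the vanilla response minus theta times the kernel sum applied to the centre pixel.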

Original language: English
Pages (from-to): 5651-5664
Number of pages: 14
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOI
Publication status: Published - 2023
