Effective and Robust Physical-World Attacks on Deep Learning Face Recognition Systems

Meng Shen*, Hao Yu, Liehuang Zhu*, Ke Xu, Qi Li, Jiankun Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

34 Citations (Scopus)

Abstract

Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems. Recent studies, however, show that DNNs are vulnerable to adversarial examples, which can potentially mislead DNN-based FR systems in the physical world. Existing attacks either generate perturbations that work only in the digital world, or rely on customized equipment to produce perturbations that are not robust in the ever-changing physical environment. In this paper, we propose FaceAdv, a physical-world attack that crafts adversarial stickers to deceive FR systems. It mainly consists of a sticker generator and a convertor: the former crafts several stickers with different shapes, while the latter digitally attaches the stickers to human faces and provides feedback to the generator to improve their effectiveness. We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems (i.e., ArcFace, CosFace and FaceNet). The results show that, compared with a state-of-the-art attack, FaceAdv significantly improves the success rates of both dodging and impersonation attacks. We also conduct comprehensive evaluations to demonstrate the robustness of FaceAdv.
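The generator/convertor feedback loop described in the abstract can be illustrated with a compact training sketch. The following PyTorch snippet is a minimal illustration only, not the authors' implementation: `StickerGenerator`, `attach_sticker`, `fr_model`, and the fixed sticker placement are all assumed placeholder names, and the paper's shaped stickers, realistic attachment to facial regions, and physical-world robustness measures are omitted.

```python
# Illustrative sketch: a toy generator crafts a square sticker, a
# convertor-style function digitally pastes it onto a face image, and the
# face recognition model's embedding provides feedback to the generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StickerGenerator(nn.Module):
    """Toy generator mapping a latent vector to a small RGB sticker in [-1, 1]."""
    def __init__(self, latent_dim=64, sticker_size=32):
        super().__init__()
        self.sticker_size = sticker_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * sticker_size * sticker_size), nn.Tanh(),
        )

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.sticker_size, self.sticker_size)

def attach_sticker(face, sticker, top, left):
    """Digitally paste the sticker onto the face image (stand-in for the convertor)."""
    _, _, h, w = face.shape
    s = sticker.shape[-1]
    pad = (left, w - left - s, top, h - top - s)   # (left, right, top, bottom)
    patch = F.pad((sticker + 1) / 2, pad)          # sticker mapped to [0, 1]
    mask = F.pad(torch.ones_like(sticker), pad)    # 1 where the sticker sits
    return face * (1 - mask) + patch * mask

def attack_step(generator, fr_model, face, ref_emb, optimizer, impersonate=True):
    """One update: craft a sticker, attach it, and score it against the FR model."""
    z = torch.randn(face.shape[0], 64)
    adv_face = attach_sticker(face, generator(z), top=40, left=48)
    emb = F.normalize(fr_model(adv_face), dim=-1)
    cos = (emb * F.normalize(ref_emb, dim=-1)).sum(dim=-1)
    # Impersonation pulls the embedding toward the target identity;
    # dodging pushes it away from the victim's own identity.
    loss = (1 - cos).mean() if impersonate else cos.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full attack, many such steps would be run against a frozen FR backbone (e.g., ArcFace, CosFace, or FaceNet embeddings), and the convertor would additionally apply realistic placement and environmental transformations before the stickers are printed and worn.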

Original language: English
Article number: 9505665
Pages (from-to): 4063-4077
Number of pages: 15
Journal: IEEE Transactions on Information Forensics and Security
Volume: 16
DOIs
Publication status: Published - 2021

Keywords

  • Adversarial examples
  • adversarial stickers
  • face recognition systems

