Effective and Robust Physical-World Attacks on Deep Learning Face Recognition Systems

Meng Shen*, Hao Yu, Liehuang Zhu*, Ke Xu, Qi Li, Jiankun Hu

*Corresponding authors of this work

Research output: Contribution to journal › Article › peer-reviewed

36 Citations (Scopus)

Abstract

Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems. Recent studies, however, show that DNNs are vulnerable to adversarial examples, which potentially mislead DNN-based FR systems in the physical world. Existing attacks either generate perturbations working merely in the digital world, or rely on customized equipment to generate perturbations that are not robust in the ever-changing physical environment. In this paper, we propose FaceAdv, a physical-world attack that crafts adversarial stickers to deceive FR systems. It mainly consists of a sticker generator and a convertor, where the former can craft several stickers with different shapes while the latter aims to digitally attach stickers to human faces and provide feedback to the generator to improve the effectiveness. We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking three typical FR systems (i.e., ArcFace, CosFace and FaceNet). The results show that compared with a state-of-the-art attack, FaceAdv can significantly improve the success rates of both dodging and impersonating attacks. We also conduct comprehensive evaluations to demonstrate the robustness of FaceAdv.
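The abstract describes a feedback loop: a generator crafts stickers, a convertor digitally attaches them to face images, and the FR model's response is fed back to improve the stickers. The paper's actual components are learned networks; the toy sketch below only illustrates the loop's shape under stated assumptions (a random linear map stands in for the FR embedder, a fixed-location 3×3 patch stands in for the sticker, and numerical gradient descent stands in for generator training — none of these are FaceAdv's real design).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not FaceAdv's real components):
# - a "face" is an 8x8 grayscale image
# - the FR model is a fixed random linear embedder W (64 -> 16)
# - the sticker is a 3x3 patch pasted at a fixed spot by the convertor
W = rng.normal(size=(16, 64))

def embed(img):
    """Toy FR embedding: linear map followed by L2 normalization."""
    v = W @ img.ravel()
    return v / np.linalg.norm(v)

def attach(face, sticker, r=2, c=2):
    """Convertor: digitally paste the sticker onto the face image."""
    out = face.copy()
    out[r:r + 3, c:c + 3] = np.clip(sticker, 0.0, 1.0)
    return out

face = rng.uniform(size=(8, 8))
template = embed(face)              # enrolled identity embedding
sticker = rng.uniform(size=(3, 3))  # "generator" output, optimized directly

def cos_sim(s):
    """Similarity between the stickered face and the enrolled template."""
    return float(embed(attach(face, s)) @ template)

sim_before = cos_sim(sticker)

# Dodging attack: push similarity to the enrolled template down by
# numerical gradient descent on the sticker pixels (the feedback loop).
eps, lr = 1e-4, 0.5
for _ in range(200):
    grad = np.zeros_like(sticker)
    for i in range(3):
        for j in range(3):
            d = np.zeros_like(sticker)
            d[i, j] = eps
            grad[i, j] = (cos_sim(sticker + d) - cos_sim(sticker - d)) / (2 * eps)
    sticker = np.clip(sticker - lr * grad, 0.0, 1.0)

sim_after = cos_sim(sticker)
print(f"similarity before={sim_before:.3f} after={sim_after:.3f}")
```

An impersonation attack follows the same loop with the sign flipped: ascend the similarity to a target identity's template instead of descending it.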

Original language: English
Article number: 9505665
Pages (from-to): 4063-4077
Number of pages: 15
Journal: IEEE Transactions on Information Forensics and Security
Volume: 16
DOI
Publication status: Published - 2021
