Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding

Huipeng Zhou, Yajie Wang*, Yu An Tan, Shangbo Wu, Yuhang Zhao, Quanxin Zhang, Yuanzhang Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, deep neural networks (DNNs) have made significant progress on face recognition (FR). However, DNNs have been found to be vulnerable to adversarial examples, which can lead to fatal consequences in real-world applications. This article focuses on improving the transferability of adversarial examples against FR models. We propose gradient eroding (GE), which makes the gradients of residual blocks more diverse by dynamically eroding the back-propagation. Building on GE, we also propose a novel black-box adversarial attack named the corrasion attack. Extensive experiments demonstrate that our approach effectively improves the transferability of adversarial attacks against FR models, outperforming state-of-the-art black-box attacks by 29.35% in fooling rate. By leveraging adversarial training with the adversarial examples we generate, model robustness can be improved by up to 43.2%. In addition, the corrasion attack successfully breaks two online FR systems, achieving a fooling rate of up to 89.8%.
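The record above does not include the algorithmic details of GE, but the general idea of dynamically eroding back-propagation through residual blocks can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption made for illustration: the hook-based random masking, the erode_prob parameter, the ResNet block names, and the plain iterative attack loop stand in for (and are not) the paper's actual GE and corrasion attack procedures.

```python
import torch
from torch.nn import functional as F

def make_eroding_hook(erode_prob=0.3):
    """Backward hook that randomly zeroes ('erodes') a fraction of the
    gradient flowing back through a residual block, so successive attack
    iterations see more diverse gradients."""
    def hook(module, grad_input, grad_output):
        eroded = []
        for g in grad_input:
            if g is None:
                eroded.append(None)
                continue
            mask = (torch.rand_like(g) > erode_prob).to(g.dtype)
            # Rescale so the expected gradient magnitude is preserved.
            eroded.append(g * mask / (1.0 - erode_prob))
        return tuple(eroded)
    return hook

def attach_gradient_eroding(model, erode_prob=0.3):
    """Attach the eroding hook to every ResNet-style residual block
    (BasicBlock / Bottleneck) of a surrogate model."""
    for m in model.modules():
        if type(m).__name__ in ("BasicBlock", "Bottleneck"):
            m.register_full_backward_hook(make_eroding_hook(erode_prob))

def iterative_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Plain iterative sign-gradient attack; the diversified (eroded)
    gradients come from the hooks attached above."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, x - eps), x + eps).clamp(0.0, 1.0)
    return adv.detach()
```

In such a setup, attach_gradient_eroding(surrogate_model) would be called once, and adversarial faces crafted with iterative_attack would then be transferred to the black-box FR model under attack.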

Original language: English
Pages (from-to): 412-419
Number of pages: 8
Journal: IEEE Transactions on Artificial Intelligence
Volume: 5
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2024

Keywords

  • Adversarial example
  • black-box attack
  • face recognition (FR)
  • transfer attack
  • transferability
