Towards a physical-world adversarial patch for blinding object detection models

Yajie Wang, Haoran Lv, Xiaohui Kuang, Gang Zhao, Yu an Tan, Quanxin Zhang, Jingjing Hu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

52 Citations (Scopus)

Abstract

As one of the core components of computer vision, object detection models play a vital role in various security-sensitive systems. However, it has been shown that object detection models are vulnerable to adversarial attacks. In this paper, we propose a novel adversarial patch attack against object detection models. Our attack can make objects of a specific class invisible to object detection models. We design a detection score to measure the detection model's output and generate the adversarial patch by minimizing this score. We successfully suppress the model's inference and fool several state-of-the-art object detection models, achieving a minimum recall of 11.02% and a maximum fooling rate of 81.00%, and we demonstrate the high transferability of the adversarial patch across different architectures and datasets. Finally, we successfully fool a real-time object detection system in the physical world, demonstrating the feasibility of transferring the digital adversarial patch to the physical world. Our work illustrates the vulnerability of object detection models to adversarial patch attacks in both the digital and physical worlds.
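The central mechanism described in the abstract, generating a patch by minimizing a detection score computed from the detector's output, can be illustrated with a minimal sketch. The sketch below assumes a differentiable detector that returns per-box class confidences; the `detector` interface, the max-over-boxes aggregation in `detection_score`, and the fixed patch size and placement are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of adversarial patch optimization by minimizing a detection
# score. All interfaces here are assumptions for illustration.
import torch


def apply_patch(images, patch, top=20, left=20):
    """Overwrite a fixed region of each image in the batch with the patch.

    Fixed placement is a simplifying assumption; the attack in the paper may
    place and transform the patch differently.
    """
    patched = images.clone()
    _, ph, pw = patch.shape  # patch: (3, ph, pw)
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched


def detection_score(class_confidences, target_class):
    """Aggregate the detector's confidence for the target class.

    `class_confidences` is assumed to be a (num_boxes, num_classes) tensor;
    driving this score towards zero suppresses detections of the target class.
    """
    return class_confidences[:, target_class].max()


def train_patch(detector, loader, target_class, patch_size=64, steps=1000, lr=0.01):
    """Optimize a patch so the detector no longer reports the target class."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _, images in zip(range(steps), loader):
        patched = apply_patch(images, patch)
        confidences = detector(patched)  # assumed (num_boxes, num_classes)
        loss = detection_score(confidences, target_class)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep the patch a valid image
    return patch.detach()
```

In practice the score would be averaged over the batch and combined with terms that keep the patch printable for the physical-world setting; the sketch only shows the core minimization loop.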

Original language: English
Pages (from-to): 459-471
Number of pages: 13
Journal: Information Sciences
Volume: 556
DOIs
Publication status: Published - May 2021

Keywords

  • Adversarial attack
  • Adversarial patch
  • Deep neural network
  • Object detection model

