An adversarial attack on DNN-based black-box object detectors

Yajie Wang, Yu-an Tan, Wenjiao Zhang, Yuhang Zhao, Xiaohui Kuang*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

46 Citations (Scopus)

Abstract

Object detection models play an essential role in various IoT devices as one of their core components. Experiments have shown that object detection models are vulnerable to adversarial examples. To date, several attack methods against object detection models have been proposed, but existing methods can only attack white-box models or a specific type of black-box model. In this paper, we propose a novel black-box attack method called Evaporate Attack, which can successfully attack both regression-based and region-based detection models. To perform an effective attack on different types of object detection models, we design an optimization algorithm that generates adversarial examples using only the position and label information of the model's predictions. Evaporate Attack can hide objects from detection models without any internal information about the model, a scenario that is far more practical for a real-world attacker. Our approach achieves an 84% fooling rate on regression-based YOLOv3 and a 48% fooling rate on region-based Faster R–CNN, under the premise that all objects are hidden.
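The abstract does not spell out the optimization procedure, so the following is only a rough illustration of the kind of query-only loop it describes: candidate perturbations are scored purely by the boxes and labels the detector returns, never by gradients or internal scores. This is a generic random-search sketch, not the paper's Evaporate Attack; the detect callable, the L_inf budget eps, and every other parameter are hypothetical placeholders.

# Hedged illustration only: a generic score-based (query-only) loop for hiding
# objects from a black-box detector. This is NOT the paper's Evaporate Attack;
# the `detect` callable and every hyperparameter here are assumptions.
import numpy as np

def hide_objects(image, detect, eps=16.0, iters=500, sigma=4.0, seed=0):
    """Random-search sketch: perturb `image` until `detect` returns no boxes.

    `detect(img)` is a hypothetical black-box detector returning a list of
    (box, label) predictions; only its outputs (positions and labels) are
    used, matching the black-box setting described in the abstract.
    """
    rng = np.random.default_rng(seed)
    image = image.astype(np.float32)
    adv = image.copy()
    best_count = len(detect(np.clip(adv, 0, 255).astype(np.uint8)))

    for _ in range(iters):
        if best_count == 0:                       # every object is hidden
            break
        # Propose a small random perturbation (no gradient access needed).
        candidate = adv + rng.normal(0.0, sigma, size=adv.shape)
        # Keep the perturbation inside an L_inf ball of radius eps.
        candidate = np.clip(candidate, image - eps, image + eps)
        candidate = np.clip(candidate, 0, 255)
        count = len(detect(candidate.astype(np.uint8)))
        if count <= best_count:                   # accept non-worse candidates
            adv, best_count = candidate, count

    return np.clip(adv, 0, 255).astype(np.uint8), best_count

In practice one would wrap a real detector inference call (for example, a YOLOv3 or Faster R-CNN API) as detect and tune the perturbation budget and iteration count; the actual Evaporate Attack uses its own optimization algorithm rather than plain random search.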

Original language: English
Article number: 102634
Journal: Journal of Network and Computer Applications
Volume: 161
Publication status: Published - 1 Jul 2020

Keywords

  • Adversarial example
  • Black-box attack
  • Deep neural network
  • Object detector
