A CMA-ES-Based Adversarial Attack Against Black-Box Object Detectors

LYU Haoran*, TAN Yu'an*, XUE Yuan*, WANG Yajie*, XUE Jingfeng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Object detection is one of the essential tasks of computer vision. Object detectors based on deep neural networks are increasingly used in safety-sensitive applications such as face recognition, video surveillance, and autonomous driving. It has been shown that object detectors are vulnerable to adversarial attacks. We propose a novel black-box attack method that can successfully attack both regression-based and region-based object detectors. Using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the primary method to generate adversarial examples, we introduce techniques that reduce the search dimension of the optimization problem and thereby the number of queries. Our method adds adversarial perturbations only within the object box to achieve a precise attack. The proposed attack can hide a specified object with an attack success rate of 86% and an average of 5,124 queries, and hide all objects with a success rate of 74% and an average of 6,154 queries. Our work illustrates the effectiveness of CMA-ES for generating adversarial examples and demonstrates the vulnerability of object detectors to adversarial attacks.
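
Below is a minimal sketch of the attack loop summarized in the abstract, assuming the open-source Python `cma` package (pip install cma). The detector interface detector_score, the bounding-box input, and the 16x16 coarse grid used for dimension reduction are illustrative placeholders, not the paper's exact fitness function or reduction scheme.

    import numpy as np
    import cma

    def attack_box(image, box, detector_score, sigma0=0.05, max_queries=6000):
        """Search for a perturbation confined to `box` that lowers the
        detector's confidence for the target object (black-box, query-only).

        image          -- H x W x 3 float array with values in [0, 1]
        box            -- (x1, y1, x2, y2) region allowed to be perturbed
        detector_score -- callable(image) -> confidence of the target object;
                          treated as a black box, one query per call
        """
        x1, y1, x2, y2 = box
        h, w = y2 - y1, x2 - x1
        # Dimension reduction (illustrative): optimize a coarse 16x16x3 patch
        # and upsample it to the box size, rather than every pixel in the box.
        gh, gw = 16, 16
        rows = np.arange(h) * gh // h  # nearest-neighbor row indices
        cols = np.arange(w) * gw // w  # nearest-neighbor column indices

        es = cma.CMAEvolutionStrategy(np.zeros(gh * gw * 3), sigma0,
                                      {'maxfevals': max_queries, 'verbose': -9})
        queries = 0
        while not es.stop():
            candidates = es.ask()
            fitnesses = []
            for z in candidates:
                patch = z.reshape(gh, gw, 3)
                up = patch[rows[:, None], cols[None, :], :]  # upsample to box
                adv = image.copy()
                # Perturb only inside the object box, keeping pixels in [0, 1].
                adv[y1:y2, x1:x2] = np.clip(adv[y1:y2, x1:x2] + up, 0.0, 1.0)
                fitnesses.append(detector_score(adv))  # lower score is better
                queries += 1
            es.tell(candidates, fitnesses)
        return es.result.xbest, queries

CMA-ES fits this setting because it is derivative-free: it only needs the detector's output score per query, which matches the black-box threat model described above.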

Original language: English
Pages (from-to): 406-412
Number of pages: 7
Journal: Chinese Journal of Electronics
Volume: 30
Issue number: 3
Publication status: Published - May 2021

Keywords

  • Adversarial example
  • Black-box attack
  • Deep neural network
  • Object detector
