An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers

Yutian Zhou, Yu-an Tan, Quanxin Zhang, Xiaohui Kuang*, Yahong Han, Jingjing Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Deep neural networks are susceptible to tiny crafted adversarial perturbations, which are typically added to every pixel of an image to craft an adversarial example. Most existing adversarial attacks reduce the L2 distance between the adversarial image and the source image to a minimum but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while limiting the L2 distance. An adversarial example is generated by flipping pixels of the target image, so that a small number of pixels come from the target image and the remaining pixels come from the source image. Experiments show that our attack method steadily generates high-quality adversarial examples, and it performs especially well on large-scale images.
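The abstract describes searching for an image that mixes source and target pixels so that only a small number of pixels (small L0 distance) come from the target. Below is a minimal, hedged sketch of that idea as a simple evolutionary search over binary pixel masks; the function names, population settings, and fitness design are illustrative assumptions, and the paper's actual algorithm (including its bisection step for bounding the L2 distance) is not reproduced here.

```python
import numpy as np

def evolve_pixel_mask(source, target, predict, target_label,
                      pop_size=20, generations=200, mutation_rate=0.01, rng=None):
    """Illustrative sketch only (not the paper's algorithm).

    Searches for a binary mask (1 = take pixel from the target image,
    0 = keep the source pixel) that stays adversarial while flipping as
    few pixels as possible.  `predict` is the black-box classifier
    returning a label for an image of shape (H, W, C).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = source.shape[:2]
    # Start from masks that copy most of the target image, which is
    # likely to already carry the target label.
    pop = rng.random((pop_size, h, w)) < 0.9

    def compose(mask):
        # Mix source and target pixel-wise according to the mask.
        return np.where(mask[..., None], target, source)

    def fitness(mask):
        # Invalid if the mixed image loses the adversarial label;
        # otherwise prefer fewer flipped pixels (smaller L0 distance).
        if predict(compose(mask)) != target_label:
            return -np.inf
        return -mask.sum()

    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        # Offspring: copy surviving parents and randomly flip a few mask bits.
        flips = rng.random(parents.shape) < mutation_rate
        children = np.logical_xor(parents, flips)
        pop = np.concatenate([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    return compose(best), best
```

In this sketch the L0 distance is simply the number of 1-bits in the returned mask; a bisection-style refinement, as mentioned in the abstract, could further shrink that count while keeping the example adversarial.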

Original language: English
Pages (from-to): 1616-1629
Number of pages: 14
Journal: Mobile Networks and Applications
Volume: 26
Issue number: 4
DOIs
Publication status: Published - Aug 2021

Keywords

  • Adversarial examples
  • Black-box attack
  • Evolutionary algorithm
  • Neural networks
