An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers

Yutian Zhou, Yu an Tan, Quanxin Zhang, Xiaohui Kuang*, Yahong Han, Jingjing Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Deep neural networks are susceptible to tiny crafted adversarial perturbations, which are typically added to every pixel of an image to craft an adversarial example. Most existing adversarial attacks can reduce the L2 distance between the adversarial image and the source image to a minimum but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while limiting the L2 distance. The adversarial example is generated by flipping pixels to the values of the target image, so that a small number of its pixels come from the target image while the rest come from the source image. Experiments show that our attack method reliably generates high-quality adversarial examples, and it performs especially well on large-scale images.
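The abstract describes the mechanism only at a high level. As an illustration, here is a minimal Python sketch of the pixel-flipping idea, assuming a grayscale image, a toy stand-in classifier, and hypothetical helper names (`apply_mask`, `l0_attack`); it evolves a boolean flip mask to shrink the L0 count while staying adversarial, and it omits the paper's bisection step for bounding the L2 distance. This is not the authors' implementation.

```python
# Illustrative sketch only: evolve a sparse set of pixels copied from a
# "target" image into the "source" image so a black-box classifier flips
# its prediction, then shrink that set (the L0 distance). All names and
# the toy classifier are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(img):
    # Stand-in black-box model: "classifies" an image by mean intensity.
    return int(img.mean() > 0.5)

def apply_mask(source, target, mask):
    # Build a candidate: pixels where mask is True are taken from target.
    out = source.copy()
    out[mask] = target[mask]
    return out

def l0_attack(source, target, classify, generations=200, pop=20):
    src_label = classify(source)
    h, w = source.shape
    # Start fully flipped: adversarial as long as the target image itself
    # is classified differently from the source image.
    best = np.ones((h, w), dtype=bool)
    for _ in range(generations):
        children = []
        for _ in range(pop):
            child = best.copy()
            # Mutation: un-flip a few random pixels, lowering the L0 count.
            child.flat[rng.integers(0, h * w, size=3)] = False
            children.append(child)
        # Selection: keep the sparsest child that still fools the model.
        for child in sorted(children, key=lambda m: int(m.sum())):
            if classify(apply_mask(source, target, child)) != src_label:
                best = child
                break
    return apply_mask(source, target, best), int(best.sum())

source = np.zeros((8, 8))  # toy "source" image, classified as 0
target = np.ones((8, 8))   # toy "target" image, classified as 1
adv, l0 = l0_attack(source, target, toy_classifier)
print("flipped pixels (L0):", l0, "adversarial label:", toy_classifier(adv))
```

On this toy model the loop converges toward the smallest mask that still changes the prediction; a real attack would query an actual classifier and, per the abstract, additionally constrain the L2 distance.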

Original language: English
Pages (from-to): 1616-1629
Number of pages: 14
Journal: Mobile Networks and Applications
Volume: 26
Issue number: 4
DOI
Publication status: Published - Aug 2021
