Abstract
Deep neural networks are susceptible to tiny crafted adversarial perturbations, which are typically added to every pixel of an image to produce an adversarial example. Most existing adversarial attacks minimize the L2 distance between the adversarial image and the source image but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while limiting the L2 distance. An adversarial example is generated by flipping pixels of the source image to their values in the target image, so that a small number of pixels come from the target image and the remaining pixels come from the source image. Experiments show that our attack method reliably generates high-quality adversarial examples, and it performs especially well on large-scale images.
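The pixel-flipping idea described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the function names (`blend`, `l0_distance`, `l2_distance`) and the hand-picked mask are assumptions, and the evolutionary search and bisection that choose *which* pixels to flip are omitted. It only shows how a candidate adversarial image is composed from source and target pixels and how the two distances are measured.

```python
import numpy as np

def blend(source, target, mask):
    """Compose a candidate adversarial image: pixels where mask is True
    take their values from the target image, all others from the source.
    (Sketch of the pixel-flipping idea, not the paper's actual code.)"""
    mask3 = mask[..., None]  # broadcast the per-pixel mask over RGB channels
    return np.where(mask3, target, source)

def l0_distance(a, b):
    """Number of pixel positions that differ in any channel."""
    return int(np.any(a != b, axis=-1).sum())

def l2_distance(a, b):
    """Euclidean distance between the two images, flattened."""
    return float(np.linalg.norm((a - b).astype(np.float64)))

# Toy 4x4 RGB images standing in for source and target.
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (4, 4, 3))
target = rng.integers(0, 256, (4, 4, 3))

# Hypothetical mask: flip only two pixel positions to the target's values.
# In the paper, an evolutionary search would decide which pixels to flip.
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = mask[2, 3] = True

adv = blend(source, target, mask)
print(l0_distance(adv, source))  # small: only the flipped pixels differ
print(l2_distance(adv, source))
```

A search procedure would then shrink the mask (e.g. by bisecting the number of flipped pixels) while keeping the classifier's output adversarial, trading off the L0 count against the L2 magnitude.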
Original language | English
---|---
Pages (from-to) | 1616-1629
Number of pages | 14
Journal | Mobile Networks and Applications
Volume | 26
Issue number | 4
DOIs |
Publication status | Published - Aug 2021
Keywords
- Adversarial examples
- Black-box attack
- Evolutionary algorithm
- Neural networks