Abstract
Object detection is one of the essential tasks in computer vision. Object detectors based on deep neural networks are increasingly used in safety-sensitive applications such as face recognition, video surveillance, and autonomous driving. It has been shown that object detectors are vulnerable to adversarial attacks. We propose a novel black-box attack method that can successfully attack both regression-based and region-based object detectors. Using the covariance matrix adaptation evolution strategy (CMA-ES) as the core method for generating adversarial examples, we introduce techniques that reduce the search dimension of the optimization problem and the number of queries. Our method adds adversarial perturbations only inside the object's bounding box, enabling a precise attack. The proposed attack hides a specified object with a success rate of 86% and an average of 5,124 queries, and hides all objects with a success rate of 74% and an average of 6,154 queries. Our work illustrates the effectiveness of CMA-ES for generating adversarial examples and demonstrates the vulnerability of object detectors to adversarial attacks.
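The following is a minimal sketch of the general idea described in the abstract, not the authors' implementation: it uses CMA-ES (via the `cma` package) to search for a perturbation, restricted to an object's bounding box, that lowers a black-box detector's confidence for that object. The `query_detector` oracle is a hypothetical stand-in for querying the target detector, and the paper's additional dimension- and query-reduction techniques are omitted.

```python
import numpy as np
import cma

def cma_es_attack(image, box, query_detector, sigma0=0.1, max_queries=6000):
    """Search for a box-restricted perturbation that hides the target object.

    image: float array in [0, 1] of shape (H, W, 3)
    box: (x0, y0, x1, y1) bounding box of the target object
    query_detector: black-box oracle returning the object's confidence score
    """
    x0, y0, x1, y1 = box
    h, w = y1 - y0, x1 - x0
    dim = h * w * 3  # perturb only the pixels inside the box

    es = cma.CMAEvolutionStrategy(np.zeros(dim), sigma0,
                                  {'maxfevals': max_queries})
    while not es.stop():
        candidates = es.ask()
        scores = []
        for delta in candidates:
            adv = image.copy()
            patch = delta.reshape(h, w, 3)
            adv[y0:y1, x0:x1] = np.clip(adv[y0:y1, x0:x1] + patch, 0.0, 1.0)
            scores.append(query_detector(adv))  # lower score = object better hidden
        es.tell(candidates, scores)  # CMA-ES minimizes the objective by default

    # Apply the best perturbation found within the query budget
    best = es.result.xbest.reshape(h, w, 3)
    adv = image.copy()
    adv[y0:y1, x0:x1] = np.clip(adv[y0:y1, x0:x1] + best, 0.0, 1.0)
    return adv
```

Restricting the perturbation to the bounding box already shrinks the search space substantially, which mirrors the dimension-reduction idea the abstract mentions; without it, CMA-ES would have to adapt a covariance matrix over the full image, which is impractical at typical resolutions.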
Original language | English
---|---
Pages (from-to) | 406-412
Number of pages | 7
Journal | Chinese Journal of Electronics
Volume | 30
Issue number | 3
DOIs |
Publication status | Published - May 2021
Keywords
- Adversarial example
- Black-box attack
- Deep neural network
- Object detector