Abstract
Deep learning models are widely used in almost every field of computing and information processing. The advantages offered by these models are unparalleled; however, like any other computing discipline, they are vulnerable to security threats. A compromised deep neural network can suffer significant losses in robustness and accuracy. In this work, we present a novel targeted attack method against the state-of-the-art object detection models YOLO v3 and AWS Rekognition in a black-box environment. We present an improved attack method that applies the Discrete Cosine Transform to the Boundary Attack++ mechanism, and use it to attack object detectors both offline and online. By querying the victim detection models and transforming the images from the spatial domain into the frequency domain, we ensure that any specified object in an image can be successfully recognized as any other desired class by YOLO v3 and AWS Rekognition. The results show that our method significantly boosts boundary attacks against offline and online object detection systems.
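The abstract describes perturbing images in the frequency domain rather than the spatial domain. As a minimal illustrative sketch (not the authors' implementation), the snippet below shows the basic idea of such a step: take the 2-D DCT of an image, add noise only to the low-frequency coefficients, and invert the transform. The function name, `epsilon`, and `low_freq_ratio` are illustrative assumptions, not parameters from the paper.

```python
# Hedged sketch of a DCT-domain perturbation, the core idea behind
# DCT-based boundary attacks. Assumes an H x W x C float image in [0, 1].
import numpy as np
from scipy.fft import dctn, idctn


def dct_perturb(image, epsilon=0.05, low_freq_ratio=0.25, rng=None):
    """Add random noise to the low-frequency DCT coefficients of `image`
    and transform the result back to the spatial domain."""
    rng = rng or np.random.default_rng()

    # Type-II, orthonormal 2-D DCT over the spatial axes only.
    coeffs = dctn(image, axes=(0, 1), norm="ortho")

    # Mask selecting the top-left (low-frequency) block of coefficients.
    h, w = image.shape[:2]
    mask = np.zeros_like(coeffs)
    mask[: int(h * low_freq_ratio), : int(w * low_freq_ratio)] = 1.0

    # Perturb only the masked coefficients, then invert the transform.
    perturbed = coeffs + rng.standard_normal(coeffs.shape) * epsilon * mask
    adv = idctn(perturbed, axes=(0, 1), norm="ortho")
    return np.clip(adv, 0.0, 1.0)
```

In a black-box boundary attack, a step like this would replace the spatial-domain random perturbation used when walking along the decision boundary of the victim model; the detector (e.g. YOLO v3 or AWS Rekognition) is only queried on the resulting images.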
| Original language | English |
|---|---|
| Pages (from-to) | 596-607 |
| Number of pages | 12 |
| Journal | Information Sciences |
| Volume | 546 |
| DOI | |
| Publication status | Published - 6 Feb 2021 |