Abstract
Deep neural networks (DNNs) are widely used in AI-controlled cyber-physical systems (CPS) to control cars, robots, water treatment plants, and railways. However, DNNs are vulnerable to well-designed input samples known as adversarial examples. Adversarial attacks are an important technique for detecting vulnerabilities and improving the security of neural networks. Existing attacks, including the state-of-the-art black-box attack, have a low success rate and issue invalid queries that do not help to obtain the direction for generating adversarial examples. For these reasons, this paper proposes a CMA-ES-based adversarial attack on black-box DNNs. Firstly, an efficient method for reducing the number of invalid queries is introduced. Secondly, a black-box attack that generates adversarial examples by fitting a high-dimensional independent Gaussian distribution of the local solution space is proposed. Finally, a new CMA-based perturbation compression method is applied to make the process of reducing the perturbation smoother. Experimental results on ImageNet classifiers show that the proposed attack achieves a higher success rate than the state-of-the-art black-box attack while reducing the number of queries by 30%.
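To illustrate the general idea of an evolution-strategy black-box attack with an independent (diagonal-covariance) Gaussian search distribution, a minimal sketch is given below. It is not the authors' exact algorithm: the `query_model` stub, the `attack` function, its parameter names, and the rank-based update rule are illustrative assumptions, and the real attack would query an actual classifier and include the paper's invalid-query reduction and perturbation compression steps.

```python
import numpy as np

def query_model(x):
    """Hypothetical black-box classifier returning class scores.
    Stand-in for the real query interface, which is not specified here."""
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return rng.random(10)

def attack(x_orig, true_label, max_queries=1000, pop_size=20, sigma=0.05):
    """Sketch of a separable evolution-strategy black-box attack.

    Each candidate perturbation is drawn from an independent Gaussian per
    coordinate, mirroring the use of a high-dimensional independent Gaussian
    model of the local solution space; the update rule below is illustrative."""
    dim = x_orig.size
    mean = np.zeros(dim)                 # mean of the search distribution
    var = np.ones(dim) * sigma ** 2      # per-coordinate variance (diagonal covariance)
    queries = 0

    while queries + pop_size <= max_queries:
        # Sample a population of perturbations from the independent Gaussian.
        z = np.random.randn(pop_size, dim) * np.sqrt(var) + mean
        losses = np.empty(pop_size)
        for i in range(pop_size):
            x_adv = np.clip(x_orig.ravel() + z[i], 0.0, 1.0).reshape(x_orig.shape)
            scores = query_model(x_adv)
            queries += 1
            # Untargeted objective: push the true-class score below the best other class.
            losses[i] = scores[true_label] - np.max(np.delete(scores, true_label))
            if losses[i] < 0:            # misclassification achieved
                return x_adv, queries

        # Rank-based update: move the distribution toward the best half of the population.
        elite = z[np.argsort(losses)[: pop_size // 2]]
        mean = elite.mean(axis=0)
        var = elite.var(axis=0) + 1e-12  # re-estimate per-coordinate variance

    return None, queries                 # attack failed within the query budget
```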
Original language | English |
---|---|
Article number | 8917642 |
Pages (from-to) | 172938-172947 |
Number of pages | 10 |
Journal | IEEE Access |
Volume | 7 |
DOI | |
Publication status | Published - 2019 |