Adversarial Attacks against Traffic Sign Detection for Autonomous Driving

Feiyang Xu, Ying Li*, Chao Yang, Weida Wang, Bin Xu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

Abstract

Deep neural networks play a crucial role in 2D object detection based on visual data, but they are also vulnerable to adversarial samples. Attackers manipulate low-resolution images to execute data poisoning attacks. This paper introduces a method to generate realistic high-resolution adversarial samples aimed at compromising traffic sign detection models. Specifically, we propose a high-resolution adversarial sample framework built upon generative adversarial networks. Subsequently, an adversarial traffic sign detection model is developed to investigate the impact of data poisoning. To enhance the model's robustness, we conduct adversarial training. Experimental results demonstrate the efficacy of our data poisoning approach in misleading the detection model. Furthermore, the detection model exhibits improved robustness against such attacks following adversarial training.
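The paper's GAN-based high-resolution generator is not reproduced here, but the core idea it builds on, an adversarial example (a small input perturbation that flips a model's decision), can be illustrated with the classic Fast Gradient Sign Method on a toy logistic classifier. This is a minimal sketch under illustrative assumptions, not the authors' method; all weights and values below are made up.

```python
import math

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a toy logistic classifier.

    Illustrative stand-in for adversarial-sample generation: perturb the
    input x by eps in the sign of the loss gradient, so the model's
    confidence in the true label y drops as much as possible per step.
    """
    # Forward pass: predicted probability of the positive class.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of the binary cross-entropy loss w.r.t. the input: (p - y) * w.
    grad = [(p - y) * wi for wi in w]
    # Move each feature by eps in the direction that increases the loss.
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Hypothetical classifier and input: correctly classified before the attack.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 1.0], 1
x_adv = fgsm(x, y, w, b, eps=0.5)
```

With these illustrative numbers the clean input scores sigmoid(1.0) ≈ 0.73 (correct), while the perturbed input [0.5, 1.5] scores sigmoid(-0.5) ≈ 0.38 and is misclassified. Adversarial training, as used in the paper, would mix such perturbed samples back into the training set.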

Original language: English
Host publication: Proceedings of the 2023 7th CAA International Conference on Vehicular Control and Intelligence, CVCI 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9798350340488
DOI
Publication status: Published - 2023
Event: 7th CAA International Conference on Vehicular Control and Intelligence, CVCI 2023 - Changsha, China
Duration: 27 Oct 2023 - 29 Oct 2023

Publication series

Name: Proceedings of the 2023 7th CAA International Conference on Vehicular Control and Intelligence, CVCI 2023

Conference

Conference: 7th CAA International Conference on Vehicular Control and Intelligence, CVCI 2023
Country/Territory: China
City: Changsha
Period: 27/10/23 - 29/10/23

