The security of machine learning in an adversarial setting: A survey

Xianmin Wang, Jing Li, Xiaohui Kuang, Yu-an Tan, Jin Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

154 Citations (Scopus)

Abstract

Machine learning (ML) methods have demonstrated impressive performance in many application fields such as autopilot, facial recognition, and spam detection. Traditionally, ML models are trained and deployed in a benign setting, in which the testing and training data have identical statistical characteristics. However, this assumption usually does not hold when the ML model is deployed in an adversarial setting, where some statistical properties of the data can be tampered with by a capable adversary. Specifically, it has been observed that adversarial examples (also known as adversarial input perturbations) elaborately crafted during the training/test phases can seriously undermine ML performance. The susceptibility of ML models in adversarial settings and the corresponding countermeasures have been studied by many researchers in both the academic and industrial communities. In this work, we present a comprehensive overview of the investigation of the security properties of ML algorithms under adversarial settings. First, we analyze the ML security model to develop a blueprint for this interdisciplinary research area. Then, we review adversarial attack methods and discuss the defense strategies against them. Finally, relying upon the reviewed work, we outline promising directions of future work for designing more secure ML models.
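As a concrete illustration of the kind of test-time attack such a survey reviews, the sketch below applies a fast gradient sign method (FGSM)-style input perturbation in PyTorch. This is a generic example, not the paper's own method; the model, input features, label, and epsilon budget are placeholders chosen only for illustration.

import torch
import torch.nn as nn

# Minimal FGSM-style sketch: perturb an input in the direction that
# increases the loss, within an L-infinity budget epsilon.
# Model and data below are placeholders, not from the surveyed paper.

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # clean input (placeholder features)
y = torch.tensor([1])                        # assumed true label
epsilon = 0.1                                # perturbation budget

# Gradient of the loss with respect to the input (not the weights).
loss = loss_fn(model(x), y)
loss.backward()

# Step along the sign of the input gradient to craft the adversarial example.
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

With a trained classifier, such a small, sign-based perturbation is often enough to flip the predicted label, which is the vulnerability the survey's attack and defense discussion centers on.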

Original language: English
Pages (from-to): 12-23
Number of pages: 12
Journal: Journal of Parallel and Distributed Computing
Volume: 130
DOI
Publication status: Published - Aug 2019
