The security of machine learning in an adversarial setting: A survey

Xianmin Wang, Jing Li, Xiaohui Kuang, Yu-an Tan, Jin Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

154 Citations (Scopus)

Abstract

Machine learning (ML) methods have demonstrated impressive performance in many application fields such as autopilot, facial recognition, and spam detection. Traditionally, ML models are trained and deployed in a benign setting, in which the training and testing data share identical statistical characteristics. However, this assumption often fails to hold when the ML model operates in an adversarial setting, where some statistical properties of the data can be tampered with by a capable adversary. Specifically, it has been observed that adversarial examples (also known as adversarial input perturbations) elaborately crafted during the training or test phase can seriously undermine ML performance. The susceptibility of ML models in adversarial settings and the corresponding countermeasures have been studied by many researchers in both academia and industry. In this work, we present a comprehensive overview of the security properties of ML algorithms under adversarial settings. First, we analyze the ML security model to develop a blueprint for this interdisciplinary research area. Then, we review adversarial attack methods and discuss the defense strategies against them. Finally, building on the reviewed work, we outline promising directions for future research on designing more secure ML models.
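The survey reviews many attack methods; as a minimal illustration of the test-phase attacks described above, the sketch below crafts an adversarial example with the Fast Gradient Sign Method (Goodfellow et al.), one of the best-known perturbation attacks. It assumes a differentiable PyTorch classifier `model` and an input batch with values in [0, 1]; the function name and the epsilon budget are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    model   -- a differentiable classifier returning logits
    x, y    -- a clean input batch and its true labels
    epsilon -- maximum L-infinity perturbation budget (assumed value)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep inputs in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```

A small perturbation produced this way is often imperceptible to a human yet flips the model's prediction, which is the failure mode the survey's defense strategies aim to mitigate.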

Original language: English
Pages (from-to): 12-23
Number of pages: 12
Journal: Journal of Parallel and Distributed Computing
Volume: 130
Publication status: Published - Aug 2019

Keywords

  • Adversarial attack
  • Adversarial example
  • Adversarial setting
  • Machine learning
  • Security model
