TY - GEN
T1 - An Adversarial Attack Method for Multivariate Time Series Classification Based on AdvGAN
AU - Wang, Yubo
AU - He, Hui
AU - Zhang, Peng
AU - Ma, Yuanchi
AU - Lei, Zhongxiang
AU - Niu, Zhendong
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025
Y1 - 2025
N2 - Considering the complexity of time series data and real-world applications, multivariate time series classification models are vulnerable to adversarial attacks. Although existing white-box attack strategies have made progress in generating adversarial samples, they rely on access to the target model’s parameters, training data, and gradients. Therefore, we apply the AdvGAN framework to multivariate time series classification. AdvGAN is a framework based on Generative Adversarial Networks (GANs), encompassing a generator and a discriminator. The generator creates multivariate perturbations, which are combined with the original data to form adversarial samples; the discriminator assesses the authenticity of these samples. These samples are then used to evaluate the security of the target model. We conduct experiments across three University of East Anglia (UEA) and University of California Riverside (UCR) datasets, employing the Multivariate Long Short Term Memory Fully Convolutional Network (MLSTM_FCN) as the target model for adversarial attack testing. The results indicate that our designed attack method effectively enhances the success rate of adversarial attacks while maintaining a similar level of Mean Squared Error (MSE) between the generated adversarial samples and the original samples.
AB - Considering the complexity of time series data and real-world applications, multivariate time series classification models are vulnerable to adversarial attacks. Although existing white-box attack strategies have made progress in generating adversarial samples, they rely on access to the target model’s parameters, training data, and gradients. Therefore, we apply the AdvGAN framework to multivariate time series classification. AdvGAN is a framework based on Generative Adversarial Networks (GANs), encompassing a generator and a discriminator. The generator creates multivariate perturbations, which are combined with the original data to form adversarial samples; the discriminator assesses the authenticity of these samples. These samples are then used to evaluate the security of the target model. We conduct experiments across three University of East Anglia (UEA) and University of California Riverside (UCR) datasets, employing the Multivariate Long Short Term Memory Fully Convolutional Network (MLSTM_FCN) as the target model for adversarial attack testing. The results indicate that our designed attack method effectively enhances the success rate of adversarial attacks while maintaining a similar level of Mean Squared Error (MSE) between the generated adversarial samples and the original samples.
KW - Adversarial Attack
KW - Generative Adversarial Network
KW - Multivariate Time Series Classification
UR - http://www.scopus.com/inward/record.url?scp=86000713282&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-2409-6_19
DO - 10.1007/978-981-96-2409-6_19
M3 - Conference contribution
AN - SCOPUS:86000713282
SN - 9789819624089
T3 - Lecture Notes in Electrical Engineering
SP - 194
EP - 202
BT - Proceedings of the 2023 International Conference on Wireless Communications, Networking and Applications
A2 - Siarry, Patrick
A2 - Jabbar, M.A.
A2 - Cheung, Simon King Sing
A2 - Li, Xiaolong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th International Conference on Wireless Communications, Networking and Applications, WCNA 2023
Y2 - 29 December 2023 through 31 December 2023
ER -