TY - GEN
T1 - Free Adversarial Training with Layerwise Heuristic Learning
AU - Zhang, Haitao
AU - Shi, Yucheng
AU - Dong, Benyu
AU - Han, Yahong
AU - Li, Yuanzhang
AU - Kuang, Xiaohui
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Due to the existence of adversarial attacks, various applications that employ deep neural networks (DNNs) have been under threat. Adversarial training enhances the robustness of DNN-based systems by augmenting the training data with adversarial samples. Projected gradient descent adversarial training (PGD AT), one of the most promising defense methods, can resist strong attacks, but it incurs heavy computational cost and slow convergence. We propose “free” adversarial training with layerwise heuristic learning (LHFAT) to remedy these problems. To reduce the heavy computational cost, we couple model parameter updates with projected gradient descent (PGD) adversarial-example updates while retraining the same mini-batch of data, so that the adversarial updates come “for free” without extra gradient computations. The learning rate reflects the speed of weight updating, while the weight gradient indicates its efficiency. If weights are repeatedly updated in opposite directions within one training epoch, many of those updates are redundant. To achieve more efficient weight updating, we design a new learning scheme, layerwise heuristic learning, which accelerates training convergence by restraining redundant weight updates and boosting efficient ones in each layer according to weight gradient information. We demonstrate that LHFAT yields better defense performance on CIFAR-10 with approximately 8% of the GPU training time of PGD AT, and LHFAT is also validated on ImageNet. We have released the code for our proposed method LHFAT at https://github.com/anonymous530/LHFAT.
AB - Due to the existence of adversarial attacks, various applications that employ deep neural networks (DNNs) have been under threat. Adversarial training enhances the robustness of DNN-based systems by augmenting the training data with adversarial samples. Projected gradient descent adversarial training (PGD AT), one of the most promising defense methods, can resist strong attacks, but it incurs heavy computational cost and slow convergence. We propose “free” adversarial training with layerwise heuristic learning (LHFAT) to remedy these problems. To reduce the heavy computational cost, we couple model parameter updates with projected gradient descent (PGD) adversarial-example updates while retraining the same mini-batch of data, so that the adversarial updates come “for free” without extra gradient computations. The learning rate reflects the speed of weight updating, while the weight gradient indicates its efficiency. If weights are repeatedly updated in opposite directions within one training epoch, many of those updates are redundant. To achieve more efficient weight updating, we design a new learning scheme, layerwise heuristic learning, which accelerates training convergence by restraining redundant weight updates and boosting efficient ones in each layer according to weight gradient information. We demonstrate that LHFAT yields better defense performance on CIFAR-10 with approximately 8% of the GPU training time of PGD AT, and LHFAT is also validated on ImageNet. We have released the code for our proposed method LHFAT at https://github.com/anonymous530/LHFAT.
UR - http://www.scopus.com/inward/record.url?scp=85117102826&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-87358-5_10
DO - 10.1007/978-3-030-87358-5_10
M3 - Conference contribution
AN - SCOPUS:85117102826
SN - 9783030873578
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 120
EP - 131
BT - Image and Graphics - 11th International Conference, ICIG 2021, Proceedings
A2 - Peng, Yuxin
A2 - Hu, Shi-Min
A2 - Gabbouj, Moncef
A2 - Zhou, Kun
A2 - Elad, Michael
A2 - Xu, Kun
PB - Springer Science and Business Media Deutschland GmbH
T2 - 11th International Conference on Image and Graphics, ICIG 2021
Y2 - 6 August 2021 through 8 August 2021
ER -
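NOTE - The abstract above describes two ideas: (1) "free" adversarial training, where each backward pass on a replayed mini-batch updates both the model weights and the PGD perturbation, and (2) layerwise heuristic learning, which scales each layer's update according to how consistently its gradients point in one direction. The following PyTorch-style sketch is only an illustrative reading of that description, not the authors' implementation (see the repository at https://github.com/anonymous530/LHFAT). The function name free_at_lhl_epoch, the replay count m, the bound epsilon, and the sign-consistency scaling rule are all assumptions introduced here for illustration.

    # Hypothetical sketch of a "free" adversarial training epoch with a
    # layerwise heuristic scaling of weight updates. Assumptions: m replays
    # per mini-batch, L-infinity bound epsilon, and a per-layer consistency
    # factor in [0, 1]; none of these details are given in the abstract.
    import torch
    import torch.nn.functional as F

    def free_at_lhl_epoch(model, loader, optimizer, epsilon=8/255, m=4, device="cpu"):
        model.train()
        # Running sums of gradients and |gradients| per layer: a layer whose
        # gradients keep flipping sign accumulates a small signed sum relative
        # to its magnitude sum, so its update is damped (redundant updates).
        grad_sum = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        grad_abs = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            delta = torch.zeros_like(x, requires_grad=True)  # persistent perturbation
            for _ in range(m):  # replay the same mini-batch m times ("free" updates)
                loss = F.cross_entropy(model(x + delta), y)
                optimizer.zero_grad()
                loss.backward()
                # (1) Ascend on the input, reusing the gradient already computed
                #     for the weight update (image-range clipping omitted).
                delta.data = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon)
                delta.grad.zero_()
                # (2) Descend on the weights, scaled per layer by the heuristic.
                with torch.no_grad():
                    for n, p in model.named_parameters():
                        if p.grad is None:
                            continue
                        grad_sum[n] += p.grad
                        grad_abs[n] += p.grad.abs()
                        consistency = grad_sum[n].abs().sum() / (grad_abs[n].sum() + 1e-12)
                        # Factor in [0, 1] damps oscillating layers; the paper also
                        # boosts efficient layers, which is simplified away here.
                        p.grad.mul_(consistency)
                optimizer.step()

Usage would follow the ordinary pattern (construct a model, a DataLoader, and an optimizer, then call free_at_lhl_epoch once per epoch); because the m replays reuse a single backward pass for both the perturbation and the weights, the extra adversarial-example updates add no additional gradient computations, which is the source of the reported training-time savings.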