TY - GEN
T1 - Decrease the Prompt Uncertainty
T2 - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024
AU - Weng, Jinta
AU - Zhang, Zhaoguang
AU - Jing, Yaqi
AU - Niu, Chenxu
AU - Huang, Heyan
AU - Hu, Yue
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - With few-shot learning abilities, pre-trained language models (PLMs) have achieved remarkable success in classification tasks. However, recent studies have shown that PLM performance is vulnerable to different prompts and to the instability of the prompt-based learning process. To address this challenge, we explore adding appropriate perturbations through adversarial training and integrate the global knowledge of the full-parameter fine-tuned PLM. Specifically, we propose an adversarial prompt learning model (ATPET) and ATPET with fine-tuning (ATPET-FT), which incorporates fine-tuning knowledge into the prompt learning process. Through extensive experiments on several few-shot classification tasks and challenging data settings, we demonstrate that our methods consistently improve robustness while maintaining the effectiveness of PLMs.
AB - With few-shot learning abilities, pre-trained language models (PLMs) have achieved remarkable success in classification tasks. However, recent studies have shown that PLM performance is vulnerable to different prompts and to the instability of the prompt-based learning process. To address this challenge, we explore adding appropriate perturbations through adversarial training and integrate the global knowledge of the full-parameter fine-tuned PLM. Specifically, we propose an adversarial prompt learning model (ATPET) and ATPET with fine-tuning (ATPET-FT), which incorporates fine-tuning knowledge into the prompt learning process. Through extensive experiments on several few-shot classification tasks and challenging data settings, we demonstrate that our methods consistently improve robustness while maintaining the effectiveness of PLMs.
UR - http://www.scopus.com/inward/record.url?scp=85217878068&partnerID=8YFLogxK
U2 - 10.1109/SMC54092.2024.10831613
DO - 10.1109/SMC54092.2024.10831613
M3 - Conference contribution
AN - SCOPUS:85217878068
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 1230
EP - 1236
BT - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 6 October 2024 through 10 October 2024
ER -