TY - GEN
T1 - CONSPROMPT
T2 - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024
AU - Weng, Jinta
AU - Deng, Yifan
AU - Li, Donghao
AU - You, Hao
AU - Hu, Yue
AU - Huang, Heyan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Prompting has become an effective linguistic tool for utilizing pre-trained language models. In few-shot scenarios, however, subtle changes in prompt design can make results vary widely, and prompt-learning methods easily overfit the limited samples. To alleviate this, we explore utilizing suitable contrastive samples and multi-degree contrastive learning to improve the robustness of the prompt's representation. We therefore introduce the proposed Consprompt, which combines a prompt encoding network with contrastive sampling and contrastive scoring modules to realize differential contrastive learning. Our results exhibit state-of-the-art performance in different few-shot settings, and ablation experiments also confirm the effectiveness of multi-degree contrastive learning in the prompt-based fine-tuning process.
AB - Prompting has become an effective linguistic tool for utilizing pre-trained language models. In few-shot scenarios, however, subtle changes in prompt design can make results vary widely, and prompt-learning methods easily overfit the limited samples. To alleviate this, we explore utilizing suitable contrastive samples and multi-degree contrastive learning to improve the robustness of the prompt's representation. We therefore introduce the proposed Consprompt, which combines a prompt encoding network with contrastive sampling and contrastive scoring modules to realize differential contrastive learning. Our results exhibit state-of-the-art performance in different few-shot settings, and ablation experiments also confirm the effectiveness of multi-degree contrastive learning in the prompt-based fine-tuning process.
KW - contrastive learning
KW - few-shot learning
KW - Pre-trained language model
KW - Prompt learning
UR - http://www.scopus.com/inward/record.url?scp=85195381525&partnerID=8YFLogxK
U2 - 10.1109/ICASSP48485.2024.10448403
DO - 10.1109/ICASSP48485.2024.10448403
M3 - Conference contribution
AN - SCOPUS:85195381525
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 6835
EP - 6839
BT - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 April 2024 through 19 April 2024
ER -