Abstract
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability to effectively exploit pre-trained knowledge. This motivates us to check the hypothesis that prompt-tuning is also a promising choice for long-tailed classification, since the tail classes are intuitively few-shot ones. To examine this hypothesis, we conduct empirical studies. The results demonstrate that prompt-tuning makes pre-trained language models at least good long-tailed learners. For intuitions on why prompt-tuning can achieve good performance in long-tailed classification, we carry out in-depth analyses by progressively bridging the gap between prompt-tuning and commonly used fine-tuning. The summary is that the classifier structure and parameterization form the key to making good long-tailed learners, whereas the input structure is less important. Finally, we verify the applicability of our finding to few-shot classification.
| Original language | English |
|---|---|
| Pages (from-to) | 3298-3312 |
| Number of pages | 15 |
| Publication status | Published - 2022 |
| Event | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - Abu Dhabi, United Arab Emirates; Duration: 7 Dec 2022 → 11 Dec 2022 |
Conference
| Conference | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 |
|---|---|
| Country/Territory | United Arab Emirates |
| City | Abu Dhabi |
| Period | 7/12/22 → 11/12/22 |