TY - JOUR
T1 - INSNER: A generative instruction-based prompting method for boosting performance in few-shot NER
AU - Zhao, Peiwen
AU - Feng, Chong
AU - Li, Peiguang
AU - Dong, Guanting
AU - Wang, Sirui
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2025/5
Y1 - 2025/5
N2 - Most existing Named Entity Recognition (NER) methods require large amounts of labeled data and perform poorly in low-resource scenarios. In this paper, we propose INSNER, a generative INStruction-based prompting method for few-shot NER. Specifically, we introduce a unified instruction that guides the model to extract the correct entities in response to the instruction, and we construct synthetic verbalizers, which support complex types, to encourage effective knowledge transfer. We present the NER results in natural language form, which narrows the gap between the pre-training and fine-tuning of language models. Furthermore, to help the model learn task-related knowledge and rich label semantics, we introduce entity-oriented prompt-tuning as an auxiliary task. We conduct in-domain and cross-domain experiments in few-shot settings on four datasets, together with extensive analyses, to validate the effectiveness and generalization ability of INSNER. Experimental results demonstrate that INSNER significantly outperforms current methods in few-shot settings, notably achieving a substantial improvement (+12.0% F1) over the powerful ChatGPT on MIT Movie Complex under the 10-shot setting.
AB - Most existing Named Entity Recognition (NER) methods require large amounts of labeled data and perform poorly in low-resource scenarios. In this paper, we propose INSNER, a generative INStruction-based prompting method for few-shot NER. Specifically, we introduce a unified instruction that guides the model to extract the correct entities in response to the instruction, and we construct synthetic verbalizers, which support complex types, to encourage effective knowledge transfer. We present the NER results in natural language form, which narrows the gap between the pre-training and fine-tuning of language models. Furthermore, to help the model learn task-related knowledge and rich label semantics, we introduce entity-oriented prompt-tuning as an auxiliary task. We conduct in-domain and cross-domain experiments in few-shot settings on four datasets, together with extensive analyses, to validate the effectiveness and generalization ability of INSNER. Experimental results demonstrate that INSNER significantly outperforms current methods in few-shot settings, notably achieving a substantial improvement (+12.0% F1) over the powerful ChatGPT on MIT Movie Complex under the 10-shot setting.
KW - Few-shot learning
KW - Information extraction
KW - Named Entity Recognition
KW - Prompt-based learning
UR - http://www.scopus.com/inward/record.url?scp=85214138297&partnerID=8YFLogxK
U2 - 10.1016/j.ipm.2024.104040
DO - 10.1016/j.ipm.2024.104040
M3 - Article
AN - SCOPUS:85214138297
SN - 0306-4573
VL - 62
JO - Information Processing and Management
JF - Information Processing and Management
IS - 3
M1 - 104040
ER -