INSNER: A generative instruction-based prompting method for boosting performance in few-shot NER

Peiwen Zhao, Chong Feng*, Peiguang Li, Guanting Dong, Sirui Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Most existing Named Entity Recognition (NER) methods require large amounts of labeled data and perform poorly in low-resource scenarios. In this paper, we therefore propose INSNER, a generative INStruction-based prompting method for few-shot NER. Specifically, we introduce a unified instruction that guides the model to extract the correct entities in response to the instruction, and we construct synthetic verbalizers, which support complex types, to encourage effective knowledge transfer. We express the NER results in natural language form, which narrows the gap between language model pre-training and fine-tuning. Furthermore, to help the model learn task-related knowledge and rich label semantics, we introduce entity-oriented prompt-tuning as an auxiliary task. We conduct in-domain and cross-domain experiments in few-shot settings on 4 datasets, together with extensive analyses, to validate the effectiveness and generalization ability of INSNER. Experimental results demonstrate that INSNER significantly outperforms current methods in few-shot settings, with a particularly large improvement (+12.0% F1) over the powerful ChatGPT on MIT Movie Complex under the 10-shot setting.
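The abstract describes three ingredients: a unified instruction, a verbalizer that maps label codes to natural-language type names, and NER output expressed as natural language. A minimal sketch of how such a pipeline might look is below; the instruction wording, the verbalizer mapping, and the answer format are illustrative assumptions, not the exact templates used by INSNER.

```python
import re

# Hypothetical verbalizer: maps dataset label codes to natural-language
# type names that a generative model can condition on (assumed labels).
VERBALIZER = {
    "PER": "person",
    "LOC": "location",
    "ORG": "organization",
}

def build_prompt(sentence, types=VERBALIZER):
    """Compose a unified instruction asking the model to list entities."""
    type_list = ", ".join(sorted(types.values()))
    return (
        f"Extract all named entities of the types [{type_list}] "
        f'from the sentence below, answering in the form "<entity> is a <type>".\n'
        f"Sentence: {sentence}"
    )

def parse_answer(answer):
    """Parse natural-language output like 'Paris is a location' into
    (entity, type) tuples."""
    pairs = []
    for part in answer.split("."):
        m = re.match(r"\s*(.+?) is an? (\w+)\s*$", part)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs

prompt = build_prompt("Barack Obama visited Paris.")
spans = parse_answer("Barack Obama is a person. Paris is a location.")
# spans == [("Barack Obama", "person"), ("Paris", "location")]
```

The natural-language answer format is what lets the same prompt-and-parse machinery transfer across domains: only the verbalizer entries change, not the instruction or the parser.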

Original language: English
Article number: 104040
Journal: Information Processing and Management
Volume: 62
Issue number: 3
Publication status: Published - May 2025

Keywords

  • Few-shot learning
  • Information extraction
  • Named Entity Recognition
  • Prompt-based learning
