Classifier-adaptation knowledge distillation framework for relation extraction and event detection with imbalanced data

Dandan Song*, Jing Xu, Jinhui Pang, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Fundamental information extraction tasks, such as relation extraction and event detection, suffer from a data imbalance problem. To alleviate this problem, existing methods rely mostly on well-designed loss functions to reduce the negative influence of imbalanced data. However, this approach requires additional hyper-parameters and limits scalability. Furthermore, these methods can only benefit specific tasks and do not provide a unified framework across relation extraction and event detection. In this paper, a Classifier-Adaptation Knowledge Distillation (CAKD) framework is proposed to address these issues, thus improving relation extraction and event detection performance. The first step is to exploit sentence-level identification information across relation extraction and event detection, which can reduce identification errors caused by the data imbalance problem without relying on additional hyper-parameters. Moreover, this sentence-level identification information is used by a teacher network to guide the baseline model's training by sharing its classifier. Like an instructor, the classifier improves the baseline model's ability to extract this sentence-level identification information from raw texts, thus benefiting overall performance. Experiments were conducted on both relation extraction and event detection using the Text Analysis Conference Relation Extraction Dataset (TACRED) and Automatic Content Extraction (ACE) 2005 English datasets, respectively. The results demonstrate the effectiveness of the proposed framework.
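The abstract describes a teacher network guiding a student (baseline) model via knowledge distillation. The paper's specific CAKD mechanism shares the teacher's classifier itself, which is not reproduced here; as a minimal, generic illustration of the distillation idea, the sketch below implements the standard soft-label distillation objective (temperature-softened cross-entropy plus hard-label cross-entropy). All function names and hyper-parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_idx,
            temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation loss (illustrative, not CAKD):
    alpha * cross-entropy with the hard gold label
    + (1 - alpha) * T^2 * KL(teacher || student) on softened outputs.
    """
    # Hard-label term: standard cross-entropy at temperature 1.
    p_student = softmax(student_logits)
    hard = -math.log(p_student[true_idx])

    # Soft-label term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 as is conventional.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = sum(t * math.log(t / s) for t, s in zip(p_t, p_s))

    return alpha * hard + (1 - alpha) * temperature ** 2 * soft
```

With `alpha=1.0` the loss reduces to ordinary cross-entropy (no teacher signal); with identical teacher and student logits the KL term vanishes, so the soft component contributes nothing.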

Original language: English
Pages (from-to): 222-238
Number of pages: 17
Journal: Information Sciences
Volume: 573
DOI
Publication status: Published - September 2021
