A Survey of Knowledge Enhanced Pre-Trained Language Models

Linmei Hu*, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, Juanzi Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)

Abstract

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations such as poor reasoning ability due to the lack of external knowledge. Research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies respectively for Natural Language Understanding (NLU) and Natural Language Generation (NLG) to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions of KE-PLMs.

Original language: English
Pages (from-to): 1413-1430
Number of pages: 18
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 36
Issue number: 4
DOI
Publication status: Published - 1 Apr 2024
