Multi-granularity semantic representation model for relation extraction

Ming Lei*, Heyan Huang, Chong Feng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

In natural language, words form phrases and phrases form sentences. However, existing transformer-based models for sentence-level tasks abstract sentence-level semantics directly from word-level semantics, overriding phrase-level semantics, and may therefore fail to capture precise sentence meaning. To resolve this problem, we propose a novel multi-granularity semantic representation (MGSR) model for relation extraction. The model bridges the gap between low-level and high-level semantic abstraction by successively learning word-level, phrase-level, and sentence-level semantic representations. We segment a sentence into entity chunks and context chunks according to a given entity pair, so the sentence is represented as a non-empty segmentation set. The entity chunks are noun phrases, and the context chunks contain the key phrases expressing semantic relations. The MGSR model then applies three kinds of self-attention (inter-word, inner-chunk, and inter-chunk) to learn the multi-granularity semantic representations. Experiments on two standard datasets demonstrate that our model outperforms previous models.
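The chunk segmentation and the inner-chunk attention restriction described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names (`segment_chunks`, `inner_chunk_mask`) and the boolean-mask formulation are our own assumptions about how such a scheme could be realized.

```python
import numpy as np

def segment_chunks(n, entity_spans):
    """Split token positions 0..n-1 into chunks: the given entity spans
    become entity chunks, and the gaps between/around them become
    context chunks. Returns a list of lists of token indices."""
    chunks, pos = [], 0
    for start, end in sorted(entity_spans):
        if pos < start:
            chunks.append(list(range(pos, start)))   # context chunk
        chunks.append(list(range(start, end)))       # entity chunk
        pos = end
    if pos < n:
        chunks.append(list(range(pos, n)))           # trailing context chunk
    return chunks

def inner_chunk_mask(n, chunks):
    """Boolean attention mask allowing each token to attend only to
    tokens inside its own chunk (inner-chunk self-attention)."""
    mask = np.zeros((n, n), dtype=bool)
    for chunk in chunks:
        for i in chunk:
            for j in chunk:
                mask[i, j] = True
    return mask

# Example: "Steve Jobs founded Apple in 1976" with entity pair
# spanning tokens [0, 2) and [3, 4).
chunks = segment_chunks(6, [(0, 2), (3, 4)])
mask = inner_chunk_mask(6, chunks)
```

An inter-chunk stage would then operate on one pooled vector per chunk, so attention runs over `len(chunks)` positions rather than `n` tokens; the mask above only covers the inner-chunk level.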

Original language: English
Pages (from-to): 6879-6889
Number of pages: 11
Journal: Neural Computing and Applications
Volume: 33
Issue number: 12
DOIs
Publication status: Published - Jun 2021

Keywords

  • Deep learning
  • Information extraction
  • Natural language processing
  • Relation extraction

