TY - JOUR
T1 - A Flat Tree-based Transformer for Nested Named Entity Recognition
AU - Mao, Hongli
AU - Mao, Xian Ling
AU - Tang, Hanlin
AU - Gao, Xiaoyan
AU - Xu, Chun
AU - Huang, Heyan
N1 - Publisher Copyright:
© 2025
PY - 2025/6/7
Y1 - 2025/6/7
N2 - Nested Named Entity Recognition (nested NER), which aims to extract entities from overlapping spans, is a challenging task in natural language processing. Recently, span-based methods have achieved the best performance in this task by modeling the one-dimensional text sequence as a two-dimensional matrix. However, existing span-based methods struggle to accurately identify entity boundaries due to the interference of confusing spans, and they also fail to consider nested entity dependency relationships. To address these limitations, we propose a novel Flat Tree-based Transformer that utilizes the constituency parsing tree to assist nested NER. Specifically, we first employ the constituency parsing tree to generate candidate spans, selecting only tree nodes as potential entities, which eliminates confusing spans and yields a clearer and more discrete distribution of entity boundaries. Then the Flat Tree-based Transformer leverages two self-attention units to capture nested dependency relationships among candidate spans, incorporating parsing tree structure without affecting the Transformer's parallelizability. Extensive experiments on five widely adopted datasets demonstrate that our proposed method performs better than previous state-of-the-art baseline methods.
AB - Nested Named Entity Recognition (nested NER), which aims to extract entities from overlapping spans, is a challenging task in natural language processing. Recently, span-based methods have achieved the best performance in this task by modeling the one-dimensional text sequence as a two-dimensional matrix. However, existing span-based methods struggle to accurately identify entity boundaries due to the interference of confusing spans, and they also fail to consider nested entity dependency relationships. To address these limitations, we propose a novel Flat Tree-based Transformer that utilizes the constituency parsing tree to assist nested NER. Specifically, we first employ the constituency parsing tree to generate candidate spans, selecting only tree nodes as potential entities, which eliminates confusing spans and yields a clearer and more discrete distribution of entity boundaries. Then the Flat Tree-based Transformer leverages two self-attention units to capture nested dependency relationships among candidate spans, incorporating parsing tree structure without affecting the Transformer's parallelizability. Extensive experiments on five widely adopted datasets demonstrate that our proposed method performs better than previous state-of-the-art baseline methods.
KW - Constituency parsing tree
KW - Flat Tree-based Transformer
KW - Nested Named Entity Recognition
KW - Span-based methods
UR - http://www.scopus.com/inward/record.url?scp=105003242058&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2025.113405
DO - 10.1016/j.knosys.2025.113405
M3 - Article
AN - SCOPUS:105003242058
SN - 0950-7051
VL - 318
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 113405
ER -