TY - GEN
T1 - WLINKER
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
AU - Xu, Yongxiu
AU - Zhou, Chuan
AU - Huang, Heyan
AU - Yu, Jing
AU - Hu, Yue
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
N2 - Relational triplet extraction (RTE) is a fundamental task for automatically extracting information from unstructured text and has attracted growing interest in recent years. However, it remains challenging due to the difficulty of extracting overlapping relational triplets. Existing approaches for overlapping RTE either suffer from exposure bias or rely on complex tagging schemes. In light of these limitations, we take an innovative perspective on RTE by modeling it as a word linking problem that learns to link subject words to object words for each relation type. To this end, we propose a simple but effective multi-task learning model, WLinker, which extracts overlapping relational triplets in an end-to-end fashion. Specifically, we perform word link prediction based on multi-level biaffine attention to learn word-level correlations under each relation type. Additionally, our model jointly learns the entity detection and word link prediction tasks within a multi-task framework, which combines the local sequential and global dependency structures of words in a sentence and captures the implicit interactions between the two tasks. Extensive experiments are conducted on two benchmark datasets, NYT and WebNLG. The results demonstrate the effectiveness of WLinker in comparison with a range of previous state-of-the-art baselines.
AB - Relational triplet extraction (RTE) is a fundamental task for automatically extracting information from unstructured text and has attracted growing interest in recent years. However, it remains challenging due to the difficulty of extracting overlapping relational triplets. Existing approaches for overlapping RTE either suffer from exposure bias or rely on complex tagging schemes. In light of these limitations, we take an innovative perspective on RTE by modeling it as a word linking problem that learns to link subject words to object words for each relation type. To this end, we propose a simple but effective multi-task learning model, WLinker, which extracts overlapping relational triplets in an end-to-end fashion. Specifically, we perform word link prediction based on multi-level biaffine attention to learn word-level correlations under each relation type. Additionally, our model jointly learns the entity detection and word link prediction tasks within a multi-task framework, which combines the local sequential and global dependency structures of words in a sentence and captures the implicit interactions between the two tasks. Extensive experiments are conducted on two benchmark datasets, NYT and WebNLG. The results demonstrate the effectiveness of WLinker in comparison with a range of previous state-of-the-art baselines.
KW - Multi-task learning
KW - Overlapping relations
KW - Relational triplet extraction
KW - Text mining
UR - http://www.scopus.com/inward/record.url?scp=85131256472&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9746958
DO - 10.1109/ICASSP43922.2022.9746958
M3 - Conference contribution
AN - SCOPUS:85131256472
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 6357
EP - 6361
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -