TY - GEN
T1 - Retrieval-Augmented Document-Level Event Extraction with Cross-Attention Fusion
AU - Xu, Yuting
AU - Feng, Chong
AU - Wang, Bo
AU - Huang, Jing
AU - Qi, Xinmu
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Document-level event extraction aims to extract event records from an entire document. Current approaches adopt an entity-centric workflow, in which the effectiveness of event extraction depends heavily on the input representation. However, the input representations derived from earlier approaches exhibit incongruities when applied to the event extraction task. To mitigate these discrepancies, we propose a Retrieval-Augmented Document-level Event Extraction (RADEE) method that leverages instances from the training dataset as supplementary event-informed knowledge. Specifically, the most similar training instance containing event records is retrieved and concatenated with the input to enhance the input representation. To effectively integrate information from retrieved instances while minimizing noise interference, we introduce a fusion layer based on a cross-attention mechanism. Experimental results from a comprehensive evaluation on a large-scale document-level event extraction dataset show that our proposed method surpasses all baseline models. Furthermore, our approach maintains improved performance even in low-resource settings, underscoring its effectiveness and adaptability.
AB - Document-level event extraction aims to extract event records from an entire document. Current approaches adopt an entity-centric workflow, in which the effectiveness of event extraction depends heavily on the input representation. However, the input representations derived from earlier approaches exhibit incongruities when applied to the event extraction task. To mitigate these discrepancies, we propose a Retrieval-Augmented Document-level Event Extraction (RADEE) method that leverages instances from the training dataset as supplementary event-informed knowledge. Specifically, the most similar training instance containing event records is retrieved and concatenated with the input to enhance the input representation. To effectively integrate information from retrieved instances while minimizing noise interference, we introduce a fusion layer based on a cross-attention mechanism. Experimental results from a comprehensive evaluation on a large-scale document-level event extraction dataset show that our proposed method surpasses all baseline models. Furthermore, our approach maintains improved performance even in low-resource settings, underscoring its effectiveness and adaptability.
KW - Cross-attention fusion
KW - Document-level event extraction
KW - Retrieval-augmented
UR - http://www.scopus.com/inward/record.url?scp=85177884726&partnerID=8YFLogxK
U2 - 10.1007/978-981-99-7596-9_16
DO - 10.1007/978-981-99-7596-9_16
M3 - Conference contribution
AN - SCOPUS:85177884726
SN - 9789819975952
T3 - Communications in Computer and Information Science
SP - 218
EP - 229
BT - Social Media Processing - 11th Chinese National Conference, SMP 2023, Proceedings
A2 - Wu, Feng
A2 - He, Xiangnan
A2 - Huang, Xuanjing
A2 - Tang, Jiliang
A2 - Zhao, Shu
A2 - Li, Daifeng
A2 - Zhang, Jing
PB - Springer Science and Business Media Deutschland GmbH
T2 - 11th Chinese National Conference on Social Media Processing, SMP 2023
Y2 - 23 November 2023 through 26 November 2023
ER -