TY - GEN
T1 - Multi-hop Reading Comprehension Learning Method Based on Answer Contrastive Learning
AU - You, Hao
AU - Huang, Heyan
AU - Hu, Yue
AU - Xu, Yongxiu
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Multi-hop reading comprehension generally requires a model to extract the answer and predict the supporting facts. However, previous works mainly focus on the interaction between question and context and overlook the fact that many entities or short spans in sentences are similar to the true answer; thus, they do not exploit the discriminative information between true and plausible answers. To address these problems, we propose a learning method based on answer contrastive learning for multi-hop reading comprehension, which makes full use of answer judgment information to reduce the interference of confusing information on the model. Specifically, we first propose similar-entity and random-span data augmentation methods from the answer perspective for contrastive learning. Second, we implement multi-task joint learning by combining answer contrastive learning with a graph neural network model through a shared encoder, and use several subtasks to mine shared information that assists answer extraction and supporting fact prediction. In particular, the learning method forces the model to pay more attention to the true answer information through answer contrastive learning, which helps the model distinguish the start and end positions of answers. We validate the proposed learning method on the HotpotQA dataset, and the experimental results show that it outperforms competitive baselines on several evaluation metrics.
AB - Multi-hop reading comprehension generally requires a model to extract the answer and predict the supporting facts. However, previous works mainly focus on the interaction between question and context and overlook the fact that many entities or short spans in sentences are similar to the true answer; thus, they do not exploit the discriminative information between true and plausible answers. To address these problems, we propose a learning method based on answer contrastive learning for multi-hop reading comprehension, which makes full use of answer judgment information to reduce the interference of confusing information on the model. Specifically, we first propose similar-entity and random-span data augmentation methods from the answer perspective for contrastive learning. Second, we implement multi-task joint learning by combining answer contrastive learning with a graph neural network model through a shared encoder, and use several subtasks to mine shared information that assists answer extraction and supporting fact prediction. In particular, the learning method forces the model to pay more attention to the true answer information through answer contrastive learning, which helps the model distinguish the start and end positions of answers. We validate the proposed learning method on the HotpotQA dataset, and the experimental results show that it outperforms competitive baselines on several evaluation metrics.
KW - Contrastive Learning
KW - Graph Neural Network
KW - Multi-hop Reading Comprehension
KW - Pre-trained Model
KW - Question Answering
UR - http://www.scopus.com/inward/record.url?scp=85173061939&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-40292-0_11
DO - 10.1007/978-3-031-40292-0_11
M3 - Conference contribution
AN - SCOPUS:85173061939
SN - 9783031402913
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 124
EP - 139
BT - Knowledge Science, Engineering and Management - 16th International Conference, KSEM 2023, Proceedings
A2 - Jin, Zhi
A2 - Jiang, Yuncheng
A2 - Ma, Wenjun
A2 - Buchmann, Robert Andrei
A2 - Ghiran, Ana-Maria
A2 - Bi, Yaxin
PB - Springer Science and Business Media Deutschland GmbH
T2 - Knowledge Science, Engineering and Management - 16th International Conference, KSEM 2023, Proceedings
Y2 - 16 August 2023 through 18 August 2023
ER -