TY - GEN
T1 - Multiple Perspective Answer Reranking for Multi-passage Reading Comprehension
AU - Ren, Mucheng
AU - Huang, Heyan
AU - Wei, Ran
AU - Liu, Hongyu
AU - Bai, Yu
AU - Wang, Yang
AU - Gao, Yang
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - This study focuses on the multi-passage Machine Reading Comprehension (MRC) task. Prior work has shown that a retriever-reader pipeline model can improve overall performance. However, the pipeline model relies heavily on the retriever component, since inferior retrieved documents significantly degrade performance. In this study, we propose a new multi-perspective answer reranking technique that considers all documents to verify the confidence of candidate answers; this nuanced technique can carefully distinguish among candidate answers to improve performance. Specifically, we rearrange the order of the traditional pipeline model and perform posterior answer reranking instead of prior passage reranking. In addition, the recently proposed pre-trained language model BERT is also introduced. Experiments on the Chinese multi-passage dataset DuReader show that our model achieves competitive performance.
AB - This study focuses on the multi-passage Machine Reading Comprehension (MRC) task. Prior work has shown that a retriever-reader pipeline model can improve overall performance. However, the pipeline model relies heavily on the retriever component, since inferior retrieved documents significantly degrade performance. In this study, we propose a new multi-perspective answer reranking technique that considers all documents to verify the confidence of candidate answers; this nuanced technique can carefully distinguish among candidate answers to improve performance. Specifically, we rearrange the order of the traditional pipeline model and perform posterior answer reranking instead of prior passage reranking. In addition, the recently proposed pre-trained language model BERT is also introduced. Experiments on the Chinese multi-passage dataset DuReader show that our model achieves competitive performance.
KW - Answer reranking
KW - BERT
KW - Machine Reading Comprehension
UR - http://www.scopus.com/inward/record.url?scp=85075823744&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-32236-6_67
DO - 10.1007/978-3-030-32236-6_67
M3 - Conference contribution
AN - SCOPUS:85075823744
SN - 9783030322359
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 736
EP - 747
BT - Natural Language Processing and Chinese Computing - 8th CCF International Conference, NLPCC 2019, Proceedings
A2 - Tang, Jie
A2 - Kan, Min-Yen
A2 - Zhao, Dongyan
A2 - Li, Sujian
A2 - Zan, Hongying
PB - Springer
T2 - 8th CCF International Conference on Natural Language Processing and Chinese Computing, NLPCC 2019
Y2 - 9 October 2019 through 14 October 2019
ER -