TY - GEN
T1 - Analysis and Improvement of External Knowledge Usage in Machine Multi-Choice Reading Comprehension Tasks
AU - Jiang, Yichuan
AU - Huang, Heyan
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - The multi-choice task in machine reading comprehension (MRC) is an important branch of natural language processing. With the advent of pre-trained language models such as BERT and RoBERTa, fine-tuning model parameters for different downstream tasks has become the mainstream research direction. When using pre-trained language models, sufficient and effective training samples are, to a certain degree, the key to ensuring high model performance. At the same time, by analogy with human thinking patterns, adding effective external knowledge to the training data can also help machines understand natural language better. In current research, such external knowledge can be combined with the original data in various ways. In this paper, we argue that an effective way of combining external knowledge can greatly improve machine performance in MRC tasks such as multi-choice and question answering (QA). We therefore design targeted experiments to compare the performance of various knowledge-fusion methods, analyze the effect of each method, and select the most effective one, offering relevant recommendations. The accuracy of the most effective way of using external knowledge is seven percentage points higher than our baseline.
AB - The multi-choice task in machine reading comprehension (MRC) is an important branch of natural language processing. With the advent of pre-trained language models such as BERT and RoBERTa, fine-tuning model parameters for different downstream tasks has become the mainstream research direction. When using pre-trained language models, sufficient and effective training samples are, to a certain degree, the key to ensuring high model performance. At the same time, by analogy with human thinking patterns, adding effective external knowledge to the training data can also help machines understand natural language better. In current research, such external knowledge can be combined with the original data in various ways. In this paper, we argue that an effective way of combining external knowledge can greatly improve machine performance in MRC tasks such as multi-choice and question answering (QA). We therefore design targeted experiments to compare the performance of various knowledge-fusion methods, analyze the effect of each method, and select the most effective one, offering relevant recommendations. The accuracy of the most effective way of using external knowledge is seven percentage points higher than our baseline.
KW - Machine reading comprehension
KW - comparative experiment
KW - external knowledge
KW - multi-choice task
UR - http://www.scopus.com/inward/record.url?scp=85102569681&partnerID=8YFLogxK
U2 - 10.1109/MLBDBI51377.2020.00022
DO - 10.1109/MLBDBI51377.2020.00022
M3 - Conference contribution
AN - SCOPUS:85102569681
T3 - Proceedings - 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence, MLBDBI 2020
SP - 85
EP - 88
BT - Proceedings - 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence, MLBDBI 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd International Conference on Machine Learning, Big Data and Business Intelligence, MLBDBI 2020
Y2 - 23 October 2020 through 25 October 2020
ER -