TY - GEN
T1 - Multi-task Learning for Low-Resource Second Language Acquisition Modeling
AU - Hu, Yong
AU - Huang, Heyan
AU - Lan, Tian
AU - Wei, Xiaochi
AU - Nie, Yuxiang
AU - Qi, Jiarui
AU - Yang, Liner
AU - Mao, Xian Ling
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Second language acquisition (SLA) modeling aims to predict whether second language learners will answer questions correctly, given what they have learned; it is a fundamental building block of personalized learning systems. However, to the best of our knowledge, almost all existing methods perform poorly in low-resource scenarios because of the lack of training data. Fortunately, different language-learning tasks share latent common patterns, which offers an opportunity to address the low-resource SLA modeling problem. Inspired by this observation, we propose a novel SLA modeling method that learns the latent common patterns among different language-learning datasets via multi-task learning and applies them to improve prediction performance in low-resource scenarios. Extensive experiments show that the proposed method substantially outperforms state-of-the-art baselines in the low-resource scenario, while also achieving a slight improvement in the non-low-resource scenario.
AB - Second language acquisition (SLA) modeling aims to predict whether second language learners will answer questions correctly, given what they have learned; it is a fundamental building block of personalized learning systems. However, to the best of our knowledge, almost all existing methods perform poorly in low-resource scenarios because of the lack of training data. Fortunately, different language-learning tasks share latent common patterns, which offers an opportunity to address the low-resource SLA modeling problem. Inspired by this observation, we propose a novel SLA modeling method that learns the latent common patterns among different language-learning datasets via multi-task learning and applies them to improve prediction performance in low-resource scenarios. Extensive experiments show that the proposed method substantially outperforms state-of-the-art baselines in the low-resource scenario, while also achieving a slight improvement in the non-low-resource scenario.
KW - Multi-task learning
KW - Second language acquisition modeling
UR - http://www.scopus.com/inward/record.url?scp=85093951836&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60259-8_44
DO - 10.1007/978-3-030-60259-8_44
M3 - Conference contribution
AN - SCOPUS:85093951836
SN - 9783030602581
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 603
EP - 611
BT - Web and Big Data - 4th International Joint Conference, APWeb-WAIM 2020, Proceedings
A2 - Wang, Xin
A2 - Zhang, Rui
A2 - Lee, Young-Koo
A2 - Sun, Le
A2 - Moon, Yang-Sae
PB - Springer Science and Business Media Deutschland GmbH
T2 - 4th Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint Conference on Web and Big Data, APWeb-WAIM 2020
Y2 - 18 September 2020 through 20 September 2020
ER -