Abstract
Language models are used in many natural language processing applications. In recent years, recurrent neural network language models have outperformed conventional n-gram techniques. However, it is difficult for neural network architectures to make use of linguistic annotations. We incorporate part-of-speech features into a recurrent neural network language model and use them to predict the next word. Specifically, we propose a parallel structure containing two recurrent neural networks: one for word sequence modeling and one for part-of-speech sequence modeling. The state of the part-of-speech network helps improve the prediction of the word sequence. Experiments show that the proposed method achieves lower perplexity than the traditional recurrent network and performs better at reranking machine translation outputs.
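As a rough illustration only (not the authors' implementation), the parallel structure described above might be sketched as follows; the layer sizes, the use of PyTorch, the concatenation of the two hidden states for word prediction, and the separate POS prediction head are all assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

class ParallelRNNLM(nn.Module):
    """Sketch of a parallel RNN language model: one RNN over words,
    one over part-of-speech (POS) tags. The POS hidden state is fed
    into the word prediction, as the abstract describes.
    All sizes and layer choices are illustrative assumptions."""

    def __init__(self, word_vocab, pos_vocab, emb_dim=128, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, emb_dim)
        self.pos_emb = nn.Embedding(pos_vocab, emb_dim)
        self.word_rnn = nn.RNN(emb_dim, hidden, batch_first=True)
        self.pos_rnn = nn.RNN(emb_dim, hidden, batch_first=True)
        # Word prediction conditions on both hidden states (assumption:
        # simple concatenation; the paper may combine them differently).
        self.word_out = nn.Linear(2 * hidden, word_vocab)
        # POS prediction uses the POS hidden state only.
        self.pos_out = nn.Linear(hidden, pos_vocab)

    def forward(self, words, tags):
        w_h, _ = self.word_rnn(self.word_emb(words))  # (B, T, H)
        p_h, _ = self.pos_rnn(self.pos_emb(tags))     # (B, T, H)
        word_logits = self.word_out(torch.cat([w_h, p_h], dim=-1))
        pos_logits = self.pos_out(p_h)
        return word_logits, pos_logits
```

In such a sketch the two networks would presumably be trained jointly, e.g. with a cross-entropy loss on each output; how the original model is trained is not stated in the abstract.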
| Original language | English |
| --- | --- |
| Pages | 140-147 |
| Number of pages | 8 |
| Publication status | Published - 2019 |
| Event | 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017 - Cebu City, Philippines |
| Duration | 16 Nov 2017 → 18 Nov 2017 |
Conference

| Conference | 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017 |
| --- | --- |
| Country/Territory | Philippines |
| City | Cebu City |
| Period | 16/11/17 → 18/11/17 |