Abstract
Language models are used in many natural language processing applications. In recent years, recurrent neural network based language models have outperformed conventional n-gram techniques. However, it is difficult for neural network architectures to exploit linguistic annotations. We incorporate part-of-speech features into a recurrent neural network language model and use them to predict the next word. Specifically, we propose a parallel structure consisting of two recurrent neural networks, one modeling the word sequence and the other modeling the part-of-speech sequence. The state of the part-of-speech network helps improve the prediction of the word sequence. Experiments show that the proposed method achieves lower perplexity than a traditional recurrent network and is better at reranking machine translation outputs.
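The parallel structure described above can be sketched as two coupled vanilla RNNs whose hidden states are concatenated before the next-word prediction layer. This is a minimal reconstruction for illustration, not the authors' implementation: all dimensions, parameter names, and the use of simple tanh cells with a softmax output are assumptions.

```python
import numpy as np

# Toy sketch of a parallel two-RNN language model: one RNN reads the word
# sequence, a second RNN reads the part-of-speech sequence, and the POS
# hidden state is fed into the word-prediction layer. All sizes are toy
# values chosen for illustration.

rng = np.random.default_rng(0)

V_word, V_pos, H = 10, 5, 8          # word vocab, POS tag set, hidden size

# Word-RNN parameters
E_w = rng.normal(scale=0.1, size=(V_word, H))   # word embeddings
W_w = rng.normal(scale=0.1, size=(H, H))        # word recurrent weights
# POS-RNN parameters
E_p = rng.normal(scale=0.1, size=(V_pos, H))    # POS embeddings
W_p = rng.normal(scale=0.1, size=(H, H))        # POS recurrent weights
# Output layer sees both hidden states concatenated
W_out = rng.normal(scale=0.1, size=(2 * H, V_word))

def step(h_w, h_p, word_id, pos_id):
    """One timestep: update both RNNs, return next-word distribution."""
    h_w = np.tanh(E_w[word_id] + W_w @ h_w)   # word-sequence state
    h_p = np.tanh(E_p[pos_id] + W_p @ h_p)    # POS-sequence state
    logits = np.concatenate([h_w, h_p]) @ W_out
    probs = np.exp(logits - logits.max())     # stable softmax
    return h_w, h_p, probs / probs.sum()

h_w, h_p = np.zeros(H), np.zeros(H)
for w, p in [(1, 2), (3, 0), (7, 4)]:         # toy (word, POS) id pairs
    h_w, h_p, probs = step(h_w, h_p, w, p)

print(probs.shape)   # distribution over the word vocabulary
```

At training time the two networks would be unrolled jointly and optimized with a cross-entropy loss on the word stream; the sketch shows only the forward pass that couples the POS state into word prediction.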
| Original language | English |
| --- | --- |
| Pages | 140-147 |
| Number of pages | 8 |
| Publication status | Published - 2019 |
| Event | 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017 - Cebu City, Philippines. Duration: 16 Nov 2017 → 18 Nov 2017 |
Conference
| Conference | 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017 |
| --- | --- |
| Country/Territory | Philippines |
| City | Cebu City |
| Period | 16/11/17 → 18/11/17 |