3 Citations (Scopus)

Abstract

Language models are used in many natural language processing applications. In recent years, recurrent neural network based language models have outperformed the conventional n-gram based techniques. However, it is difficult for neural network architectures to make use of linguistic annotations. We incorporate part-of-speech features into a recurrent neural network language model and use them to predict the next word. Specifically, we propose a parallel structure that contains two recurrent neural networks, one for word sequence modeling and the other for part-of-speech sequence modeling. The state of the part-of-speech network helps improve the prediction of the word sequence. Experiments show that the proposed method achieves better perplexity than the traditional recurrent network and is better at reranking machine translation outputs.
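
The abstract describes the architecture only at a high level. Below is a minimal sketch of one way such a parallel structure could be realized, written in PyTorch; the layer sizes, the choice of GRU cells, and the combination of the two networks' hidden states by concatenation before the output layer are assumptions made for illustration, not details taken from the paper.

```python
# Sketch of a parallel RNN language model with POS tags (assumed design, see note above).
import torch
import torch.nn as nn


class ParallelRNNLM(nn.Module):
    def __init__(self, vocab_size, pos_size, word_dim=256, pos_dim=64, hidden=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        # Two parallel recurrent networks: one over the word sequence, one over the POS sequence.
        self.word_rnn = nn.GRU(word_dim, hidden, batch_first=True)
        self.pos_rnn = nn.GRU(pos_dim, hidden, batch_first=True)
        # Next-word prediction uses both states (assumption: simple concatenation before softmax).
        self.out = nn.Linear(hidden * 2, vocab_size)

    def forward(self, words, tags):
        # words, tags: LongTensors of shape (batch, seq_len), aligned token by token.
        w_states, _ = self.word_rnn(self.word_emb(words))
        p_states, _ = self.pos_rnn(self.pos_emb(tags))
        # The POS network's state supplements the word network's state at each position.
        return self.out(torch.cat([w_states, p_states], dim=-1))


if __name__ == "__main__":
    model = ParallelRNNLM(vocab_size=10000, pos_size=45)
    words = torch.randint(0, 10000, (2, 7))
    tags = torch.randint(0, 45, (2, 7))
    logits = model(words, tags)  # shape: (2, 7, 10000), next-word scores per position
    print(logits.shape)
```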

Original language: English
Pages: 140-147
Number of pages: 8
Publication status: Published - 2019
Event: 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017 - Cebu City, Philippines
Duration: 16 Nov 2017 - 18 Nov 2017

Conference

Conference: 31st Pacific Asia Conference on Language, Information and Computation, PACLIC 2017
Country/Territory: Philippines
City: Cebu City
Period: 16/11/17 - 18/11/17

Fingerprint

Explore the research topics of 'A parallel recurrent neural network for language modeling with POS tags'. Together they form a unique fingerprint.
