Enhanced encoder for non-autoregressive machine translation

Shuheng Wang, Shumin Shi*, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Non-autoregressive machine translation aims to speed up decoding by discarding the autoregressive model and generating the target words independently. Because non-autoregressive machine translation cannot exploit target-side information, the ability to accurately model source representations is critical. In this paper, we propose an approach to enhance the encoder’s modeling ability by using a pre-trained BERT model as an extra encoder. With a different tokenization method, the BERT encoder and the raw encoder can model the source input from different aspects. Furthermore, through a gate mechanism, the decoder can dynamically determine which representations contribute to the decoding process. Experimental results on three translation tasks show that our method significantly improves the performance of non-autoregressive MT and surpasses the baseline non-autoregressive models. On the WMT14 EN→DE translation task, our method achieves 27.87 BLEU with a single decoding step, which is comparable to the baseline autoregressive Transformer model’s score of 27.8 BLEU.
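The abstract does not give the exact equations of the gate mechanism, but a common formulation fuses the two encoder outputs with a learned sigmoid gate. The sketch below illustrates that idea with NumPy; the function name `gated_fusion` and the toy weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_bert, h_raw, W, b):
    """Fuse BERT-encoder and raw-encoder states with a learned gate.

    g = sigmoid([h_bert; h_raw] @ W + b) decides, per position and per
    dimension, how much each representation contributes; the output is
    an element-wise convex combination of the two encoder states.
    """
    g = sigmoid(np.concatenate([h_bert, h_raw], axis=-1) @ W + b)
    return g * h_bert + (1.0 - g) * h_raw

# Toy dimensions: source length 5, model size 8 (illustrative only).
rng = np.random.default_rng(0)
T, d = 5, 8
h_bert = rng.normal(size=(T, d))   # states from the extra BERT encoder
h_raw = rng.normal(size=(T, d))    # states from the raw Transformer encoder
W = rng.normal(size=(2 * d, d)) * 0.1
b = np.zeros(d)

fused = gated_fusion(h_bert, h_raw, W, b)
print(fused.shape)  # (5, 8)
```

Because the gate lies in (0, 1), each fused value stays between the corresponding BERT-encoder and raw-encoder values, which is what lets the decoder interpolate smoothly between the two source views.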

Original language: English
Pages (from-to): 595-609
Number of pages: 15
Journal: Machine Translation
Volume: 35
Issue number: 4
DOI
Publication status: Published - Dec 2021
