Alleviating repetitive tokens in non-autoregressive machine translation with unlikelihood training

Shuheng Wang, Shumin Shi*, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, significant progress has been made in non-autoregressive machine translation. However, the accuracy of non-autoregressive models still lags behind their autoregressive counterparts. This discrepancy can be attributed to the abundance of repetitive tokens in the target sequences generated by non-autoregressive models. In this study, we delve into this phenomenon and propose a novel approach that trains a non-autoregressive model with an unlikelihood loss. We evaluate our method on three widely used benchmark tasks. The experimental results demonstrate that our proposed approach significantly reduces the number of repetitive tokens while improving the overall performance of non-autoregressive machine translation. Compared to the baseline model "Mask-Predict", the average number of repetitions on the IWSLT'14 DE→EN validation set is reduced from 0.48 to 0.17, a 62% decrease.
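The abstract describes training a non-autoregressive model with an unlikelihood loss so that repetitive tokens are explicitly penalized. The paper's exact formulation is not reproduced on this page; the block below is only a minimal, hypothetical PyTorch sketch of token-level unlikelihood training in the spirit of Welleck et al. (2019), where previously seen target tokens act as negative candidates whose probability is pushed down. Names such as `unlikelihood_loss`, `alpha`, and `pad_id` are illustrative assumptions, and the paper may define negative candidates differently for a parallel decoder such as Mask-Predict (e.g., over tokens produced within the same refinement step), so this prefix-based variant is purely an illustration.

```python
# Hypothetical sketch of token-level unlikelihood training (not the paper's code).
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0, pad_id=0):
    """Cross-entropy plus an unlikelihood term that lowers the probability
    of tokens already present earlier in the same target sequence.

    logits:  (batch, seq_len, vocab) raw decoder scores
    targets: (batch, seq_len) LongTensor of gold token ids
    """
    log_probs = F.log_softmax(logits, dim=-1)                    # (B, T, V)
    # Standard likelihood (MLE) term.
    nll = F.nll_loss(log_probs.transpose(1, 2), targets,
                     ignore_index=pad_id, reduction="mean")

    # Negative candidates: for position t, every target token seen before t.
    batch, seq_len, vocab = logits.shape
    cand_mask = torch.zeros(batch, seq_len, vocab, dtype=torch.bool,
                            device=logits.device)
    for t in range(1, seq_len):
        cand_mask[:, t].scatter_(1, targets[:, :t], True)
    # A token is not a negative candidate for its own position.
    cand_mask.scatter_(2, targets.unsqueeze(-1), False)

    probs = log_probs.exp()
    # Unlikelihood term: -log(1 - p(candidate)), clamped for stability.
    ul = -torch.log(torch.clamp(1.0 - probs, min=1e-5)) * cand_mask.float()
    ul = ul.sum() / (targets != pad_id).sum().clamp(min=1)

    return nll + alpha * ul
```

The weighting factor `alpha` balances the likelihood and unlikelihood terms; its value here is an assumption, not one reported by the paper.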

Original language: English
Pages (from-to): 4681-4688
Number of pages: 8
Journal: Soft Computing
Volume: 28
Issue number: 5
DOIs
Publication status: Published - Mar 2024

Keywords

  • Machine translation
  • Non-autoregressive
  • Repetitive tokens
  • Unlikelihood training
