Abstract
In recent years, significant progress has been made in the field of non-autoregressive machine translation. However, the accuracy of non-autoregressive models still lags behind their autoregressive counterparts. This discrepancy can be attributed to the abundance of repetitive tokens in the target sequences generated by non-autoregressive models. In this study, we delve into this phenomenon and propose a novel approach to train a non-autoregressive model using unlikelihood loss. We evaluate our method on three widely used benchmark tasks. The experimental results demonstrate that our proposed approach significantly reduces the number of repetitive tokens while improving the overall performance of non-autoregressive machine translation. Compared to the baseline model "Mask-Predict", the average number of repetitions on the IWSLT 14 DE→EN validation set is reduced from 0.48 to 0.17, a 62% decrease.
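The abstract names unlikelihood training as the core technique. As a point of reference, below is a minimal sketch of a generic token-level unlikelihood loss (in the spirit of Welleck et al., 2019) that penalizes probability mass placed on tokens already present earlier in the target sequence; it is not the paper's exact formulation, and the choice of negative candidates for the non-autoregressive (Mask-Predict-style) decoder is an assumption here.

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, pad_id=0):
    """Sketch of token-level unlikelihood loss for discouraging repetition.

    logits:  (batch, seq_len, vocab) raw decoder outputs
    targets: (batch, seq_len) gold token ids
    Negative candidates (an assumption, not the paper's definition):
    for each position t, all gold tokens that appear at positions < t.
    """
    log_probs = F.log_softmax(logits, dim=-1)              # (B, T, V)
    batch, seq_len, vocab = log_probs.shape

    # Build the negative-candidate set: earlier gold tokens only.
    prev = targets.unsqueeze(1).expand(batch, seq_len, seq_len)      # prev[b, t, s] = targets[b, s]
    earlier = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                    device=targets.device), diagonal=-1)
    candidates = prev.masked_fill(~earlier, pad_id)

    neg_mask = torch.zeros(batch, seq_len, vocab, device=logits.device)
    neg_mask.scatter_(2, candidates, 1.0)
    neg_mask.scatter_(2, targets.unsqueeze(-1), 0.0)       # never penalize the gold token
    neg_mask[:, :, pad_id] = 0.0                           # never penalize padding

    # Unlikelihood term: -log(1 - p(c)) summed over negative candidates c.
    one_minus_p = torch.clamp(1.0 - log_probs.exp(), min=1e-5)
    ul = -(torch.log(one_minus_p) * neg_mask).sum(dim=-1)

    # Standard cross-entropy (likelihood) term on the gold tokens.
    nll = F.nll_loss(log_probs.transpose(1, 2), targets,
                     ignore_index=pad_id, reduction="none")
    return (nll + ul).mean()
```

In practice the unlikelihood term would be weighted against the cross-entropy term and combined with the model's own training objective; the weighting and candidate selection used in the paper are not specified in this abstract.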
| Original language | English |
| --- | --- |
| Pages (from-to) | 4681-4688 |
| Number of pages | 8 |
| Journal | Soft Computing |
| Volume | 28 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Mar 2024 |
Keywords
- Machine translation
- Non-autoregressive
- Repetitive tokens
- Unlikelihood training