| Title | Year |
| --- | --- |
| Overcoming Catastrophic Forgetting beyond Continual Learning: Balanced Training for Neural Machine Translation | 2022 |
| One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation | 2022 |
| Sequence-Level Training for Non-Autoregressive Neural Machine Translation | 2021 |
| Modeling Coverage for Non-Autoregressive Neural Machine Translation | 2021 |
| Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation | 2020 |
| Modeling Fluency and Faithfulness for Diverse Neural Machine Translation | 2020 |
| Generating Diverse Translation from Model Distribution with Dropout | 2020 |
| Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation | 2018 |