Rethinking Perturbations in Encoder-Decoders for Fast Training (2021)
SHAPE: Shifted Absolute Position Embedding for Transformers (2021)
Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution (2021)
An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution (2020)
Tohoku-AIP-NTT at WMT 2020 News Translation Task (2020)
A Self-Refinement Strategy for Noise Reduction in Grammatical Error Correction (2020)
An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction (2019)
Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models (2018)
Mixture of Expert/Imitator Networks: Scalable Semi-Supervised Learning Framework (2018)
Reducing Odd Generation from Neural Headline Generation (2018)
Source-side Prediction for Neural Headline Generation (2017)