ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation | 0 | 0.34 | 2022 |
Exploiting Inactive Examples for Natural Language Generation With Data Rejuvenation | 0 | 0.34 | 2022 |
Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation | 0 | 0.34 | 2022 |
Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation | 0 | 0.34 | 2022 |
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation | 0 | 0.34 | 2022 |
Learning to Refine Source Representations for Neural Machine Translation | 0 | 0.34 | 2022 |
On the Diversity of Multi-Head Attention | 1 | 0.40 | 2021 |
Context-aware Self-Attention Networks for Natural Language Processing | 0 | 0.34 | 2021 |
On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation | 0 | 0.34 | 2021 |
Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation | 0 | 0.34 | 2021 |
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning | 0 | 0.34 | 2021 |
On the Language Coverage Bias for Neural Machine Translation | 0 | 0.34 | 2021 |
Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation | 0 | 0.34 | 2021 |
RAST: Domain-Robust Dialogue Rewriting as Sequence Tagging | 0 | 0.34 | 2021 |
Understanding and Improving Lexical Choice in Non-Autoregressive Translation | 0 | 0.34 | 2021 |
Progressive Multi-Granularity Training for Non-Autoregressive Translation | 0 | 0.34 | 2021 |
On the Copying Behaviors of Pre-Training for Neural Machine Translation | 0 | 0.34 | 2021 |
VN Network: Embedding Newly Emerging Entities with Virtual Neighbors | 0 | 0.34 | 2020 |
Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation | 0 | 0.34 | 2020 |
Go From the General to the Particular: Multi-Domain Translation With Domain Transformation Networks | 0 | 0.34 | 2020 |
Context-Aware Cross-Attention for Non-Autoregressive Translation | 0 | 0.34 | 2020 |
Exploiting Deep Representations for Natural Language Processing | 0 | 0.34 | 2020 |
Emotion Classification by Jointly Learning to Lexiconize and Classify | 0 | 0.34 | 2020 |
Tencent Neural Machine Translation Systems for the WMT20 News Translation Task | 1 | 0.35 | 2020 |
On the Sparsity of Neural Machine Translation Models | 0 | 0.34 | 2020 |
Auxiliary Template-Enhanced Generative Compatibility Modeling | 0 | 0.34 | 2020 |
DukeNet: A Dual Knowledge Interaction Network for Knowledge-Grounded Conversation | 6 | 0.56 | 2020 |
EmpDG: Multi-resolution Interactive Empathetic Dialogue Generation | 0 | 0.34 | 2020 |
Tencent AI Lab Machine Translation Systems for WMT20 Chat Translation Task | 0 | 0.34 | 2020 |
Tencent AI Lab Machine Translation Systems for the WMT20 Biomedical Translation Task | 1 | 0.35 | 2020 |
Multi-Granularity Self-Attention for Neural Machine Translation | 0 | 0.34 | 2019 |
Modeling Recurrence for Transformer | 2 | 0.37 | 2019 |
Dynamic Past and Future for Neural Machine Translation | 3 | 0.37 | 2019 |
Self-Attention with Structural Position Representations | 1 | 0.39 | 2019 |
Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement | 1 | 0.35 | 2019 |
Information Aggregation for Multi-Head Attention with Routing-by-Agreement | 0 | 0.34 | 2019 |
Convolutional Self-Attention Networks | 0 | 0.34 | 2019 |
Context-Aware Self-Attention Networks | 0 | 0.34 | 2019 |
Retrieval-guided Dialogue Response Generation via a Matching-to-Generation Framework | 2 | 0.35 | 2019 |
One Model to Learn Both: Zero Pronoun Prediction and Translation | 1 | 0.35 | 2019 |
Towards Understanding Neural Machine Translation with Word Importance | 1 | 0.36 | 2019 |
Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons | 0 | 0.34 | 2019 |
Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory | 0 | 0.34 | 2018 |
Learning to Remember Translation History with a Continuous Cache | 10 | 0.66 | 2018 |
Translating Pro-Drop Languages with Reconstruction Models | 3 | 0.38 | 2018 |
Learning to Refine Source Representations for Neural Machine Translation | 0 | 0.34 | 2018 |
Target Foresight Based Attention for Neural Machine Translation | 1 | 0.34 | 2018 |
Incorporating Statistical Machine Translation Word Knowledge Into Neural Machine Translation | 3 | 0.40 | 2018 |
Generative Stock Question Answering | 0 | 0.34 | 2018 |
Modeling Past and Future for Neural Machine Translation | 5 | 0.38 | 2018 |