Author Info
Name: XING WANG
Affiliation: Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou, China
Papers: 27
Collaborators: 45
Citations: 58
PageRank: 10.07
Referers: 158
Referees: 542
References: 283
Publications (27 rows)
Title | Citations | PageRank | Year
Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation | 0 | 0.34 | 2022
Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation | 0 | 0.34 | 2022
Learning to refine source representations for neural machine translation | 0 | 0.34 | 2022
On the diversity of multi-head attention | 1 | 0.40 | 2021
Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation | 0 | 0.34 | 2021
Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation | 0 | 0.34 | 2020
Exploiting Deep Representations for Natural Language Processing | 0 | 0.34 | 2020
Tencent Neural Machine Translation Systems for the WMT20 News Translation Task | 1 | 0.35 | 2020
Tencent AI Lab Machine Translation Systems for WMT20 Chat Translation Task | 0 | 0.34 | 2020
Multi-Granularity Self-Attention for Neural Machine Translation | 0 | 0.34 | 2019
Modeling Recurrence for Transformer | 2 | 0.37 | 2019
Self-Attention with Structural Position Representations | 1 | 0.39 | 2019
Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement | 1 | 0.35 | 2019
Information Aggregation for Multi-Head Attention with Routing-by-Agreement | 0 | 0.34 | 2019
Context-Aware Self-Attention Networks | 0 | 0.34 | 2019
One Model to Learn Both: Zero Pronoun Prediction and Translation | 1 | 0.35 | 2019
Towards Understanding Neural Machine Translation with Word Importance | 1 | 0.36 | 2019
Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons | 0 | 0.34 | 2019
Learning to Refine Source Representations for Neural Machine Translation | 0 | 0.34 | 2018
Incorporating Statistical Machine Translation Word Knowledge Into Neural Machine Translation | 3 | 0.40 | 2018
Exploiting Deep Representations for Neural Machine Translation | 4 | 0.41 | 2018
Translating Phrases in Neural Machine Translation | 9 | 0.47 | 2017
Neural Machine Translation Advised by Statistical Machine Translation | 19 | 0.63 | 2017
Topic-Based Coherence Modeling for Statistical Machine Translation | 9 | 0.45 | 2015
Learning Semantic Representations for Nonterminals in Hierarchical Phrase-Based Translation | 1 | 0.35 | 2015
Effective Selection Of Translation Model Training Data | 5 | 0.40 | 2014
A Topic-Based Reordering Model for Statistical Machine Translation | 0 | 0.34 | 2014