Abstract |
---|
This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2013 evaluation campaign [1]. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Russian-to-English, Chinese-to-English, Arabic-to-English, and English-to-French TED-talk translation tasks. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2012 system, and experiments we ran during the IWSLT 2013 evaluation. Specifically, we focus on 1) cross-entropy filtering of MT training data, 2) improved optimization techniques, 3) language modeling, and 4) approximation of out-of-vocabulary words. |
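The abstract's first focus area, cross-entropy filtering of MT training data, is commonly implemented in the style of Moore-Lewis selection: score each candidate sentence by the difference between its per-token cross-entropy under an in-domain language model and under a general-domain one, then keep the lowest-scoring sentences. The sketch below is a minimal illustration of that idea using add-one-smoothed unigram LMs; the paper's actual system almost certainly used higher-order LMs, and all function names here (`train_unigram_lm`, `moore_lewis_filter`, etc.) are hypothetical, not from the paper.

```python
import math
from collections import Counter

def train_unigram_lm(sentences):
    """Train an add-one-smoothed unigram LM; returns a log-probability function.

    A stand-in for the higher-order LMs a real system would use."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen tokens
    def logprob(sentence):
        return sum(math.log((counts[t] + 1) / (total + vocab))
                   for t in sentence.split())
    return logprob

def cross_entropy(logprob, sentence):
    """Per-token cross-entropy (negative average log-probability)."""
    n = max(len(sentence.split()), 1)
    return -logprob(sentence) / n

def moore_lewis_filter(candidates, in_domain, general, keep_fraction=0.5):
    """Rank candidates by H_in(s) - H_gen(s) and keep the lowest-scoring fraction.

    Low scores mean the sentence looks in-domain relative to the general corpus."""
    lm_in = train_unigram_lm(in_domain)
    lm_gen = train_unigram_lm(general)
    scored = sorted(candidates,
                    key=lambda s: cross_entropy(lm_in, s) - cross_entropy(lm_gen, s))
    keep = max(1, int(len(scored) * keep_fraction))
    return scored[:keep]
```

The cross-entropy *difference* (rather than the in-domain score alone) is what prevents the filter from simply favoring short or high-frequency sentences.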
Year | Venue | DocType |
---|---|---|
2013 | IWSLT | Conference |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors |
---|
14 |
Name | Order | Citations | PageRank |
---|---|---|---|
Michaeel Kazi | 1 | 1 | 2.39 |
Michael Coury | 2 | 2 | 1.39 |
Elizabeth Salesky | 3 | 11 | 6.29 |
Jessica Ray | 4 | 0 | 1.35 |
Wade Shen | 5 | 1377 | 74.03 |
Terry Gleason | 6 | 0 | 0.34 |
Tim Anderson | 7 | 3 | 3.53 |
Grant Erdmann | 8 | 1 | 4.08 |
Lane Schwartz | 9 | 0 | 2.37 |
Brian Ore | 10 | 0 | 1.01 |
Raymond Slyh | 11 | 0 | 0.34 |
Jeremy Gwinnup | 12 | 0 | 1.69 |
Katherine Young | 13 | 1 | 3.06 |
Michael Hutt | 14 | 0 | 1.69 |