Abstract
---
Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequence-level settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in n-gram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.
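The abstract only names the approach at a high level; as a rough illustration of what input noising on discrete sequences can look like, the sketch below replaces each training token with a draw from the corpus unigram distribution with some probability gamma. The function name `unigram_noise`, the `gamma` parameter, and the toy corpus are illustrative assumptions for this sketch, not the paper's actual noising schemes or code.

```python
import random
from collections import Counter

def unigram_noise(tokens, unigram_counts, gamma=0.1, rng=random):
    """With probability gamma, replace each input token with a sample
    drawn from the corpus unigram distribution; otherwise keep it.
    (Hypothetical helper for illustration only.)"""
    vocab, weights = zip(*unigram_counts.items())
    noised = []
    for tok in tokens:
        if rng.random() < gamma:
            noised.append(rng.choices(vocab, weights=weights, k=1)[0])
        else:
            noised.append(tok)
    return noised

# Toy usage: noise one training sentence before feeding it to a language model.
corpus = ["the cat sat on the mat".split(), "the dog sat on the rug".split()]
counts = Counter(tok for sent in corpus for tok in sent)
print(unigram_noise(corpus[0], counts, gamma=0.3))
```

In this style of scheme, the noising probability plays a role analogous to the discount in n-gram smoothing, which is the connection the paper draws on when designing its schemes.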
Year | Venue | Field
---|---|---
2017 | ICLR | Computer science, Neural network language models, Machine translation, Smoothing, Artificial intelligence, Artificial neural network, Language model, Machine learning

DocType | Volume | Citations
---|---|---
Journal | abs/1703.02573 | 13

PageRank | References | Authors
---|---|---
0.61 | 16 | 7

Name | Order | Citations | PageRank |
---|---|---|---
Ziang Xie | 1 | 62 | 4.53 |
Sida Wang | 2 | 541 | 44.65 |
Jiwei Li | 3 | 1028 | 48.05 |
Daniel Levy | 4 | 31 | 5.76 |
Aiming Nie | 5 | 13 | 0.95 |
Dan Jurafsky | 6 | 6922 | 474.07 |
Andrew Y. Ng | 7 | 26065 | 1987.54 |