Title
Improving Neural Language Modeling via Adversarial Training
Abstract
Recently, substantial progress has been made in language modeling by using deep neural networks. However, in practice, large-scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time-efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single-model state-of-the-art results for language modeling on Penn Treebank (PTB) and WikiText-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various Transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.
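The mechanism described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch example (not the authors' released code) of adding adversarial noise to the output embedding of the target word, assuming the perturbation is chosen to maximize the loss under an ℓ2 norm bound; under that assumption, the worst-case noise for the linear logit w_yᵀh points along −ε·h/‖h‖, which is one way the closed-form solution mentioned in the abstract could look. The function and argument names (adversarial_lm_loss, out_embedding, epsilon) are illustrative, not taken from the paper.

```python
# Illustrative sketch only (assumed details, not the authors' released implementation):
# adversarial noise on the output (softmax) embedding of the target word, with the
# noise direction chosen to maximize the loss under an l2 norm constraint.
import torch
import torch.nn.functional as F


def adversarial_lm_loss(hidden, targets, out_embedding, epsilon=1.0):
    """Cross-entropy loss with adversarial noise added to the target word's
    output embedding before the logits are computed.

    hidden:        (batch, d)   final hidden states of the language model
    targets:       (batch,)     indices of the next words
    out_embedding: (vocab, d)   output (softmax) embedding matrix
    epsilon:       norm bound on the adversarial noise (assumed hyperparameter)
    """
    # For the linear logit (w_y + delta)^T h, the loss-maximizing delta with
    # ||delta||_2 <= epsilon points opposite to h: delta* = -epsilon * h / ||h||.
    # The noise is treated as a constant (no gradient flows through it).
    with torch.no_grad():
        delta = -epsilon * hidden / hidden.norm(dim=-1, keepdim=True).clamp_min(1e-12)

    logits = hidden @ out_embedding.t()                     # (batch, vocab)
    # Only the true word's logit changes: (w_y + delta)^T h = w_y^T h + delta^T h.
    adv_shift = (delta * hidden).sum(dim=-1, keepdim=True)  # (batch, 1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).to(logits.dtype)
    return F.cross_entropy(logits + one_hot * adv_shift, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.randn(8, 32, requires_grad=True)    # stand-in hidden states
    W = torch.randn(1000, 32, requires_grad=True) # stand-in output embeddings
    y = torch.randint(0, 1000, (8,))
    loss = adversarial_lm_loss(h, y, W, epsilon=0.5)
    loss.backward()
    print(loss.item())
```

Because the noise has a closed form, the extra cost over standard maximum-likelihood training is a single normalization and logit shift per step, which is what makes the scheme time-efficient.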
Year
2019
Venue
International Conference on Machine Learning
Field
Computer science, Artificial intelligence, Language model, Machine learning, Adversarial system
DocType
ISSN
Volume
abs/1906.03805
Journal
International Conference on Machine Learning 2019
Citations
1
PageRank
0.35
References
0
Authors
3
Name, Order, Citations, PageRank
Dilin Wang, 1, 67, 7.16
ChengYue Gong, 2, 2, 2.74
Qiang Liu, 3, 4724, 8.61