Title: Discrete Autoencoders for Sequence Models
Abstract:
Recurrent models for sequences have recently been successful at many tasks, especially language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, that structure is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. To propagate gradients through this discrete representation, we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze the latent codes produced by the model, showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations.
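The improved semantic hashing mentioned in the abstract is the one piece concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering of the core idea, discretizing a real-valued vector into binary bits with a straight-through gradient, assuming a saturating sigmoid, training-time Gaussian noise, and a 50/50 mix of soft and hard paths as the paper describes; all names, shapes, and the exact noise/mixing schedule are illustrative assumptions, not the authors' code.

```python
import torch

def saturating_sigmoid(x):
    # Saturating sigmoid: 1.2 * sigmoid(x) - 0.1, clipped to [0, 1].
    # Unlike a plain sigmoid it reaches exactly 0 and 1, so even the
    # "soft" path can emit fully discrete-looking values.
    return torch.clamp(1.2 * torch.sigmoid(x) - 0.1, min=0.0, max=1.0)

def discretize(logits: torch.Tensor, training: bool = True) -> torch.Tensor:
    """Map real-valued logits of shape (batch, bits) to (near-)binary codes.

    Hypothetical sketch: gradients always flow through the
    saturating-sigmoid path (a straight-through estimator), even when
    the forward pass uses the hard 0/1 bits.
    """
    # Gaussian noise at training time pushes logits away from the
    # decision boundary, which stabilizes the eventual binarization.
    noise = torch.randn_like(logits) if training else torch.zeros_like(logits)
    v = logits + noise
    soft = saturating_sigmoid(v)
    # Hard 0/1 bits in the forward pass; gradient of `soft` in backward.
    hard = (v > 0).float() + soft - soft.detach()
    if not training:
        return hard
    # Use the hard path on a random half of the batch so the decoder
    # also learns to work from truly discrete codes.
    use_hard = (torch.rand(logits.shape[0], 1, device=logits.device) < 0.5).float()
    return use_hard * hard + (1.0 - use_hard) * soft

# Usage example (assumed shapes): 8 sequences encoded into 16-bit codes.
# codes = discretize(torch.randn(8, 16))
```

In the paper's setup such bits would form the discrete latent code that the autoencoder must reconstruct the sequence from; the function above is only a standalone illustration of the gradient trick.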
Year: 2018
Venue: arXiv: Learning
Field: Autoencoder, Machine translation, Artificial intelligence, Natural language processing, Discrete representation, Language model, Semantic hashing, Machine learning, Mathematics
DocType: Journal
Volume: abs/1801.09797
Citations: 1
PageRank: 0.35
References: 0
Authors: 2
Name           Order  Citations  PageRank
Łukasz Kaiser  1      2307       89.08
Samy Bengio    2      7213       485.82