Title
Sparse Non-negative Matrix Language Modeling.
Abstract
We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well-established Kneser-Ney models. The addition of skip-gram features yields a model that is in the same league as the state-of-the-art recurrent neural network language models, as well as complementary: combining the two modeling techniques yields the best known result on the One Billion Word Benchmark. On the Gigaword corpus further improvements are observed using features that cross sentence boundaries. The computational advantages of SNM estimation over both maximum entropy and neural network estimation are probably its main strength, promising an approach that has large flexibility in combining arbitrary features and yet scales gracefully to large amounts of data.
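The abstract describes the model only at a high level: the conditional probability of a word given its context is read off a sparse, non-negative matrix of weights indexed by (context feature, word) pairs, where the features can be n-grams, skip-grams, or other arbitrary predictors. Below is a minimal, hypothetical Python sketch of that scoring scheme; the function names and feature encoding are assumptions made for illustration, and the paper's actual estimation of the weights (relative frequencies scaled by a learned adjustment function) is not reproduced here.

def context_features(context, max_order=3, max_skip=2):
    """Collect sparse context features: suffix n-gram contexts plus
    skip-gram features that remember a remote word and the gap size.
    (Hypothetical encoding, for illustration only.)"""
    feats = []
    for k in range(1, max_order):               # (k+1)-gram context features
        if len(context) >= k:
            feats.append(("ngram", tuple(context[-k:])))
    for skip in range(1, max_skip + 1):         # skip-gram context features
        if len(context) >= skip + 1:
            feats.append(("skipgram", context[-(skip + 1)], skip))
    return feats

def snm_probability(word, context, weights, vocab):
    """P(word | context) proportional to the sum of the non-negative
    weights of all (feature, word) entries that fire for this context,
    normalized over the vocabulary. `weights` is a sparse mapping from
    (feature, word) to a non-negative float."""
    feats = context_features(context)

    def score(w):
        return sum(weights.get((f, w), 0.0) for f in feats)

    total = sum(score(w) for w in vocab)
    return score(word) / total if total > 0 else 1.0 / len(vocab)

# Toy usage with made-up weights, for illustration only.
vocab = ["cat", "dog", "sat"]
weights = {
    (("ngram", ("the",)), "cat"): 2.0,
    (("ngram", ("the",)), "dog"): 1.0,
    (("skipgram", "the", 1), "sat"): 0.5,
}
print(snm_probability("cat", ["the"], weights, vocab))  # ~0.667

In the paper, the matrix entries come from relative frequencies scaled by an adjustment function learned from metafeatures, which is what keeps estimation far cheaper than maximum entropy or neural network training.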
Year
2016
Venue
TACL
Field
Sentence boundary disambiguation, Recurrent neural network language models, Computer science, Matrix (mathematics), Probability estimation, Speech recognition, Natural language processing, Artificial intelligence, Principle of maximum entropy, Artificial neural network, Language model, Machine learning
DocType
Journal
Volume
4
Citations
1
PageRank
0.36
References
25
Authors
3
Name             Order  Citations  PageRank
Joris Pelemans   1      20         5.53
Noam Shazeer     2      10894      3.70
Ciprian Chelba   3      10551      11.19