Title
Deep neural network language models
Abstract
In recent years, neural network language models (NNLMs) have shown gains in both perplexity and word error rate (WER) compared to conventional n-gram language models. Most NNLMs are trained with a single hidden layer. Deep neural networks (DNNs) with more hidden layers have been shown to capture higher-level discriminative information about input features, and thus produce better models. Motivated by the success of DNNs in acoustic modeling, we explore deep neural network language models (DNN LMs) in this paper. Results on a Wall Street Journal (WSJ) task demonstrate that DNN LMs offer improvements over a single-hidden-layer NNLM. Furthermore, our preliminary results are competitive with a Model M language model, considered one of the current state-of-the-art techniques for language modeling.
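To make the architecture the abstract describes concrete, below is a minimal sketch of a feedforward NNLM extended with a stack of hidden layers, in the spirit of a Bengio-style n-gram model. All class names, hyperparameters, and the tanh activation here are illustrative assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of a deep feedforward NNLM: n-gram word history in,
# next-word distribution out, with several stacked hidden layers.
# Hyperparameters are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class DeepNNLM(nn.Module):
    def __init__(self, vocab_size, context_size=3, embed_dim=120,
                 hidden_dim=500, num_hidden_layers=3):
        super().__init__()
        # Each of the n-1 history words is mapped to a continuous embedding.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        layers = []
        in_dim = context_size * embed_dim
        # Stacking several hidden layers is what distinguishes a DNN LM
        # from the usual single-hidden-layer NNLM.
        for _ in range(num_hidden_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.Tanh()]
            in_dim = hidden_dim
        self.hidden = nn.Sequential(*layers)
        # Output layer scores every word in the vocabulary.
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):             # context: (batch, context_size) ids
        e = self.embed(context).flatten(1)  # concatenate history embeddings
        return self.out(self.hidden(e))     # unnormalized next-word scores

# Toy usage: score a batch of 4-gram histories (3 context words each).
model = DeepNNLM(vocab_size=10000)
ctx = torch.randint(0, 10000, (8, 3))
log_probs = torch.log_softmax(model(ctx), dim=-1)  # shape (8, 10000)
```

Setting num_hidden_layers=1 recovers the conventional single-hidden-layer NNLM that the paper uses as its baseline.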
Year
2012
Venue
WLM@NAACL-HLT
Keywords
dnn lms, neural network language model, hidden layer, conventional n-gram language model, dnn lms offer improvement, single hidden layer, deep neural network, deep neural network language, model m language model, language modeling
Field
Computer science, Word error rate, Neural network language models, Speech recognition, Natural language processing, Artificial intelligence, Discriminative model, Machine learning, Deep neural networks, Language model
DocType
Conference
Citations
64
PageRank
3.38
References
21
Authors
4
Name                 Order  Citations  PageRank
Ebru Arisoy          1      418        25.32
Tara N. Sainath      2      34972      32.43
B. Kingsbury         3      41753      35.43
Bhuvana Ramabhadran  4      17791      53.83