Abstract
---
In recent years, neural network language models (NNLMs) have shown improvements in both perplexity and word error rate (WER) over conventional n-gram language models. Most NNLMs are trained with one hidden layer. Deep neural networks (DNNs) with more hidden layers have been shown to capture higher-level discriminative information about input features, and thus produce better networks. Motivated by the success of DNNs in acoustic modeling, we explore deep neural network language models (DNN LMs) in this paper. Results on a Wall Street Journal (WSJ) task demonstrate that DNN LMs offer improvements over a single hidden layer NNLM. Furthermore, our preliminary results are competitive with a model M language model, considered to be one of the current state-of-the-art techniques for language modeling.
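The abstract contrasts a single-hidden-layer NNLM with a DNN LM that stacks several hidden layers between the word embeddings and the softmax output. A minimal forward-pass sketch of that architecture (toy dimensions and random weights chosen for illustration; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy configuration, not from the paper
vocab_size = 20          # vocabulary size
embed_dim = 8            # word embedding dimension
context = 3              # n-gram context length (predict word 4 from previous 3)
hidden_dims = [16, 16]   # two hidden layers -> a "deep" NNLM

# Parameters: embedding table, stacked hidden layers, output layer
E = rng.normal(0.0, 0.1, (vocab_size, embed_dim))
layers = []
in_dim = context * embed_dim
for h in hidden_dims:
    layers.append((rng.normal(0.0, 0.1, (in_dim, h)), np.zeros(h)))
    in_dim = h
W_out = rng.normal(0.0, 0.1, (in_dim, vocab_size))
b_out = np.zeros(vocab_size)

def next_word_probs(context_ids):
    """Forward pass of a feed-forward DNN LM: embed and concatenate the
    context words, pass through the stacked hidden layers, then softmax
    over the vocabulary."""
    x = E[context_ids].reshape(-1)     # concatenated context embeddings
    for W, b in layers:
        x = np.tanh(x @ W + b)         # one hidden layer with tanh
    logits = x @ W_out + b_out
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

# Distribution over the next word given a 3-word context
p = next_word_probs([1, 5, 7])
```

Setting `hidden_dims = [16]` recovers the single-hidden-layer NNLM baseline the paper compares against; adding entries deepens the network without changing the input or output interface.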
Year | Venue | Keywords
---|---|---
2012 | WLM@NAACL-HLT | dnn lms, neural network language model, hidden layer, conventional n-gram language model, dnn lms offer improvement, single hidden layer, deep neural network, deep neural network language, model m language model, language modeling

Field | DocType | Citations
---|---|---
Computer science, Word error rate, Neural network language models, Speech recognition, Natural language processing, Artificial intelligence, Discriminative model, Machine learning, Deep neural networks, Language model | Conference | 64

PageRank | References | Authors
---|---|---
3.38 | 21 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Ebru Arisoy | 1 | 418 | 25.32 |
Tara N. Sainath | 2 | 3497 | 232.43 |
B. Kingsbury | 3 | 4175 | 335.43 |
Bhuvana Ramabhadran | 4 | 1779 | 153.83 |