Title |
---|
Latent Words Recurrent Neural Network Language Models for Automatic Speech Recognition |
Abstract |
---|
This paper demonstrates latent words recurrent neural network language models (LW-RNN-LMs) for enhancing automatic speech recognition (ASR). LW-RNN-LMs are constructed to combine the advantages of both recurrent neural network language models (RNN-LMs) and latent words language models (LW-LMs). RNN-LMs capture long-range context information and offer strong performance, while LW-LMs are robust to out-of-domain tasks thanks to their latent word space modeling. However, RNN-LMs cannot explicitly capture hidden relationships behind observed words since they have no concept of a latent variable space, and LW-LMs cannot take long-range relationships between latent words into account. Our idea is to combine RNN-LM and LW-LM so as to compensate for their individual disadvantages. LW-RNN-LMs simultaneously support latent variable space modeling, as in LW-LMs, and long-range relationship modeling, as in RNN-LMs. From the viewpoint of RNN-LMs, an LW-RNN-LM can be regarded as a soft-class RNN-LM with a vast latent variable space. From the viewpoint of LW-LMs, it can be regarded as an LW-LM that uses an RNN structure instead of an n-gram structure for latent variable modeling. This paper also details a parameter inference method and two implementation methods, an n-gram approximation and a Viterbi approximation, for introducing LW-RNN-LMs to ASR. Our experiments show the effectiveness of LW-RNN-LMs in a perplexity evaluation on the Penn Treebank corpus and an ASR evaluation on Japanese spontaneous speech tasks. |
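To make the model structure described in the abstract concrete, below is a minimal PyTorch sketch of the LW-RNN-LM idea: an RNN models long-range transitions over latent words (replacing the n-gram structure of a plain LW-LM), while each observed word is emitted from its current latent word (the latent variable space a plain RNN-LM lacks). This is an illustrative reading of the abstract, not the authors' implementation; all class, layer, and parameter names (`LatentWordsRNNLM`, `latent_vocab`, `transition`, `emission`) are hypothetical.

```python
import torch
import torch.nn as nn

class LatentWordsRNNLM(nn.Module):
    """Illustrative sketch of an LW-RNN-LM (names are assumptions).

    An RNN captures long-range dependencies between latent words,
    and each observed word is emitted from its current latent word.
    """

    def __init__(self, vocab_size, latent_vocab, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.latent_emb = nn.Embedding(latent_vocab, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Transition model: next latent word given the latent word history,
        # modeled by the RNN state instead of an n-gram context.
        self.transition = nn.Linear(hidden_dim, latent_vocab)
        # Emission model: observed word given the current latent word.
        self.emission = nn.Linear(emb_dim, vocab_size)

    def forward(self, latent_ids, state=None):
        # latent_ids: (batch, seq_len) indices of latent words.
        h = self.latent_emb(latent_ids)
        rnn_out, state = self.rnn(h, state)
        next_latent_logits = self.transition(rnn_out)  # P(h_{t+1} | h_1..h_t)
        word_logits = self.emission(h)                 # P(w_t | h_t)
        return next_latent_logits, word_logits, state
```

Training such a model requires inferring the unobserved latent word sequence (the parameter inference method the paper details), and applying it to ASR decoding would rely on approximations such as the n-gram and Viterbi approximations mentioned in the abstract.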
Year | DOI | Venue |
---|---|---|
2019 | 10.1587/transinf.2018EDP7242 | IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS |
Keywords | Field | DocType
---|---|---
latent words recurrent neural network language models, n-gram approximation, Viterbi approximation, automatic speech recognition | Recurrent neural network language models, Computer science, Speech recognition | Journal
Volume | Issue | ISSN
---|---|---
E102D | 12 | 1745-1361
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ryo Masumura | 1 | 25 | 28.24 |
Taichi Asami | 2 | 22 | 10.49 |
Takanobu Oba | 3 | 53 | 12.09 |
Sumitaka Sakauchi | 4 | 36 | 8.30 |
Akinori Ito | 5 | 3 | 4.10