Title
Bidirectional Recurrent Neural Network Language Models For Automatic Speech Recognition
Abstract
Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.
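To illustrate the unidirectional/bidirectional distinction the abstract draws, below is a minimal sketch of both model variants. It is an illustration in PyTorch under stated assumptions, not the authors' implementation; the class name, layer sizes, and vocabulary size are all hypothetical.

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Toy LSTM language model; bidirectional=True adds a backward recurrence."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 bidirectional=False):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=bidirectional)
        num_directions = 2 if bidirectional else 1
        self.proj = nn.Linear(hidden_dim * num_directions, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, time) word indices
        hidden, _ = self.lstm(self.embed(tokens))
        # Unidirectional: the state at step t summarizes only words up to t.
        # Bidirectional: the concatenated state at step t also summarizes
        # words after t, which is why training targets need special handling
        # (related to the issues the paper discusses for speech recognition).
        return self.proj(hidden)  # (batch, time, vocab) logits

x = torch.randint(0, 10000, (2, 12))          # a toy batch of word indices
uni = LSTMLanguageModel(bidirectional=False)  # conditions on past words only
bi = LSTMLanguageModel(bidirectional=True)    # conditions on past and future
print(uni(x).shape, bi(x).shape)              # both: torch.Size([2, 12, 10000])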
Year
2015
DOI
10.1109/ICASSP.2015.7179007
Venue
2015 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Keywords
Language modeling, recurrent neural networks, long short-term memory, bidirectional neural networks
Field
Broadcasting, Recurrent neural network language models, Computer science, Recurrent neural network, Speech recognition, Types of artificial neural networks, Time delay neural network, Artificial intelligence, Deep learning, Artificial neural network, Language model
DocType
Conference
ISSN
1520-6149
Citations
8
PageRank
0.56
References
14
Authors
4
Name                 Order  Citations  PageRank
Ebru Arisoy          1      418        25.32
Abhinav Sethy        2      363        31.16
Bhuvana Ramabhadran  3      1779       153.83
Stanley F. Chen      4      1723       219.64