Title
Recurrent Neural Network Language Model Adaptation for Multi-Genre Broadcast Speech Recognition and Alignment
Abstract
Recurrent neural network language models (RNNLMs) generally outperform n-gram language models when used in automatic speech recognition (ASR). Adapting RNNLMs to new domains is an open problem, and current approaches can be categorised as either feature-based or model-based. In feature-based adaptation, the input to the RNNLM is augmented with auxiliary features, whilst model-based adaptation includ...
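The feature-based adaptation described in the abstract can be sketched in a few lines: at each time step, the word embedding fed to the recurrent layer is concatenated with a fixed auxiliary feature vector (e.g. a genre or topic embedding). All names, dimensions, and weights below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical sizes for a toy RNNLM; none of these come from the paper.
vocab_size, embed_dim, feat_dim, hidden_dim = 50, 8, 4, 16

rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, embed_dim))                  # word embeddings
W_xh = rng.normal(size=(embed_dim + feat_dim, hidden_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_hy = rng.normal(size=(hidden_dim, vocab_size)) * 0.1

def rnnlm_step(word_id, aux_feat, h):
    """One recurrent step with the input augmented by auxiliary features
    (feature-based adaptation): x = [embedding ; aux_feat]."""
    x = np.concatenate([E[word_id], aux_feat])
    h_new = np.tanh(x @ W_xh + h @ W_hh)
    logits = h_new @ W_hy
    probs = np.exp(logits - logits.max())                     # stable softmax
    return h_new, probs / probs.sum()

aux = rng.normal(size=feat_dim)   # e.g. a genre/topic feature vector
h = np.zeros(hidden_dim)
for w in [3, 17, 42]:             # a short word-id sequence
    h, p = rnnlm_step(w, aux, h)
```

Because the auxiliary vector enters only through the input, the same trained network can be conditioned on a new domain simply by swapping in that domain's feature vector.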
Year: 2019
DOI: 10.1109/TASLP.2018.2888814
Venue: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Keywords: Adaptation models, Training, Speech recognition, Context modeling, Data models, Speech processing, Task analysis
Field: Perplexity, Data modeling, Speech processing, Computer science, Word error rate, Recurrent neural network, Context model, Speech recognition, Language model, Test set
DocType: Journal
Volume: 27
Issue: 3
ISSN: 2329-9290
Citations: 1
PageRank: 0.37
References: 11
Authors: 5
Name             Order  Citations  PageRank
Salil Deena      1      27         3.61
Madina Hasan     2      13         5.35
Mortaza Doulaty  3      33         5.35
Oscar Saz        4      142        16.30
Thomas Hain      5      18         4.50