Title
Semi-Supervised Adaptation Of Rnnlms By Fine-Tuning With Domain-Specific Auxiliary Features
Abstract
Recurrent neural network language models (RNNLMs) can be augmented with auxiliary features, which can provide an extra modality on top of the words. It has been found that RNNLMs perform best when trained on a large corpus of generic text and then fine-tuned on text from the sub-domain to which they are to be applied. However, in many cases the auxiliary features are available for the sub-domain text but not for the generic text. In such cases, semi-supervised techniques can be used to infer such features for the generic text, so that the RNNLM can be trained on it and then fine-tuned on the available in-domain data with corresponding auxiliary features. In this paper, several novel approaches are investigated for the semi-supervised adaptation of RNNLMs with auxiliary features as input. These approaches include: using zero features during training to mask the weights of the feature sub-network; adding the feature sub-network only at fine-tuning time; deriving the features using a parametric model; and back-propagating to infer the features on the generic text. Results are reported in terms of both perplexity (PPL) and word error rate (WER) on a multi-genre broadcast ASR task.
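Two of the ideas named in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation, assuming PyTorch; the names (FeatureRNNLM, feat_net, aux_dim) and all hyperparameters are hypothetical. It shows (1) training on generic text with zero-valued auxiliary features, which leaves the feature sub-network's weight matrix with zero gradient so its weights stay effectively masked, and (2) inferring features for text that lacks them by freezing the model and back-propagating into a free feature vector.

import torch
import torch.nn as nn

class FeatureRNNLM(nn.Module):
    """RNNLM whose input is a word embedding concatenated with a
    projection of auxiliary (e.g. topic) features (hypothetical sketch)."""
    def __init__(self, vocab_size, embed_dim=256, aux_dim=32, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Feature sub-network: projects auxiliary features into the
        # space concatenated with the word embedding at every time step.
        self.feat_net = nn.Linear(aux_dim, aux_dim)
        self.rnn = nn.GRU(embed_dim + aux_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, aux):
        # words: (batch, seq); aux: (batch, aux_dim), broadcast over time.
        e = self.embed(words)
        f = torch.tanh(self.feat_net(aux)).unsqueeze(1).expand(-1, words.size(1), -1)
        h, _ = self.rnn(torch.cat([e, f], dim=-1))
        return self.out(h)

vocab_size, aux_dim = 10000, 32
model = FeatureRNNLM(vocab_size, aux_dim=aux_dim)
loss_fn = nn.CrossEntropyLoss()

# (1) Generic-text training with zero features: with an all-zero input,
# the feature sub-network's weight matrix receives zero gradient (only
# its bias trains), so the weights remain masked until fine-tuning.
words = torch.randint(0, vocab_size, (8, 20))    # stand-in word IDs
targets = torch.randint(0, vocab_size, (8, 20))  # stand-in next words
zero_aux = torch.zeros(8, aux_dim)
logits = model(words, zero_aux)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()

# (2) Feature inference by back-propagation: freeze the model and
# optimise a free feature vector to minimise the LM loss on text
# whose auxiliary features are unknown.
for p in model.parameters():
    p.requires_grad_(False)
inferred = torch.zeros(1, aux_dim, requires_grad=True)
opt = torch.optim.Adam([inferred], lr=0.1)
for _ in range(50):
    opt.zero_grad()
    logits = model(words[:1], inferred)
    loss_fn(logits.reshape(-1, vocab_size), targets[:1].reshape(-1)).backward()
    opt.step()

In the paper's setting the auxiliary features would come from something like an LDA topic model over the in-domain text; the random tensors above merely stand in for real data.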
Year
2017
DOI
10.21437/Interspeech.2017-1598
Venue
18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), Vols 1-6: Situated Interaction
Keywords
RNNLM, Semi-supervised Adaptation, LDA topic models
Field
Broadcasting, Recurrent neural network language models, Parametric model, Pattern recognition, Computer science, Fine-tuning, Speech recognition, Artificial intelligence
DocType
Conference
ISSN
2308-457X
Citations
1
PageRank
0.37
References
0
Authors
5
Name                         Order  Citations  PageRank
Salil Deena                  1      27         3.61
Raymond W. M. Ng             2      340        21.61
Pranava Swaroop Madhyastha   3      24         10.59
Lucia Specia                 4      1217       122.84
Thomas Hain                  5      171        28.29