Abstract
---
It is common in ASR applications to have a large amount of data that is out-of-domain with respect to the test data and a smaller amount of in-domain data that matches it. In this paper, we investigate different ways to use this out-of-domain data to improve ASR models based on lattice-free MMI (LF-MMI). In particular, we experiment with multi-task training using a network with shared hidden layers, and we try various ways of adapting previously trained models to a new domain. Both types of methods are effective in reducing the word error rate (WER) relative to purely in-domain models, with the jointly trained models generally giving larger improvements.
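The abstract names two transfer-learning strategies: multi-task training with hidden layers shared across domains, and weight transfer from a previously trained out-of-domain model. The sketch below illustrates both at toy scale in PyTorch; it is not the paper's Kaldi nnet3/LF-MMI recipe, and all dimensions, names, and the feed-forward trunk are illustrative assumptions.

```python
# Minimal sketch of shared-hidden-layer multi-task training and weight
# transfer, as described in the abstract. Assumed sizes and a plain
# feed-forward trunk stand in for the paper's actual LF-MMI acoustic model.
import torch
import torch.nn as nn


class SharedTrunkModel(nn.Module):
    """Multi-task acoustic model: hidden layers shared across domains,
    with one domain-specific output layer per domain."""

    def __init__(self, feat_dim=40, hidden_dim=512, targets_per_domain=(3000, 2500)):
        super().__init__()
        self.shared = nn.Sequential(  # shared hidden layers
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # e.g. heads[0] = out-of-domain targets, heads[1] = in-domain targets
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, n) for n in targets_per_domain
        )

    def forward(self, feats, domain):
        return self.heads[domain](self.shared(feats))


def transfer_shared_layers(src: SharedTrunkModel, tgt: SharedTrunkModel):
    """Weight transfer: seed the target model's shared hidden layers from a
    model already trained on out-of-domain data; the target's output layer
    keeps its fresh initialization and is then trained on in-domain data."""
    tgt.shared.load_state_dict(src.shared.state_dict())


if __name__ == "__main__":
    model = SharedTrunkModel()
    feats = torch.randn(8, 40)           # a batch of 8 frames of 40-dim features
    ood_logits = model(feats, domain=0)  # out-of-domain head
    ind_logits = model(feats, domain=1)  # in-domain head
    print(ood_logits.shape, ind_logits.shape)
```

In the multi-task setting, both heads are trained jointly, each on its own domain's data, so the shared trunk benefits from all of it; in the adaptation setting, `transfer_shared_layers` initializes a new in-domain model from the out-of-domain one before fine-tuning.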
Year | DOI | Venue |
---|---|---
2017 | 10.1109/ASRU.2017.8268947 | 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) |
Keywords | DocType | ISBN
---|---|---
Transfer learning, weight transfer, LF-MMI, multi-task learning, Automatic Speech Recognition | Conference | 978-1-5090-4789-5
Citations | PageRank | References
---|---|---
5 | 0.51 | 0
Authors (5)
---
Name | Order | Citations | PageRank |
---|---|---|---
Pegah Ghahremani | 1 | 99 | 7.09 |
Vimal Manohar | 2 | 54 | 7.99 |
Hossein Hadian | 3 | 11 | 3.31 |
Daniel Povey | 4 | 2442 | 231.75 |
Sanjeev Khudanpur | 5 | 2155 | 202.00 |