Title
Generalized Large-Context Language Models Based on Forward-Backward Hierarchical Recurrent Encoder-Decoder Models
Abstract
This paper presents a generalized form of large-context language models (LCLMs) that can take linguistic contexts beyond utterance boundaries into account. In discourse-level and conversation-level automatic speech recognition (ASR) tasks, which must handle a series of utterances, it is essential to capture long-range linguistic contexts that cross utterance boundaries. The LCLMs of previous studies mainly utilized past contexts; none fully exploited future contexts, because LMs typically process words in time order. Our key idea is to apply LCLMs in a setting where first-pass ASR results for the whole series of utterances are already available, which makes it possible for the LCLMs to leverage future contexts as well. In this paper, we propose generalized LCLMs (GLCLMs) based on forward-backward hierarchical recurrent encoder-decoder models, in which the generative probability of each utterance is computed by leveraging not only past contexts but also future contexts beyond utterance boundaries. To introduce GLCLMs into ASR efficiently, we also propose a global-context iterative rescoring method that repeatedly rescores the ASR hypotheses of each utterance by using the surrounding ASR hypotheses. Experiments on discourse-level ASR tasks demonstrate the effectiveness of the GLCLM approach.
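The abstract does not include implementation details, so the following is only a minimal Python sketch of what the described global-context iterative rescoring loop could look like. The scorer `glclm_score`, the N-best data layout, and the fixed iteration count are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, List

def iterative_rescore(
    nbest_lists: List[List[str]],          # N-best ASR hypotheses per utterance
    glclm_score: Callable[[str, List[str], List[str]], float],  # hypothetical GLCLM scorer
    num_iterations: int = 3,
) -> List[str]:
    """Repeatedly re-pick the best hypothesis of each utterance, using the
    current best hypotheses of all other utterances as past/future context."""
    # Initialize with the first-pass 1-best hypothesis of every utterance.
    best = [hyps[0] for hyps in nbest_lists]

    for _ in range(num_iterations):
        for t, hyps in enumerate(nbest_lists):
            past_context = best[:t]        # best hypotheses before utterance t
            future_context = best[t + 1:]  # best hypotheses after utterance t
            # Rescore every candidate of utterance t under the fixed context
            # and keep the highest-scoring one.
            best[t] = max(
                hyps,
                key=lambda h: glclm_score(h, past_context, future_context),
            )
    return best
```

In practice, a trained GLCLM would supply `glclm_score`, and its score would typically be interpolated with the first-pass acoustic and LM scores before selecting the best hypothesis.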
Year
2019
DOI
10.1109/ASRU46091.2019.9003857
Venue
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Keywords
Generalized large-context language models, forward-backward hierarchical recurrent encoder-decoder, global-context iterative rescoring
DocType
Conference
ISBN
978-1-7281-0307-5
Citations
0
PageRank
0.34
References
0
Authors
6
Name              Order  Citations  PageRank
Ryo Masumura      1      25         28.24
Mana Ihori        2      1          5.41
Tomohiro Tanaka   3      17         8.61
Itsumi Saito      4      0          0.34
Kyosuke Nishida   5      214        15.64
Takanobu Oba      6      53         12.09