Abstract |
---|
This article describes a framework for incorporating referential semantic information from a world model or ontology directly into a probabilistic language model of the sort commonly used in speech recognition, where it can be probabilistically weighted together with phonological and syntactic factors as an integral part of the decoding process. Introducing world model referents into the decoding search greatly increases the search space, but by using a single integrated phonological, syntactic, and referential semantic language model, the decoder is able to incrementally prune this search based on probabilities associated with these combined contexts. The result is a single unified referential semantic probability model which brings several kinds of context to bear in speech decoding, and performs accurate recognition in real time on large domains in the absence of example in-domain training sentences. |
Year | DOI | Venue |
---|---|---|
2009 | 10.1162/coli.08-011-R2-07-021 | Computational Linguistics |
Keywords | Field | DocType |
---|---|---|
speech decoding, introducing world model referents, single unified referential semantic, world model, search space, probability model, fast incremental interpretation, decoding search, decoding process, probabilistic language model, referential semantic language model, referential semantic information, speech recognition, real time, language model | Ontology, Probability model, Computer science, sort, Speech recognition, Semantic information, Natural language processing, Artificial intelligence, Probabilistic logic, Decoding methods, Syntax, Language model | Journal |
Volume | Issue | ISSN |
---|---|---|
35 | 3 | 0891-2017 |
Citations | PageRank | References |
---|---|---|
17 | 1.12 | 22 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
William Schuler | 1 | 125 | 17.78 |
Stephen Wu | 2 | 147 | 11.73 |
Lane Schwartz | 3 | 209 | 18.01 |