Abstract |
---|
The notion of infix probability has been introduced in the literature as a generalization of the notion of prefix (or initial substring) probability, motivated by applications in speech recognition and word error correction. For the case where a probabilistic context-free grammar is used as the language model, methods for the computation of infix probabilities have been presented in the literature, based on various simplifying assumptions. Here we present a solution that applies to the problem in its full generality. |
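To make the two notions concrete: the prefix probability of a string w is the total probability mass of generated strings that begin with w, while the infix probability is the mass of strings that contain w as a substring. The sketch below is not the paper's algorithm (which computes infix probabilities exactly for arbitrary PCFGs); it is a brute-force truncated lower bound for a hypothetical toy right-linear PCFG, chosen so each string has a single derivation. All names and the grammar itself are illustrative assumptions.

```python
from itertools import product

# Hypothetical toy PCFG (right-linear, so every string has exactly one derivation):
#   S -> a S   (p = 0.4)
#   S -> b S   (p = 0.4)
#   S -> eps   (p = 0.2)
# Hence P(x) = 0.4**len(x) * 0.2 for any x in {a, b}*.

def string_prob(x: str) -> float:
    """Probability the toy grammar generates exactly the string x."""
    return 0.4 ** len(x) * 0.2

def truncated_prefix_prob(prefix: str, max_len: int) -> float:
    """Lower bound on the prefix probability: sum P(x) over all strings x
    with len(x) <= max_len that start with `prefix`."""
    total = 0.0
    for n in range(max_len + 1):
        for chars in product("ab", repeat=n):
            x = "".join(chars)
            if x.startswith(prefix):
                total += string_prob(x)
    return total

def truncated_infix_prob(infix: str, max_len: int) -> float:
    """Lower bound on the infix probability: sum P(x) over all strings x
    with len(x) <= max_len that contain `infix` as a substring."""
    total = 0.0
    for n in range(max_len + 1):
        for chars in product("ab", repeat=n):
            x = "".join(chars)
            if infix in x:
                total += string_prob(x)
    return total
```

For this grammar the exact prefix probability of "ab" is 0.4 * 0.4 = 0.16 (the continuation probabilities sum to 1), which the truncated sum approaches as `max_len` grows; the infix probability is strictly larger, since strings such as "bab" contain "ab" without starting with it. The enumeration grows exponentially in `max_len`, which is precisely why closed-form methods like the one in this paper are needed.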
Year | Venue | Keywords
---|---|---
2011 | EMNLP | speech recognition, word error correction, language model, infix probability, initial substring, full generality, probabilistic context-free grammar, natural language processing
Field | DocType | Volume
---|---|---
Substring, Context-free grammar, Computer science, Prefix, Theoretical computer science, Artificial intelligence, Natural language processing, Probabilistic logic, Language model, Computation, Infix, Grammar, Machine learning | Conference | D11-1
Citations | PageRank | References
---|---|---
3 | 0.41 | 17
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---
Mark-jan Nederhof | 1 | 387 | 53.30 |
Giorgio Satta | 2 | 902 | 90.85 |