| Abstract |
|---|
| We present two weighted finite-state transducer (WFST) based decoders for handwriting recognition. One decoder is a cloud-based solution that is both compact and efficient. The other is a device-based solution with a small memory footprint. A compact WFST data structure is proposed for the cloud-based decoder: no output labels are stored on the transitions of the compact WFST. A decoder based on the compact WFST data structure produces the same result with a significantly smaller footprint than a decoder based on the corresponding standard WFST. For the device-based decoder, on-the-fly language model rescoring is performed to reduce the footprint. Careful engineering methods, such as WFST weight quantization and token and data-type refinement, are also explored. When using a language model containing 600,000 n-grams, the cloud-based decoder achieves an average decoding time of 4.04 ms per text line with a peak footprint of 114.4 MB, while the device-based decoder achieves an average decoding time of 13.47 ms per text line with a peak footprint of 31.6 MB. |
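The abstract's compact WFST stores no output labels on transitions. A standard WFST arc carries four fields (input label, output label, weight, next state), so dropping the output label removes one of four per-arc fields. The sketch below is a hypothetical illustration of that saving; the `StandardArc`/`CompactArc` layouts are assumptions for illustration, not the paper's actual data structure.

```python
from collections import namedtuple

# Standard WFST arc: input label, output label, weight, next state.
StandardArc = namedtuple("StandardArc", ["ilabel", "olabel", "weight", "nextstate"])

# Hypothetical compact arc: no output label is stored on the transition;
# the decoder recovers output symbols elsewhere, so each arc carries one
# field fewer than the standard layout.
CompactArc = namedtuple("CompactArc", ["ilabel", "weight", "nextstate"])

def arc_fields(arc_type):
    """Number of fields stored per transition for a given arc layout."""
    return len(arc_type._fields)

# Dropping the output label removes one of four per-arc fields,
# i.e. roughly a 25% reduction in per-transition storage.
savings = 1 - arc_fields(CompactArc) / arc_fields(StandardArc)
print(f"per-arc fields: {arc_fields(StandardArc)} -> {arc_fields(CompactArc)} "
      f"({savings:.0%} fewer)")
```

Since the arc table dominates a large decoding graph's memory, trimming one field per transition compounds across millions of arcs, which is consistent with the footprint reduction the abstract reports.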
| Year | DOI | Venue |
|---|---|---|
| 2017 | 10.1109/ICDAR.2017.32 | 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) |
| Keywords | Field | DocType |
|---|---|---|
| WFST-based decoders, weighted finite-state transducer based decoders, on-the-fly language model rescoring, device-based solution, compact WFST data structure, handwriting recognition, WFST weight quantization, memory size 114.4 MByte, memory size 31.6 MByte | Data structure, Pattern recognition, Computer science, Handwriting recognition, Algorithm, Data type, Artificial intelligence, Decoding methods, Memory footprint, Quantization (signal processing), Language model, Viterbi algorithm | Conference |
| Volume | ISSN | ISBN |
|---|---|---|
| 01 | 1520-5363 | 978-1-5386-3587-2 |
| Citations | PageRank | References |
|---|---|---|
| 1 | 0.36 | 9 |
| Authors |
|---|
| 2 |