Title
Decoding with Finite-State Transducers on GPUs
Abstract
Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, and speech recognition. Parallelizing finite-state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, to our knowledge little previous work has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithms, achieving decoding speedups of up to 5.2x over our serial implementation running on different computer architectures and 6093x over OpenFST.
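For context, the Viterbi decoding that the abstract benchmarks can be sketched as a plain serial dynamic program. This is only an illustrative baseline, not the paper's GPU implementation, and the toy two-state HMM below (states, transition, and emission probabilities) is invented for the example.

```python
# Minimal serial Viterbi decoder for an HMM. Illustrative only: the toy
# weather/activity model below is invented, not taken from the paper.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for `obs`."""
    # delta[t][s]: probability of the best path ending in state s at time t
    delta = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # backpointers for path reconstruction
    for t in range(1, len(obs)):
        delta.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t
            prob, prev = max(
                (delta[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            delta[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state
    best = max(states, key=lambda s: delta[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model (invented): two hidden states, three possible observations
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```

The GPU version described in the abstract parallelizes the inner maximization over states at each time step, which is where the speedups over the serial loop above come from.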
Year
2017
Venue
Conference of the European Chapter of the Association for Computational Linguistics (EACL)
Field
CUDA, Computer science, Parallel computing, Finite state, Natural language processing, Artificial intelligence, Decoding methods, Machine learning
DocType

Volume
abs/1701.03038
Citations
0
Journal

PageRank
0.34
References
7
Authors
2
Name            Order  Citations  PageRank
Arturo Argueta  1      0          0.68
David Chiang    2      2843       144.76