Title
Improving matrix-based dynamic programming on massively parallel accelerators.
Abstract
Dynamic programming techniques are well established and employed by various practical algorithms, such as the edit-distance and dynamic time warping algorithms. These algorithms usually operate in an iterative manner where new values are computed from the values of the previous iteration. The data dependencies enforce synchronization, which limits the possibilities for internal parallel processing. In this paper, we investigate parallel approaches to processing matrix-based dynamic programming algorithms on modern multicore CPUs, Intel Xeon Phi accelerators, and general-purpose GPUs. We address both the problem of computing a single distance on large inputs and the problem of computing many distances on smaller inputs simultaneously (e.g., when a similarity query is being resolved). Our proposed solutions yielded significant performance improvements, achieving speedups of two orders of magnitude over the serial baseline.
Highlights
- Dynamic programming algorithms with matrix organization (e.g., Levenshtein distance).
- Employing task parallelism and SIMD/SIMT vectorization.
- Proposed hierarchical algorithm optimized for CPUs, Intel Xeon Phi devices, and GPUs.
- Can be efficiently parallelized if inputs are large or many distances are computed.
- Experiments also determine optimal configurations for current hardware.
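As a concrete illustration of the recurrence and data dependencies described in the abstract, the following is a minimal sketch (C++, not the authors' implementation; all identifiers are illustrative) of the Levenshtein edit distance computed along anti-diagonals of the DP matrix. Cells on the same anti-diagonal depend only on the two preceding diagonals, so they are mutually independent.

```cpp
// Minimal illustrative sketch (not the paper's code): Levenshtein edit distance
// evaluated over anti-diagonals of the DP matrix. Cells sharing an anti-diagonal
// have no dependencies among each other and could be processed in parallel.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

std::size_t edit_distance(const std::string &a, const std::string &b) {
    const std::size_t m = a.size(), n = b.size();
    // Row-major DP matrix of size (m+1) x (n+1).
    std::vector<std::size_t> d((m + 1) * (n + 1));
    auto at = [&](std::size_t i, std::size_t j) -> std::size_t & {
        return d[i * (n + 1) + j];
    };
    for (std::size_t i = 0; i <= m; ++i) at(i, 0) = i;  // delete prefix of a
    for (std::size_t j = 0; j <= n; ++j) at(0, j) = j;  // insert prefix of b

    // Traverse anti-diagonals k = i + j; cells with the same k are independent.
    for (std::size_t k = 2; k <= m + n; ++k) {
        const std::size_t i_lo = (k > n) ? k - n : 1;
        const std::size_t i_hi = std::min(k - 1, m);
        // This inner loop carries no dependency: the parallelizable part.
        for (std::size_t i = i_lo; i <= i_hi; ++i) {
            const std::size_t j = k - i;
            const std::size_t subst =
                at(i - 1, j - 1) + (a[i - 1] == b[j - 1] ? 0 : 1);
            at(i, j) = std::min({ at(i - 1, j) + 1,   // deletion
                                  at(i, j - 1) + 1,   // insertion
                                  subst });           // substitution / match
        }
    }
    return at(m, n);
}

int main() {
    std::cout << edit_distance("kitten", "sitting") << "\n";  // prints 3
    return 0;
}
```

In a parallel variant, the inner loop over one diagonal would be mapped to SIMD lanes, CPU threads, or GPU threads, with a synchronization point between consecutive diagonals; this is the kind of intra-matrix parallelism the abstract refers to.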
Year
2017
DOI
10.1016/j.is.2016.06.001
Venue
Inf. Syst.
Keywords
Parallel, Multicore, GPU, Intel Xeon Phi, Dynamic programming, Edit distance, Dynamic time warping
Field
Dynamic programming, Massively parallel, Computer science, Task parallelism, Xeon Phi, Parallel computing, SIMD, Levenshtein distance, Multi-core processor, Speedup
DocType
Journal
Volume
64
Issue
C
ISSN
0306-4379
Citations
0
PageRank
0.34
References
15
Authors
3
Name | Order | Citations | PageRank
David Bednárek | 1 | 43 | 10.89
Michal Brabec | 2 | 1 | 1.03
Martin Krulis | 3 | 76 | 13.27