Title
Improving BDD Enumeration for LWE Problem Using GPU
Abstract
In this paper, we present a GPU-based parallel algorithm for the Learning With Errors (LWE) problem using a lattice-based Bounded Distance Decoding (BDD) approach. To the best of our knowledge, this is the first GPU-based implementation for the LWE problem. Compared to the sequential BDD implementation of Lindner-Peikert and the pruned-enumeration strategy of Kirshanova et al. [1], our GPU-based implementation is faster by factors of almost 6 and 9, respectively. The GPU used is an NVIDIA GeForce GTX 1060 6GB. We also provide a parallel implementation using two GPUs. The results show that our algorithm is scalable and faster than the sequential versions (Lindner-Peikert and pruned enumeration) by factors of almost 13 and 16, respectively. Moreover, the results show that our parallel implementation using two GPUs is more efficient than Kirshanova et al.'s parallel implementation using 20 CPU cores.
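Context: the Lindner-Peikert decoder that the paper accelerates generalizes Babai's nearest-plane algorithm, enumerating several candidate planes at each level of the Gram-Schmidt basis instead of only the nearest one. Below is a minimal sketch (not the paper's GPU code) of the degenerate single-branch case, i.e., plain nearest-plane decoding, in Python/NumPy; the function names gram_schmidt and nearest_plane are illustrative, not from the paper.

import numpy as np

def gram_schmidt(B):
    # Unnormalized Gram-Schmidt orthogonalization of the rows of B.
    Bs = np.zeros(B.shape, dtype=float)
    for i in range(B.shape[0]):
        Bs[i] = B[i]
        for j in range(i):
            mu = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
            Bs[i] -= mu * Bs[j]
    return Bs

def nearest_plane(B, t):
    # Babai's nearest-plane: return a lattice vector of L(B) close to t.
    # Levels are processed top-down; at level i the target is snapped to the
    # nearest translate of span(b_1..b_i) along the Gram-Schmidt direction b*_i.
    B = np.asarray(B, dtype=float)
    w = np.asarray(t, dtype=float).copy()
    Bs = gram_schmidt(B)
    v = np.zeros_like(w)
    for i in range(B.shape[0] - 1, -1, -1):
        c = np.rint((w @ Bs[i]) / (Bs[i] @ Bs[i]))
        w -= c * B[i]   # remaining error vector
        v += c * B[i]   # accumulated lattice vector
    return v

# Example: decode a target near the lattice point [8, 5] = 1*[7,0] + 1*[1,5].
B = np.array([[7, 0], [1, 5]])
print(nearest_plane(B, np.array([8.3, 4.6])))   # -> [8. 5.]

The BDD enumeration the paper parallelizes keeps several values of c per level, producing a tree of candidate lattice vectors; it is this independent exploration of tree branches that maps naturally onto GPU threads.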
Year
2020
DOI
10.1109/ACCESS.2019.2961091
Venue
IEEE ACCESS
Keywords
Learning with Errors, lattice-based cryptography, LLL algorithm, shortest vector problem, closest vector problem, bounded distance decoding, GPU, cryptanalysis
DocType
Journal
Volume
8
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
3
Name                    Order  Citations  PageRank
Mohamed S. Esseissah    1      0          0.34
Ashraf Bhery            2      10         1.99
Hatem M. Bahig          3      23         7.53