Title
SparseHD: Algorithm-Hardware Co-optimization for Efficient High-Dimensional Computing
Abstract
Hyperdimensional (HD) computing is gaining traction as a lightweight machine learning approach for cognitive tasks. Inspired by the neural activity patterns of the brain, HD computing performs cognitive tasks by exploiting long vectors, namely hypervectors, rather than working with scalar numbers as in conventional computing. Since a hypervector is represented by thousands of dimensions (elements), most prior work assumes binary elements to simplify computation and alleviate the processing cost. In this paper, we first demonstrate that the dimensions need more than one bit to provide acceptable accuracy, making HD computing applicable to real-world cognitive tasks. Increasing the bit-width, however, sacrifices energy efficiency and performance, even when low-bit integers are used as the hypervector elements. To address this issue, we propose a framework for HD acceleration, dubbed SparseHD, that leverages sparsity to improve the efficiency of HD computing. Essentially, SparseHD takes into account the statistical properties of a trained HD model and drops the least effective elements of the model, augmented by iterative retraining to compensate for the quality loss caused by sparsity. Thanks to the bit-level manipulability and abundant parallelism granted by FPGAs, we also propose a novel FPGA-based accelerator that effectively exploits sparsity in HD computation. We evaluate the efficiency of our framework on practical classification problems. We observe that SparseHD makes the HD model up to 90% sparse while incurring minimal quality loss (less than 1%) compared to the non-sparse baseline model. Our evaluation shows that, on average, SparseHD provides 48.5× lower energy consumption and 15.0× faster execution compared to an AMD R390 GPU implementation.
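The abstract's core idea — ranking the elements of a trained HD model by a statistical significance measure, zeroing out the weakest ones, and then retraining — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes variance across class hypervectors as the significance measure, and the function and parameter names are hypothetical.

```python
import numpy as np

def sparsify_model(class_hvs, sparsity=0.9):
    """Zero out the least effective dimensions of a trained HD model.

    class_hvs : (n_classes, D) array of class hypervectors.
    sparsity  : fraction of dimensions to drop (e.g. 0.9 for 90% sparse).

    Dimensions along which the class hypervectors vary least contribute
    least to separating classes, so they are dropped first.
    """
    variance = class_hvs.var(axis=0)            # per-dimension spread across classes
    n_drop = int(sparsity * class_hvs.shape[1])
    drop = np.argsort(variance)[:n_drop]        # indices of least-varying dimensions
    sparse_hvs = class_hvs.copy()
    sparse_hvs[:, drop] = 0                     # dropped dimensions are skipped at inference
    return sparse_hvs, drop
```

In the full framework, an iterative loop would alternate such sparsification with retraining passes over the training data to recover the accuracy lost by dropping dimensions.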
Year: 2019
DOI: 10.1109/FCCM.2019.00034
Venue: 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
Keywords: Computational modeling, Encoding, Task analysis, Field programmable gate arrays, Training, Associative memory, Machine learning algorithms
Field: Integer, Content-addressable memory, Computer science, Efficient energy use, Parallel computing, Field-programmable gate array, Energy consumption, Computation, Binary number, Encoding (memory)
DocType: Conference
ISSN: 2576-2613
ISBN: 978-1-7281-1131-5
Citations: 5
PageRank: 0.47
References: 0
Authors: 6
Name               | Order | Citations | PageRank
Mohsen Imani       | 1     | 341       | 48.13
Sahand Salamat     | 2     | 28        | 7.12
Behnam Khaleghi    | 3     | 91        | 13.49
Mohammad Samragh   | 4     | 38        | 7.01
Farinaz Koushanfar | 5     | 3055      | 268.84
Tajana Simunic     | 6     | 3198      | 266.23