Title
Binary Iterative Hard Thresholding Converges with Optimal Number of Measurements for 1-Bit Compressed Sensing
Abstract
Compressed sensing has been a very successful high-dimensional signal acquisition and recovery technique that relies on linear operations. However, the actual measurements of signals have to be quantized before they can be stored or processed. One-bit compressed sensing is a heavily quantized version of compressed sensing, where each linear measurement of a signal is reduced to just one bit: the sign of the measurement. Once enough such measurements are collected, the recovery problem in 1-bit compressed sensing aims to find the original signal with as much accuracy as possible. The recovery problem is related to the traditional “halfspace-learning” problem in learning theory. For recovery of sparse vectors, a popular reconstruction method from one-bit measurements is the binary iterative hard thresholding (BIHT) algorithm. The algorithm is a simple projected subgradient descent method and is known to converge well empirically, despite the nonconvexity of the problem. The convergence property of BIHT was not theoretically justified, except with an exorbitantly large number of measurements (i.e., a number of measurements greater than $\max\{k^{10}, 24^{48}, k^{3.5}/\epsilon\}$, where $k$ is the sparsity and $\epsilon$ denotes the approximation error; even this expression hides other factors). In this paper we show that the BIHT estimates converge to the original signal with only $\tilde{O}\left(\frac{k}{\epsilon}\right)$ measurements. Note that this dependence on $k$ and $\epsilon$ is optimal for any recovery method in 1-bit compressed sensing. With this result, to the best of our knowledge, BIHT is the only practical and efficient (polynomial-time) algorithm that requires the optimal number of measurements in all parameters (both $k$ and $\epsilon$). This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem under suitable structural conditions.
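The BIHT iteration the abstract refers to is short; below is a minimal numpy sketch of the standard projected-subgradient update (a subgradient step on the one-bit consistency objective, followed by hard thresholding to the top-$k$ entries). The step size `tau`, the iteration count, and the random test instance are illustrative assumptions, not parameters taken from the paper.

```python
# A minimal sketch of binary iterative hard thresholding (BIHT), assuming
# one-bit measurements y = sign(A @ x) of a k-sparse, unit-norm signal x.
# Step size tau and n_iter are illustrative choices, not from the paper.
import numpy as np

def biht(A, y, k, tau=1.0, n_iter=100):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        # Subgradient step toward consistency with the observed signs.
        x = x + (tau / m) * A.T @ (y - np.sign(A @ x))
        # Projection: keep only the k largest-magnitude entries.
        idx = np.argpartition(np.abs(x), -k)[-k:]
        z = np.zeros(n)
        z[idx] = x[idx]
        x = z
    # One-bit measurements determine direction, not scale, so normalize.
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

# Example: recover a 5-sparse unit vector from m one-bit Gaussian measurements.
rng = np.random.default_rng(0)
n, k, m = 200, 5, 1500
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
x_hat = biht(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The final normalization reflects that sign measurements carry no magnitude information: only the direction of the signal can be recovered.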
Year
2022
DOI
10.1109/FOCS54457.2022.00082
Venue
2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS)
Keywords
compressed sensing, quantization, gradient descent, sparsity
DocType
Conference
ISSN
1523-8288
ISBN
978-1-6654-5520-6
Citations
0
PageRank
0.34
References
10
Authors
2
Name              Order  Citations  PageRank
Namiko Matsumoto  1      0          0.34
Arya Mazumdar     2      307        41.81