This page is being updated. Meanwhile, do visit the excellent website of materials related to these projects, put together by my collaborator Sheng Cai.

Fast and Robust Sparse Recovery: New Algorithms and Applications
Collaborators: Sidharth Jaggi, Sheng Cai, Chun Lam Chan, Mohammad Jahangoshahi, Minghua Chen, Venkatesh Saligrama
In a typical sparse recovery problem, given an underlying dataset x and a measurement process P, the goal is to reconstruct x (or some of its properties) using as few measurements and as little computation as possible. While such problems have been studied in various forms for a long time, the path-breaking results on Compressive Sensing by Candès and Tao (2006) and Donoho (2006) spurred renewed interest in sparse recovery and its many applications. In our work, we consider several different sparse recovery problems and develop novel algorithms and techniques that beat existing algorithms in the number of measurements or in complexity, and often in both.

A. Compressive Sensing: Suppose x is any exactly k-sparse vector in R^n. We present a class of sparse measurement matrices A, and a corresponding algorithm that we call SHO-FA (for SHOrt and FAst), that, with high probability over A, can reconstruct x from Ax. The SHO-FA algorithm is related to the Invertible Bloom Lookup Tables recently introduced by Goodrich et al., with two important distinctions: SHO-FA relies on linear measurements, and it is robust to noise.

The SHO-FA algorithm is the first to simultaneously have the following properties: (a) it requires only O(k) measurements; (b) the bit-precision of each measurement and each arithmetic operation is O(log(n) + P), where 2^{-P} is the desired relative error in the reconstruction of x; (c) the decoding complexity is O(k) arithmetic operations and the encoding complexity is O(n) arithmetic operations; and (d) if the reconstruction goal is simply to recover a single component of x instead of all of x, with significant probability over A this can be done in constant time. All constants above are independent of all problem parameters other than the desired success probability. For a wide range of parameters these properties are information-theoretically order-optimal.

In addition, our SHO-FA algorithm works over fairly general ensembles of "sparse random matrices", and is robust to random noise and to (random) approximate sparsity for a large range of k. In particular, suppose the measured vector equals A(x + z) + e, where z and e correspond respectively to the source tail and the measurement noise. Under reasonable statistical assumptions on z and e, our decoding algorithm reconstructs x with an estimation error of O(||z||_2 + ||e||_2). The SHO-FA algorithm works with high probability over A, z, and e, and still requires only O(k) decoding steps and O(k) measurements over O(log n)-bit numbers. This is in contrast to the worst-case z model, in which Omega(k log(n/k)) measurements are known to be necessary.
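To make the flavor of such O(k)-time decoding concrete, here is a minimal Python sketch of a peeling-style decoder in the spirit of SHO-FA and IBLTs. It assumes a noiseless, exactly k-sparse, integer-valued x so that the arithmetic is exact; the bucket hashing, the "value/index-weighted" measurement pair per bucket, and all names are illustrative simplifications I am introducing here, not the paper's exact construction (which uses complex-valued measurements and extra verification to guard against false singletons).

import random

D = 3  # buckets per coordinate (left-degree of the sparse bipartite graph)

def buckets(j, m, d=D, salt=0):
    # The d buckets that coordinate j touches; a stand-in for SHO-FA's
    # sparse bipartite graph (hypothetical hashing, not the paper's design).
    rng = random.Random(1_000_003 * salt + j)
    return rng.sample(range(m), d)

def encode(x, m):
    # Two linear measurements per bucket: sum_j x_j and sum_j (j+1)*x_j.
    # Both are inner products with rows of a sparse matrix A, so this is y = Ax.
    val, idx = [0] * m, [0] * m
    for j, xj in x.items():
        for b in buckets(j, m):
            val[b] += xj
            idx[b] += (j + 1) * xj
    return val, idx

def decode(val, idx, n):
    # Peel: repeatedly find a bucket holding exactly one surviving coordinate,
    # read off its index and value, then subtract it from its other buckets.
    m = len(val)
    x_hat, progressed = {}, True
    while progressed:
        progressed = False
        for b in range(m):
            if val[b] == 0 or idx[b] % val[b] != 0:
                continue  # empty bucket, or clearly not a singleton
            j = idx[b] // val[b] - 1
            if not (0 <= j < n) or b not in buckets(j, m):
                continue  # ratio test failed: not a singleton
            # Note: a non-singleton can pass these checks with small probability;
            # the real scheme adds verification measurements to rule this out.
            x_hat[j] = val[b]
            for bb in buckets(j, m):  # remove j's contribution everywhere
                val[bb] -= x_hat[j]
                idx[bb] -= (j + 1) * x_hat[j]
            progressed = True
    return x_hat

# Demo: n = 10^4, k = 50, m = 4k buckets, i.e. O(k) measurements in total.
n, k = 10_000, 50
m = 4 * k
x = {j: random.randint(1, 100) for j in random.sample(range(n), k)}
x_hat = decode(*encode(x, m), n)
assert x_hat == x  # succeeds with high probability over the random graph

Each peeled coordinate touches only its D buckets, so the total work is O(k); and querying a single coordinate j amounts to inspecting its D buckets, which is the constant-time behavior in property (d) above.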

