Click here for a list of publications. Following are some of the topics that I have looked at in my research.

  • Fast and Robust Sparse Recovery - Algorithms and Applications: In recent years, exploiting signal sparsity has proven to be an important technique in many problems of practical interest. Spurred by the seminal work on Compressive Sensing of Candes and Tao (2006) and Donoho (2006), much attention has focused on devising fast, efficient, and robust algorithms for problems where the underlying dataset is highly “compressible”. We present a class of sparse measurement matrices and a corresponding algorithm that we call SHO-FA (for SHort and FAst) that, with high probability over the measurement matrix, can reconstruct a real-valued signal with sparse support. Our algorithm also applies to several settings beyond compressive sensing. As an example, we consider the problem of Network Delay Tomography via path measurements. Our algorithm FRANTIC (for Fast Reference-based Algorithm for Network Tomography vIa Compressive sensing) builds upon the success of SHO-FA. We also consider three related measurement models - Compressive Phase Retrieval, Group Testing, and Threshold Group Testing. Again, under the assumption that the sample is sparse, we develop new algorithms for these setups that improve upon prior algorithms in terms of computational complexity, measurement rate, or both.
    Click here for the project website.
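The flavor of such sparse-recovery algorithms can be seen in a toy peeling decoder over a sparse measurement matrix. This is an illustrative simplification, not the actual SHO-FA construction: the two-sums-per-bucket design and all parameters below are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 1000, 10, 3   # signal length, sparsity, buckets per coordinate
m = 10 * k              # number of buckets (each yields two measurements)

# Sparse signal with k nonzero entries
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)

# Each coordinate i hashes to d buckets; bucket b keeps two sums:
#   s[b] = sum of x_i over coordinates hashing to b        ("value" sum)
#   t[b] = sum of (i+1) * x_i over those coordinates       ("identity" sum)
buckets = [rng.choice(m, d, replace=False) for _ in range(n)]
s, t = np.zeros(m), np.zeros(m)
for i in range(n):
    for b in buckets[i]:
        s[b] += x[i]
        t[b] += (i + 1) * x[i]

# Peeling decoder: a bucket holding exactly one nonzero coordinate
# satisfies t[b]/s[b] = i+1 for an integer i; recover it, then subtract
# its contribution everywhere, which may expose new singleton buckets.
xhat = np.zeros(n)
for _ in range(10 * k):
    progress = False
    for b in range(m):
        if abs(s[b]) < 1e-9:
            continue
        ratio = t[b] / s[b]
        i = int(round(ratio)) - 1
        if 0 <= i < n and abs(ratio - (i + 1)) < 1e-6 and b in buckets[i]:
            val = s[b]
            xhat[i] += val
            for bb in buckets[i]:
                s[bb] -= val
                t[bb] -= (i + 1) * val
            progress = True
    if not progress:
        break

print(np.allclose(x, xhat, atol=1e-6))
```

Note that both the number of measurements and the decoding work scale with the sparsity k rather than the ambient dimension n, which is the key property the paragraph above refers to.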

  • Reliable Deniable and Hidable Communication: In this line of work, we explore the fundamental information-theoretic limits of communication in the presence of an eavesdropper when even the knowledge that communication is taking place is of critical importance. One example of such a scenario is a whistleblower (say, Alice) wanting to communicate deniably to the outside world (say, Bob), knowing that even a suspicion of communication by the eavesdropper (say, Willie) could have dire consequences. Under this communication scenario, three things are desirable - (a) the message should be reliably decoded by Bob, (b) the communication should be deniable to Willie, i.e., the active distribution of codewords should not look too different from an innocent transmission, and (c) the message should be secure from Willie. We consider two potential tricks that Alice has at her disposal - pretending innocence and hiding in noise. To showcase the power of the first trick, we consider a simple network of parallel links. Bob gets to see the output of all the links, but Willie only sees an unknown subset of them. Alice is given a fixed innocent distribution and has to design the codebook such that the active distribution mimics the innocent distribution on the subset observed by Willie. We calculate the capacity under this setting and show that, by exploiting the fact that Bob gets to see more links than Willie, Alice can talk both deniably and hidably to Bob at a positive rate. In our second setting, we showcase how Alice can also exploit channel noise as a resource. As an example, we consider Alice transmitting over a Binary Symmetric Channel such that the noise to Bob is smaller than the noise to Willie, and the innocent transmission status for Alice is all 0’s. By exploiting the statistical nature of the noise, we show that Alice can still communicate deniably and hidably with Bob, though at a sublinear rate of O(1/sqrt(n)).
An interesting spinoff from this line of work is a new secrecy metric that we call “Super-strong secrecy”, which arises naturally out of our formulation of the deniability problem. Although this metric may be strictly stronger than the commonly considered information-theoretic notion of strong secrecy, it turns out that for several classes of problems, such as Secret Key Agreement and Wiretap Channels with weakly symmetric channels, there is no penalty in rate for demanding this stronger secrecy metric.
    Click here for the project website.
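The O(1/sqrt(n)) scaling mentioned above can be illustrated numerically: if Alice transmits a 1 with probability alpha = c/sqrt(n), the total divergence between what Willie sees in the active and innocent cases stays bounded, while the codeword weight grows like sqrt(n). The crossover probability p and constant c below are arbitrary values assumed for the demo.

```python
import math

def kl_bern(a, b):
    """KL divergence D(Bern(a) || Bern(b)) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

p = 0.2   # Willie's BSC crossover probability (assumed for the demo)
c = 0.5   # constant controlling Alice's transmission weight

kls, weights = [], []
for n in [10**3, 10**4, 10**5, 10**6]:
    alpha = c / math.sqrt(n)               # Alice sends a 1 with probability alpha
    q = alpha * (1 - p) + (1 - alpha) * p  # per-symbol distribution Willie observes
    kls.append(n * kl_bern(q, p))          # total KL: Willie's detection advantage
    weights.append(n * alpha)              # codeword weight grows like sqrt(n)
    print(f"n={n:>8}  total KL={kls[-1]:.4f}  weight={weights[-1]:.0f}")
```

Since the total KL divergence stays (roughly) constant as n grows, Willie's best hypothesis test remains unreliable, yet Alice's throughput of order sqrt(n) bits per n channel uses is unbounded.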

  • Arbitrarily Varying Channels with Quadratic Constraints: In this work, we study an Arbitrarily Varying Channel (AVC) with quadratic power constraints on the transmitter and a so-called “oblivious” jammer (along with additional AWGN), under a maximum probability of error criterion and no common randomness between the transmitter and the receiver. This is in contrast to similar AVC models considered in other works under the average probability of error criterion, and to models wherein common randomness is allowed - these distinctions are important in some communication scenarios. We consider the regime where the jammer's power constraint is smaller than the transmitter's power constraint (in the other regime, it is known that no positive rate is possible). For this regime, we show the existence of stochastic codes (with no common randomness between the transmitter and receiver) that enable reliable communication at the same rate as when the jammer is replaced with AWGN of the same power constraint. This matches known information-theoretic outer bounds. In addition to being stronger than earlier results (which it recovers as special cases), our proof techniques are somewhat more direct, and hence may be of independent interest.
    Click here for the project website.
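The rate implied by the result above (the jammer, when weaker than the transmitter, costs no more than AWGN of the same power) is a one-line formula; the power values below are arbitrary examples.

```python
import math

def avc_rate(P, Lam, N):
    """Achievable rate (bits per channel use) over the quadratically
    constrained AVC: when the jammer's power Lam is below the
    transmitter's power P, the oblivious jammer is no worse than extra
    AWGN of power Lam; otherwise no positive rate is possible."""
    if Lam >= P:
        return 0.0
    return 0.5 * math.log2(1 + P / (N + Lam))

print(avc_rate(P=10.0, Lam=2.0, N=1.0))   # jammer weaker than transmitter
print(avc_rate(P=10.0, Lam=12.0, N=1.0))  # jammer too strong: no positive rate
```

The threshold behavior at Lam = P reflects the fact that a jammer at least as powerful as the transmitter can impersonate a valid codeword, making decoding impossible.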

  • Feedback in Network Source Coding: The usefulness of feedback is well documented in the context of several Channel Coding scenarios, such as channels with memory, multi-user channels, and coding at finite blocklengths. In contrast, in Source Coding and Network Coding, the benefits of feedback remain largely unexplored, except for some scenarios such as universal source coding. We demonstrate that the presence of feedback can drastically reduce the overall amount of resources required for several different network coding and source coding setups. This observation has implications for the design of networks where forward transmissions may be more constrained than feedback (e.g., networks of battery-powered sensors communicating with a plugged-in base station).

    (a) M. Bakshi and M. Effros. “On Zero Error Source Coding with Feedback”. Proceedings of the International Symposium on Information Theory 2010.
    (b) M. Bakshi and M. Effros. “On Feedback in Network Source Coding”. Proceedings of the International Symposium on Information Theory 2009.

  • Capacity bounds for large networks: The field of Network Source Coding has focused primarily on finding the capacity for families of problems defined by either a broad class of networks (e.g., directed, acyclic networks) but a narrow class of demands (e.g., multicast or multiple unicast demands with independent sources), or specific networks (e.g., Slepian-Wolf, Side Information with one source) but more general demands. While the above approaches often yield the exact capacity where they apply, the collection of networks that have been fully solved to date is abysmally small. We investigate two new approaches to finding upper and lower bounds for general network source coding problems - characterizing the role of side information [a] and decomposing into smaller networks [b]. These approaches enable the calculation of exact capacity for several special cases, while giving bounds for other networks.

    (a) M. Bakshi and M. Effros. “On Achievable Rates for Multicast in the Presence of Side Information”. Proceedings of the International Symposium on Information Theory 2008.
    (b) M. Bakshi, M. Effros, W. Gu, and R. Koetter. “On Network Coding of Independent and Dependent Sources in Line Network”. Proceedings of the International Symposium on Information Theory 2007.

  • Complexity, blocklength, and error probability: The asymptotic nature of capacity results in Information Theory suggests a skeptical view towards achieving them under practical complexity and delay considerations. This motivates the study of the optimal error probability at finite blocklengths and of code designs that approach these probabilities. In [a], we consider the lossy source coding problem for sources with memory and find upper and lower bounds on the error exponent. These bounds include known bounds for memoryless sources as special cases. On the design side, in [b], we propose a concatenated coding scheme based on Polar codes that achieves an almost exponential decay of error probability with the blocklength while keeping the encoding and decoding complexity at O(n log n) for a wide collection of problems (e.g., point-to-point channels, MAC, Slepian-Wolf coding).

    (a) M. Bakshi and R. K. Bansal. “On Error Exponent in Lossy Source Coding”. Proceedings of the Allerton Conference on Communication, Control, and Computing, September 2005.
    (b) M. Bakshi, S. Jaggi, and M. Effros. “Concatenated Polar Codes”. Proceedings of the International Symposium on Information Theory 2010.