Compressed Sensing and Deep Learning Revisited

Recent advances incorporating deep learning into the compressed sensing paradigm require an update to our current understanding. Before delving into recent literature and exactly how DL has been used with CS, we’ll take a historical perspective. One hint that DL might work well with CS is that a CS framework guarantees convergence bounds for signal recovery. Furthermore, there has been detailed work on the machinery of signal recovery, applicable through an optimization viewpoint.

Compressed sensing, similar to deep learning, grew from the work of a core group including, but not limited to, Emmanuel Candes, Stephen Boyd, Terence Tao, and David Donoho. Similar to deep learning, its broad usefulness, allowing reconstruction of signals sampled below the Nyquist rate under certain criteria, saw its propagation into many applications such as medical imaging, seismic studies, astronomy, and light field photography (1,2,3,4). Again like deep learning, it went through a developmental period in which early applications, such as the use of the L1 norm as a regularizer in LASSO regression, showed early benefits much as shallow neural nets did. Today compressed sensing and deep learning are established methods for state-of-the-art performance in certain tasks.

Figure from http://www.sciencedirect.com/science/article/pii/S0165027016302680 describing a joint DL and CS processing pipeline. Reproduced with permission.

Deep learning improves a critical process within compressed sensing, merging the rigor of CS with DL’s ability to absorb expert-level performance from massive datasets. In particular, DL can be used to design the sensing matrix component of a CS pipeline. There are three requirements for CS (a minimal code sketch of this setup follows the list):

  1. The signal is sparse in some transform domain.
  2. The signal is sampled incoherently with respect to that transform domain.
  3. The signal is reconstructed using convex optimization with a sparsity-promoting L1 norm.
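
To make the first two requirements concrete, here is a minimal sketch (not from the original post) in NumPy: a k-sparse signal is measured with a random Gaussian sensing matrix, a standard stand-in for an incoherent measurement operator, yielding far fewer measurements than the signal length.

```python
# Minimal CS setup: a k-sparse signal measured incoherently (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8          # signal length, number of measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = Phi @ x                                     # m << n incoherent measurements
```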

The first point broadens applicability greatly. If a signal is not sparse when it is detected, this isn’t necessarily a roadblock: just transform it to a domain where it is sparse. For example, an NMR signal is sampled in the frequency (Fourier) domain and transformed to the wavelet domain, where biological images are sparse (hence the sparsity of MR images). The second and third points are related. The signal is sampled incoherently with respect to the transform domain so that aliasing propagates as noise, which is then iteratively filtered out using convex optimization. Convex optimization in CS has a data consistency term and a regularization term. Early work hinting at CS leveraged assumptions about sparsity and used an L1 regularizer to express them. The space of signals sparse in wavelet and other transform domains is much broader than the space of, say, brain images, and it is useful for encoding prior knowledge about the reconstruction problem.
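
Continuing the sketch above, the reconstruction step can be illustrated with iterative soft-thresholding (ISTA), one standard way to solve the convex L1-regularized problem: each iteration takes a gradient step on the data consistency term and then applies soft-thresholding, the proximal operator of the L1 regularizer. The variables Phi, y, x, and n carry over from the previous snippet; the step size and regularization weight are illustrative choices.

```python
# L1-regularized recovery via iterative soft-thresholding (ISTA), a sketch.
lam = 0.05                                   # regularization weight (illustrative)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # step size <= 1 / ||Phi||_2^2

x_hat = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ x_hat - y)         # data consistency gradient
    z = x_hat - step * grad
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```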

On the left, the ground truth; on the right, four snapshots of the image as the reconstruction converges.

Recently, DL-CS has been used to iteratively optimize the processing pipeline of brain-computer interfaces (EEG-like devices) http://www.sciencedirect.com/science/article/pii/S0165027016302680. This concept has been extended to perform joint optimization of the sensing matrix and the inference operator https://arxiv.org/abs/1610.09615. Other combinations include pairing CS with GANs to accelerate reconstruction through manifold-based predictions https://arxiv.org/abs/1706.00051. More general references on CS can be found here. Subsequently, there has been work on pure optimization, e.g. “differentiable optimization as a layer in neural networks” https://arxiv.org/abs/1703.00443. OptNet incorporates a new layer type that solves a quadratic program in the forward pass, with a batched primal-dual interior point solver that is roughly 100x faster than existing QP solvers. Barron and Poole introduced a fast bilateral solver at ECCV 2016, enabling general applications such as depth superresolution, filtering, and semantic segmentation https://arxiv.org/abs/1511.03296.
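
To illustrate the joint sensing-matrix/inference-operator idea, here is a rough PyTorch sketch, not the exact model from the cited paper: the sensing matrix is treated as a trainable linear layer and trained end to end with a small reconstruction network on synthetic sparse signals.

```python
# Rough sketch: jointly learning a sensing matrix and a reconstruction network.
import torch
import torch.nn as nn

n, m = 256, 64                                 # signal length, number of measurements

sensing = nn.Linear(n, m, bias=False)          # learnable sensing matrix Phi
decoder = nn.Sequential(                       # simple "inference operator"
    nn.Linear(m, 512), nn.ReLU(),
    nn.Linear(512, n),
)

opt = torch.optim.Adam(
    list(sensing.parameters()) + list(decoder.parameters()), lr=1e-3
)

for _ in range(1000):
    # batch of synthetic signals that are 8-sparse in the canonical basis
    x = torch.zeros(32, n)
    idx = torch.randint(0, n, (32, 8))
    x.scatter_(1, idx, torch.randn(32, 8))

    y = sensing(x)                             # compress to m measurements
    x_hat = decoder(y)                         # reconstruct from measurements
    loss = nn.functional.mse_loss(x_hat, x)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the sensing layer is trained jointly with the decoder, the measurement operator adapts to the signal class rather than being fixed a priori, which is the appeal of learning the sensing matrix.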

Barron & Poole 2016, Fast bilateral solver results: https://arxiv.org/abs/1511.03296

Improving every step of the signal reconstruction pipeline isn’t new. My thesis work reached into the physical MRI hardware, creating novel nonlinear magnetic fields that form the sensing matrix http://onlinelibrary.wiley.com/doi/10.1002/mrm.24114/full. Nonlinear magnetic fields improved the incoherence of the sensing matrix for better signal recovery, and this approach, too, might be improved with DL.

Creating a sensing matrix for fun and profit. Hardware may be involved, whether GPUs or MRI gradient coils. Incidentally, my first foray into GPU-accelerated computing was with nonlinear CS reconstructions.

There’s a growing literature on combining DL with CS, and it really is the best of both worlds: the black-box performance of DL combined with convergence guarantees from a CS framework. Work in this avenue is early but promising.

Enjoyed? More posts: http://leotam.github.io/