Sparse recovery algorithms

Module by: Chinmay Hegde. E-mail the author

Summary: This module introduces some of the tradeoffs involved in the design of sparse recovery algorithms.

Given noisy compressive measurements y = Φx + e, a core problem in compressive sensing (CS) is to recover the sparse signal x from the measurements y. Considerable effort has been directed towards developing algorithms that perform fast, accurate, and stable reconstruction of x from y. As we have seen in previous chapters, a “good” CS matrix Φ typically satisfies certain geometric conditions, such as the restricted isometry property (RIP). Practical algorithms exploit this fact in various ways in order to drive down the number of measurements, enable faster reconstruction, and ensure robustness to both numerical and stochastic errors.
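As a concrete illustration, the measurement model y = Φx + e can be simulated numerically. The dimensions, sparsity level, noise scale, and the choice of a Gaussian sensing matrix below are illustrative assumptions, not values prescribed by the text; random Gaussian matrices are simply a standard example of matrices that satisfy the RIP with high probability.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5  # ambient dimension, number of measurements, sparsity (illustrative)

# A K-sparse signal: K nonzero entries at randomly chosen locations
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian sensing matrix, scaled so columns have unit expected norm
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Noisy compressive measurements y = Phi x + e
e = 0.01 * rng.standard_normal(M)
y = Phi @ x + e
```

Note that M is much smaller than N: the recovery problem is underdetermined, and it is only the sparsity of x that makes reconstruction possible.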

The design of sparse recovery algorithms is guided by various criteria; some important ones are listed below.

  • Minimal number of measurements.   Sparse recovery algorithms should require approximately the same number of measurements (up to a small constant) as needed for the stable embedding of K-sparse signals.
  • Robustness to measurement noise and model mismatch.   Sparse recovery algorithms must be stable with regard to perturbations of the input signal, as well as to noise added to the measurements; both types of errors arise naturally in practical systems.
  • Speed.   Sparse recovery algorithms should strive to expend minimal computational resources, keeping in mind that many applications in CS deal with very high-dimensional signals.
  • Performance guarantees.   In previous chapters, we have already seen a range of performance guarantees that hold for sparse signal recovery using ℓ1 minimization. In evaluating other algorithms, we will have the same considerations. For example, we can choose to design algorithms that possess instance-optimal or probabilistic guarantees. We can also choose to focus on algorithm performance for the recovery of exactly K-sparse signals x, or consider performance for the recovery of general signals x. Alternately, we can consider algorithms that are accompanied by performance guarantees in either the noise-free or noisy settings.

A multitude of algorithms satisfying some (or even all) of the above have been proposed in the literature. While it is impossible to describe all of them in this chapter, we refer the interested reader to the DSP resources webpage for a more complete list of recovery algorithms. Broadly speaking, recovery methods tend to fall under three categories: convex optimization-based approaches, greedy methods, and combinatorial techniques. The rest of the chapter discusses several properties and example algorithms of each flavor of CS reconstruction.
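As one example of the greedy flavor, the following is a minimal sketch of orthogonal matching pursuit (OMP), which iteratively selects the column of Φ most correlated with the current residual and re-fits by least squares on the enlarged support. The function name, stopping rule (a fixed K iterations), and demo dimensions are illustrative choices, not a definitive implementation.

```python
import numpy as np

def omp(Phi, y, K):
    """Greedy sketch: recover an approximately K-sparse x from y ≈ Phi @ x.

    Each iteration picks the column of Phi most correlated with the
    residual, then solves a least-squares problem on the chosen support.
    """
    M, N = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        # Index of the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the columns selected so far
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat

# Small noise-free demo with an illustrative random Gaussian matrix
rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)
x = np.zeros(100)
x[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x, K=3)
```

The least-squares re-fit at every step is what distinguishes orthogonal matching pursuit from plain matching pursuit: once a column is selected, the residual is kept orthogonal to the span of all selected columns, so no column is fit twice.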
