Greedy pursuit algorithms (such as MP and OMP) alleviate the computational complexity of optimization-based sparse recovery, but forgo the associated strong guarantees for uniform signal recovery, given a requisite number of measurements of the signal. In addition, it is unknown whether these greedy algorithms are robust to signal and/or measurement noise.

There have been some recent attempts to develop greedy algorithms (Regularized OMP [12], [13], Compressive Sampling Matching Pursuit (CoSaMP) [11], and Subspace Pursuit [4]) that bridge this gap between uniformity and complexity. Intriguingly, the restricted isometry property (RIP), developed in the context of analyzing ℓ1 minimization, plays a central role in such algorithms. Indeed, if the matrix Φ satisfies the RIP of order K, then every subset of K columns of the matrix is approximately orthonormal. This property is used to prove strong convergence results for these greedy-like methods.
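The near-orthonormality of column subsets can be observed numerically. The sketch below (an empirical illustration, not a proof; the matrix sizes and number of sampled subsets are arbitrary choices) draws a random Gaussian Φ and checks how far the squared singular values of random K-column submatrices stray from 1:

```python
import numpy as np

# Empirical illustration of RIP-style behavior: for a random Gaussian Phi
# whose columns have unit norm in expectation, the singular values of any
# K-column submatrix concentrate near 1, i.e. those columns are nearly
# orthonormal. Sizes below are illustrative, not from the text.
rng = np.random.default_rng(1)
M, N, K = 128, 512, 8
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # E[||column||^2] = 1

worst_dev = 0.0
for _ in range(200):  # sample random K-column subsets
    cols = rng.choice(N, size=K, replace=False)
    s = np.linalg.svd(Phi[:, cols], compute_uv=False)
    # RIP of order K would bound max |s^2 - 1| over ALL subsets by delta_K;
    # here we only track the worst deviation over the sampled subsets.
    worst_dev = max(worst_dev, float(np.max(np.abs(s**2 - 1))))

print(f"largest observed |sigma^2 - 1| over sampled subsets: {worst_dev:.3f}")
```

A certified RIP constant would require checking all C(N, K) subsets, which is intractable; random sampling only gives a lower bound on the true δ_K.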

One variant of such an approach is employed by the CoSaMP algorithm. An interesting feature of CoSaMP is that, unlike in MP, OMP, and StOMP, new indices can be added *as well as deleted* from the current set of chosen indices. In contrast, greedy pursuit algorithms suffer from the fact that a chosen index (or equivalently, a chosen atom from the dictionary Φ) remains in the signal representation until the end. A brief description of CoSaMP is as follows: at the start of a given iteration i, suppose the signal estimate is x̂_{i-1}.

- Form signal residual estimate: e ← Φ^T r.
- Find the largest 2K coefficients of the signal residual e; call this set of indices Ω.
- Merge supports: T ← Ω ∪ supp(x̂_{i-1}).
- Form signal estimate b by subspace projection: b|_T ← Φ_T† y, b|_{T^C} ← 0.
- Prune b by retaining its K largest coefficients; call this new estimate x̂_i.
- Update measurement residual: r ← y - Φx̂_i.

This procedure is summarized in pseudocode form below.

```
Inputs: Measurement matrix Φ, measurements y, signal sparsity K
Output: K-sparse approximation x̂ to true signal representation x
Initialize: x̂_0 = 0, r = y, i = 0

while halting criterion false do
  1. i ← i + 1
  2. e ← Φ^T r                     {form signal residual estimate}
  3. Ω ← supp(T(e, 2K))            {prune signal residual estimate}
  4. T ← Ω ∪ supp(x̂_{i-1})         {merge supports}
  5. b|_T ← Φ_T† y, b|_{T^C} ← 0   {form signal estimate}
  6. x̂_i ← T(b, K)                 {prune signal estimate}
  7. r ← y - Φx̂_i                  {update measurement residual}
end while

return x̂ ← x̂_i
```
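The steps above can be sketched directly in NumPy. In this sketch the function name, the stopping parameters `max_iter` and `tol`, and the test dimensions are illustrative choices, and a direct least-squares solve stands in for the subspace projection step (an efficient implementation would use an iterative solver):

```python
import numpy as np

def cosamp(Phi, y, K, max_iter=50, tol=1e-10):
    """Sketch of CoSaMP: recover a K-sparse x from y = Phi @ x.

    Phi : (M, N) measurement matrix
    y   : (M,) measurement vector
    K   : assumed sparsity of the target signal
    """
    M, N = Phi.shape
    x_hat = np.zeros(N)
    r = y.astype(float).copy()
    for _ in range(max_iter):
        # Form signal residual estimate (proxy): e = Phi^T r
        e = Phi.T @ r
        # Indices of the 2K largest-magnitude proxy coefficients
        Omega = np.argsort(np.abs(e))[-2 * K:]
        # Merge with the support of the current estimate
        T = np.union1d(Omega, np.flatnonzero(x_hat))
        # Subspace projection: least squares on the merged support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
        # Prune to the K largest coefficients
        x_hat = np.zeros(N)
        keep = np.argsort(np.abs(b))[-K:]
        x_hat[keep] = b[keep]
        # Update measurement residual; stop once it is negligible
        r = y - Phi @ x_hat
        if np.linalg.norm(r) < tol:
            break
    return x_hat

# Small noiseless demo on a random Gaussian matrix (sizes are arbitrary)
rng = np.random.default_rng(0)
M, N, K = 80, 200, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)
y = Phi @ x
x_rec = cosamp(Phi, y, K)
```

Note how the pruning step can evict indices selected in earlier iterations, which is precisely the add-and-delete behavior that distinguishes CoSaMP from MP and OMP.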

As discussed in [11], the key computational issues for CoSaMP are the formation of the signal residual estimate and the method used for subspace projection in the signal estimation step. Under certain general assumptions, the computational cost of CoSaMP can be shown to be O(MN), which is *independent* of the sparsity of the original signal. This represents an improvement over both greedy algorithms and convex methods.

While CoSaMP arguably represents the state of the art in sparse recovery algorithm performance, it possesses one drawback: the algorithm requires prior knowledge of the sparsity K of the target signal. An incorrect choice of input sparsity may lead to a worse guarantee than the actual error incurred by a weaker algorithm such as OMP. The stability bounds accompanying CoSaMP ensure that the error due to an incorrect parameter choice is bounded, but it is not yet known how these bounds translate into practice.