# Noise-free signal recovery

Module by: Mark A. Davenport

Summary: This module establishes a simple performance guarantee of $\ell_1$ minimization for signal recovery from noise-free measurements.

We now begin our analysis of

$$\hat{x} = \mathop{\arg\min}_{z} \; \|z\|_1 \quad \text{subject to} \quad z \in \mathcal{B}(y) \tag{1}$$

for various specific choices of $\mathcal{B}(y)$. In order to do so, we require the following general result, which builds on Lemma 4 from "$\ell_1$ minimization proof". The key ideas in this proof follow from [1].

## Lemma 1

Suppose that $\Phi$ satisfies the restricted isometry property (RIP) of order $2K$ with $\delta_{2K} < \sqrt{2} - 1$. Let $x, \hat{x} \in \mathbb{R}^N$ be given, and define $h = \hat{x} - x$. Let $\Lambda_0$ denote the index set corresponding to the $K$ entries of $x$ with largest magnitude and $\Lambda_1$ the index set corresponding to the $K$ entries of $h_{\Lambda_0^c}$ with largest magnitude. Set $\Lambda = \Lambda_0 \cup \Lambda_1$. If $\|\hat{x}\|_1 \le \|x\|_1$, then

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2} \tag{2}$$

where

$$C_0 = 2\,\frac{1 - (1 - \sqrt{2})\delta_{2K}}{1 - (1 + \sqrt{2})\delta_{2K}}, \qquad C_1 = \frac{2}{1 - (1 + \sqrt{2})\delta_{2K}}. \tag{3}$$

### Proof

We begin by observing that $h = h_\Lambda + h_{\Lambda^c}$, so that from the triangle inequality

$$\|h\|_2 \le \|h_\Lambda\|_2 + \|h_{\Lambda^c}\|_2. \tag{4}$$

We first aim to bound $\|h_{\Lambda^c}\|_2$. From Lemma 3 from "$\ell_1$ minimization proof" we have

$$\|h_{\Lambda^c}\|_2 = \Big\| \sum_{j \ge 2} h_{\Lambda_j} \Big\|_2 \le \sum_{j \ge 2} \|h_{\Lambda_j}\|_2 \le \frac{\|h_{\Lambda_0^c}\|_1}{\sqrt{K}}, \tag{5}$$

where the $\Lambda_j$ are defined as before, i.e., $\Lambda_1$ is the index set corresponding to the $K$ largest entries of $h_{\Lambda_0^c}$ (in absolute value), $\Lambda_2$ the index set corresponding to the next $K$ largest entries, and so on.
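As an illustrative sanity check (my addition, not part of the original module), the chain of inequalities in Equation 5 can be verified numerically for a random vector $h$: sort the entries by magnitude, carve the complement of $\Lambda_0$ into blocks of size $K$, and compare the three quantities. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 5
h = rng.standard_normal(N)

# Lambda_0: the K largest-magnitude entries; then partition the rest
# into Lambda_1, Lambda_2, ... (blocks of size K, decreasing magnitude).
order = np.argsort(-np.abs(h))
tail = order[K:]                      # indices in Lambda_0^c, sorted by magnitude
blocks = [tail[i:i + K] for i in range(0, len(tail), K)]

# Lambda = Lambda_0 U Lambda_1, so Lambda^c = Lambda_2 U Lambda_3 U ...
lam_c = np.concatenate(blocks[1:])
lhs = np.linalg.norm(h[lam_c])                        # ||h_{Lambda^c}||_2
mid = sum(np.linalg.norm(h[b]) for b in blocks[1:])   # sum_{j>=2} ||h_{Lambda_j}||_2
rhs = np.linalg.norm(h[tail], 1) / np.sqrt(K)         # ||h_{Lambda_0^c}||_1 / sqrt(K)
print(lhs <= mid <= rhs)  # True
```

The first inequality is just the triangle inequality over the disjoint blocks; the second is the content of the cited lemma, and it depends on the blocks being sorted by decreasing magnitude.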

We now wish to bound $\|h_{\Lambda_0^c}\|_1$. Since $\|x\|_1 \ge \|\hat{x}\|_1$, by applying the triangle inequality we obtain

$$\|x\|_1 \ge \|x + h\|_1 = \|x_{\Lambda_0} + h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c} + h_{\Lambda_0^c}\|_1 \ge \|x_{\Lambda_0}\|_1 - \|h_{\Lambda_0}\|_1 + \|h_{\Lambda_0^c}\|_1 - \|x_{\Lambda_0^c}\|_1. \tag{6}$$

Rearranging and again applying the triangle inequality,

$$\|h_{\Lambda_0^c}\|_1 \le \|x\|_1 - \|x_{\Lambda_0}\|_1 + \|h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c}\|_1 \le \|x - x_{\Lambda_0}\|_1 + \|h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c}\|_1. \tag{7}$$

Recalling that $\sigma_K(x)_1 = \|x_{\Lambda_0^c}\|_1 = \|x - x_{\Lambda_0}\|_1$,

$$\|h_{\Lambda_0^c}\|_1 \le \|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1. \tag{8}$$

Combining this with Equation 5 we obtain

$$\|h_{\Lambda^c}\|_2 \le \frac{\|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1}{\sqrt{K}} \le \|h_{\Lambda_0}\|_2 + 2\frac{\sigma_K(x)_1}{\sqrt{K}}, \tag{9}$$

where the last inequality follows from standard bounds on $\ell_p$ norms (Lemma 1 from "The RIP and the NSP"). By observing that $\|h_{\Lambda_0}\|_2 \le \|h_\Lambda\|_2$, this combines with Equation 4 to yield

$$\|h\|_2 \le 2\|h_\Lambda\|_2 + 2\frac{\sigma_K(x)_1}{\sqrt{K}}. \tag{10}$$
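The "standard bounds on $\ell_p$ norms" invoked above amount to the fact that any vector $u$ supported on at most $K$ indices satisfies $\|u\|_1 \le \sqrt{K}\,\|u\|_2$, a direct consequence of Cauchy-Schwarz. A quick numerical illustration (my addition, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 8
u = rng.standard_normal(K)  # stands in for a vector supported on K indices

# Cauchy-Schwarz: ||u||_1 = <|u|, 1> <= ||u||_2 * ||1||_2 = sqrt(K) * ||u||_2
print(np.linalg.norm(u, 1) <= np.sqrt(K) * np.linalg.norm(u))  # True
```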

We now turn to establishing a bound for $\|h_\Lambda\|_2$. Combining Lemma 4 from "$\ell_1$ minimization proof" with Equation 8 and again applying standard bounds on $\ell_p$ norms, we obtain

$$\begin{aligned} \|h_\Lambda\|_2 &\le \alpha \frac{\|h_{\Lambda_0^c}\|_1}{\sqrt{K}} + \beta \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2} \\ &\le \alpha \frac{\|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2} \\ &\le \alpha \|h_{\Lambda_0}\|_2 + 2\alpha \frac{\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2}. \end{aligned} \tag{11}$$

Since $\|h_{\Lambda_0}\|_2 \le \|h_\Lambda\|_2$,

$$(1 - \alpha)\|h_\Lambda\|_2 \le 2\alpha \frac{\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2}. \tag{12}$$

The assumption that $\delta_{2K} < \sqrt{2} - 1$ ensures that $\alpha < 1$. Dividing by $(1 - \alpha)$ and combining with Equation 10 results in

$$\|h\|_2 \le \left(\frac{4\alpha}{1 - \alpha} + 2\right) \frac{\sigma_K(x)_1}{\sqrt{K}} + \frac{2\beta}{1 - \alpha} \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2}. \tag{13}$$

Plugging in for $\alpha$ and $\beta$ yields the desired constants.
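As a quick numeric aside (my addition, not part of the original module), the constants in Equation 3 can be evaluated for sample values of $\delta_{2K}$; note that they blow up as $\delta_{2K}$ approaches $\sqrt{2} - 1 \approx 0.4142$, which is where the guarantee degrades:

```python
import math

def l1_recovery_constants(delta_2k):
    """Evaluate C0 and C1 from Equation 3 for a given RIP constant delta_2K.

    Valid only for 0 <= delta_2K < sqrt(2) - 1 (approx. 0.4142), where the
    denominator 1 - (1 + sqrt(2)) * delta_2K stays positive.
    """
    if not 0 <= delta_2k < math.sqrt(2) - 1:
        raise ValueError("requires 0 <= delta_2K < sqrt(2) - 1")
    denom = 1 - (1 + math.sqrt(2)) * delta_2k
    c0 = 2 * (1 - (1 - math.sqrt(2)) * delta_2k) / denom
    c1 = 2 / denom
    return c0, c1

for d in (0.0, 0.1, 0.2, 0.4):
    c0, c1 = l1_recovery_constants(d)
    print(f"delta_2K = {d:.1f}: C0 = {c0:.3f}, C1 = {c1:.3f}")
```

At $\delta_{2K} = 0$ both constants equal $2$, and they grow monotonically from there.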

Lemma 1 establishes an error bound for the class of $\ell_1$ minimization algorithms described by Equation 1 when combined with a measurement matrix $\Phi$ satisfying the RIP. In order to obtain specific bounds for concrete examples of $\mathcal{B}(y)$, we must examine how requiring $\hat{x} \in \mathcal{B}(y)$ affects $\langle \Phi h_\Lambda, \Phi h \rangle$. As an example, in the case of noise-free measurements we obtain the following theorem.

## Theorem 1 (Theorem 1.1 of [1])

Suppose that $\Phi$ satisfies the RIP of order $2K$ with $\delta_{2K} < \sqrt{2} - 1$ and we obtain measurements of the form $y = \Phi x$. Then when $\mathcal{B}(y) = \{z : \Phi z = y\}$, the solution $\hat{x}$ to Equation 1 obeys

$$\|\hat{x} - x\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}}. \tag{14}$$

### Proof

Since $x \in \mathcal{B}(y)$, we can apply Lemma 1 to obtain that for $h = \hat{x} - x$,

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{|\langle \Phi h_\Lambda, \Phi h \rangle|}{\|h_\Lambda\|_2}. \tag{15}$$

Furthermore, since $x, \hat{x} \in \mathcal{B}(y)$ we also have that $y = \Phi x = \Phi \hat{x}$ and hence $\Phi h = 0$. Therefore the second term vanishes, and we obtain the desired result.

Theorem 1 is rather remarkable. By considering the case where $x \in \Sigma_K = \{x : \|x\|_0 \le K\}$, we can see that provided $\Phi$ satisfies the RIP (which, as shown earlier, allows for as few as $O(K \log(N/K))$ measurements), we can recover any $K$-sparse $x$ *exactly*. This result seems improbable on its own, and so one might expect that the procedure would be highly sensitive to noise, but we will see next that Lemma 1 can also be used to demonstrate that this approach is actually stable.
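To make the exact-recovery claim concrete, here is a small sketch (my construction, not from the module) that solves Equation 1 with $\mathcal{B}(y) = \{z : \Phi z = y\}$ as a linear program: minimizing $\|z\|_1$ subject to $\Phi z = y$ is equivalent to minimizing $\sum_i t_i$ over $(z, t)$ subject to $-t \le z \le t$ and $\Phi z = y$. With a random Gaussian $\Phi$ and $M$ comfortably above $K \log(N/K)$, a $K$-sparse $x$ is typically recovered to within solver tolerance. Assumes NumPy and SciPy; the dimensions are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M, K = 64, 32, 4

# K-sparse ground truth and random Gaussian measurement matrix.
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Basis pursuit: min ||z||_1 s.t. Phi z = y, as an LP over (z, t):
#   minimize sum(t)  s.t.  z - t <= 0,  -z - t <= 0,  Phi z = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
assert res.success
x_hat = res.x[:N]
print("recovery error:", np.linalg.norm(x_hat - x))
```

On a typical run the reported error is tiny (limited only by LP solver accuracy); recovery can fail if $M$ is pushed down toward the information-theoretic limit, which is consistent with the probabilistic nature of the RIP for random matrices.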

Note that Theorem 1 assumes that $\Phi$ satisfies the RIP. One could easily modify the argument to replace this with the assumption that $\Phi$ satisfies the null space property (NSP) instead. Specifically, if we are only interested in the noiseless setting, in which case $h$ lies in the null space of $\Phi$, then Lemma 1 simplifies and its proof could be broken into two steps: (i) show that if $\Phi$ satisfies the RIP then it satisfies the NSP (as shown in "The RIP and the NSP"), and (ii) show that the NSP implies the simplified version of Lemma 1. This proof directly mirrors that of Lemma 1. Thus, by the same argument as in the proof of Theorem 1, it is straightforward to show that if $\Phi$ satisfies the NSP then it will obey the same error bound.

## References

1. Candès, E. (2008). The restricted isometry property and its implications for compressed sensing. Comptes rendus de l'Académie des Sciences, Série I, 346(9-10), 589–592.
