# Matrices that satisfy the RIP

Module by: Mark A. Davenport

Summary: This module provides some examples of matrices that satisfy the restricted isometry property (RIP), focusing primarily on random constructions.

We now turn to the question of how to construct matrices that satisfy the restricted isometry property (RIP). It is possible to deterministically construct matrices of size $M \times N$ that satisfy the RIP of order $K$, but such constructions also require $M$ to be relatively large [3], [5]. For example, the construction in [3] requires $M = O(K^2 \log N)$ while the construction in [5] requires $M = O(K N^\alpha)$ for some constant $\alpha$. In many real-world settings, these results would lead to an unacceptably large requirement on $M$.

Fortunately, these limitations can be overcome by randomizing the matrix construction. We will construct our random matrices as follows: given $M$ and $N$, generate random matrices $\Phi$ by choosing the entries $\varphi_{ij}$ as independent realizations from some probability distribution. We begin by observing that if all we require is that $\delta_{2K} > 0$, then we may set $M = 2K$ and draw $\Phi$ according to a Gaussian distribution. With probability 1, any subset of $2K$ columns will be linearly independent, and hence the squared singular values of any such subset will be bounded below by $1 - \delta_{2K}$ for some $\delta_{2K} > 0$. However, suppose we wish to know the constant $\delta_{2K}$. In order to find its value we must consider all possible $\binom{N}{K}$ $K$-dimensional subspaces of $\mathbb{R}^N$. From a computational perspective, this is impossible for any realistic values of $N$ and $K$. Thus, we focus on the problem of achieving the RIP of order $2K$ for a specified constant $\delta_{2K}$. Our treatment is based on the simple approach first described in [1] and later generalized to a larger class of random matrices in [8].
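As a brief numerical sketch (ours, not part of the original module), the $M = 2K$ Gaussian construction can be checked directly: `gaussian_phi` is a hypothetical helper name, and we use entries of variance $1/M$, consistent with the normalization adopted below.

```python
import numpy as np

def gaussian_phi(M, N, seed=None):
    """Hypothetical helper: M x N matrix with i.i.d. N(0, 1/M) entries."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

K, N = 5, 50
Phi = gaussian_phi(2 * K, N, seed=0)

# With probability 1, any fixed subset of 2K columns of a Gaussian
# matrix with M = 2K rows is linearly independent (full column rank).
subset = Phi[:, :2 * K]
assert np.linalg.matrix_rank(subset) == 2 * K
```

Any other fixed subset of $2K$ columns would pass the same rank check, which is the sense in which $\delta_{2K} > 0$ holds almost surely.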

To ensure that the matrix will satisfy the RIP, we will impose two conditions on the random distribution. First, we require that the distribution yields a matrix that is norm-preserving, which requires that

$$E\left(\varphi_{ij}^2\right) = \frac{1}{M},$$
(1)

and hence the variance of the distribution is $1/M$. Second, we require that the distribution is sub-Gaussian, meaning that there exists a constant $c > 0$ such that

$$E\left(e^{\varphi_{ij} t}\right) \le e^{c^2 t^2 / 2}$$
(2)

for all $t \in \mathbb{R}$. This says that the moment-generating function of our distribution is dominated by that of a Gaussian distribution, which is equivalent to requiring that the tails of our distribution decay at least as fast as the tails of a Gaussian distribution. Examples of sub-Gaussian distributions include the Gaussian distribution, the Bernoulli distribution taking values $\pm 1/\sqrt{M}$, and more generally any distribution with bounded support. See "Sub-Gaussian random variables" for more details.
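As an illustrative sketch (our addition, with arbitrarily chosen dimensions), the two ensembles just mentioned can be generated so that both satisfy the variance condition $E(\varphi_{ij}^2) = 1/M$ of Equation 1:

```python
import numpy as np

M, N = 128, 512
rng = np.random.default_rng(0)

# Gaussian ensemble: i.i.d. N(0, 1/M) entries.
Phi_gauss = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# Bernoulli ensemble: entries +1/sqrt(M) or -1/sqrt(M) with equal probability.
Phi_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

# Both satisfy E(phi_ij^2) = 1/M, so E||Phi x||_2^2 = ||x||_2^2.
print(np.mean(Phi_gauss**2))   # approximately 1/M
print(np.mean(Phi_bern**2))    # exactly 1/M
```

The Bernoulli ensemble has bounded support, so it is sub-Gaussian; the Gaussian ensemble satisfies Equation 2 with equality.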

For the moment, we will actually assume a bit more than sub-Gaussianity. Specifically, we will assume that the entries of $\Phi$ are strictly sub-Gaussian, which means that they satisfy Equation 2 with $c^2 = E(\varphi_{ij}^2) = \frac{1}{M}$. Similar results to the following would hold for general sub-Gaussian distributions, but to simplify the constants we restrict our present attention to the strictly sub-Gaussian $\Phi$. In this case we have the following useful result, which is proven in "Concentration of measure for sub-Gaussian random variables".

## Corollary 1

Suppose that $\Phi$ is an $M \times N$ matrix whose entries $\varphi_{ij}$ are i.i.d., drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. Let $Y = \Phi x$ for $x \in \mathbb{R}^N$. Then for any $\epsilon > 0$ and any $x \in \mathbb{R}^N$,

$$E\left(\|Y\|_2^2\right) = \|x\|_2^2$$
(3)

and

$$P\left( \left| \|Y\|_2^2 - \|x\|_2^2 \right| \ge \epsilon \|x\|_2^2 \right) \le 2 \exp\left( -\frac{M \epsilon^2}{\kappa^*} \right)$$
(4)

with $\kappa^* = 2/(1 - \log(2)) \approx 6.52$.

This tells us that the norm of a sub-Gaussian random vector strongly concentrates about its mean. Using this result, in "Proof of the RIP for sub-Gaussian matrices" we provide a simple proof based on that in [1] that sub-Gaussian matrices satisfy the RIP.
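To make the concentration statement concrete, the following sketch (our illustration, with arbitrarily chosen $M$, $N$, and $\epsilon$) compares the empirical frequency of large deviations against the bound in Equation 4 for the Gaussian ensemble:

```python
import numpy as np

M, N, trials, eps = 128, 256, 500, 0.2
kappa_star = 2.0 / (1.0 - np.log(2.0))   # approximately 6.52
rng = np.random.default_rng(1)

x = rng.normal(size=N)
x /= np.linalg.norm(x)                   # unit-norm test vector

deviations = 0
for _ in range(trials):
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    Y = Phi @ x
    if abs(Y @ Y - 1.0) >= eps:          # | ||Y||^2 - ||x||^2 | >= eps ||x||^2
        deviations += 1

empirical = deviations / trials
bound = 2.0 * np.exp(-M * eps**2 / kappa_star)
# The empirical failure frequency should fall below the bound of Equation 4.
print(empirical, "<=", round(bound, 3))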

## Theorem 1

Fix $\delta \in (0,1)$. Let $\Phi$ be an $M \times N$ random matrix whose entries $\varphi_{ij}$ are i.i.d., drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. If

$$M \ge \kappa_1 K \log\left(\frac{N}{K}\right),$$
(5)

then $\Phi$ satisfies the RIP of order $K$ with the prescribed $\delta$ with probability exceeding $1 - 2e^{-\kappa_2 M}$, where $\kappa_1$ is arbitrary and $\kappa_2 = \delta^2 / 2\kappa^* - \log(42 e / \delta) / \kappa_1$.

In light of the measurement bounds in "The restricted isometry property", we see that Equation 5 achieves the optimal number of measurements (up to a constant).
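For a rough sense of scale, the following sketch (ours; the constants $\kappa_1$ and the big-$O$ constants are unspecified in the results above, so we arbitrarily set them to 1) compares the scaling of Equation 5 with the deterministic $O(K^2 \log N)$ requirement from [3]:

```python
import numpy as np

N = 10**6
for K in (10, 100, 1000):
    m_random = K * np.log(N / K)    # Equation 5 scaling (kappa_1 set to 1)
    m_determ = K**2 * np.log(N)     # deterministic scaling from [3] (constant 1)
    print(K, round(m_random), round(m_determ))
```

Even with all constants set to 1, the gap between $K \log(N/K)$ and $K^2 \log N$ grows rapidly with $K$, which is why the deterministic constructions are impractical at realistic problem sizes.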

Using random matrices to construct $\Phi$ has a number of additional benefits. To illustrate these, we will focus on the RIP. First, one can show that for random constructions the measurements are democratic, meaning that it is possible to recover a signal using any sufficiently large subset of the measurements [4], [6]. Thus, by using a random $\Phi$ one can be robust to the loss or corruption of a small fraction of the measurements. Second, and perhaps more significantly, in practice we are often more interested in the setting where $x$ is sparse with respect to some basis $\Psi$. In this case what we actually require is that the product $\Phi\Psi$ satisfies the RIP. If we were to use a deterministic construction then we would need to explicitly take $\Psi$ into account in our construction of $\Phi$, but when $\Phi$ is chosen randomly we can avoid this consideration. For example, if $\Phi$ is chosen according to a Gaussian distribution and $\Psi$ is an orthonormal basis, then one can easily show that $\Phi\Psi$ will also have a Gaussian distribution, and so provided that $M$ is sufficiently large, $\Phi\Psi$ will satisfy the RIP with high probability, just as before. Although less obvious, similar results hold for sub-Gaussian distributions as well [1]. This property, sometimes referred to as universality, constitutes a significant advantage of using random matrices to construct $\Phi$.
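The universality claim for the Gaussian case can be checked numerically. This sketch (our addition) builds an arbitrary orthonormal basis $\Psi$ via a QR factorization and verifies that $\Phi\Psi$ still has entries with variance about $1/M$, as rotation invariance of the Gaussian predicts:

```python
import numpy as np

M, N = 64, 256
rng = np.random.default_rng(2)

# Gaussian measurement matrix with i.i.d. N(0, 1/M) entries.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# An arbitrary orthonormal basis Psi (columns of Q from a QR factorization).
Psi, _ = np.linalg.qr(rng.normal(size=(N, N)))

# Each row of Phi is N(0, I/M); multiplying by the orthonormal Psi rotates
# it without changing its distribution, so Phi @ Psi is again Gaussian
# with entrywise variance 1/M.
A = Phi @ Psi
print(np.mean(A**2) * M)   # approximately 1
```

The same experiment with any other orthonormal $\Psi$ (e.g., a DCT basis) gives the same result, which is the practical content of universality.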

Finally, we note that since the fully random matrix approach is sometimes impractical to build in hardware, several hardware architectures have been implemented and/or proposed that enable random measurements to be acquired in practical settings. Examples include the random demodulator [11], random filtering [12], the modulated wideband converter [7], random convolution [2], [9], and the compressive multiplexer [10]. These architectures typically use a reduced amount of randomness and are modeled via matrices ΦΦ that have significantly more structure than a fully random matrix. Perhaps somewhat surprisingly, while it is typically not quite as easy as in the fully random case, one can prove that many of these constructions also satisfy the RIP.

## References

1. Baraniuk, R. and Davenport, M. and DeVore, R. and Wakin, M. (2008). A simple proof of the restricted isometry property for random matrices. Const. Approx., 28(3), 253–263.
2. Bajwa, W. and Haupt, J. and Raz, G. and Wright, S. and Nowak, R. (2007, Aug.). Toeplitz-structured compressed sensing matrices. In Proc. IEEE Work. Stat. Signal Processing. Madison, WI
3. DeVore, R. (2007). Deterministic Constructions of Compressed Sensing Matrices. J. Complex., 23(4), 918–925.
4. Davenport, M. and Laska, J. and Boufounos, P. and Baraniuk, R. (2009, Nov.). A simple proof that random matrices are democratic. (TREE 0906). Technical report. Rice Univ., ECE Dept.
5. Indyk, P. (2008, Jan.). Explicit constructions for compressed sensing of sparse signals. In Proc. ACM-SIAM Symp. on Discrete Algorithms (SODA). (p. 30–33).
6. Laska, J. and Boufounos, P. and Davenport, M. and Baraniuk, R. (2009). Democracy in action: Quantization, saturation, and compressive sensing. [Preprint].
7. Mishali, M. and Eldar, Y. C. (2010). From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals. IEEE J. Select. Top. Signal Processing, 4(2), 375–391.
8. Mendelson, S. and Pajor, A. and Tomczak-Jaegermann, N. (2008). Uniform uncertainty principle for Bernoulli and subgaussian ensembles. Const. Approx., 28(3), 277–289.
9. Romberg, J. (2009). Compressive sensing by random convolution. SIAM J. Imag. Sci., 2(4), 1098–1128.
10. Slavinsky, J. P. and Laska, J. and Davenport, M. and Baraniuk, R. (2011, May). The compressive multiplexer for multi-channel compressive sensing. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP). Prague, Czech Republic
11. Tropp, J. and Laska, J. and Duarte, M. and Romberg, J. and Baraniuk, R. (2010). Beyond Nyquist: Efficient sampling of sparse, bandlimited signals. IEEE Trans. Inform. Theory, 56(1), 520–544.
12. Tropp, J. and Wakin, M. and Duarte, M. and Baron, D. and Baraniuk, R. (2006, May). Random filters for compressive sampling and reconstruction. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP). Toulouse, France
