Compressed Sensing

Module by: Michael Wakin

Summary: This collection reviews fundamental concepts underlying the use of concise models for signal processing. Topics are presented from a geometric perspective and include low-dimensional linear, sparse, and manifold-based signal models, approximation, compression, dimensionality reduction, and Compressed Sensing.

A new theory known as Compressed Sensing (CS) has recently emerged that can also be categorized as a type of dimensionality reduction. Like manifold learning, CS is strongly model-based (relying on sparsity in particular). However, unlike many of the standard techniques in dimensionality reduction (such as manifold learning or the JL lemma), the goal of CS is to maintain a low-dimensional representation of a signal x from which a faithful approximation to x can be recovered. In a sense, this more closely resembles the traditional problem of data compression (see Compression). In CS, however, the encoder requires no a priori knowledge of the signal structure. Only the decoder uses the model (sparsity) to recover the signal. We justify such an approach again using geometric arguments.

Motivation

Consider a signal x ∈ R^N, and suppose that the basis Ψ provides a K-sparse representation of x

x = Ψα,
(1)
with ||α||_0 = K. (In this section, we focus on exactly K-sparse signals, though many of the key ideas translate to compressible signals [14], [18]. In addition, we note that the CS concepts are also extendable to tight frames.)

As we discussed in Compression, the standard procedure for compressing sparse signals, known as transform coding, is to (i) acquire the full N-sample signal x; (ii) compute the complete set of transform coefficients α; (iii) locate the K largest, significant coefficients and discard the (many) small coefficients; (iv) encode the values and locations of the largest coefficients.
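To make the procedure concrete, here is a minimal sketch in Python (an illustration added for this discussion; the synthetic signal, the choice of the DCT as Ψ, and the value of K are assumptions, not part of the original module):

    import numpy as np
    from scipy.fftpack import dct, idct

    N, K = 1024, 32
    rng = np.random.default_rng(0)

    # (i) Acquire the full N-sample signal x (synthetic here, built to be K-sparse in the DCT basis).
    alpha_true = np.zeros(N)
    alpha_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x = idct(alpha_true, norm='ortho')            # x = Psi alpha, with Psi the inverse DCT

    # (ii) Compute the complete set of N transform coefficients.
    alpha = dct(x, norm='ortho')

    # (iii) Locate the K largest coefficients; the rest are discarded.
    locations = np.argsort(np.abs(alpha))[-K:]

    # (iv) Encode the values and locations of the K largest coefficients.
    encoded = {'locations': locations, 'values': alpha[locations]}
    print(f"kept {K} of {N} coefficients")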

This procedure has three inherent inefficiencies: First, for a high-dimensional signal, we must start with a large number of samples N. Second, the encoder must compute all N of the transform coefficients α, even though it will discard all but K of them. Third, the encoder must encode the locations of the large coefficients, which requires increasing the coding rate since the locations change with each signal.

Incoherent projections

This raises a simple question: For a given signal, is it possible to directly estimate the set of large α(n)'s that will not be discarded? While this seems improbable, Candès, Romberg, and Tao [10], [14] and Donoho [18] have shown that a reduced set of projections can contain enough information to reconstruct sparse signals. An offshoot of this work, often referred to as Compressed Sensing (CS) [9], [14], [11], [12], [8], [18], [19], has emerged that builds on this principle.

In CS, we do not measure or encode the K significant α(n) directly. Rather, we measure and encode M < N projections y(m) = ⟨x, φ_m^T⟩ of the signal onto a second set of functions {φ_m}, m = 1, 2, ..., M. In matrix notation, we measure

y = Φx,
(2)
where y is an M×1 column vector and the measurement basis matrix Φ is M×N with each row a basis vector φ_m. Since M < N, recovery of the signal x from the measurements y is ill-posed in general; however, the additional assumption of signal sparsity makes recovery possible and practical.
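A minimal sketch of Equation 2 (an added illustration; the dimensions and the Gaussian choice of Φ are assumptions for demonstration):

    import numpy as np

    N, M, K = 1024, 128, 16
    rng = np.random.default_rng(1)

    # A K-sparse signal (for simplicity, sparse in the canonical basis, i.e. Psi = I).
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

    # Measurement matrix Phi: M x N, each row an i.i.d. Gaussian vector phi_m.
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)

    # The M measurements y(m) = <x, phi_m>, collected as y = Phi x.
    y = Phi @ x
    print(y.shape)          # (128,): far fewer numbers than the N = 1024 samples of x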

The CS theory tells us that when certain conditions hold, namely that the functions {φ_m} cannot sparsely represent the elements of the basis {ψ_n} (a condition known as incoherence of the two dictionaries [14], [10], [18], [29]) and the number of measurements M is large enough, then it is indeed possible to recover the set of large {α(n)} (and thus the signal x) from a similarly sized set of measurements y. This incoherence property holds for many pairs of bases, including for example, delta spikes and the sine waves of a Fourier basis, or the Fourier basis and wavelets. Significantly, this incoherence also holds with high probability between an arbitrary fixed basis and a randomly generated one.
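As a rough numerical illustration of incoherence (added here, not from the original module), one can compute the mutual coherence, the largest inner product between an element of one basis and an element of the other, for the spike basis and an orthonormal DCT basis; it comes out on the order of √(2/N), close to the smallest possible value:

    import numpy as np
    from scipy.fftpack import idct

    N = 256
    Psi = idct(np.eye(N), norm='ortho', axis=0)   # columns are orthonormal DCT basis vectors
    Phi = np.eye(N)                               # columns are delta spikes

    # Mutual coherence: max |<phi_m, psi_n>| over all pairs of basis vectors.
    mu = np.max(np.abs(Phi.T @ Psi))
    print(mu, np.sqrt(2.0 / N))                   # mu is about sqrt(2/N); the bases are incoherent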

Methods for signal recovery

Although the problem of recovering x from y is ill-posed in general (because x ∈ R^N, y ∈ R^M, and M < N), it is indeed possible to recover sparse signals from CS measurements. Given the measurements y = Φx, there exist infinitely many candidate signals in the shifted nullspace N(Φ) + x that could generate the same measurements y (see Linear Models from Low-Dimensional Signal Models). Recovery of the correct signal x can be accomplished by seeking a sparse solution among these candidates.
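The ambiguity is easy to exhibit numerically: adding any vector from the nullspace of Φ to x leaves the measurements unchanged (a small sketch with illustrative dimensions):

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(2)
    N, M = 32, 8
    Phi = rng.standard_normal((M, N))
    x = rng.standard_normal(N)
    y = Phi @ x

    # Any x' = x + v with v in N(Phi) produces exactly the same measurements.
    V = null_space(Phi)                              # orthonormal basis of the (N-M)-dim nullspace
    x_alt = x + V @ rng.standard_normal(V.shape[1])
    print(np.allclose(Phi @ x_alt, y))               # True: y alone cannot distinguish x from x_alt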

Recovery via combinatorial optimization

Supposing that x is exactly K-sparse in the dictionary Ψ, then recovery of x from y can be formulated as the ℓ_0 minimization

α̂ = arg min ||α||_0   s.t.   y = ΦΨα.
(3)
Given some technical conditions on Φ and Ψ (see Theorem 1 below), then with high probability this optimization problem returns the proper K-sparse solution α, from which the true x may be constructed. (Thanks to the incoherence between the two bases, if the original signal is sparse in the α coefficients, then no other set of sparse signal coefficients α' can yield the same projections y.) We note that the recovery program Equation 3 can be interpreted as finding a K-term approximation to y from the columns of the dictionary ΦΨ, which we call the holographic basis because of the complex pattern in which it encodes the sparse signal coefficients [18].

In principle, remarkably few incoherent measurements are required to recover a K-sparse signal via ℓ_0 minimization. Clearly, more than K measurements must be taken to avoid ambiguity; the following theorem (which is proved in [5]) establishes that K+1 random measurements will suffice. (Similar results were established by Venkataramani and Bresler [31].)

Theorem 1

Let Ψ be an orthonormal basis for R^N, and let 1 ≤ K < N. Then the following statements hold:

  1. Let Φ be an M×N measurement matrix with i.i.d. Gaussian entries, with M ≥ 2K. Then with probability one the following statement holds: all signals x = Ψα having expansion coefficients α ∈ R^N that satisfy ||α||_0 = K can be recovered uniquely from the M-dimensional measurement vector y = Φx via the ℓ_0 optimization Equation 3.
  2. Let x = Ψα such that ||α||_0 = K. Let Φ be an M×N measurement matrix with i.i.d. Gaussian entries (notably, independent of x), with M ≥ K+1. Then with probability one the following statement holds: x can be recovered uniquely from the M-dimensional measurement vector y = Φx via the ℓ_0 optimization Equation 3.
  3. Let Φ be an M×N measurement matrix, where M ≤ K. Then, aside from pathological cases (specified in the proof), no signal x = Ψα with ||α||_0 = K can be uniquely recovered from the M-dimensional measurement vector y = Φx.

The second statement of the theorem differs from the first in the following respect: when K < M < 2K, there will necessarily exist K-sparse signals x that cannot be uniquely recovered from the M-dimensional measurement vector y = Φx. However, these signals form a set of measure zero within the set of all K-sparse signals and can safely be avoided if Φ is randomly generated independently of x.
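For a toy problem, Equation 3 can be solved exactly by exhaustive search: fit y by least squares on every candidate support of size K and keep a support that matches exactly. The sketch below (added for illustration, with assumed tiny dimensions) also makes the difficulty plain, since the number of candidate supports grows combinatorially as C(N, K):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    N, M, K = 12, 5, 2                        # tiny sizes: C(12, 2) = 66 candidate supports
    Phi = rng.standard_normal((M, N))         # Psi = I, so the dictionary Phi Psi is just Phi
    alpha = np.zeros(N)
    alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    y = Phi @ alpha

    # l0 minimization by brute force: try every K-element support.
    recovered = None
    for support in combinations(range(N), K):
        cols = list(support)
        a_s = np.linalg.lstsq(Phi[:, cols], y, rcond=None)[0]
        if np.linalg.norm(Phi[:, cols] @ a_s - y) < 1e-9:
            recovered = cols
            break
    print(recovered, np.flatnonzero(alpha).tolist())   # the recovered support matches the true one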

Unfortunately, as discussed in Nonlinear Approximation from Approximation, solving this ℓ_0 optimization problem is prohibitively complex. Yet another challenge is robustness; in the setting of Theorem 1, the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness) can be addressed, but at the expense of slightly more measurements.

Recovery via convex optimization

The practical revelation that supports the new CS theory is that it is not necessary to solve the ℓ_0-minimization problem to recover α. In fact, a much easier problem yields an equivalent solution (thanks again to the incoherency of the bases); we need only solve for the ℓ_1-sparsest coefficients α that agree with the measurements y [10], [9], [14], [11], [12], [8], [18], [19]

α̂ = arg min ||α||_1   s.t.   y = ΦΨα.
(4)
As discussed in Nonlinear Approximation from Approximation, this optimization problem, also known as Basis Pursuit [7], is significantly more approachable and can be solved with traditional linear programming techniques whose computational complexities are polynomial in N.
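One standard way to pose Equation 4 as a linear program is to write α = u − v with u, v ≥ 0 and minimize the sum of the entries of u and v subject to ΦΨ(u − v) = y. A minimal sketch using scipy's general-purpose LP solver (an added illustration with assumed sizes; dedicated ℓ_1 solvers are far more efficient in practice):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    N, M, K = 128, 48, 8
    A = rng.standard_normal((M, N))            # A = Phi Psi (here Psi = I)
    alpha = np.zeros(N)
    alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    y = A @ alpha

    # Basis Pursuit as an LP: minimize 1'u + 1'v  s.t.  A(u - v) = y,  u >= 0,  v >= 0.
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    alpha_hat = res.x[:N] - res.x[N:]
    print(np.allclose(alpha_hat, alpha, atol=1e-6))   # typically True at this oversampling (M = 6K)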

There is no free lunch, however; according to the theory, more than K+1 measurements are required in order to recover sparse signals via Basis Pursuit. Instead, one typically requires M ≥ cK measurements, where c > 1 is an oversampling factor. As an example, we quote a result asymptotic in N. For simplicity, we assume that the sparsity scales linearly with N; that is, K = SN, where we call S the sparsity rate.

Theorem 2

[13], [20], [17] Set K = SN with 0 < S ≤ 1. Then there exists an oversampling factor c(S) = O(log(1/S)), c(S) > 1, such that, for a K-sparse signal x in the basis Ψ, the following statements hold:

  1. The probability of recovering x via Basis Pursuit from (c(S)+ϵ)K random projections, ϵ > 0, converges to one as N → ∞.
  2. The probability of recovering x via Basis Pursuit from (c(S)-ϵ)K random projections, ϵ > 0, converges to zero as N → ∞.

In an illuminating series of recent papers, Donoho and Tanner [17], [20], [21] have characterized the oversampling factor c(S) precisely (see also "The geometry of Compressed Sensing"). With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust to measurement noise and quantization error [10].

We often use the abbreviated notation c to describe the oversampling factor required in various settings even though c(S) depends on the sparsity K and signal length N.

A CS recovery example on the Cameraman test image is shown in Figure 1. In this case, with M = 4K we achieve near-perfect recovery of the sparse measured image.

Figure 1: Compressive sensing reconstruction of the nonlinear approximation Cameraman image from (Reference)(b). Using M = 16384 random measurements of the K-term nonlinear approximation image (where K = 4096), we solve an ℓ_1-minimization problem to obtain the reconstruction shown above. The MSE with respect to the measured image is 0.08, so the reconstruction is virtually perfect.
Figure 1 (cameraCS.png)

Recovery via greedy pursuit

At the expense of slightly more measurements, iterative greedy algorithms such as Orthogonal Matching Pursuit (OMP) [29], Matching Pursuit (MP) [27], and Tree Matching Pursuit (TMP) [22], [25] have also been proposed to recover the signal x from the measurements y (see Nonlinear Approximation from Approximation). In CS applications, OMP requires c ≈ 2 ln(N) [29] to succeed with high probability. OMP is also guaranteed to converge within M iterations. We note that Tropp and Gilbert require the OMP algorithm to succeed in the first K iterations [29]; however, in our simulations, we allow the algorithm to run up to the maximum of M possible iterations. The choice of an appropriate practical stopping criterion (likely somewhere between K and M iterations) is a subject of current research in the CS community.
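A compact numpy sketch of OMP for this setting (an added illustration with assumed sizes; Ψ is taken as the identity so the dictionary is Φ itself, and scikit-learn's OrthogonalMatchingPursuit provides a production implementation):

    import numpy as np

    def omp(Phi, y, n_iter):
        """Greedy recovery: add one dictionary column per iteration, then re-fit by least squares."""
        N = Phi.shape[1]
        support, residual, coef = [], y.copy(), np.zeros(0)
        for _ in range(n_iter):
            # Select the column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
            residual = y - Phi[:, support] @ coef
            if np.linalg.norm(residual) < 1e-10:
                break
        alpha_hat = np.zeros(N)
        alpha_hat[support] = coef
        return alpha_hat

    rng = np.random.default_rng(5)
    N, M, K = 256, 100, 10
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    alpha = np.zeros(N)
    alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    y = Phi @ alpha
    print(np.allclose(omp(Phi, y, K), alpha, atol=1e-8))   # usually True at these sizes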

Impact and applications

CS appears to be promising for a number of applications in signal acquisition and compression. Instead of sampling a K-sparse signal N times, only cK incoherent measurements suffice, where K can be orders of magnitude less than N. Therefore, a sensor can transmit far fewer measurements to a receiver, which can reconstruct the signal and then process it in any manner. Moreover, the cK measurements need not be manipulated in any way before being transmitted, except possibly for some quantization. Finally, independent and identically distributed (i.i.d.) Gaussian or Bernoulli/Rademacher (random ±1) vectors provide a useful universal basis that is incoherent with all others. Hence, when using a random basis, CS is universal in the sense that the sensor can apply the same measurement mechanism no matter what basis the signal is sparse in (and thus the coding algorithm is independent of the sparsity-inducing basis) [14], [18], [6].

These features of CS make it particularly intriguing for applications in remote sensing environments that might involve low-cost battery-operated wireless sensors, which have limited computational and communication capabilities. Indeed, in many such environments one may be interested in sensing a collection of signals using a network of low-cost sensors.

Other possible application areas of CS include imaging [28], medical imaging [10], [26], and RF environments (where high-bandwidth signals may contain low-dimensional structures such as radar chirps) [15]. As research continues into practical methods for signal recovery (see "Methods for signal recovery" above), additional work has focused on developing physical devices for acquiring random projections. Our group has developed, for example, a prototype digital CS camera based on a digital micromirror design [28]. Additional work suggests that standard components such as filters (with randomized impulse responses) could be useful in CS hardware devices [30].

The geometry of Compressed Sensing

It is important to note that the core theory of CS draws from a number of deep geometric arguments. For example, when viewed together, the CS encoding/decoding process can be interpreted as a linear projection Φ: R^N → R^M followed by a nonlinear mapping Δ: R^M → R^N. In a very general sense, one may naturally ask for a given class of signals F ⊂ R^N (such as the set of K-sparse signals or the set of signals with coefficients ||α||_p ≤ 1), what encoder/decoder pair (Φ, Δ) will ensure the best reconstruction (minimax distortion) of all signals in F. This best-case performance is proportional to what is known as the Gluskin n-width [24], [23] of F (in our setting n = M), which in turn has a geometric interpretation. Roughly speaking, the Gluskin n-width seeks the (N-n)-dimensional slice through F that yields signals of greatest energy. This n-width bounds the best-case performance of CS on classes of compressible signals, and one of the hallmarks of CS is that, given a sufficient number of measurements, this optimal performance is achieved (to within a constant) [18], [16].

Additionally, one may view the ℓ_0/ℓ_1 equivalence problem geometrically. In particular, given the measurements y = Φx, we have an (N-M)-dimensional hyperplane H_y = {x' ∈ R^N : y = Φx'} = N(Φ) + x of feasible signals that could account for the measurements y. Supposing the original signal x is K-sparse, the ℓ_1 recovery program will recover the correct solution x if and only if ||x'||_1 > ||x||_1 for every other signal x' ∈ H_y on the hyperplane. This happens only if the hyperplane H_y (which passes through x) does not “cut into” the ℓ_1-ball of radius ||x||_1. This ℓ_1-ball is a polytope, on which x belongs to a (K-1)-dimensional “face.” If Φ is a random matrix with i.i.d. Gaussian entries, then the hyperplane H_y will have random orientation. To answer the question of how M must relate to K in order to ensure reliable recovery, it helps to observe that a randomly generated hyperplane H will have greater chance to slice into the ℓ_1 ball as dim(H) = N-M grows (or as M shrinks) or as the dimension K-1 of the face on which x lives grows. Such geometric arguments have been made precise by Donoho and Tanner [17], [20], [21] and used to establish a series of sharp bounds on CS recovery.

Connections with dimensionality reduction

We have also identified [6] a fundamental connection between CS and the JL lemma. In order to make this connection, we considered the Restricted Isometry Property (RIP), which has been identified as a key property of the CS projection operator Φ to ensure stable signal recovery. We say Φ has RIP of order K if for every K-sparse signal x,

(1-ϵ) √(M/N) ≤ ||Φx||_2 / ||x||_2 ≤ (1+ϵ) √(M/N).
(5)
A random M×N matrix with i.i.d. Gaussian entries can be shown to have this property with high probability if M = O(K log(N/K)).

While the JL lemma concerns pairwise distances within a finite cloud of points, the RIP concerns isometric embedding of an infinite number of points (comprising a union of K-dimensional subspaces in R^N). However, the RIP can in fact be derived by constructing an effective sampling of K-sparse signals in R^N, using the JL lemma to ensure isometric embeddings for each of these points, and then arguing that the RIP must hold true for all K-sparse signals. (See [6] for the full details.)
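A quick empirical look at Equation 5 (an added sketch with assumed sizes): for a Gaussian Φ with entries of variance 1/N, the ratio ||Φx||_2 / ||x||_2 concentrates around √(M/N) for randomly drawn K-sparse signals. (The RIP itself is a uniform statement over all K-sparse x; this random sampling only suggests it.)

    import numpy as np

    rng = np.random.default_rng(6)
    N, M, K, trials = 512, 128, 8, 1000
    Phi = rng.standard_normal((M, N)) / np.sqrt(N)   # entries ~ N(0, 1/N)
    target = np.sqrt(M / N)

    ratios = []
    for _ in range(trials):
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

    print(target, min(ratios), max(ratios))   # the ratios cluster tightly around sqrt(M/N)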

Stable embeddings of manifolds

Finally, we have also shown that the JL lemma can lead to extensions of CS to other concise signal models. In particular, while conventional CS theory concerns sparse signal models, it is also possible to consider manifold-based signal models. Just as random projections can preserve the low-dimensional geometry (the union of hyperplanes) that corresponds to a sparse signal family, random projections can also guarantee a stable embedding of a low-dimensional signal manifold. We have the following result, which states that an RIP-like property holds for families of manifold-modeled signals.

Theorem 3

Let ℳ be a compact K-dimensional Riemannian submanifold of R^N having condition number 1/τ, volume V, and geodesic covering regularity R. Fix 0 < ϵ < 1 and 0 < ρ < 1. Let Φ be a random M×N orthoprojector with

M = O( K log(N V R τ^(-1) ϵ^(-1)) log(1/ρ) / ϵ^2 )
(6)
If M ≤ N, then with probability at least 1-ρ the following statement holds: For every pair of points x_1, x_2 ∈ ℳ,
(1-ϵ) √(M/N) ≤ ||Φx_1 - Φx_2||_2 / ||x_1 - x_2||_2 ≤ (1+ϵ) √(M/N)
(7)

The proof of this theorem appears in [1] and again involves the JL lemma. Due to the limited complexity of a manifold model, it is possible to adequately characterize the geometry using a sufficiently fine sampling of points drawn from the manifold and its tangent spaces. In essence, manifolds with higher volume or with greater curvature have more complexity and require a more dense covering for application of the JL lemma; this leads to an increased number of measurements. The theorem also indicates that the requisite number of measurements depends on the geodesic covering regularity of the manifold, a minor technical concept which is also discussed in [1].

This theorem establishes that, like the class of K-sparse signals, a collection of signals described by a K-dimensional manifold ℳ ⊂ R^N can have a stable embedding in an M-dimensional measurement space. Moreover, the requisite number of random measurements M is once again linearly proportional to the information level (or number of degrees of freedom) K in the concise model. This has a number of possible implications for manifold-based signal processing. Manifold-modeled signals can be recovered from compressive measurements (using a customized recovery algorithm adapted to the manifold model, in contrast with sparsity-based recovery algorithms) [2], [4]; unknown parameters in parametric models can be estimated from compressive measurements; multi-class estimation/classification problems can be addressed [2] by considering multiple manifold models; and manifold learning algorithms may be efficiently executed by applying them simply to the projection of a manifold-modeled data set to a low-dimensional measurement space [3]. (As an example, (Reference)(d) shows the result of applying the ISOMAP algorithm on a random projection of a data set from R^4096 down to R^15; the underlying parameterization of the manifold is extracted with little sacrifice in accuracy.) In all of this it is not necessary to adapt the sensing protocol to the model; the only change from sparsity-based CS would be the methods for processing or decoding the measurements. In the future, more sophisticated concise models will likely lead to further improvements in signal understanding from compressive measurements.
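As a small numerical illustration of the stable embedding in Theorem 3 (added here; the pulse manifold, dimensions, and scaling are assumptions for demonstration, and this is not the construction used in [1]), one can sample points from a one-dimensional manifold of shifted Gaussian pulses in R^N, project them with a scaled random Gaussian matrix, and compare pairwise distances before and after projection:

    import numpy as np

    rng = np.random.default_rng(7)
    N, M, n_points = 512, 32, 40
    t = np.arange(N)

    # Points on a 1-D manifold in R^N: unit-norm Gaussian pulses whose center varies.
    centers = np.linspace(100, 400, n_points)
    X = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 10.0) ** 2)
    X /= np.linalg.norm(X, axis=1, keepdims=True)

    # Random projection scaled so distances shrink by roughly sqrt(M/N), as in Equation 7.
    Phi = rng.standard_normal((M, N)) / np.sqrt(N)
    Y = X @ Phi.T

    # Compare all pairwise distances before and after projection.
    i, j = np.triu_indices(n_points, k=1)
    d_orig = np.linalg.norm(X[i] - X[j], axis=1)
    d_proj = np.linalg.norm(Y[i] - Y[j], axis=1)
    print(np.sqrt(M / N), (d_proj / d_orig).min(), (d_proj / d_orig).max())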

References

  1. R. G. Baraniuk and M. B. Wakin. (2008). Random Projections of Smooth Manifolds. [To Appear]. Foundations of Computational Mathematics.
  2. Davenport, M.A. and Duarte, M.F. and Wakin, M.B. and Laska, J.N. and Takhar, D. and Kelly, K.F. and Baraniuk, R.G. (2007, January). The smashed filter for compressive classification and target recognition. In Proc. Computational Imaging V at SPIE Electronic Imaging.
  3. C. Hegde, M.B. Wakin, and R.G. Baraniuk. (2007, December). Random projections for manifold learning. In Proc. Neural Information Processing Systems (NIPS).
  4. M. B. Wakin. (2006, August). The Geometry of Low-Dimensional Signal Models. Ph. D. Thesis, Department of Electrical and Computer Engineering. Rice University, Houston, Tx.
  5. D. Baron and M. B. Wakin and M. F. Duarte and S. Sarvotham and R. G. Baraniuk. (2005). Distributed compressed sensing. [Preprint].
  6. Baraniuk, R. and Davenport, M. and DeVore, R. and Wakin, M. (2006). The Johnson-Lindenstrauss Lemma Meets Compressed Sensing. [Preprint].
  7. Chen, S. and Donoho, D. and Saunders, M. (1998). Atomic decomposition by basis pursuit. SIAM J. on Sci. Comp., 20(1), 33-61.
  8. Candès, E. and Romberg, J. (2005). Practical signal recovery from random projections. [Preprint].
  9. Candès, E. and Romberg, J. (2006). Quantitative robust uncertainty principles and optimally sparse decompositions. [To appear]. Found. of Comp. Math..
  10. Candès, E. and Romberg, J. and Tao, T. (2006, February). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2),
  11. Candès, E. and Romberg, J. and Tao, T. (2006). Stable signal recovery from incomplete and inaccurate measurements. [To appear]. Communications on Pure and Applied Mathematics.
  12. Candès, E. and Tao, T. (2005, December). Decoding by linear programming. IEEE Trans. Inform. Theory, 51(12),
  13. Candès, E. and Tao, T. (2005). Error correction via linear programming. [Preprint]. Found. of Comp. Math..
  14. Candès, E. and Tao, T. (2006). Near optimal signal recovery from random projections and universal encoding strategies. [To appear]. IEEE Trans. Inform. Theory.
  15. Duarte, M. F. and Davenport, M. A. and Wakin, M. B. and Baraniuk, R. G. (2006, May). Sparse Signal Detection From Incoherent Projections. In Proc. Int. Conf. Acoustics, Speech, Signal Processing (ICASSP).
  16. DeVore, R. A. (Spring 2006). Lecture notes on Compressed Sensing. Rice University ELEC 631 Course Notes.
  17. Donoho, D. (2005, January). High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. [Preprint].
  18. Donoho, D. (2006, April). Compressed sensing. IEEE Trans. Inform. Theory, 52(4),
  19. Donoho, D. and Tsaig, Y. (2004). Extensions of compressed sensing. [Preprint].
  20. Donoho, D. and Tanner, J. (2005). Neighborliness of randomly-projected simplices in high dimensions. [Preprint].
  21. Donoho, D. L. and Tanner, J. (2006). Counting faces of randomly-projected polytopes when the projection radically lowers dimension. (2006-11). Technical report. Stanford University Department of Statistics.
  22. Duarte, M. F. and Wakin, M. B. and Baraniuk, R. G. (2005, Nov.). Fast Reconstruction of Piecewise Smooth Signals from Random Projections. In Proc. SPARS05. Rennes, France
  23. Garnaev, A. and Gluskin, E. D. (1984). The widths of Euclidean balls. Doklady An. SSSR., 277, 1048-1052.
  24. Kashin, B. (1977). The widths of certain finite dimensional sets and classes of smooth functions. Izvestia, (41), 334-351.
  25. La, C. and Do, M. N. (2005, August). Signal reconstruction using sparse tree representation. In Proc. Wavelets XI at SPIE Optics and Photonics. San Diego: SPIE.
  26. Lustig, M. and Donoho, D. L. and Pauly, J. M. (2006, May). Rapid MR Imaging with Compressed Sensing and Randomly Under-Sampled 3DFT Trajectories. In Proc. 14th Ann. Mtg. ISMRM.
  27. Mallat, S. (1999). A wavelet tour of signal processing. San Diego, CA, USA: Academic Press.
  28. Takhar, D. and Bansal, V. and Wakin, M. and Duarte, M. and Baron, D. and Kelly, K. F. and Baraniuk, R. G. (2006, January). A Compressed Sensing Camera: New Theory and an Implementation using Digital Micromirrors. In Proc. Computational Imaging IV at SPIE Electronic Imaging. San Jose: SPIE.
  29. Tropp, J. and Gilbert, A. C. (2005, April). Signal recovery from partial information via orthogonal matching pursuit. [Preprint].
  30. Tropp, J. A. and Wakin, M. B. and Duarte, M. F. and Baron, D. and Baraniuk, R. G. (2006, May). Random Filters For Compressive Sampling And Reconstruction. In Proc. Int. Conf. Acoustics, Speech, Signal Processing (ICASSP).
  31. Venkataramani, R. and Bresler, Y. (1998, Oct.). Further results on spectrum blind sampling of 2D signals. In Proc. IEEE Int. Conf. Image Proc. (ICIP). (Vol. 2). Chicago
