Inference using compressive measurements

Module by: Marco F. Duarte.

Summary: This module provides an introduction to some simple algorithms for compressive signal processing, i.e., solving inference problems directly from compressive measurements without first recovering the signal.

While the compressive sensing (CS) literature has focused almost exclusively on problems in signal reconstruction/approximation, reconstruction is frequently unnecessary. For instance, in many signal processing applications (including computer vision, digital communications, and radar systems), signals are acquired only for the purpose of making a detection or classification decision. Such tasks do not require a reconstruction of the signal; they only require estimates of the relevant sufficient statistics for the problem at hand.

As a simple example, suppose a surveillance system (based on compressive imaging) observes the motion of a person across a static background. The relevant information to be extracted from the data acquired by this system would be, for example, the identity of the person or the person's location with respect to a predefined coordinate frame. There are two ways of obtaining this information:

  • Reconstruct the full data using standard sparse recovery techniques and apply standard computer vision/inference algorithms on the reconstructed images.
  • Develop an inference test which operates directly on the compressive measurements, without ever reconstructing the full images.

A crucial property that enables the design of compressive inference algorithms is the information scalability property of compressive measurements. This property arises from the following two observations:

  • For certain signal models, the action of a random linear function on the set of signals of interest preserves enough information to perform inference tasks on the observed measurements.
  • The number of random measurements required to perform the inference task usually depends on the nature of the inference task. Informally, we observe that more sophisticated tasks require more measurements.

We examine three possible inference problems for which algorithms that directly operate on the compressive measurements can be developed: detection (determining the presence or absence of an information-bearing signal), classification (assigning the observed signal to one of two (or more) signal classes), and parameter estimation (calculating a function of the observed signal).

Detection

In detection one simply wishes to answer the question: is a (known) signal present in the observations? To solve this problem, it suffices to estimate a relevant sufficient statistic. Based on a concentration of measure inequality, it is possible to show that such sufficient statistics for a detection problem can be accurately estimated from random projections, where the quality of this estimate depends on the signal-to-noise ratio (SNR) [2]. We make no assumptions on the signal of interest s, and hence we can build systems capable of detecting s even when it is not known in advance. Thus, we can use random projections for dimensionality reduction in the detection setting without knowing the relevant structure.
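
As a concrete illustration, the following sketch implements a compressive matched-filter style detector for a known candidate signal: it correlates the measurements against the projected template and compares the result to a threshold. The dimensions, noise level, and threshold below are illustrative assumptions, not values from [2].

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 50                                 # ambient dimension, number of measurements
Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))     # random Gaussian measurement matrix

s = rng.normal(size=N)                          # hypothetical candidate signal
s /= np.linalg.norm(s)

sigma = 0.1                                     # assumed noise level
y = Phi @ (s + sigma * rng.normal(size=N))      # compressive measurements of signal + noise

# Compressive matched-filter statistic: correlation of the measurements with
# the projected template, normalized by the template's projected norm.
t = y @ (Phi @ s) / np.linalg.norm(Phi @ s)
threshold = 0.5                                 # would be chosen from a target false-alarm rate
print("detect" if t > threshold else "no detection", t)
```

Under the signal-present hypothesis t concentrates near ‖s‖, while under noise alone it concentrates near zero, which is what makes a simple threshold test work.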

In the case where the class of signals of interest corresponds to a low-dimensional subspace, a truncated, simplified sparse approximation can be applied as a detection algorithm; this approach has been dubbed IDEA [5]. In simple terms, the algorithm marks a detection when a sufficiently large fraction of the measurement energy lies in the projected subspace. Since this problem requires only a determination of whether the signal lies in the subspace of interest, rather than an accurate estimate of the signal values, the number of measurements necessary is much smaller than that required for reconstruction, as shown in Figure 1.
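
A minimal sketch of this style of subspace detection follows. IDEA itself uses a truncated greedy sparse approximation; for simplicity, this sketch assumes an explicit orthobasis for the subspace is available and uses a direct least-squares projection, with an illustrative energy threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1000, 40, 5                           # ambient dim., measurements, subspace dim.

Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))
Psi = np.linalg.qr(rng.normal(size=(N, K)))[0]  # orthobasis for the subspace of interest

x = Psi @ rng.normal(size=K)                    # signal drawn from the subspace
y = Phi @ x + 0.05 * rng.normal(size=M)         # noisy compressive measurements

# Project the measurements onto the projected subspace Phi @ Psi and test the
# fraction of the measurement energy captured by that projection.
A = Phi @ Psi
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
captured = np.linalg.norm(A @ coef) ** 2 / np.linalg.norm(y) ** 2
print("detect" if captured > 0.9 else "no detection", captured)
```

For measurements of a signal in the subspace the captured fraction is close to one, while for pure noise it is close to K/M, so the two cases are easy to separate.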

Figure 1: Performance for IDEA. (Top) Sample wideband chirp signal and the same chirp embedded in strong narrowband interference. (Bottom) Probability of error for reconstructing and detecting chirp signals embedded in strong sinusoidal interference (SIR = −6 dB) using greedy algorithms. In this case, detection requires 3× fewer measurements and 4× fewer computations than reconstruction for an equivalent probability of success. Taken from [5].

Classification

Similarly, random projections have long been used for a variety of classification and clustering problems. The Johnson-Lindenstrauss (JL) Lemma is often exploited in this setting to compute approximate nearest neighbors, a problem naturally related to classification. The key result, that random projections provide a near-isometric embedding, allows us to generalize this work to several new classification algorithms and settings [2].
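
As a sketch of the idea, the snippet below classifies a test signal by nearest neighbor computed entirely in the measurement domain; by the JL Lemma, the pairwise distances that drive the decision are approximately preserved under the projection. The two-class data set, noise level, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, n_train = 500, 30, 100

Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))

# Hypothetical two-class training set: class means plus Gaussian perturbations.
means = rng.normal(size=(2, N))
labels = rng.integers(0, 2, n_train)
train = means[labels] + 0.3 * rng.normal(size=(n_train, N))

x_test = means[1] + 0.3 * rng.normal(size=N)    # test signal from class 1

# Nearest neighbor in the measurement domain: distances between projected
# points approximate distances between the original signals.
proj_train = train @ Phi.T
proj_test = Phi @ x_test
nn = np.argmin(np.linalg.norm(proj_train - proj_test, axis=1))
print("predicted class:", labels[nn])
```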

Classification can also be performed when more elaborate models are used for the different classes. Suppose the signal/image class of interest can be modeled as a low-dimensional manifold in the ambient space. In this case it can be shown that, even under random projections, certain geometric properties of the signal class are preserved up to a small distortion; for example, interpoint Euclidean (ℓ₂) distances are preserved [1]. This enables the design of classification algorithms in the projected domain. One such algorithm is known as the smashed filter [3]. As an example, under equally likely classes and Gaussian noise, the smashed filter is equivalent to building a nearest-neighbor (NN) classifier in the measurement domain. Further, it has been shown that for a K-dimensional manifold, M = O(K log N) measurements are sufficient to perform reliable compressive classification. Thus, the number of measurements scales with the dimension of the signal class, as opposed to the sparsity of the individual signal. Some example results are shown in Figure 2(a); a brute-force sketch of the approach appears below.
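
In this sketch, each class manifold is the set of circular shifts of a template, and classification picks the nearest manifold point in the measurement domain (which also yields a shift estimate as a byproduct). The templates, shift parameterization, and noise level are illustrative assumptions; the smashed filter of [3] generalizes the classical matched filter to arbitrary manifold models.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 256, 20
Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))

# Two hypothetical class templates; each class manifold consists of all
# circular shifts of the corresponding template.
templates = rng.normal(size=(2, N))

true_shift = 37
y = Phi @ np.roll(templates[1], true_shift) + 0.05 * rng.normal(size=M)

# Smashed-filter style decision: search over classes and shifts for the
# projected manifold point nearest to the measurements.
dist, pred_class, pred_shift = min(
    (np.linalg.norm(y - Phi @ np.roll(t, s)), c, s)
    for c, t in enumerate(templates)
    for s in range(N)
)
print("class:", pred_class, "shift:", pred_shift)
```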

Figure 2: Results for smashed filter image classification and parameter estimation experiments. (a) Classification rates and (b) average estimation error for varying number of measurements M and noise levels σ for a set of images of several objects under varying shifts. As M increases, the distances between the projected manifolds increase as well, thus increasing the noise tolerance and enabling more accurate estimation and classification. Thus, the classification and estimation performance improves as σ decreases and M increases in all cases. Taken from [4].

Estimation

Consider a signal x ∈ ℝ^N, and suppose that we wish to estimate some function f(x) but only observe the measurements y = Φx, where Φ is again an M × N matrix. The data streaming community has previously analyzed this problem for many common functions, such as linear functions, ℓₚ norms, and histograms. These estimates are often based on so-called sketches, which can be thought of as random projections.

As an example, in the case where f is a linear function, one can show that the estimation error (relative to the norms of x and f) can be bounded by a constant determined by M. This result holds for a wide class of random matrices, and can be viewed as a straightforward consequence of the same concentration of measure inequality that has proven useful for CS and in proving the JL Lemma [2].
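
For instance, when Φ has i.i.d. Gaussian entries with variance 1/M, the inner product of the sketches Φf and Φx is an unbiased estimate of the linear function ⟨f, x⟩, with error on the order of 1/√M relative to ‖f‖‖x‖. A minimal numerical check (all dimensions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 10000, 500

x = rng.normal(size=N)                          # signal
f = rng.normal(size=N)                          # linear functional to estimate

Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))     # shared sketching matrix

# Estimate <f, x> from the sketches alone; E[(Phi f)^T (Phi x)] = <f, x>.
estimate = (Phi @ f) @ (Phi @ x)
rel_err = abs(estimate - f @ x) / (np.linalg.norm(f) * np.linalg.norm(x))
print(estimate, f @ x, rel_err)                 # rel_err is on the order of 1/sqrt(M)
```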

Parameter estimation can also be performed when the signal class is modeled as a low-dimensional manifold. Suppose an observed signal x can be parameterized by a K-dimensional parameter vector θ, where K ≪ N. Then, it can be shown that with M = O(K log N) measurements, the parameter vector can be obtained via multiscale manifold navigation in the compressed domain [4]. Some example results are shown in Figure 2(b).
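
The sketch below conveys the flavor of such a scheme for a one-dimensional shift parameter: a coarse-to-fine search over the position of a smooth pulse, carried out entirely in the measurement domain. The pulse shape, grid schedule, and noise level are assumptions chosen so that the search objective is well behaved; the actual multiscale navigation algorithm of [4] is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 1024, 30
Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))

def pulse(theta, width=60):
    """Smooth Gaussian pulse centered at position theta (the unknown parameter)."""
    i = np.arange(N)
    return np.exp(-((i - theta) ** 2) / (2 * width ** 2))

theta_true = 613
y = Phi @ pulse(theta_true) + 0.01 * rng.normal(size=M)

# Coarse-to-fine search: evaluate the measurement-domain residual on a small
# grid around the current estimate, move to the best point, halve the spacing.
center, step = N // 2, N // 4
while step >= 1:
    grid = [center - step, center, center + step]
    center = min(grid, key=lambda t: np.linalg.norm(y - Phi @ pulse(t)))
    step //= 2
print("estimate:", center, "true:", theta_true)   # should land near theta_true
```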

References

  1. Baraniuk, R. and Wakin, M. (2009). Random Projections of Smooth Manifolds. Found. Comput. Math., 9(1), 51–77.
  2. Davenport, M., Boufounos, P., Wakin, M., and Baraniuk, R. (2010). Signal Processing with Compressive Measurements. IEEE J. Select. Top. Signal Processing, 4(2), 445–460.
  3. Davenport, M., Duarte, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. (2007, Jan.). The Smashed Filter for Compressive Classification and Target Recognition. In Proc. IS&T/SPIE Symp. Elec. Imag.: Comp. Imag., San Jose, CA.
  4. Duarte, M., Davenport, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. (2007, Sept.). Multiscale Random Projections for Compressive Classification. In Proc. IEEE Int. Conf. Image Processing (ICIP), San Antonio, TX.
  5. Duarte, M., Davenport, M., Wakin, M., and Baraniuk, R. (2006, May). Sparse Signal Detection From Incoherent Projections. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), Toulouse, France.
