While the compressive sensing (CS) literature has focused almost exclusively on problems
in signal reconstruction/approximation, this is frequently not
necessary. For instance, in many signal processing applications
(including computer vision, digital communications and radar systems), signals are
acquired only for the purpose of making a detection or
classification decision. Tasks such as detection do not require a
reconstruction of the signal, but only require estimates of the
relevant sufficient statistics for the problem at hand.
As a simple example, suppose a surveillance system (based on compressive imaging) observes the motion of a person across a static background. The relevant information to be extracted from the data acquired by this system would be, for example, the identity of the person, or the location of this person with respect to a predefined frame of coordinates. There are two ways of doing this:
 Reconstruct the full data using standard sparse recovery techniques and apply standard computer vision/inference algorithms on the reconstructed images.
 Develop an inference test which operates directly on the compressive measurements, without ever reconstructing the full images.
A crucial property that enables the design of compressive inference algorithms is the information scalability property of compressive measurements. This property arises from the following two observations:
 For certain signal models, the action of a random linear function on the set of signals of interest preserves enough information to perform inference tasks on the observed measurements.
 The number of random measurements required to perform the inference task usually depends on the nature of the inference task. Informally, we observe that more sophisticated tasks require more measurements.
We examine three inference problems for which algorithms that operate directly on the compressive measurements can be developed: detection (determining the presence or absence of an information-bearing signal), classification (assigning the observed signal to one of two or more signal classes), and parameter estimation (calculating a function of the observed signal).
In detection one simply wishes to
answer the question: is a (known) signal present in the observations?
To solve this problem, it suffices to estimate a
relevant sufficient statistic. Based on a concentration of measure inequality, it is possible to show that such sufficient statistics for a detection problem can be accurately estimated from random projections, where the quality of this estimate depends on the signal-to-noise ratio (SNR) [2]. We make no assumptions on the signal of interest s, and hence we can build systems capable of detecting s even when it is not known in advance. Thus, we can use random projections for dimensionality reduction in the detection setting without knowing the relevant structure.
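As a concrete illustration, the following sketch (with hypothetical dimensions and noise level, not the exact setup of [2]) estimates a matched-filter statistic directly from random measurements. Since the inner product ⟨Φ(s+n), Φs⟩ concentrates around ⟨s+n, s⟩ for random Φ, the detector can simply threshold this quantity without ever reconstructing the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 200                    # illustrative ambient/measurement dimensions

# Random projection with unit-variance columns in expectation
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

s = rng.standard_normal(N)          # known signal of interest
noise = 0.1 * rng.standard_normal(N)

y_present = Phi @ (s + noise)       # hypothesis H1: signal present
y_absent = Phi @ noise              # hypothesis H0: noise only

# Compressive matched filter: <y, Phi s> concentrates around <x, s>,
# so it approximates the classical sufficient statistic.
t_present = y_present @ (Phi @ s)
t_absent = y_absent @ (Phi @ s)

threshold = 0.5 * (s @ s)           # crude midpoint rule for illustration
print(t_present > threshold)        # detection
print(t_absent > threshold)         # no detection
```

In practice the threshold would be set from the SNR and a target false-alarm rate rather than this midpoint rule.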
In the case where the class of signals of interest corresponds to a low-dimensional subspace, a truncated, simplified sparse approximation can be applied as a detection algorithm; this approach has been dubbed IDEA [5]. In simple terms, the algorithm declares a detection when a sufficiently large fraction of the measurement energy lies in the projected subspace. Since this problem does not require accurate estimation of the signal values, but only a decision as to whether the signal lies in the subspace of interest, the number of measurements necessary is much smaller than that required for reconstruction, as shown in Figure 1.
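A minimal sketch of the underlying energy test (a simplified stand-in for IDEA's greedy sparse approximation, with arbitrary toy dimensions): project the measurements onto the image of the subspace under Φ and check how much energy it captures.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1000, 60, 5               # illustrative dimensions

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
U = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthobasis for the signal subspace

x_in = U @ rng.standard_normal(K)                  # signal inside the subspace
x_out = rng.standard_normal(N)                     # generic signal outside it

# Orthonormal basis for the *projected* subspace Phi @ U
Q = np.linalg.qr(Phi @ U)[0]

def subspace_energy_fraction(y):
    # fraction of measurement energy captured by the projected subspace
    return np.linalg.norm(Q.T @ y) ** 2 / np.linalg.norm(y) ** 2

print(subspace_energy_fraction(Phi @ x_in))    # near 1: in-class signal
print(subspace_energy_fraction(Phi @ x_out))   # near K/M: out-of-class signal
```

Note that only M = 60 measurements are used even though exact reconstruction of an arbitrary signal would require far more.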
Similarly, random projections have long been used for a variety of classification and clustering problems. The Johnson-Lindenstrauss lemma is often exploited in this setting to compute approximate nearest neighbors, which is naturally related to classification. The key result that random projections provide a near-isometric embedding allows us to generalize this work to several new classification algorithms and settings [2].
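The distance-preservation phenomenon behind these algorithms is easy to verify numerically. This toy check (dimensions chosen arbitrarily) compares all pairwise distances in a point cloud before and after a random projection:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, P = 2000, 200, 20             # ambient dim, measurements, number of points

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
points = rng.standard_normal((P, N))

# Worst relative distortion of any pairwise distance under projection
worst = 0.0
for i in range(P):
    for j in range(i + 1, P):
        d = np.linalg.norm(points[i] - points[j])
        d_proj = np.linalg.norm(Phi @ (points[i] - points[j]))
        worst = max(worst, abs(d_proj - d) / d)

print(worst)    # small distortion, on the order of 1/sqrt(M)
```

Because distances survive the projection, a nearest-neighbor rule computed on the M-dimensional measurements agrees (with high probability) with the one computed on the original N-dimensional data.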
Classification can also be performed when more elaborate models are used for the different classes. Suppose the signal/image class of interest can be modeled as a low-dimensional manifold in the ambient space. In such cases it can be shown that, even under random projections, certain geometric properties of the signal class are preserved up to a small distortion; for example, interpoint Euclidean (ℓ2) distances are preserved [1]. This enables the design of classification algorithms in the projected domain. One such algorithm is known as the smashed filter [3]. As an example, under equal prior probabilities among classes and a Gaussian noise setting, the smashed filter is equivalent to building a nearest-neighbor (NN) classifier in the measurement domain. Further, it has been shown that for a K-dimensional manifold, M = O(K log N) measurements are sufficient to perform reliable compressive classification. Thus, the number of measurements scales with the dimension of the signal class, as opposed to the sparsity of the individual signal.
Some example results are shown in Figure 2(a).
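In the equal-prior Gaussian setting described above, the smashed filter reduces to nearest-neighbor matching in the measurement domain. A toy sketch, using hypothetical random class templates in place of points sampled from real class manifolds:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 512, 40                      # illustrative dimensions

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Hypothetical class templates (in practice: samples from each class manifold)
templates = [rng.standard_normal(N) for _ in range(3)]

def smashed_filter_classify(y):
    # Nearest-neighbor rule in the measurement domain: pick the class
    # whose projected template is closest to the observed measurements.
    dists = [np.linalg.norm(y - Phi @ t) for t in templates]
    return int(np.argmin(dists))

x = templates[1] + 0.1 * rng.standard_normal(N)   # noisy instance of class 1
print(smashed_filter_classify(Phi @ x))            # → 1
```

All distances are computed between M-dimensional vectors; the full N-dimensional images are never reconstructed.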
Consider a signal x ∈ ℝ^N, and suppose that we wish to estimate some function f(x) but only observe the measurements y = Φx, where Φ is again an M × N matrix. The data streaming community has previously analyzed this problem for many common functions, such as linear functions, ℓp norms, and histograms. These estimates are often based on so-called sketches, which can be thought of as random projections.
As an example, in the case where f is a linear function, one can show that the estimation error (relative to the norms of x and f) can be bounded by a constant determined by M. This result holds for a wide class of random matrices, and can be viewed as a straightforward consequence of the same concentration of measure inequality that has proven useful for CS and in proving the JL lemma [2].
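For instance, a simple sketch-based estimator (dimensions chosen for illustration) approximates the linear functional ⟨f, x⟩ by the inner product of the two sketches Φf and Φx:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 10000, 1000                  # illustrative dimensions

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x = rng.standard_normal(N)
f = rng.standard_normal(N)          # linear functional of interest: f(x) = <f, x>

estimate = (Phi @ f) @ (Phi @ x)    # computed from the sketches alone
exact = f @ x

# Relative error (w.r.t. ||f|| ||x||) concentrates around O(1/sqrt(M))
rel_err = abs(estimate - exact) / (np.linalg.norm(f) * np.linalg.norm(x))
print(rel_err)
```

The same M-dimensional sketch Φx can be reused to estimate ⟨f, x⟩ for many different functionals f, which is exactly the regime the streaming literature considers.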
Parameter estimation can also be performed when the signal class is modeled as a low-dimensional manifold. Suppose an observed signal x can be parameterized by a K-dimensional parameter vector θ, where K ≪ N. Then, it can be shown that with O(K log N) measurements, the parameter vector can be obtained via multiscale manifold navigation in the compressed domain [4]. Some example results are shown in Figure 2(b).
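As a toy illustration of estimation in the compressed domain (a crude single-scale grid search, not the multiscale navigation scheme of [4]), consider a Gaussian pulse parameterized by its shift θ, a one-dimensional manifold in ℝ^N:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 30                      # illustrative dimensions
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Toy 1-D manifold: a Gaussian pulse parameterized by its shift theta
grid = np.arange(N)
def pulse(theta):
    return np.exp(-0.5 * ((grid - theta) / 5.0) ** 2)

theta_true = 100.0
y = Phi @ pulse(theta_true)         # compressive measurements only

# Pick the theta whose projected pulse best matches the measurements
candidates = np.linspace(0, N - 1, 256)
errors = [np.linalg.norm(y - Phi @ pulse(t)) for t in candidates]
theta_hat = candidates[int(np.argmin(errors))]
print(theta_hat)                    # → 100.0
```

Only M = 30 random measurements suffice because the search is over a K = 1 dimensional parameter, not over the N = 256 signal values; the multiscale scheme of [4] replaces this exhaustive search with a coarse-to-fine refinement.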

[1] Baraniuk, R. and Wakin, M. (2009). Random Projections of Smooth Manifolds. Found. Comput. Math., 9(1), 51–77.

[2] Davenport, M., Boufounos, P., Wakin, M., and Baraniuk, R. (2010). Signal processing with compressive measurements. IEEE J. Select. Top. Signal Processing, 4(2), 445–460.

[3] Davenport, M., Duarte, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. (2007, Jan.). The Smashed Filter for Compressive Classification and Target Recognition. In Proc. IS&T/SPIE Symp. Elec. Imag.: Comp. Imag., San Jose, CA.

[4] Duarte, M., Davenport, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. (2007, Sept.). Multiscale Random Projections for Compressive Classification. In Proc. IEEE Int. Conf. Image Processing (ICIP), San Antonio, TX.

[5] Duarte, M., Davenport, M., Wakin, M., and Baraniuk, R. (2006, May). Sparse Signal Detection From Incoherent Projections. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), Toulouse, France.