Bayesian methods

Module by: Chinmay Hegde, Mona Sheikh

Summary: This module provides an overview of the application of Bayesian methods to compressive sensing and sparse recovery.

Setup

Throughout this course, we have almost exclusively worked within a deterministic signal framework. In other words, our signal x is fixed and belongs to a known set of signals. In this section, we depart from this framework and assume that the sparse (or compressible) signal of interest arises from a known probability distribution; i.e., we assume sparsity-promoting priors on the elements of x, and recover from the measurements y = Φx a probability distribution on each nonzero element of x. Such an approach falls under the purview of Bayesian methods for sparse recovery.
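As a concrete illustration of this setup, the following Python sketch (the dimensions and variable names are illustrative, not taken from this module) draws a length-N signal with K nonzero coefficients from a Gaussian prior, forms a random Gaussian sensing matrix Φ, and computes the measurements y = Φx.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 256, 64, 8                     # signal length, number of measurements, sparsity

    # Sparse signal: K nonzero coefficients drawn from a zero-mean Gaussian prior
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.normal(0.0, 1.0, size=K)

    # Random Gaussian sensing matrix Phi and measurements y = Phi x
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    y = Phi @ x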

The algorithms discussed in this section represent a departure from the conventional sparse recovery techniques typically used in compressive sensing (CS). We note that none of these algorithms are accompanied by guarantees on the number of measurements required or on the fidelity of signal reconstruction; indeed, in a Bayesian signal modeling framework, there is no well-defined notion of “reconstruction error”. However, such methods do provide insight into developing recovery algorithms for rich classes of signals, and may be of considerable practical interest.

Sparse recovery via belief propagation

As we will see later in this course, there are significant parallels to be drawn between error correcting codes and sparse recovery [4]. In particular, sparse codes such as low-density parity-check (LDPC) codes have enjoyed great success. The advantages of sparse coding matrices, namely efficient encoding of signals and low-complexity decoding algorithms, carry over to CS encoding and decoding when sparse sensing matrices Φ are used. The sparsity of the matrix Φ plays the same role as the sparsity of LDPC coding graphs.

Figure 1: Factor graph depicting the relationship between the variables involved in CS decoding using BP. Variable nodes are black and constraint nodes are white.

A sensing matrix Φ that defines the relation between the signal x and the measurements y can be represented as a bipartite graph of signal coefficient nodes x(i) and measurement nodes y(i) [4], [5]. The factor graph in Figure 1 represents the relationship between the signal coefficients and measurements in the CS decoding problem.
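As a minimal sketch of this structure (the column weight and dimensions below are illustrative assumptions), a sparse, LDPC-like Φ can be built by connecting each coefficient node to a small fixed number of measurement nodes; the nonzero entries of Φ are exactly the edges of the bipartite graph.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M, L = 12, 6, 2                       # coefficient nodes, measurement nodes, edges per coefficient

    # Sparse binary sensing matrix: each column (coefficient node) connects to L measurement nodes
    Phi = np.zeros((M, N), dtype=int)
    for i in range(N):
        rows = rng.choice(M, size=L, replace=False)
        Phi[rows, i] = 1

    # Edges of the bipartite factor graph: (measurement node j, coefficient node i) for each nonzero entry
    edges = list(zip(*np.nonzero(Phi)))
    print(edges)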

The choice of signal probability density is of practical interest. In many applications, the signals of interest need to be modeled as being compressible (as opposed to being strictly sparse). This behavior is modeled by a two-state Gaussian mixture distribution, with each signal coefficient being in either a “large” or a “small” state. Assuming that the elements of x are i.i.d., it can be shown that small coefficients occur more frequently than large coefficients. Other distributions besides the two-state Gaussian mixture may also be used to model the coefficients, e.g., an i.i.d. Laplace prior on the coefficients of x.
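A hypothetical sketch of drawing a compressible signal from such a two-state Gaussian mixture prior (the mixing weight and the two variances below are illustrative choices): each coefficient is “large” with small probability and is then drawn from a wide Gaussian, otherwise it is “small” and drawn from a narrow one.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 256
    p_large = 0.05                           # probability of the "large" state
    sigma_large, sigma_small = 10.0, 0.1     # standard deviations of the two mixture components

    state = rng.random(N) < p_large                      # True -> "large" coefficient
    sigma = np.where(state, sigma_large, sigma_small)
    x = rng.normal(0.0, 1.0, size=N) * sigma             # compressible: few large entries, many small ones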

The ultimate goal is to estimate (i.e., decode) x, given y and Φ. The decoding problem takes the form of a Bayesian inference problem in which we want to approximate the marginal distributions of each of the x(i) coefficients conditioned on the observed measurements y(i). We can then compute maximum likelihood (ML) or maximum a posteriori (MAP) estimates of the coefficients from their distributions. This sort of inference can be solved using a variety of methods; for example, the popular belief propagation (BP) method [4] can be applied to solve for the coefficients approximately. Although exact inference in arbitrary graphical models is an NP-hard problem, inference using BP can be employed when Φ is sparse enough, i.e., when most of the entries in the matrix are equal to zero.
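Belief propagation itself operates by passing messages along the factor graph of Figure 1 and is not reproduced here. As a simpler illustration of the MAP idea, note that under an i.i.d. Laplace prior on x and Gaussian measurement noise, the MAP estimate reduces to ℓ1-penalized least squares; the sketch below solves that surrogate problem with a generic solver (scikit-learn's Lasso) rather than with BP, and the regularization weight is an illustrative choice.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    N, M, K = 256, 96, 8
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    y = Phi @ x + 0.01 * rng.normal(size=M)

    # MAP estimate under a Laplace prior <=> l1-penalized least squares (solved by coordinate descent, not BP)
    x_map = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(Phi, y).coef_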

Sparse Bayesian learning

Another probabilistic approach to estimating the components of x uses Relevance Vector Machines (RVMs). An RVM is essentially a Bayesian learning method that produces sparse classification by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates (for more details the interested reader may refer to [6], [7]). From the CS perspective, we may view this as a method to determine the elements of a sparse x which linearly weight the basis functions comprising the columns of Φ.

The RVM setup employs a hierarchy of priors: first, a Gaussian prior is assigned to each of the N elements of x; subsequently, a Gamma prior is assigned to the inverse variance α_i of the i-th Gaussian prior. Therefore each α_i controls the strength of the prior on its associated weight x_i. If x is the sparse vector to be reconstructed, its associated Gaussian prior is given by:

p(x | α) = ∏_{i=1}^N N(x_i | 0, α_i⁻¹)
(1)

and the Gamma prior on α is written as:

p(α | a, b) = ∏_{i=1}^N Γ(α_i | a, b)
(2)

The overall prior on x can be analytically evaluated to be the Student-t distribution, which can be designed to peak at x_i = 0 with an appropriate choice of a and b. This enables the desired solution x to be sparse. The RVM approach can be visualized using a graphical model similar to the one in "Sparse recovery via belief propagation". Using the observed measurements y, the posterior density on each x_i is estimated by an iterative algorithm (e.g., Markov chain Monte Carlo (MCMC) methods). For a detailed analysis of the RVM with a measurement noise prior, refer to [2], [7].
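A minimal sketch of sampling from this hierarchy (the values of a and b below are illustrative, not taken from the references): a precision α_i is drawn from the Gamma hyperprior for each coefficient, and x_i is then drawn from a zero-mean Gaussian with variance 1/α_i; marginally over α the draws follow the heavy-tailed Student-t prior described above.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 256
    a, b = 1e-2, 1e-4                        # illustrative Gamma hyperparameters (shape a, rate b)

    # Hierarchical prior: alpha_i ~ Gamma(a, b), then x_i ~ N(0, 1/alpha_i)
    alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)    # numpy parameterizes by scale = 1/rate
    x = rng.normal(0.0, 1.0, size=N) / np.sqrt(alpha)    # marginally Student-t, sharply peaked at zero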

Alternatively, we can eliminate the need to set the hyperparameters a and b as follows. Assuming Gaussian measurement noise with mean 0 and variance σ², we can directly find the marginal log-likelihood for α and maximize it using the EM algorithm (or by direct differentiation) to find estimates for α.

L(α) = log p(y | α, σ²) = log ∫ p(y | x, σ²) p(x | α) dx
(3)
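Because the prior (1) and the noise model are both Gaussian, the integral in (3) can be evaluated in closed form: y is zero-mean Gaussian with covariance C = σ²I + Φ A⁻¹ Φᵀ, where A = diag(α₁, ..., α_N). The following sketch evaluates L(α) this way (the dimensions and test values are illustrative):

    import numpy as np
    from scipy.stats import multivariate_normal

    def marginal_log_likelihood(alpha, Phi, y, sigma2):
        """L(alpha) = log p(y | alpha, sigma^2) = log N(y; 0, sigma^2 I + Phi diag(1/alpha) Phi^T)."""
        M = Phi.shape[0]
        C = sigma2 * np.eye(M) + Phi @ np.diag(1.0 / alpha) @ Phi.T
        return multivariate_normal.logpdf(y, mean=np.zeros(M), cov=C)

    rng = np.random.default_rng(5)
    N, M = 64, 32
    Phi = rng.normal(size=(M, N)) / np.sqrt(M)
    x = np.zeros(N)
    x[:4] = rng.normal(size=4)                           # a 4-sparse test signal
    y = Phi @ x + 0.1 * rng.normal(size=M)
    print(marginal_log_likelihood(np.ones(N), Phi, y, sigma2=0.01))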

Bayesian compressive sensing

Unfortunately, evaluation of the log-likelihood in the original RVM setup involves taking the inverse of an N×N matrix, making the algorithm's complexity O(N³). A fast alternative algorithm for the RVM is available which monotonically maximizes the marginal likelihoods of the priors by a gradient ascent, resulting in an algorithm with complexity O(NM²). Here, basis functions are sequentially added and deleted, thus building the model up constructively, and the true sparsity of the signal x is exploited to minimize model complexity. This is known as Fast Marginal Likelihood Maximization, and is employed by the Bayesian Compressive Sensing (BCS) algorithm [2] to efficiently evaluate the posterior densities of x_i.

A key advantage of the BCS algorithm is that it enables evaluation of “error bars” on each estimated coefficient of x; these give us an idea of the (in)accuracy of each estimate. These error bars could be used to adaptively select the linear projections (i.e., the rows of the matrix Φ) to reduce uncertainty in the signal. This provides an intriguing connection between CS and machine learning techniques such as experimental design and active learning [1], [3].
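The sketch below illustrates these ideas with the basic (non-sequential) evidence-maximization updates for the precisions α, rather than the fast algorithm of [6]; the function name, update schedule, and settings are illustrative assumptions. With the noise variance fixed, the posterior over x given y is Gaussian with covariance Σ = (σ⁻²ΦᵀΦ + A)⁻¹ and mean μ = σ⁻²ΣΦᵀy, and the square roots of the diagonal entries of Σ serve as the “error bars” on the recovered coefficients.

    import numpy as np

    def sparse_bayesian_recovery(Phi, y, sigma2, n_iter=200, alpha_max=1e12):
        """Basic RVM-style evidence maximization (illustrative; not the fast sequential algorithm of [6])."""
        M, N = Phi.shape
        alpha = np.ones(N)                               # per-coefficient prior precisions
        for _ in range(n_iter):
            # Gaussian posterior over x given the current alpha and the fixed noise variance
            Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
            mu = Sigma @ Phi.T @ y / sigma2
            # MacKay-style re-estimation: gamma_i measures how well x_i is determined by the data
            gamma = 1.0 - alpha * np.diag(Sigma)
            alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-12, alpha_max)
        # Posterior mean and per-coefficient error bars for the final alpha
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
        mu = Sigma @ Phi.T @ y / sigma2
        return mu, np.sqrt(np.diag(Sigma))

    rng = np.random.default_rng(6)
    N, M, K = 128, 64, 6
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
    Phi = rng.normal(size=(M, N)) / np.sqrt(M)
    y = Phi @ x + 0.01 * rng.normal(size=M)
    x_hat, err = sparse_bayesian_recovery(Phi, y, sigma2=1e-4)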

References

  1. Fedorov, V. (1972). Theory of Optimal Experiments. New York, NY: Academic Press.
  2. Ji, S., Xue, Y., and Carin, L. (2008). Bayesian Compressive Sensing. IEEE Trans. Signal Processing, 56(6), 2346–2356.
  3. MacKay, D. (1992). Information-based objective functions for active data selection. Neural Comput., 4, 590–604.
  4. Sarvotham, S., Baron, D., and Baraniuk, R. (2006). Compressed Sensing Reconstruction via Belief Propagation. Technical Report TREE-0601, Rice Univ., ECE Dept.
  5. Sheikh, M., Sarvotham, S., Milenkovic, O., and Baraniuk, R. (2007, Aug.). DNA Array Decoding From Nonlinear Measurements By Belief Propagation. In Proc. IEEE Work. Stat. Signal Processing. Madison, WI.
  6. Tipping, M. and Faul, A. (2003, Jan.). Fast marginal likelihood maximization for sparse Bayesian models. In Proc. Int. Conf. Art. Intell. Stat. (AISTATS). Key West, FL.
  7. Tipping, M. (2001). Sparse Bayesian learning and the relevance vector machine. J. Machine Learning Research, 1, 211–244.
