Introduction to vector spaces

Module by: Marco F. Duarte, Mark A. Davenport.

Summary: This module provides a brief review of some of the key concepts in vector spaces that will be required in developing the theory of compressive sensing.

For much of its history, signal processing has focused on signals produced by physical systems. Many natural and man-made systems can be modeled as linear. Thus, it is natural to consider signal models that complement this kind of linear structure. This notion has been incorporated into modern signal processing by modeling signals as vectors living in an appropriate vector space. This captures the linear structure that we often desire, namely that if we add two signals together then we obtain a new, physically meaningful signal. Moreover, vector spaces allow us to apply intuitions and tools from geometry in $\mathbb{R}^3$, such as lengths, distances, and angles, to describe and compare signals of interest. This is useful even when our signals live in high-dimensional or infinite-dimensional spaces.

Throughout this course, we will treat signals as real-valued functions having domains that are either continuous or discrete, and either infinite or finite. These assumptions will be made clear as necessary in each chapter. In this course, we will assume that the reader is relatively comfortable with the key concepts in vector spaces. We now provide only a brief review of some of the key concepts in vector spaces that will be required in developing the theory of compressive sensing (CS). For a more thorough review of vector spaces see this introductory course in Digital Signal Processing.

We will typically be concerned with normed vector spaces, i.e., vector spaces endowed with a norm. In the case of a discrete, finite domain, we can view our signals as vectors in an $N$-dimensional Euclidean space, denoted by $\mathbb{R}^N$. When dealing with vectors in $\mathbb{R}^N$, we will make frequent use of the $\ell_p$ norms, which are defined for $p \in [1, \infty]$ as

\[
\|x\|_p =
\begin{cases}
\left( \sum_{i=1}^{N} |x_i|^p \right)^{\frac{1}{p}}, & p \in [1, \infty); \\
\max_{i = 1, 2, \ldots, N} |x_i|, & p = \infty.
\end{cases}
\]
(1)
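
As a quick sanity check of this definition, the following sketch (using NumPy, with an arbitrary illustrative vector) evaluates the $\ell_p$ norm directly from Equation 1 and compares it against numpy.linalg.norm:

```python
import numpy as np

def lp_norm(x, p):
    """l_p norm of x: (sum |x_i|^p)^(1/p) for p in [1, inf), max |x_i| for p = inf."""
    x = np.asarray(x, dtype=float)
    if np.isinf(p):
        return np.max(np.abs(x))
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 0.0, 1.0])   # illustrative vector in R^4
for p in (1, 2, 3, np.inf):
    # Cross-check the definition against NumPy's built-in vector norm.
    assert np.isclose(lp_norm(x, p), np.linalg.norm(x, ord=p))
    print(f"p = {p}: ||x||_p = {lp_norm(x, p):.4f}")
```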

In Euclidean space we can also consider the standard inner product in $\mathbb{R}^N$, which we denote

\[
\langle x, z \rangle = z^T x = \sum_{i=1}^{N} x_i z_i .
\]
(2)

This inner product leads to the $\ell_2$ norm: $\|x\|_2 = \sqrt{\langle x, x \rangle}$.
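
A minimal check of this relationship, assuming arbitrary illustrative vectors:

```python
import numpy as np

x = np.array([1.0, -2.0, 2.0])   # illustrative vectors in R^3
z = np.array([3.0,  0.0, 4.0])

# Standard inner product <x, z> = z^T x (Equation 2).
print(z @ x)                      # 1*3 + (-2)*0 + 2*4 = 11.0

# The l_2 norm is induced by the inner product: ||x||_2 = sqrt(<x, x>).
assert np.isclose(np.sqrt(x @ x), np.linalg.norm(x, 2))
```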

In some contexts it is useful to extend the notion of $\ell_p$ norms to the case where $p < 1$. In this case, the "norm" defined in Equation 1 fails to satisfy the triangle inequality, so it is actually a quasinorm. We will also make frequent use of the notation $\|x\|_0 := |\mathrm{supp}(x)|$, where $\mathrm{supp}(x) = \{ i : x_i \neq 0 \}$ denotes the support of $x$ and $|\mathrm{supp}(x)|$ denotes the cardinality of $\mathrm{supp}(x)$. Note that $\|\cdot\|_0$ is not even a quasinorm, but one can easily show that

\[
\lim_{p \to 0} \|x\|_p^p = |\mathrm{supp}(x)| ,
\]
(3)

justifying this choice of notation. The $\ell_p$ (quasi-)norms have notably different properties for different values of $p$. To illustrate this, in Figure 1 we show the unit sphere, i.e., $\{ x : \|x\|_p = 1 \}$, induced by each of these norms in $\mathbb{R}^2$. Note that for $p < 1$ the corresponding unit sphere is nonconvex (reflecting the quasinorm's violation of the triangle inequality).

Figure 1: Unit spheres in $\mathbb{R}^2$ for the $\ell_p$ norms with $p = 1, 2, \infty$, and for the $\ell_p$ quasinorm with $p = \frac{1}{2}$.
(a) Unit sphere for the $\ell_1$ norm. (b) Unit sphere for the $\ell_2$ norm. (c) Unit sphere for the $\ell_\infty$ norm. (d) Unit sphere for the $\ell_p$ quasinorm with $p = \frac{1}{2}$.
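
As a quick numerical check of the discussion above, the sketch below (NumPy, with an illustrative sparse vector) counts $|\mathrm{supp}(x)|$, shows $\|x\|_p^p$ approaching it as $p \to 0$ (Equation 3), and exhibits a violation of the triangle inequality for $p = 1/2$:

```python
import numpy as np

x = np.array([0.0, 2.5, 0.0, -0.3, 1.0])   # illustrative sparse vector

# ||x||_0 = |supp(x)| simply counts the nonzero entries.
print("||x||_0 =", np.count_nonzero(x))     # 3

# Equation 3: ||x||_p^p = sum_i |x_i|^p -> |supp(x)| as p -> 0,
# since |x_i|^p -> 1 for every nonzero x_i and stays 0 otherwise.
for p in (1.0, 0.5, 0.1, 0.01):
    print(f"p = {p:>4}: ||x||_p^p = {np.sum(np.abs(x) ** p):.4f}")

# For p < 1 the triangle inequality can fail, so ||.||_p is only a quasinorm.
def lp(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

u, v, p = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5
print(lp(u + v, p), ">", lp(u, p) + lp(v, p))   # 4.0 > 2.0
```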

We typically use norms as a measure of the strength of a signal, or the size of an error. For example, suppose we are given a signal $x \in \mathbb{R}^2$ and wish to approximate it using a point in a one-dimensional affine space $A$. If we measure the approximation error using an $\ell_p$ norm, then our task is to find the $\hat{x} \in A$ that minimizes $\|x - \hat{x}\|_p$. The choice of $p$ will have a significant effect on the properties of the resulting approximation error. An example is illustrated in Figure 2. To compute the closest point in $A$ to $x$ using each $\ell_p$ norm, we can imagine growing an $\ell_p$ sphere centered on $x$ until it intersects with $A$. This will be the point $\hat{x} \in A$ that is closest to $x$ in the corresponding $\ell_p$ norm. We observe that larger $p$ tends to spread out the error more evenly between the two coefficients, while smaller $p$ leads to an error that is more unevenly distributed and tends to be sparse. This intuition generalizes to higher dimensions and plays an important role in the development of CS theory.

Figure 2: Best approximation of a point in $\mathbb{R}^2$ by a one-dimensional subspace using the $\ell_p$ norms with $p = 1, 2, \infty$, and the $\ell_p$ quasinorm with $p = \frac{1}{2}$.
(a) Approximation in the $\ell_1$ norm. (b) Approximation in the $\ell_2$ norm. (c) Approximation in the $\ell_\infty$ norm. (d) Approximation in the $\ell_p$ quasinorm.
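
To make this concrete, here is a minimal numerical sketch, assuming a hypothetical point $x$ and line $A$ (chosen for illustration, not taken from the figure): it finds the closest point on $A$ to $x$ under several $\ell_p$ (quasi-)norms by brute-force search over the line parameter.

```python
import numpy as np

# Hypothetical setup: approximate x in R^2 by a point on the affine line
# A = { a + t*d : t in R }, minimizing ||x - xhat||_p.  A dense grid search
# over t is used purely as a sketch; it is not an efficient solver.
x = np.array([1.0, 2.0])
a = np.array([0.0, 3.0])            # a point on the line (illustrative)
d = np.array([1.0, -0.5])           # direction of the line (illustrative)

def closest_point(p, ts=np.linspace(-10.0, 10.0, 40001)):
    candidates = a + np.outer(ts, d)                        # points on A
    errors = np.linalg.norm(x - candidates, ord=p, axis=1)  # l_p error sizes
    return candidates[np.argmin(errors)]

for p in (1, 2, np.inf, 0.5):
    xhat = closest_point(p)
    print(f"p = {p}: xhat = {xhat.round(3)}, error = {(x - xhat).round(3)}")
```

With these illustrative choices, the $\ell_1$ and $\ell_{1/2}$ solutions leave an error that is zero in one coordinate (sparse), the $\ell_\infty$ solution splits the error evenly between the two coordinates, and the $\ell_2$ solution falls in between, matching the intuition described above.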
