
Hilbert Spaces and Separable Vector Spaces

Module by: Don Johnson

Hilbert Spaces

Definition 1: Hilbert Spaces
A Hilbert space $\mathcal{H}$ is a closed, normed linear vector space which contains all of its limit points: if $x_n$ is any sequence of elements in $\mathcal{H}$ that converges to $x$, then $x$ is also contained in $\mathcal{H}$. $x$ is termed the limit point of the sequence.

Example 1

Let the space consist of all rational numbers, and let the inner product be simple multiplication: $\langle x, y \rangle = xy$. However, the limit point of the sequence
$$x_n = 1 + 1 + \frac{1}{2!} + \cdots + \frac{1}{n!}$$
is $e$, which is not a rational number. Consequently, this space is not a Hilbert space. However, if we define the space to consist of all real numbers, we have a Hilbert space.

Definition 2: orthogonal
If $\mathcal{Y}$ is a subspace of $\mathcal{H}$, the vector $x$ is orthogonal to the subspace $\mathcal{Y}$ if, for every $y \in \mathcal{Y}$, $\langle x, y \rangle = 0$.

We now arrive at a fundamental theorem.

Theorem 1

Let $\mathcal{H}$ be a Hilbert space and $\mathcal{Y}$ a subspace of it. Any element $x \in \mathcal{H}$ has the unique decomposition $x = y + z$, where $y \in \mathcal{Y}$ and $z$ is orthogonal to $\mathcal{Y}$. Furthermore,
$$\|x - y\| = \min_{\nu \in \mathcal{Y}} \|x - \nu\|$$
the distance between $x$ and all elements of $\mathcal{Y}$ is minimized by the vector $y$. This element $y$ is termed the projection of $x$ onto $\mathcal{Y}$.

Geometrically, $\mathcal{Y}$ is a line or a plane passing through the origin. Any vector $x$ can be expressed as the sum of a vector lying in $\mathcal{Y}$ and a vector orthogonal to $\mathcal{Y}$. This theorem is of extreme importance in linear estimation theory and plays a fundamental role in detection theory.
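The projection theorem above can be sketched numerically. The following is a minimal illustration (the function name and the test vectors are our own, not the module's): we project a vector in $\mathbb{R}^3$ onto a two-dimensional subspace, verify that the residual $z = x - y$ is orthogonal to the subspace, and spot-check that $y$ is at least as close to $x$ as a couple of other subspace elements.

```python
import numpy as np

def project_onto_subspace(x, basis):
    """Orthogonal projection of x onto span(basis), via least squares."""
    A = np.column_stack(basis)                 # columns span the subspace Y
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coeffs

x = np.array([1.0, 2.0, 3.0])
basis = [np.array([1.0, 0.0, 0.0]),            # Y is the x1-x2 plane
         np.array([0.0, 1.0, 0.0])]

y = project_onto_subspace(x, basis)            # the projection of x onto Y
z = x - y                                      # the orthogonal residual

# z is orthogonal to every basis vector of Y, hence to all of Y.
assert np.allclose([z @ b for b in basis], 0.0)

# y minimizes the distance from x to Y (checked against sample elements).
for nu in (np.array([1.0, 1.0, 0.0]), np.array([0.0, 2.0, 0.0])):
    assert np.linalg.norm(x - y) <= np.linalg.norm(x - nu)
```

For a finite-dimensional subspace with orthonormal basis vectors, the same projection could be computed directly as $y = \sum_i \langle x, \varphi_i \rangle \varphi_i$; least squares handles a non-orthonormal spanning set as well.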

Separable Vector Spaces

Definition 3: separable
A Hilbert space $\mathcal{H}$ is said to be separable if there exists a set of vectors $\{\varphi_i\}$, $i = 1, \ldots$, elements of $\mathcal{H}$, that express every element $x \in \mathcal{H}$ as
$$x = \sum_{i=1}^{\infty} x_i \varphi_i$$
where $x_i$ are scalar constants associated with $\varphi_i$ and $x$ and where "equality" is taken to mean that the distance between each side becomes zero as more terms are taken in the right:
$$\lim_{m \to \infty} \left\| x - \sum_{i=1}^{m} x_i \varphi_i \right\| = 0$$

The set of vectors $\{\varphi_i\}$ is said to form a complete set if the above relationship is valid. A complete set is said to form a basis for the space $\mathcal{H}$. Usually the elements of the basis for a space are taken to be linearly independent. Linear independence implies that the expansion of the zero vector in a basis can only be made with zero coefficients:
$$\sum_{i=1}^{\infty} x_i \varphi_i = 0 \implies x_i = 0 \text{ for all } i$$
The representation theorem states simply that separable vector spaces exist. The representation of the vector $x$ is the sequence of coefficients $\{x_i\}$.

Example 2

The space consisting of column matrices of length $N$ is easily shown to be separable. Let the vector $\varphi_i$ be given by a column matrix having a one in the $i$th row and zeros in the remaining rows: $\varphi_i = (0 \cdots 0\ 1\ 0 \cdots 0)^T$. This set of vectors $\{\varphi_i\}$, $i = 1, \ldots, N$, constitutes a basis for the space. Obviously if the vector $x$ is given by $x = (x_1\ x_2\ \cdots\ x_N)^T$, it may be expressed as
$$x = \sum_{i=1}^{N} x_i \varphi_i$$
using the basis vectors just defined.
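Example 2 can be sketched in a few lines (the variable names are ours): build the standard basis $\varphi_i$ for length-$N$ column matrices and confirm the expansion $x = \sum_i x_i \varphi_i$.

```python
import numpy as np

N = 4
# phi_i has a one in row i and zeros in the remaining rows.
phi = [np.eye(N)[:, i] for i in range(N)]

x = np.array([3.0, -1.0, 0.5, 2.0])

# The coefficients in this basis are just the entries of x themselves.
reconstruction = sum(x[i] * phi[i] for i in range(N))

assert np.allclose(reconstruction, x)   # x = sum_i x_i phi_i
```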

In general, the upper limit on the sum in Equation 1 is infinite. For the previous example, the upper limit is finite. The number of basis vectors that is required to express every element of a separable space in terms of Equation 1 is said to be the dimension of the space. In this example, the dimension of the space is NN. There exist separable vector spaces for which the dimension is infinite.

Definition 4: orthonormal
The basis for a separable vector space is said to be an orthonormal basis if the elements of the basis satisfy the following two properties:
  • The inner product between distinct elements of the basis is zero (i.e., the elements of the basis are mutually orthogonal):
    $$\langle \varphi_i, \varphi_j \rangle = 0, \quad i \neq j$$
  • The norm of each element of the basis is one (normality):
    $$\|\varphi_i\| = 1, \quad i = 1, \ldots$$

For example, the basis given above for the space of $N$-dimensional column matrices is orthonormal. For clarity, two facts must be explicitly stated. First, not every basis is orthonormal. If the vector space is separable, a complete set of vectors can be found; however, this set does not have to be orthonormal to be a basis. Second, not every set of orthonormal vectors can constitute a basis. When the vector space $L^2$ is discussed in detail, this point will be illustrated.

Despite these qualifications, an orthonormal basis exists for every separable vector space. There is an explicit algorithm, the Gram-Schmidt procedure, for deriving an orthonormal set of functions from a complete set. Let $\{\varphi_i\}$ denote a basis; the orthonormal basis $\{\psi_i\}$ is sought. The Gram-Schmidt procedure is:

  • 1.: $\psi_1 = \varphi_1 / \|\varphi_1\|$. This step makes $\psi_1$ have unit length.
  • 2.: $\tilde{\psi}_2 = \varphi_2 - \langle \psi_1, \varphi_2 \rangle \psi_1$. Consequently, the inner product between $\tilde{\psi}_2$ and $\psi_1$ is zero. We obtain $\psi_2$ from $\tilde{\psi}_2$ by forcing the vector to have unit length.
  • 2'.: $\psi_2 = \tilde{\psi}_2 / \|\tilde{\psi}_2\|$.

The algorithm now generalizes.

  • k.: $\tilde{\psi}_k = \varphi_k - \sum_{i=1}^{k-1} \langle \psi_i, \varphi_k \rangle \psi_i$
  • k'.: $\psi_k = \tilde{\psi}_k / \|\tilde{\psi}_k\|$

By construction, this new set of vectors is an orthonormal set. As the original set of vectors $\{\varphi_i\}$ is a complete set, and as each $\psi_k$ is just a linear combination of $\varphi_i$, $i = 1, \ldots, k$, the derived set $\{\psi_i\}$ is also complete. Because of the existence of this algorithm, a basis for a vector space is usually assumed to be orthonormal.
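The steps k and k' above translate directly into code. The following is a sketch for vectors in $\mathbb{R}^n$ using the dot product as the inner product (the function name, the tolerance, and the test vectors are our own choices):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set psi spanning the same space as vectors."""
    psi = []
    for phi_k in vectors:
        # Step k: subtract the components along the psi_i already found.
        residual = phi_k - sum(np.dot(p, phi_k) * p for p in psi)
        # Step k': normalize to unit length (skip linearly dependent inputs).
        norm = np.linalg.norm(residual)
        if norm > 1e-12:
            psi.append(residual / norm)
    return psi

phi = [np.array([1.0, 1.0, 0.0]),
       np.array([1.0, 0.0, 1.0]),
       np.array([0.0, 1.0, 1.0])]
psi = gram_schmidt(phi)

# The Gram matrix of the result is the identity: an orthonormal set.
G = np.array([[np.dot(a, b) for b in psi] for a in psi])
assert np.allclose(G, np.eye(len(psi)))
```

Note that this "classical" formulation can lose accuracy for nearly dependent inputs; numerically robust variants reorthogonalize, but the structure of the algorithm is the same.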

A vector's representation with respect to an orthonormal basis $\{\varphi_i\}$ is easily computed. The vector $x$ may be expressed by:
$$x = \sum_{i=1}^{\infty} x_i \varphi_i$$
$$x_i = \langle x, \varphi_i \rangle$$
This formula is easily confirmed by substituting Equation 5 into Equation 6 and using the properties of an inner product. Note that the exact element values of a given vector's representation depend upon both the vector and the choice of basis. Consequently, a meaningful specification of the representation of a vector must include the definition of the basis.
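The formula $x_i = \langle x, \varphi_i \rangle$ can be checked with a small sketch. To make the basis dependence visible, we use a rotated orthonormal basis of $\mathbb{R}^2$ (the angle and vectors are our own choice) rather than the standard one:

```python
import numpy as np

theta = np.pi / 6
phi = [np.array([np.cos(theta),  np.sin(theta)]),   # rotated orthonormal
       np.array([-np.sin(theta), np.cos(theta)])]   # basis of R^2

x = np.array([2.0, 1.0])

# Representation coefficients via inner products: x_i = <x, phi_i>.
coeffs = [np.dot(x, p) for p in phi]

# Substituting back recovers x, confirming the expansion formula.
reconstruction = sum(c * p for c, p in zip(coeffs, phi))
assert np.allclose(reconstruction, x)
```

The coefficients differ from the entries of $x$ itself, illustrating that the representation depends on the chosen basis.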

The mathematical representation of a vector (expressed by Equation 5 and Equation 6) can be expressed geometrically. This expression is a generalization of the Cartesian representation of numbers. Perpendicular axes are drawn; these axes correspond to the orthonormal basis vectors used in the representation. A given vector is represented as a point in the "plane" with the value of the component along the $\varphi_i$ axis being $x_i$.

An important relationship follows from this mathematical representation of vectors. Let $x$ and $y$ be any two vectors in a separable space. These vectors are represented with respect to an orthonormal basis by $\{x_i\}$ and $\{y_i\}$, respectively. The inner product $\langle x, y \rangle$ is related to these representations by:
$$\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i$$
This result is termed Parseval's Theorem. Consequently, the inner product between any two vectors can be computed from their representations. A special case of this result corresponds to the Cartesian notion of the length of a vector; when $x = y$, Parseval's relationship becomes:
$$\|x\|^2 = \sum_{i=1}^{\infty} x_i^2$$
These two relationships are key results of the representation theorem. The implication is that any inner product computed from vectors can also be computed from their representations. There are circumstances in which the latter computation is more manageable than the former and, furthermore, of greater theoretical significance.
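Parseval's Theorem is easy to verify numerically. The sketch below (basis angle and vectors are our own choices) computes inner products both directly from the vectors and from their representations with respect to a rotated orthonormal basis of $\mathbb{R}^2$, and checks that they agree:

```python
import numpy as np

theta = 0.7
phi = [np.array([np.cos(theta),  np.sin(theta)]),   # rotated orthonormal
       np.array([-np.sin(theta), np.cos(theta)])]   # basis of R^2

x = np.array([1.0, 3.0])
y = np.array([-2.0, 0.5])

xi = [np.dot(x, p) for p in phi]    # representation of x
yi = [np.dot(y, p) for p in phi]    # representation of y

# Parseval: the inner product equals the sum of coefficient products.
assert np.isclose(np.dot(x, y), sum(a * b for a, b in zip(xi, yi)))

# Special case x = y: the squared norm equals the sum of squared coefficients.
assert np.isclose(np.dot(x, x), sum(a * a for a in xi))
```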
