# The Singular Value Decomposition

Module by: Steven J. Cox. E-mail the author

Summary: This module introduces the singular value decomposition.

The singular value decomposition is another name for the spectral representation of a rectangular matrix. Of course if $A$ is $m$-by-$n$ and $m \neq n$ then it does not make sense to speak of the eigenvalues of $A$. We may, however, rely on the previous section to give us relevant spectral representations of the two symmetric matrices

• $A^T A$
• $A A^T$

That these two matrices together may indeed tell us 'everything' about $A$ can be gleaned from

$$\mathcal{N}(A^T A) = \mathcal{N}(A) \qquad \mathcal{N}(A A^T) = \mathcal{N}(A^T) \qquad \mathcal{R}(A^T A) = \mathcal{R}(A^T) \qquad \mathcal{R}(A A^T) = \mathcal{R}(A)$$

You have proven the first of these in a previous exercise. The proof of the second is identical. The row and column space results follow from the first two via orthogonality.

On the spectral side, we shall now see that the eigenvalues of $A A^T$ and $A^T A$ are nonnegative and that their nonzero eigenvalues coincide. Let us first confirm this on the adjacency matrix associated with the unstable swing (see figure in another module)

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
(1)

The respective products are

$$A A^T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad A^T A = \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Analysis of the first is particularly simple. Its null space is clearly just the zero vector while $\lambda_1 = 2$ and $\lambda_2 = 1$ are its eigenvalues. Their geometric multiplicities are $n_1 = 1$ and $n_2 = 2$. In $A^T A$ we recognize the $S$ matrix from the exercise in another module and recall that its eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 1$, and $\lambda_3 = 0$ with multiplicities $n_1 = 1$, $n_2 = 2$, and $n_3 = 1$. Hence, at least for this $A$, the eigenvalues of $A A^T$ and $A^T A$ are nonnegative and their nonzero eigenvalues coincide. In addition, the geometric multiplicities of the nonzero eigenvalues sum to 3, the rank of $A$.
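These spectral facts are easy to confirm by machine. Here is a minimal NumPy sketch (Python rather than the module's Matlab, an assumption on my part) that forms both products for the swing matrix and compares their eigenvalues:

```python
import numpy as np

# Adjacency matrix of the unstable swing, Equation 1
A = np.array([[0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [0., 0., 0., 1.]])

eig_AAT = np.sort(np.linalg.eigvalsh(A @ A.T))   # eigenvalues of the 3-by-3 product
eig_ATA = np.sort(np.linalg.eigvalsh(A.T @ A))   # eigenvalues of the 4-by-4 product

# All eigenvalues are nonnegative (up to roundoff)
assert np.all(eig_AAT > -1e-12) and np.all(eig_ATA > -1e-12)
# The nonzero eigenvalues coincide; A^T A simply carries one extra zero
assert np.allclose(eig_ATA[1:], eig_AAT)
assert np.allclose(eig_ATA[0], 0)
```

Both lists come out as $\{2, 1, 1\}$ plus, for $A^T A$, a single zero, exactly as computed by hand above.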

## Proposition 1

The eigenvalues of $A A^T$ and $A^T A$ are nonnegative. Their nonzero eigenvalues, including geometric multiplicities, coincide. The geometric multiplicities of the nonzero eigenvalues sum to the rank of $A$.

### Proof

If $A^T A x = \lambda x$ then $x^T A^T A x = \lambda x^T x$, i.e., $\|Ax\|^2 = \lambda \|x\|^2$, and so $\lambda \ge 0$. A similar argument works for $A A^T$.

Now suppose that $\lambda_j > 0$ and that $\{x_{j,k}\}_{k=1}^{n_j}$ constitutes an orthonormal basis for the eigenspace $P_j$. Starting from

$$A^T A x_{j,k} = \lambda_j x_{j,k}$$
(2)

we find, on multiplying through (from the left) by $A$, that

$$A A^T (A x_{j,k}) = \lambda_j (A x_{j,k})$$

i.e., $\lambda_j$ is an eigenvalue of $A A^T$ with eigenvector $A x_{j,k}$, so long as $A x_{j,k} \neq 0$. It follows from the first paragraph of this proof that $\|A x_{j,k}\| = \sqrt{\lambda_j}$, which, by hypothesis, is nonzero. Hence,

$$y_{j,k} \equiv \frac{A x_{j,k}}{\sqrt{\lambda_j}}, \qquad 1 \le k \le n_j$$
(3)

is a collection of unit eigenvectors of $A A^T$ associated with $\lambda_j$. Let us now show that these vectors are orthonormal for fixed $j$: they are unit vectors by construction, and if $i \neq k$ then

$$y_{j,i}^T y_{j,k} = \frac{1}{\lambda_j} x_{j,i}^T A^T A x_{j,k} = x_{j,i}^T x_{j,k} = 0$$

We have now demonstrated that if $\lambda_j > 0$ is an eigenvalue of $A^T A$ of geometric multiplicity $n_j$, then it is an eigenvalue of $A A^T$ of geometric multiplicity at least $n_j$. Reversing the argument, i.e., generating eigenvectors of $A^T A$ from those of $A A^T$, we find that the geometric multiplicities must indeed coincide.

Regarding the rank statement, we discern from Equation 2 that if $\lambda_j > 0$ then $x_{j,k} \in \mathcal{R}(A^T A)$. The union of these vectors indeed constitutes a basis for $\mathcal{R}(A^T A)$, for anything orthogonal to each of these $x_{j,k}$ necessarily lies in the eigenspace corresponding to a zero eigenvalue, i.e., in $\mathcal{N}(A^T A)$. As $\mathcal{R}(A^T A) = \mathcal{R}(A^T)$ it follows that $\dim \mathcal{R}(A^T A) = r$ and hence the $n_j$, for $\lambda_j > 0$, sum to $r$.
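The mechanism of this proof can be run numerically: take orthonormal eigenvectors of $A^T A$, push those with nonzero eigenvalue through $A$, scale by $1/\sqrt{\lambda_j}$ as in Equation 3, and check that the results are orthonormal eigenvectors of $A A^T$. A NumPy sketch (an assumption; the module itself computes by hand):

```python
import numpy as np

A = np.array([[0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [0., 0., 0., 1.]])

lam, X = np.linalg.eigh(A.T @ A)         # orthonormal eigenvectors of A^T A
keep = lam > 1e-12                       # indices of the nonzero eigenvalues
Y = A @ X[:, keep] / np.sqrt(lam[keep])  # Equation 3, column by column

# The y_{j,k} are orthonormal ...
assert np.allclose(Y.T @ Y, np.eye(Y.shape[1]))
# ... and are eigenvectors of A A^T with the very same eigenvalues
assert np.allclose((A @ A.T) @ Y, Y * lam[keep])
```

For this $A$ the three nonzero eigenvalues of $A^T A$ yield three orthonormal eigenvectors of $A A^T$, confirming that the multiplicities match.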

Let us now gather together some of the separate pieces of the proof. For starters, we order the eigenvalues of $A^T A$ from high to low,

$$\lambda_1 > \lambda_2 > \cdots > \lambda_h$$

and write

$$A^T A = X \Lambda_n X^T$$
(4)

where $X = (X_1 \ \cdots \ X_h)$ with $X_j = (x_{j,1} \ \cdots \ x_{j,n_j})$, and $\Lambda_n$ is the $n$-by-$n$ diagonal matrix with $\lambda_1$ in the first $n_1$ slots, $\lambda_2$ in the next $n_2$ slots, etc. Similarly

$$A A^T = Y \Lambda_m Y^T$$
(5)

where $Y = (Y_1 \ \cdots \ Y_h)$ with $Y_j = (y_{j,1} \ \cdots \ y_{j,n_j})$, and $\Lambda_m$ is the $m$-by-$m$ diagonal matrix with $\lambda_1$ in the first $n_1$ slots, $\lambda_2$ in the next $n_2$ slots, etc. The $y_{j,k}$ were defined in Equation 3 under the assumption that $\lambda_j > 0$. If $\lambda_j = 0$ let $Y_j$ denote an orthonormal basis for $\mathcal{N}(A A^T)$. Finally, call $\sigma_j = \sqrt{\lambda_j}$ and let $\Sigma$ denote the $m$-by-$n$ diagonal matrix with $\sigma_1$ in the first $n_1$ slots, $\sigma_2$ in the next $n_2$ slots, etc. Notice that

$$\Sigma^T \Sigma = \Lambda_n$$
(6)

$$\Sigma \Sigma^T = \Lambda_m$$
(7)
Now recognize that Equation 3 may be written

$$A x_{j,k} = \sigma_j y_{j,k}$$

and that this is simply the column-by-column rendition of

$$A X = Y \Sigma$$

As $X X^T = I$ we may multiply through (from the right) by $X^T$ and arrive at the singular value decomposition of $A$,

$$A = Y \Sigma X^T$$
(8)

Let us confirm this on the $A$ matrix in Equation 1. We have

$$\Lambda_4 = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \qquad X = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 & 0 & 0 & 1 \\ 0 & \sqrt{2} & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & \sqrt{2} & 0 \end{pmatrix}$$

$$\Lambda_3 = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad Y = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Hence,

$$\Sigma = \begin{pmatrix} \sqrt{2} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
(9)

and so $A = Y \Sigma X^T$ says that $A$ should coincide with

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \sqrt{2} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} & 0 \end{pmatrix}$$

This indeed agrees with $A$. It also agrees (up to sign changes on the columns of $X$) with what one receives upon typing `[Y, SIG, X] = svd(A)` in Matlab.
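The same confirmation can be delegated to a library SVD. This NumPy sketch (assumed here; the module uses Matlab's `svd`) factors the swing matrix and checks the reconstruction:

```python
import numpy as np

A = np.array([[0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [0., 0., 0., 1.]])

# np.linalg.svd returns Y, the singular values, and X^T
Y, sig, Xt = np.linalg.svd(A)

# the singular values are sqrt(2), 1, 1, as in Equation 9
assert np.allclose(sig, [np.sqrt(2.), 1., 1.])

# rebuild the m-by-n Sigma and confirm that Y Sigma X^T recovers A
Sigma = np.zeros(A.shape)
Sigma[:len(sig), :len(sig)] = np.diag(sig)
assert np.allclose(Y @ Sigma @ Xt, A)
```

The columns of `Xt.T` and `Y` may differ from the hand computation by sign flips, but the product, and the singular values, are the same.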

You now ask what we get for our troubles. I express the first dividend as a proposition that looks to me like a quantitative version of the fundamental theorem of linear algebra.

## Proposition 2

If $Y \Sigma X^T$ is the singular value decomposition of $A$ then

1. The rank of $A$, call it $r$, is the number of nonzero elements in $\Sigma$.
2. The first $r$ columns of $X$ constitute an orthonormal basis for $\mathcal{R}(A^T)$. The last $n - r$ columns of $X$ constitute an orthonormal basis for $\mathcal{N}(A)$.
3. The first $r$ columns of $Y$ constitute an orthonormal basis for $\mathcal{R}(A)$. The last $m - r$ columns of $Y$ constitute an orthonormal basis for $\mathcal{N}(A^T)$.
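Each of the three claims can be checked mechanically on the swing matrix of Equation 1. A NumPy sketch, assuming nothing beyond a library SVD:

```python
import numpy as np

A = np.array([[0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [0., 0., 0., 1.]])
m, n = A.shape

Y, sig, Xt = np.linalg.svd(A)

# claim 1: the rank is the number of nonzero singular values
r = np.sum(sig > 1e-12)
assert r == np.linalg.matrix_rank(A) == 3

X = Xt.T
# claim 2: the last n - r columns of X span the null space of A
assert np.allclose(A @ X[:, r:], 0)
# claim 3: the first r columns of Y span the column space of A, so
# projecting each column of A onto them must reproduce A
P = Y[:, :r] @ Y[:, :r].T
assert np.allclose(P @ A, A)
```

For this $A$ we have $r = m = 3$, so the third check is automatic; the second genuinely exercises the lone null vector.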

Let us now 'solve' $Ax = b$ with the help of the pseudo-inverse of $A$. You know the 'right' thing to do, namely reciprocate all of the nonzero singular values. Because $m$ is not necessarily $n$ we must also be careful with dimensions. To be precise, let $\Sigma^+$ denote the $n$-by-$m$ matrix whose first $n_1$ diagonal elements are $\frac{1}{\sigma_1}$, whose next $n_2$ diagonal elements are $\frac{1}{\sigma_2}$, and so on. In the case that $\sigma_h = 0$, set the final $n_h$ diagonal elements of $\Sigma^+$ to zero. Now, one defines the pseudo-inverse of $A$ to be

$$A^+ \equiv X \Sigma^+ Y^T$$

In the case that $A$ is that appearing in Equation 1 we find

$$\Sigma^+ = \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$

and so

$$A^+ = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 & 0 & 0 & 1 \\ 0 & \sqrt{2} & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & \sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

therefore

$$A^+ = \begin{pmatrix} 0 & -\frac{1}{2} & 0 \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

in agreement with what appears from `pinv(A)`. Let us now investigate the sense in which $A^+$ is the inverse of $A$. Suppose that $b \in \mathbb{R}^m$ and that we wish to solve $Ax = b$. We suspect that $A^+ b$ should be a good candidate. Observe that

$$(A^T A) A^+ b = X \Lambda_n X^T X \Sigma^+ Y^T b \qquad \text{by Equation 4}$$

$$= X \Lambda_n \Sigma^+ Y^T b \qquad \text{because } X^T X = I$$

$$= X \Sigma^T \Sigma \Sigma^+ Y^T b \qquad \text{by Equation 6}$$

$$= X \Sigma^T Y^T b \qquad \text{because } \Sigma^T \Sigma \Sigma^+ = \Sigma^T$$

$$= A^T b \qquad \text{by Equation 8}$$

that is, $A^+ b$ satisfies the least-squares problem $A^T A x = A^T b$.
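The recipe for $\Sigma^+$ (transpose the shape, reciprocate the nonzero $\sigma_j$) translates directly to code. This NumPy sketch (an assumption, standing in for the module's Matlab `pinv`) rebuilds $A^+$ and verifies the least-squares property:

```python
import numpy as np

A = np.array([[0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [0., 0., 0., 1.]])
m, n = A.shape

Y, sig, Xt = np.linalg.svd(A)

# Sigma^+ is n-by-m: the transposed shape, with nonzero sigmas reciprocated
Sigma_plus = np.zeros((n, m))
for i, s in enumerate(sig):
    if s > 1e-12:
        Sigma_plus[i, i] = 1.0 / s

A_plus = Xt.T @ Sigma_plus @ Y.T   # A^+ = X Sigma^+ Y^T

# agrees with the matrix computed by hand above, and with pinv
expected = np.array([[0., -0.5, 0.],
                     [1.,  0.,  0.],
                     [0.,  0.5, 0.],
                     [0.,  0.,  1.]])
assert np.allclose(A_plus, expected)
assert np.allclose(A_plus, np.linalg.pinv(A))

# A^+ b satisfies the normal equations A^T A x = A^T b for any b
b = np.array([1., 2., 3.])
assert np.allclose(A.T @ A @ (A_plus @ b), A.T @ b)
```

The pseudo-inverse is unique, so any sign convention the SVD routine chooses for the columns of $X$ and $Y$ washes out of the product $X \Sigma^+ Y^T$.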
