This module is included in a lens by Digital Scholarship at Rice University, as part of the collection "State Space Systems".

Click the "Rice Digital Scholarship" link to see all content affiliated with them.

### Recently Viewed

This feature requires Javascript to be enabled.

# Eigenvalue Decomposition

Module by: Thanos Antoulas, JP Slavinsky


When we apply a matrix to a vector (i.e. multiply them together), the vector is transformed. An interesting question to ask ourselves is whether there are any particular combinations of such a matrix and vector whose result is a new vector that is proportional to the original vector. In math terminology, this question can be posed as follows: if we have a matrix $A: \mathbb{R}^n \to \mathbb{R}^n$, does there exist a vector $x \in \mathbb{R}^n$ and a scalar $\lambda \in \mathbb{C}$ such that $Ax = \lambda x$? If so, then the complexity of $Ax$ is reduced. It no longer must be thought of as a matrix multiplication; instead, applying $A$ to $x$ has the simple effect of linearly scaling $x$ by some scalar factor $\lambda$.

In this situation, where $Ax = \lambda x$, $\lambda$ is known as an eigenvalue and $x$ is its associated eigenvector. For a certain matrix, each one of its eigenvectors is associated with a particular (though not necessarily unique) eigenvalue. The word "eigen" is German for "own" or "characteristic"; this is fitting because after the matrix multiplication the vector $x$ keeps its own direction, changed only by the scaling factor $\lambda$. The following two examples give actual possible values for the matrices, vectors, and values discussed in general terms above.

$$\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 0\begin{pmatrix} 1 \\ 1 \end{pmatrix} \tag{1}$$

Here, $\bigl(\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\bigr)$ is the eigenvector and $0$ is its associated eigenvalue.

$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 3\begin{pmatrix} 1 \\ 1 \end{pmatrix} \tag{2}$$

In this second example, $\bigl(\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\bigr)$ is again the eigenvector but the eigenvalue is now $3$.
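The two examples can be checked numerically. The sketch below assumes NumPy, which the module itself does not use:

```python
import numpy as np

# Example from Eq. (1): A x = 0 * x, so the eigenvalue is 0
A1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
x = np.array([1.0, 1.0])
print(A1 @ x)   # [0. 0.]

# Example from Eq. (2): A x = 3 * x, so the eigenvalue is 3
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])
print(A2 @ x)   # [3. 3.]
```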

Now we'd like to develop a method of finding the eigenvalues and eigenvectors of a matrix. We start with what is basically the defining equation behind this whole idea:

$$Ax = \lambda x \tag{3}$$

Next, we move the $\lambda x$ term to the left-hand side and factor:

$$(A - \lambda I)x = 0 \tag{4}$$

Here's the important rule to remember: there exists $x \neq 0$ satisfying the equation if and only if $\det(A - \lambda I) = 0$. So, to find the eigenvalues, we need to solve this determinant equation.

## Example 1

Given the matrix $A$, solve for $\lambda$ in $\det(A - \lambda I) = 0$.

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \tag{5}$$

$$\det(A - \lambda I) = \det\begin{pmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{pmatrix} = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0 \tag{6}$$

$$\lambda = 3, 1 \tag{7}$$

After finding the eigenvalues, we need to find the associated eigenvectors. Looking at the defining equation, we see that the eigenvector $x$ is annihilated by the matrix $A - \lambda I$. So to solve for the eigenvectors, we simply find the kernel (nullspace) of $A - \lambda I$ using the two eigenvalues we just calculated. If we did this for the example above, we'd find that the eigenvector associated with $\lambda = 3$ is $\bigl(\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\bigr)$ and the eigenvector associated with $\lambda = 1$ is $\bigl(\begin{smallmatrix} 1 \\ -1 \end{smallmatrix}\bigr)$.
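As a sketch of this whole procedure (assuming NumPy), `numpy.linalg.eig` returns the eigenvalues and unit-norm eigenvectors directly; up to scaling and sign, its eigenvectors match the kernel vectors found above:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are unit-norm eigenvectors

# NumPy scales eigenvectors to unit length, so [1, 1] and [1, -1]
# appear (up to sign) as [1, 1]/sqrt(2) and [1, -1]/sqrt(2).
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)   # each pair satisfies A x = lambda x

print(np.sort(eigvals))   # [1. 3.]
```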

You may be wondering why eigenvalue decomposition is useful. It seems at first glance that it is only helpful in determining the effect a matrix has on a certain small subset of possible vectors (the eigenvectors). However, the benefits become clear when you think about how many other vectors can be looked at from an eigenvalue perspective by decomposing them into components along the available eigenvectors. For instance, in the above example, let's say we wanted to apply $A$ to the vector $\bigl(\begin{smallmatrix} 2 \\ 0 \end{smallmatrix}\bigr)$. Instead of doing the matrix multiply (admittedly not too difficult in this case), the vector could be split into components along the eigenvectors:

$$\begin{pmatrix} 2 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix} \tag{8}$$

Now, each of these components could be scaled by the appropriate eigenvalue and then added back together to form the net result.
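A quick numerical check of this idea (assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
v1 = np.array([1.0, 1.0])    # eigenvector for lambda = 3
v2 = np.array([1.0, -1.0])   # eigenvector for lambda = 1
b = np.array([2.0, 0.0])     # b = 1*v1 + 1*v2, as in Eq. (8)

# Scale each component by its eigenvalue instead of multiplying by A
result = 3 * v1 + 1 * v2
assert np.allclose(result, A @ b)   # same answer as the matrix multiply
print(result)   # [4. 2.]
```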

## Multiplicity

Once we have determined the eigenvalues of a particular matrix, we can start to discuss them in terms of their multiplicity. There are two types of eigenvalue multiplicity: algebraic multiplicity and geometric multiplicity.

Definition 1: Algebraic Multiplicity
The number of repetitions of a certain eigenvalue. If, for a certain matrix, $\lambda = \{3, 3, 4\}$, then the algebraic multiplicity of $3$ would be $2$ (as it appears twice) and the algebraic multiplicity of $4$ would be $1$ (as it appears once). This type of multiplicity is normally represented by the Greek letter $\alpha$, where $\alpha(\lambda_i)$ represents the algebraic multiplicity of $\lambda_i$.
Definition 2: Geometric Multiplicity
A particular eigenvalue's geometric multiplicity is defined as the dimension of the nullspace of $\lambda I - A$. This type of multiplicity is normally represented by the Greek letter $\gamma$, where $\gamma(\lambda_i)$ represents the geometric multiplicity of $\lambda_i$.
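The two multiplicities can differ. A minimal sketch (assuming NumPy; the matrix below is a hypothetical example, not from the module) uses an upper-triangular matrix whose eigenvalue $3$ is repeated but has only a one-dimensional nullspace of $\lambda I - A$:

```python
import numpy as np

# Eigenvalue 3 repeated twice: alpha(3) = 2
A = np.array([[3.0, 1.0], [0.0, 3.0]])
eigvals = np.linalg.eigvals(A)
print(eigvals)   # [3. 3.]

# gamma(3) = dim null(3I - A) = n - rank(3I - A)
n = A.shape[0]
gamma = n - np.linalg.matrix_rank(3 * np.eye(n) - A)
print(gamma)     # 1, so gamma(3) < alpha(3)
```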

### Rank

A matrix $A$ is full rank if $\det A \neq 0$. However, if $\lambda = 0$ is an eigenvalue, then $\det(\lambda I - A) = \det(-A) = 0$, which tells us that $\det A = 0$. Therefore, if a matrix has at least one eigenvalue equal to $0$, then it cannot have full rank. Specifically, for an $n$-dimensional square matrix:

• When one eigenvalue equals $0$: $\operatorname{rank} A = n - 1$.
• When multiple eigenvalues equal $0$: $\operatorname{rank} A = n - \gamma(0)$, where $\gamma(0)$ is the geometric multiplicity of $\lambda = 0$. This property holds even if there are other non-zero eigenvalues.
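A numerical illustration of the single-zero-eigenvalue case (assuming NumPy; the matrix is a hypothetical example):

```python
import numpy as np

# Rows are linearly dependent, so the eigenvalues are 0 and 2
A = np.array([[1.0, 1.0], [1.0, 1.0]])
print(np.sort(np.linalg.eigvals(A)))   # [0. 2.]

n = A.shape[0]
rank = np.linalg.matrix_rank(A)
print(rank)   # 1, i.e. n - 1, since exactly one eigenvalue is 0
```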

### Symmetric Matrices

A symmetric matrix is one whose transpose is equal to itself ($A = A^T$). These matrices (represented by $A$ below) have the following properties:

1. Their eigenvalues are real.
2. Their eigenvectors can be chosen to be orthogonal (eigenvectors for distinct eigenvalues are automatically orthogonal).
3. They are always diagonalizable.
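All three properties can be observed numerically. The sketch below assumes NumPy, whose `numpy.linalg.eigh` routine is specialized for symmetric matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric: A == A.T
eigvals, Q = np.linalg.eigh(A)           # eigh assumes a symmetric input
print(eigvals)                           # [1. 3.] -- real eigenvalues

# The eigenvector matrix Q is orthogonal (Q.T @ Q = I),
# and A is diagonalized as A = Q diag(eigvals) Q.T
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, A)
```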
