
# LMS Algorithm Analysis

Module by: Clayton Scott, Robert Nowak.

## Objective

Minimize the instantaneous squared error:

$$e_k^2(w) = (y_k - x_k^T w)^2$$
(1)

## LMS Algorithm

$$w_k = w_{k-1} + \mu x_k e_k$$
(2)
where $w_k$ is the new weight vector, $w_{k-1}$ is the old weight vector, and $\mu x_k e_k$ is a small step in the direction of the instantaneous error gradient.
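As a concrete sketch, the update in Equation 2 takes only a few lines of NumPy; the data values below are made up purely for illustration.

```python
import numpy as np

def lms_step(w_prev, x, y, mu):
    """One LMS update: w_k = w_{k-1} + mu * x_k * e_k,
    where e_k = y_k - x_k^T w_{k-1} is the instantaneous error."""
    e = y - x @ w_prev
    return w_prev + mu * x * e, e

# One step on made-up data
w0 = np.zeros(2)
x1 = np.array([1.0, 2.0])
y1 = 3.0
w1, e1 = lms_step(w0, x1, y1, mu=0.1)
print(e1)  # 3.0 (error computed with the old weights)
print(w1)  # [0.3 0.6]
```

Note that the error is evaluated with the old weights before the step is taken; this ordering is what makes the later learning-curve analysis go through.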

## Interpretation in Terms of Weight Error Vector

Define

$$v_k = w_k - w_{opt}$$
(3)
where $w_{opt}$ is the optimal weight vector, and
$$\varepsilon_k = y_k - x_k^T w_{opt}$$
(4)
where $\varepsilon_k$ is the minimum error. The stochastic difference equation is:
$$v_k = (I - \mu x_k x_k^T) v_{k-1} + \mu x_k \varepsilon_k$$
(5)
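Equation 5 can be checked numerically: running the LMS update and the weight-error recursion side by side on the same data should give $v_k = w_k - w_{opt}$ at every step. The dimension, noise level, and step size below are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
p, mu = 3, 0.05
w_opt = rng.normal(size=p)

w = np.zeros(p)           # LMS weights (Equation 2)
v = w - w_opt             # weight error vector (Equation 3)
for k in range(50):
    x = rng.normal(size=p)
    eps = 0.1 * rng.normal()              # minimum error epsilon_k
    y = x @ w_opt + eps                   # data model
    w = w + mu * x * (y - x @ w)          # LMS update
    v = (np.eye(p) - mu * np.outer(x, x)) @ v + mu * x * eps  # Equation 5
print(np.allclose(v, w - w_opt))  # True: the two recursions agree
```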

## Convergence/Stability Analysis

We want to show tightness:
$$\lim_{B \to \infty} \max_k \Pr[\|v_k\| \geq B] = 0$$
(6)
i.e., with probability 1 the weight error vector is bounded for all $k$.

Chebyshev's inequality is
$$\Pr[\|v_k\| \geq B] \leq \frac{E[\|v_k\|^2]}{B^2}$$
(7)
and hence
$$\Pr[\|v_k\| \geq B] \leq \frac{1}{B^2}\left(\|E[v_k]\|^2 + \sigma(v_k)^2\right)$$
(8)
where $\|E[v_k]\|^2$ is the squared bias. If $\|E[v_k]\|^2 + \sigma(v_k)^2$ is finite for all $k$, then $\lim_{B \to \infty} \Pr[\|v_k\| \geq B] = 0$ for all $k$.
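Chebyshev's inequality is easy to sanity-check by simulation; in the sketch below a Gaussian random vector simply stands in for $v_k$ (the distribution and dimension are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
# A Gaussian random vector stands in for v_k (arbitrary choice)
v = rng.standard_normal((100_000, 3))
norms = np.linalg.norm(v, axis=1)
second_moment = np.mean(norms**2)        # estimate of E[||v||^2] (about 3 here)
for B in (2.0, 3.0, 5.0):
    empirical = np.mean(norms >= B)
    bound = second_moment / B**2         # Chebyshev bound, Equation 7
    print(B, empirical <= bound)         # the bound holds at every B
```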

Also,
$$\sigma(v_k)^2 = \mathrm{tr}(E[v_k v_k^T])$$
(9)
Therefore $\sigma(v_k)^2$ is finite if the diagonal elements of $\Gamma_k = E[v_k v_k^T]$ are bounded.

## Convergence in Mean

We want $E[v_k] \to 0$ as $k \to \infty$. Taking the expectation of Equation 5 and using the smoothing property to simplify the calculation gives $E[v_k] = (I - \mu R_{xx}) E[v_{k-1}]$, so we have convergence in mean if

1. $R_{xx}$ is positive definite (invertible), and
2. $\mu < \frac{2}{\lambda_{\max}(R_{xx})}$.
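For a given input correlation matrix both conditions are easy to check numerically; the $R_{xx}$ below is a made-up example, not data from this module.

```python
import numpy as np

# Hypothetical input correlation matrix R_xx
R_xx = np.array([[2.0, 0.5],
                 [0.5, 1.0]])
lam = np.linalg.eigvalsh(R_xx)        # eigenvalues, ascending
assert np.all(lam > 0)                # condition 1: R_xx positive definite
mu_max = 2.0 / lam.max()              # condition 2 requires mu < 2 / lambda_max
print(mu_max)                         # about 0.906 for this R_xx
```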

## Bounded Variance

We must show that $\Gamma_k = E[v_k v_k^T]$, the weight error covariance, is bounded for all $k$.

### Note:

We could have $E[v_k] \to 0$ but $\sigma(v_k)^2 \to \infty$, in which case the algorithm would not be stable.

Recall that it is fairly straightforward to show that the diagonal elements of the transformed covariance $C_k = U \Gamma_k U^T$ tend to zero if $\mu < \frac{1}{\lambda_{\max}(R_{xx})}$, where $U$ is the eigenvector matrix of $R_{xx}$ ($R_{xx} = U D U^T$). The diagonal elements of $C_k$ are denoted $\gamma_{k,i}$, $i = 1, \dots, p$.

### Note:

$$\sigma(v_k)^2 = \mathrm{tr}\,\Gamma_k = \mathrm{tr}(U^T C_k U) = \mathrm{tr}\,C_k = \sum_{i=1}^{p} \gamma_{k,i}$$

Thus, to guarantee boundedness of $\sigma(v_k)^2$ we need to show that the "steady-state" values $\gamma_{k,i} \to \gamma_i < \infty$.

We showed that
$$\gamma_i = \frac{\mu(\alpha + \sigma_\varepsilon^2)}{2(1 - \mu\lambda_i)}$$
(10)
where $\sigma_\varepsilon^2 = E[\varepsilon_k^2]$, $\lambda_i$ is the $i$th eigenvalue of $R_{xx}$ ($R_{xx} = U \,\mathrm{diag}(\lambda_1, \dots, \lambda_p)\, U^T$), and $\alpha = \frac{c \sigma_\varepsilon^2}{1 - c}$, with
$$0 < c = \frac{1}{2}\sum_{i=1}^{p} \frac{\mu\lambda_i}{1 - \mu\lambda_i} < 1$$
(11)
We found a sufficient condition on $\mu$ that guarantees that the steady-state $\gamma_i$'s (and hence $\sigma(v_k)^2$) are bounded:
$$\mu < \frac{2}{3 \sum_{i=1}^{p} \lambda_i}$$
where $\sum_{i=1}^{p} \lambda_i = \mathrm{tr}\,R_{xx}$ is the input vector energy.
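The quantity $c$ in Equation 11 is simple to evaluate. The sketch below uses hypothetical eigenvalues and checks that a step size just inside the bound $\mu < \frac{2}{3\,\mathrm{tr}\,R_{xx}}$ indeed gives $0 < c < 1$.

```python
import numpy as np

def c_value(mu, lam):
    """c from Equation 11: (1/2) * sum_i mu*lam_i / (1 - mu*lam_i)."""
    lam = np.asarray(lam)
    return 0.5 * np.sum(mu * lam / (1.0 - mu * lam))

lam = np.array([2.2, 0.8, 0.5])            # hypothetical eigenvalues of R_xx
mu = 0.9 * 2.0 / (3.0 * lam.sum())         # just inside mu < 2 / (3 tr R_xx)
c = c_value(mu, lam)
print(0.0 < c < 1.0)  # True: the steady-state gamma_i's are bounded
```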

With this choice of $\mu$ we have:

1. convergence in mean, and
2. bounded variance, $\sigma(v_k)^2 < \infty$.

These imply
$$\lim_{B \to \infty} \max_k \Pr[\|v_k\| \geq B] = 0$$
(12)
In other words, the LMS algorithm is stable about the optimum weight vector $w_{opt}$.

## Learning Curve

Recall that
$$e_k = y_k - x_k^T w_{k-1}$$
(13)
and Equation 4. These imply
$$e_k = \varepsilon_k - x_k^T v_{k-1}$$
(14)
where $v_{k-1} = w_{k-1} - w_{opt}$. So the MSE is
$$\begin{aligned} E[e_k^2] &= \sigma_\varepsilon^2 + E[v_{k-1}^T x_k x_k^T v_{k-1}] \\ &= \sigma_\varepsilon^2 + E\left[E[v_{k-1}^T x_k x_k^T v_{k-1} \mid x_n, \varepsilon_n,\ n < k]\right] \\ &= \sigma_\varepsilon^2 + E[v_{k-1}^T R_{xx} v_{k-1}] \\ &= \sigma_\varepsilon^2 + E[\mathrm{tr}(R_{xx} v_{k-1} v_{k-1}^T)] \\ &= \sigma_\varepsilon^2 + \mathrm{tr}(R_{xx} \Gamma_{k-1}) \end{aligned}$$
(15)
where $\mathrm{tr}(R_{xx} \Gamma_{k-1}) \to \alpha$ as $k \to \infty$, with $\alpha = \frac{c \sigma_\varepsilon^2}{1 - c}$. So the limiting MSE is
$$\varepsilon_\infty = \lim_{k \to \infty} E[e_k^2] = \sigma_\varepsilon^2 + \frac{c \sigma_\varepsilon^2}{1 - c} = \frac{\sigma_\varepsilon^2}{1 - c}$$
(16)
Since $0 < c < 1$ was required for convergence, $\varepsilon_\infty > \sigma_\varepsilon^2$, so we see that noisy adaptation leads to an MSE larger than the optimal
$$E[\varepsilon_k^2] = E[(y_k - x_k^T w_{opt})^2] = \sigma_\varepsilon^2$$
(17)
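To make Equation 16 concrete, here is the limiting MSE for a hypothetical setup with white input ($R_{xx} = I$, so $\lambda_i = 1$), $\sigma_\varepsilon^2 = 0.01$, and an arbitrarily chosen step size.

```python
import numpy as np

# Hypothetical values: white input (lambda_i = 1), sigma_eps^2 = 0.01
lam = np.array([1.0, 1.0])
sigma_eps2 = 0.01
mu = 0.05
c = 0.5 * np.sum(mu * lam / (1.0 - mu * lam))   # Equation 11
mse_inf = sigma_eps2 / (1.0 - c)                # Equation 16
print(mse_inf)                # slightly above sigma_eps^2 = 0.01
print(mse_inf > sigma_eps2)   # True: noisy adaptation costs extra MSE
```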
To quantify the increase in the MSE, define the so-called misadjustment:
$$M = \frac{\varepsilon_\infty - \sigma_\varepsilon^2}{\sigma_\varepsilon^2} = \frac{\varepsilon_\infty}{\sigma_\varepsilon^2} - 1 = \frac{\alpha}{\sigma_\varepsilon^2} = \frac{c}{1-c}$$
(18)
We would of course like to keep $M$ as small as possible.

Fast adaptation and quick convergence require that we take steps as large as possible. In other words, learning speed is proportional to $\mu$: larger $\mu$ means faster convergence. How does $\mu$ affect the misadjustment?

To guarantee convergence/stability we required $\mu < \frac{2}{3 \sum_{i=1}^{p} \lambda_i(R_{xx})}$. Let's assume that in fact $\mu \ll \frac{1}{\sum_{i=1}^{p} \lambda_i}$, so that there is no problem with convergence. This condition implies $\mu \ll \frac{1}{\lambda_i}$, i.e., $\mu \lambda_i \ll 1$, $i = 1, \dots, p$. From here we see that
$$c = \frac{1}{2}\sum_{i=1}^{p} \frac{\mu\lambda_i}{1 - \mu\lambda_i} \approx \frac{1}{2}\mu\sum_{i=1}^{p}\lambda_i \ll 1$$
(19)
$$M = \frac{c}{1-c} \approx c = \frac{1}{2}\mu\sum_{i=1}^{p}\lambda_i$$
(20)
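The approximation in Equation 20 is easy to verify: for $\mu \lambda_i \ll 1$, the exact misadjustment $M = \frac{c}{1-c}$ and the linear approximation $\frac{1}{2}\mu\sum_i \lambda_i$ nearly coincide. The eigenvalues below are made up for illustration.

```python
import numpy as np

lam = np.array([1.5, 1.0, 0.5])       # hypothetical eigenvalues of R_xx
mu = 0.01                             # mu << 1 / sum(lam_i)
c = 0.5 * np.sum(mu * lam / (1.0 - mu * lam))
M_exact = c / (1.0 - c)               # Equation 18
M_approx = 0.5 * mu * lam.sum()       # Equation 20
print(M_exact, M_approx)              # nearly equal for small mu
```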

Since we still have convergence in mean, this essentially means that with a larger step size we "converge" faster but have a larger variance (rattling) about $w_{opt}$.

Small $\mu$ implies a small misadjustment $M$, but slow adaptation (slow convergence).

Large $\mu$ implies fast adaptation, but a large misadjustment $M$ (more rattling about $w_{opt}$).

## Example 1

$$w_{opt} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad x_k \sim \mathcal{N}\!\left(0, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right), \quad y_k = x_k^T w_{opt} + \varepsilon_k, \quad \varepsilon_k \sim \mathcal{N}(0, 0.01)$$

### LMS Algorithm

Initialization: $w_0 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, then for $k \geq 1$, $w_k = w_{k-1} + \mu x_k e_k$, where $e_k = y_k - x_k^T w_{k-1}$.
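A complete simulation of this example, following the algorithm exactly as stated; the seed, step size, and number of iterations are choices of this sketch, not part of the module.

```python
import numpy as np

rng = np.random.default_rng(42)       # seed chosen for reproducibility
w_opt = np.array([1.0, 1.0])
mu = 0.05                             # step size chosen for illustration
N = 2000

w = np.zeros(2)                       # w_0 = (0, 0)^T
for k in range(N):
    x = rng.standard_normal(2)                   # x_k ~ N(0, I)
    eps = np.sqrt(0.01) * rng.standard_normal()  # eps_k ~ N(0, 0.01)
    y = x @ w_opt + eps                          # y_k = x_k^T w_opt + eps_k
    e = y - x @ w                                # e_k = y_k - x_k^T w_{k-1}
    w = w + mu * x * e                           # LMS update
print(w)  # close to w_opt = (1, 1), up to steady-state rattling
```

With $\mathrm{tr}\,R_{xx} = 2$, this $\mu$ gives a misadjustment of roughly $\frac{1}{2}\mu \cdot 2 = 0.05$, so the final weights hover near $w_{opt}$ with a small residual variance.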
