First Order Convergence Analysis of the LMS Algorithm

Module by: Douglas L. Jones

Analysis of the LMS algorithm

It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, how quickly it converges, how much degradation is suffered due to the noisy gradient, etc. In particular, we need to know how to choose the parameter $\mu$.

Mean of W

Does $W_k$ approach the Wiener solution as $k \to \infty$? (Since $W_k$ is always somewhat random in the approximate gradient-based LMS algorithm, we ask whether the expected value of the filter coefficients converges to the Wiener solution.)

Using $\varepsilon_k = d_k - W_k^T X_k$,
$$\bar{W}_{k+1} = E[W_{k+1}] = E[W_k + 2\mu\varepsilon_k X_k] = \bar{W}_k + 2\mu E[d_k X_k] - 2\mu E[(W_k^T X_k)X_k] = \bar{W}_k + 2\mu P - 2\mu E[(W_k^T X_k)X_k]$$
where $P = E[d_k X_k]$ is the cross-correlation vector.
(1)
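The recursion analyzed above can be simulated directly. The following is a minimal NumPy sketch, assuming a system-identification setup in which the desired signal is the output of an unknown FIR filter plus a little noise; the "true" filter `w_true`, the noise level, and the step size `mu` are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                                        # filter length
w_true = np.array([1.0, -0.5, 0.25, 0.1])    # unknown system (assumed for illustration)
mu = 0.02                                    # step size, well under 1/lambda_max = 1 here

N = 5000
x = rng.standard_normal(N)                   # white, unit-variance input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(M)
for k in range(M, N):
    X = x[k - M + 1 : k + 1][::-1]           # X_k = [x_k, x_{k-1}, ..., x_{k-M+1}]
    eps = d[k] - w @ X                       # error: eps_k = d_k - W_k^T X_k
    w = w + 2 * mu * eps * X                 # LMS update: W_{k+1} = W_k + 2 mu eps_k X_k

# After adaptation, w should be close to w_true
```

For white input the correlation matrix is (nearly) the identity, so this choice of `mu` comfortably satisfies the stability condition derived later in this module.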

Patently False Assumption

$X_k$ and $X_{k-i}$, $X_k$ and $d_{k-i}$, and $d_k$ and $d_{k-i}$ are statistically independent for $i \neq 0$. This assumption is obviously false, since $X_{k-1}$ is the same as $X_k$ except for shifting the vector elements down one place and adding one new sample. We make this assumption because otherwise the LMS algorithm becomes extremely difficult to analyze. (The first good analysis not making this assumption is due to Macchi and Eweda [1].) Many simulations and much practical experience have shown that results obtained from analyses based on this patently false assumption are nonetheless quite accurate in most situations.

With the independence assumption, $W_k$ (which depends only on previous $X_{k-i}$, $d_{k-i}$) is statistically independent of $X_k$, and we can simplify $E[(W_k^T X_k)X_k]$.

Now $(W_k^T X_k)X_k$ is a vector, and

$$E[(W_k^T X_k)X_k]_j = E\left[\sum_{i=0}^{M-1} w_{ik}\, x_{k-i}\, x_{k-j}\right] = \sum_{i=0}^{M-1} E[w_{ik}\, x_{k-i}\, x_{k-j}] = \sum_{i=0}^{M-1} \bar{w}_{ik}\, r_{xx}(i-j) = \left(R\,\bar{W}_k\right)_j$$
(2)
where $R = E[X_k X_k^T]$ is the data correlation matrix.

Substituting this back into our equation,

$$\bar{W}_{k+1} = \bar{W}_k + 2\mu P - 2\mu R\,\bar{W}_k = (I - 2\mu R)\bar{W}_k + 2\mu P$$
(3)
Now if $\bar{W}_k$ converges to a vector of finite magnitude ("convergence in the mean"), what does it converge to?

If $\bar{W}_k$ converges, then as $k \to \infty$, $\bar{W}_{k+1} \to \bar{W}_k$, and
$$\bar{W}_\infty = (I - 2\mu R)\bar{W}_\infty + 2\mu P$$
$$2\mu R\,\bar{W}_\infty = 2\mu P$$
$$R\,\bar{W}_\infty = P$$
or $\bar{W}_\infty = R^{-1}P = W_{opt}$, the Wiener solution!

So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients! This is, of course, a desirable result.
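As a numerical check, the Wiener solution $W_{opt} = R^{-1}P$ can be computed directly from sample estimates of $R$ and $P$. The sketch below assumes a synthetic system-identification setup; the "unknown" system `h` and the noise level are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 100000
h = np.array([0.8, 0.3, -0.2])            # illustrative "unknown" system
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.05 * rng.standard_normal(N)

# Build the data vectors X_k and estimate R = E[X_k X_k^T] and P = E[d_k X_k]
X = np.array([x[k - M + 1 : k + 1][::-1] for k in range(M - 1, N)])
D = d[M - 1 :]
R = (X.T @ X) / len(D)
P = (X.T @ D) / len(D)

w_opt = np.linalg.solve(R, P)             # Wiener solution: R W_opt = P
# w_opt should recover h (up to estimation error from the finite sample)
```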

First-order stability

But does $\bar{W}_k$ converge, and under what conditions?

Let's rewrite the analysis in terms of $\bar{V}_k$, the "mean coefficient error vector" $\bar{V}_k = \bar{W}_k - W_{opt}$, where $W_{opt}$ is the Wiener filter. Starting from
$$\bar{W}_{k+1} = \bar{W}_k - 2\mu R\,\bar{W}_k + 2\mu P$$
and subtracting $W_{opt}$ from both sides (adding and subtracting $2\mu R\,W_{opt}$):
$$\bar{W}_{k+1} - W_{opt} = \bar{W}_k - W_{opt} - 2\mu R\,\bar{W}_k + 2\mu R\,W_{opt} - 2\mu R\,W_{opt} + 2\mu P$$
$$\bar{V}_{k+1} = \bar{V}_k - 2\mu R\,\bar{V}_k - 2\mu R\,W_{opt} + 2\mu P$$
Now $W_{opt} = R^{-1}P$, so
$$\bar{V}_{k+1} = \bar{V}_k - 2\mu R\,\bar{V}_k - 2\mu R R^{-1}P + 2\mu P = (I - 2\mu R)\bar{V}_k$$
We wish to know under what conditions $\bar{V}_k \to 0$.

Linear Algebra Fact

Since $R$ is positive definite, real, and symmetric, all of its eigenvalues are real and positive. Also, we can write $R$ as $Q^{-1}\Lambda Q$, where $\Lambda$ is a diagonal matrix whose diagonal entries $\lambda_i$ are the eigenvalues of $R$, and $Q$ is a unitary matrix whose rows are the eigenvectors of $R$ corresponding to those eigenvalues.

Using this fact,
$$\bar{V}_{k+1} = \left(I - 2\mu Q^{-1}\Lambda Q\right)\bar{V}_k$$
Multiplying both sides through on the left by $Q$, we get
$$Q\bar{V}_{k+1} = (Q - 2\mu\Lambda Q)\bar{V}_k = (I - 2\mu\Lambda)Q\bar{V}_k$$
Let $V' = QV$; then
$$\bar{V}'_{k+1} = (I - 2\mu\Lambda)\bar{V}'_k$$
Note that $V'$ is simply $V$ in a rotated coordinate system in $\mathbb{R}^M$, so convergence of $V'$ implies convergence of $V$.

Since $I - 2\mu\Lambda$ is diagonal, all elements of $V'$ evolve independently of each other. Convergence (stability) boils down to whether all $M$ of these scalar, first-order difference equations are stable, and thus $\to 0$:
$$\forall i,\; i = 1, 2, \ldots, M: \quad \bar{V}'_{i,k+1} = (1 - 2\mu\lambda_i)\bar{V}'_{i,k}$$
These equations converge to zero if $|1 - 2\mu\lambda_i| < 1$, or $\forall i: |\mu\lambda_i| < 1$. $\mu$ and $\lambda_i$ are positive, so we require
$$\forall i: \mu < \frac{1}{\lambda_i}$$
so for convergence in the mean of the LMS adaptive filter, we require

$$\mu < \frac{1}{\lambda_{\max}}$$
(4)
This is an elegant theoretical result, but in practice we may not know $\lambda_{\max}$, it may be time-varying, and we certainly won't want to compute it. However, another useful mathematical fact comes to the rescue:
$$\mathrm{tr}\,R = \sum_{i=1}^{M} r_{ii} = \sum_{i=1}^{M} \lambda_i \geq \lambda_{\max}$$
since the eigenvalues are all positive and real.
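The trace bound is easy to verify numerically. The sketch below builds an arbitrary sample correlation matrix (the dimensions and data are illustrative) and checks that the trace equals the eigenvalue sum and therefore upper-bounds $\lambda_{\max}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# A sample correlation matrix R = (1/N) sum X_k X_k^T is real, symmetric, and
# positive (semi-)definite, so its eigenvalues are real and non-negative
A = rng.standard_normal((5, 500))
R = A @ A.T / 500

lam = np.linalg.eigvalsh(R)       # eigenvalues of a symmetric matrix: real, ascending
trace = np.trace(R)               # equals sum of the lambda_i, hence >= lambda_max
```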

For a correlation matrix, $r_{ii} = r(0)$ for all $i \in \{1, \ldots, M\}$, so $\mathrm{tr}\,R = M\,r(0) = M\,E[x_k^2]$. We can easily estimate $r(0)$ with $O(1)$ computations per sample, so in practice we might require
$$\mu < \frac{1}{M\,\widehat{r(0)}}$$
as a conservative bound, and perhaps adapt $\mu$ with time accordingly.
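One common way to get such an $O(1)$-per-sample estimate of $r(0)$ is an exponentially weighted running average of $x_k^2$. The helper name `running_mu_bound` and the forgetting factor `alpha` below are illustrative choices, not part of the original text:

```python
import numpy as np

def running_mu_bound(x, M, alpha=0.99):
    """Track the conservative step-size bound mu < 1/(M * r0_hat) with O(1)
    work per sample, where r0_hat is an exponentially weighted estimate of
    r(0) = E[x_k^2]. (Helper name and alpha are illustrative assumptions.)"""
    r0_hat = 1e-6                   # tiny initial power estimate avoids division by zero
    bounds = np.empty(len(x))
    for k, xk in enumerate(x):
        r0_hat = alpha * r0_hat + (1 - alpha) * xk * xk   # O(1) update of r(0) estimate
        bounds[k] = 1.0 / (M * r0_hat)
    return bounds

# For a signal of constant power r(0) = 4 and M = 8 taps, the bound settles near 1/32
bounds = running_mu_bound(np.full(2000, 2.0), M=8)
```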

Rate of convergence

Each of the modes decays as $(1 - 2\mu\lambda_i)^k$.

Good news:

The initial rate of convergence is dominated by the fastest mode, $1 - 2\mu\lambda_{\max}$. This is not surprising, since a gradient descent method goes "downhill" in the steepest direction.

Bad news:

The final rate of convergence is dominated by the slowest mode, $1 - 2\mu\lambda_{\min}$. For small $\lambda_{\min}$, it can take a long time for LMS to converge.
Note that the convergence behavior depends on the data (via $R$). LMS converges relatively quickly when the eigenvalues are roughly equal; widely unequal eigenvalues slow LMS down considerably.
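The effect of eigenvalue spread can be seen by iterating the mean error recursion $\bar{V}_{k+1} = (I - 2\mu R)\bar{V}_k$ for two correlation matrices with the same trace (hence the same conservative $\mu$ bound) but different eigenvalue spreads; the matrices and step size below are illustrative:

```python
import numpy as np

mu = 0.05
K = 200
v0 = np.ones(2)

# Equal trace, different eigenvalue spreads
R_equal = np.diag([1.0, 1.0])     # lambda = {1.0, 1.0}
R_spread = np.diag([1.9, 0.1])    # lambda = {1.9, 0.1}

def mean_error_norm(R, v, k):
    """Iterate the mean error recursion V_{k+1} = (I - 2 mu R) V_k for k steps."""
    A = np.eye(len(v)) - 2 * mu * R
    for _ in range(k):
        v = A @ v
    return np.linalg.norm(v)

e_equal = mean_error_norm(R_equal, v0, K)
e_spread = mean_error_norm(R_spread, v0, K)
# e_equal decays like 0.9^200 (essentially zero), while e_spread is dominated
# by the slow mode (1 - 2*mu*0.1)^200 = 0.99^200 and remains clearly nonzero
```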

References

  1. O. Macchi and E. Eweda, "Second-Order Convergence Analysis of Stochastic Adaptive Linear Filtering," IEEE Transactions on Automatic Control, vol. AC-28, no. 1, pp. 76-85, Jan. 1983.
