Adaptive Filtering: LMS Algorithm

Module by: Matthew Berry.

Summary: This module introduces adaptive filters through the example of system identification using the LMS algorithm.


Introduction

This module introduces adaptive filtering.

Figure 1 shows a block diagram for system identification using adaptive filtering. The objective is to adapt an FIR filter, $W$, to match as closely as possible the response of an unknown filter, $H$. The unknown system and the adapting filter process the same input signal $x[n]$ and have outputs $y[n]$ (also referred to as the desired signal) and $\hat{y}[n]$.

Figure 1: System identification block diagram.

Gradient Descent Adaptation

The adaptive filter, $W$, is adapted using the least mean-square (LMS) algorithm. First the error signal is computed, $e[n] = y[n] - \hat{y}[n]$, which provides a measure of how similar the output of the adaptive filter is to the output of the unknown system. The coefficient update relation is a function of this error signal squared and is given by

\[
h_{n+1}[i] \;=\; h_n[i] + \frac{\mu}{2}\left(-\,\frac{\partial\,|e|^2}{\partial h_n[i]}\right) \tag{1}
\]

The term inside the parentheses represents the gradient of the squared error with respect to the $i$th coefficient. The gradient is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the squared error. Because the goal is to minimize the error, however, Equation 1 updates the filter coefficients in the direction opposite the gradient; that is why the gradient term is negated. The constant $\mu$ is a step-size, which controls the amount of gradient information used to update each coefficient. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge; that is, the difference between the unknown and adaptive systems should get smaller and smaller.

To express the gradient descent coefficient update equation in a more usable manner, we can rewrite the derivative of the squared-error term as

\[
\frac{\partial\,|e|^2}{\partial h_i} \;=\; 2e\,\frac{\partial e}{\partial h_i}
\;=\; 2\,\frac{\partial\,(y-\hat{y})}{\partial h_i}\,e
\;=\; 2\,\frac{\partial}{\partial h_i}\Bigl(y - \sum_{k=0}^{N-1} h_k\,x[n-k]\Bigr)\,e
\]

\[
\frac{\partial\,|e|^2}{\partial h_i} \;=\; -2\,x[n-i]\,e \tag{2}
\]
which in turn gives us the final LMS coefficient update,
\[
h_{n+1}[i] \;=\; h_n[i] + \mu\,e\,x[n-i] \tag{3}
\]
The step-size $\mu$ directly affects how quickly the adaptive filter will converge toward the unknown system. If $\mu$ is very small, then the coefficients are not altered by a significant amount at each update, and the filter converges slowly. With a larger step-size, more gradient information is included in each update, and the filter converges more quickly; however, when the step-size is too large, the coefficients change too much at each update and the filter will not converge.
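As a concrete illustration of Equation 3, a single LMS iteration might be sketched in MATLAB as below. The variable names (h for the coefficient vector, xbuf for the buffer of recent inputs, y_n for the current desired sample, mu for the step-size) are placeholders for illustration, not part of any lab code:

    % One LMS iteration (sketch): h is an N-by-1 coefficient vector and
    % xbuf = [x(n); x(n-1); ...; x(n-N+1)] holds the N most recent inputs.
    yhat = h.' * xbuf;          % adaptive filter output, yhat[n]
    e    = y_n - yhat;          % error against the desired signal, y[n]
    h    = h + mu * e * xbuf;   % Equation 3: h_{n+1}[i] = h_n[i] + mu*e*x[n-i]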

MATLAB Simulation

Simulate the system identification block diagram shown in Figure 1.

Previously in MATLAB, you used the filter command or the conv command to implement shift-invariant filters. Those commands will not work here because adaptive filters are shift-varying: the coefficient update equation changes the filter's impulse response at every sample time. Therefore, implement the system identification block on a sample-by-sample basis with a for loop, similar to the way you might implement a time-domain FIR filter on a DSP. For the "unknown" system, use the fourth-order, low-pass, elliptic IIR filter designed for the IIR Filtering Lab.

Use Gaussian random noise as your input, which can be generated in MATLAB using the command randn. Simulate the system with an adaptive filter of length 32 and a step-size of 0.02. Initialize all of the adaptive filter coefficients to zero. From your simulation, plot the error (or squared error) as it evolves over time and plot the frequency response of the adaptive filter coefficients at the end of the simulation. How well does your adaptive filter match the "unknown" filter? How long does it take to converge?
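One possible structure for this simulation is sketched below. The elliptic filter design parameters passed to ellip are placeholders, since the exact specifications from the IIR Filtering Lab are not given here; substitute the filter you actually designed. The signal length L and the variable names are likewise arbitrary choices for illustration.

    % System identification sketch: LMS adaptive FIR vs. an IIR "unknown" system.
    [b, a] = ellip(4, 0.5, 40, 0.25); % placeholder 4th-order low-pass elliptic IIR

    N  = 32;                 % adaptive filter length
    mu = 0.02;               % step-size
    L  = 4000;               % number of samples to simulate

    x = randn(L, 1);         % Gaussian random noise input
    y = filter(b, a, x);     % desired signal: output of the "unknown" system

    h    = zeros(N, 1);      % adaptive coefficients, initialized to zero
    xbuf = zeros(N, 1);      % N most recent input samples
    e    = zeros(L, 1);      % error signal

    for n = 1:L
        xbuf = [x(n); xbuf(1:N-1)];   % shift the new input sample into the buffer
        yhat = h.' * xbuf;            % adaptive filter output
        e(n) = y(n) - yhat;           % error signal
        h    = h + mu * e(n) * xbuf;  % LMS coefficient update (Equation 3)
    end

    plot(e.^2); xlabel('n'); ylabel('squared error');  % convergence over time
    figure; freqz(h, 1);     % frequency response of the adapted coefficients

Comparing freqz(h, 1) against freqz(b, a) gives a direct view of how closely the adapted FIR filter has matched the "unknown" system.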

Once your simulation is working, experiment with different step-sizes and adaptive filter lengths.

Processor Implementation

Use the same "unknown" filter as you used in the MATLAB simulation.

Although the coefficient update equation is relatively straightforward, consider using the lms instruction available on the TI processor, which is designed for this application and yields a very efficient implementation of the update.

To generate noise on the DSP, you can use the PN generator from the Digital Transmitter Lab, but shift the PN register contents up to make the sign bit random. (If the sign bit is always zero, then the noise will not be zero-mean, and this will affect convergence.) Send the desired signal, $y[n]$, the output of the adaptive filter, $\hat{y}[n]$, and the error to the D/A for display on the oscilloscope.
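The importance of a zero-mean noise source can be checked in MATLAB before moving to the DSP. The sketch below models a generic maximal-length shift-register (PN) generator; the register length and feedback taps are assumed for illustration and are not necessarily those used in the Digital Transmitter Lab:

    % Model of a PN (linear feedback shift register) noise source in MATLAB.
    % Taps at bits 15 and 14 give a maximal-length 15-bit sequence (assumed here).
    reg = 1;                     % nonzero seed
    pn  = zeros(1000, 1);
    for n = 1:1000
        fb  = xor(bitget(reg, 15), bitget(reg, 14));    % feedback bit
        reg = bitand(bitshift(reg, 1) + fb, 2^15 - 1);  % shift left, insert feedback
        pn(n) = 2*bitget(reg, 1) - 1;                   % map {0,1} to {-1,+1}
    end
    mean(pn)   % near zero; a {0,1}-valued sequence would have a DC offset
               % that biases the adaptation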

When using the step-size suggested in the MATLAB simulation section, you should notice that the error converges very quickly. Try an extremely small $\mu$ so that you can actually watch the error signal decrease in amplitude toward zero.

Extensions

If your project requires some modifications to the system identification implementation discussed here, refer to the listed reference, Haykin, and consider some of the following questions regarding such modifications:

  • How would the system in Figure 1 change for different applications? (noise cancellation, equalization, etc.)
  • What happens to the error when the step-size is too large or too small?
  • How does the length of an adaptive FIR filter affect convergence?
  • What other types of coefficient update relations are possible besides the LMS algorithm described here?

References

  • S. Haykin, Adaptive Filter Theory. Prentice Hall, 3rd ed., 1996.
