Adaptive Filtering: LMS Algorithm

Douglas L. Jones, Swaroop Appadwedula, Matthew Berry, Mark Haun, Dima Moussa, Daniel Sachs, Mike Russell, Mark Butala, Robert Morrison, Jake Janovetz, Michael Kramer, Brian Wade

Keywords: adaptive filtering, DSP, gradient descent, LMS, system identification

This module introduces adaptive filters through the example of system identification using the LMS algorithm. The adaptive filter adjusts its coefficients to minimize the mean-square error between its output and that of an unknown system.

Introduction
The block diagram for system identification using adaptive filtering illustrates the objective: change (adapt) the coefficients of an FIR filter, W, to match as closely as possible the response of an unknown system, H. The unknown system and the adapting filter process the same input signal x(n) and have outputs d(n) (also referred to as the desired signal) and y(n), respectively.
Gradient-descent adaptation
The adaptive filter, W, is adapted using the least mean-square (LMS) algorithm, which is the most widely used adaptive filtering algorithm. First the error signal, e(n), is computed as e(n) = d(n) − y(n), which measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter will change its coefficients in an attempt to reduce the error. The coefficient update relation is a function of the error signal squared and is given by

    h_{n+1}(i) = h_n(i) − (μ/2) (∂/∂h_n(i)) e²
The derivative term represents the gradient of the squared error with respect to the ith coefficient. The gradient is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the error signal. Because the goal is to minimize the error, however, the algorithm updates the filter coefficients in the direction opposite the gradient; that is why the gradient
term is negated. The constant
μ is a step-size, which controls the amount of
gradient information used to update each coefficient. After
repeatedly adjusting each coefficient in the direction
opposite to the gradient of the error, the adaptive filter
should converge; that is, the difference between the unknown
and adaptive systems should get smaller and smaller.
To express the gradient-descent coefficient update equation in a more usable manner, we can rewrite the derivative of the squared-error term as

    ∂e²/∂h(i) = 2 (∂e/∂h(i)) e
              = 2 (∂(d − y)/∂h(i)) e
              = 2 (∂/∂h(i)) (d − Σ_{i=0}^{N−1} h(i) x(n−i)) e
              = −2 x(n−i) e

which in turn gives us the final LMS coefficient update,

    h_{n+1}(i) = h_n(i) + μ e x(n−i)

The step-size
μ directly affects how quickly the adaptive filter
will converge toward the unknown system. If
μ is very small, then the coefficients change only a
small amount at each update, and the filter converges
slowly. With a larger step-size, more gradient information
is included in each update, and the filter converges more
quickly; however, when the step-size is too large, the
coefficients may change too quickly and the filter will
diverge. (It is possible in some cases to determine
analytically the largest value of
μ ensuring convergence.)
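The update equation above can be sketched in a few lines of code. The following is a minimal Python illustration (not the lab's MATLAB or TI implementation); the function name `lms_update` and the buffer layout are this sketch's own conventions.

```python
# One LMS update step: compute y(n), e(n), and the new coefficients.
# h     : current adaptive filter coefficients h_n(i)
# x_buf : the N most recent input samples, x_buf[i] = x(n - i)
# d     : desired sample d(n); mu: step-size
def lms_update(h, x_buf, d, mu):
    y = sum(hi * xi for hi, xi in zip(h, x_buf))   # y(n) = sum h(i) x(n-i)
    e = d - y                                      # e(n) = d(n) - y(n)
    # h_{n+1}(i) = h_n(i) + mu * e * x(n-i)
    h_new = [hi + mu * e * xi for hi, xi in zip(h, x_buf)]
    return e, h_new
```

For instance, with a one-tap "unknown" system d = 0.5 x and a constant input, repeated calls drive the single coefficient toward 0.5, with the error shrinking geometrically by the factor (1 − μ) per step.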
MATLAB Simulation
Simulate the system identification block diagram described above.
Previously in MATLAB, you used the filter command
or the conv command to implement shift-invariant
filters. Those commands will not work here because adaptive
filters are shift-varying, since the coefficient update
equation changes the filter's impulse response at every sample
time. Therefore, implement the system identification block on
a sample-by-sample basis with a for loop, similar
to the way you might implement a time-domain FIR filter on a
DSP. For the "unknown" system, use the fourth-order,
low-pass, elliptic IIR filter designed for the IIR Filtering: Filter-Design Exercise in
MATLAB.
Use Gaussian random noise as your input, which can be
generated in MATLAB using the command randn.
Random white noise provides signal at all digital frequencies
to train the adaptive filter. Simulate the system with an
adaptive filter of length 32 and a step-size of
0.02. Initialize all of the adaptive filter coefficients
to zero. From your simulation, plot the error (or
squared-error) as it evolves over time and plot the frequency
response of the adaptive filter coefficients at the end of the
simulation. How well does your adaptive filter match the
"unknown" filter? How long does it take to converge?
Once your simulation is working, experiment with different
step-sizes and adaptive filter lengths.
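The sample-by-sample structure of the simulation can be sketched as follows. This is a hedged Python analogue of the MATLAB exercise: the "unknown" system here is a short FIR stand-in of this sketch's own choosing, not the fourth-order elliptic IIR filter the lab specifies, so substitute that filter's difference equation when doing the actual exercise. The length (32), step-size (0.02), and zero initialization follow the text.

```python
# Sample-by-sample LMS system identification (sketch).
import random

random.seed(0)
N  = 32                               # adaptive filter length (from the text)
mu = 0.02                             # step-size (from the text)
h_unknown = [0.2, 0.5, -0.3, 0.1]     # stand-in "unknown" system (assumption)

h     = [0.0] * N                     # adaptive coefficients, zero-initialized
x_buf = [0.0] * N                     # x_buf[i] holds x(n - i)
err   = []                            # squared-error history for plotting

for n in range(5000):
    x = random.gauss(0.0, 1.0)                 # white Gaussian input
    x_buf = [x] + x_buf[:-1]                   # shift in the new sample
    d = sum(hu * xi for hu, xi in zip(h_unknown, x_buf))  # unknown output d(n)
    y = sum(hi * xi for hi, xi in zip(h, x_buf))          # adaptive output y(n)
    e = d - y                                  # error e(n)
    h = [hi + mu * e * xi for hi, xi in zip(h, x_buf)]    # LMS update
    err.append(e * e)

# After convergence the leading adaptive taps match the unknown system
# and the squared error has decayed toward zero.
print(h[:4], sum(err[-100:]) / 100)
```

In MATLAB you would plot `err` over time and the frequency response of `h` (e.g. with freqz) at the end of the run; here the leading taps of `h` simply converge to the stand-in system's coefficients.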
Processor Implementation
Use the same "unknown" filter as you used in the MATLAB simulation.
Although the coefficient update equation is relatively
straightforward, consider using the lms
instruction available on the TI processor, which is designed
for this application and yields a very efficient
implementation of the coefficient update equation.
To generate noise on the DSP, you can use the PN generator
from the Digital Transmitter:
Introduction to Quadrature Phase-Shift Keying, but
shift the PN register contents up to make the sign bit random.
(If the sign bit is always zero, then the noise will not be
zero-mean and this will affect convergence.) Send the desired
signal, d(n), the output of the adaptive filter, y(n), and the
error to the D/A for display on the oscilloscope.
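The idea behind the PN generator can be sketched with a maximal-length linear feedback shift register (LFSR). This is a generic 15-bit illustration, not the TI processor's PN routine: the tap positions chosen here (corresponding to a primitive trinomial such as x¹⁵ + x¹⁴ + 1) are this sketch's assumption, and the lab's generator may use a different length or taps. The point is that the output bits are balanced, so mapping them to ±1 gives approximately zero-mean noise.

```python
# 15-bit Fibonacci LFSR producing pseudo-noise samples in {+1, -1}.
def pn_sequence(n, seed=1):
    lfsr = seed
    out = []
    for _ in range(n):
        bit = (lfsr ^ (lfsr >> 1)) & 1      # feedback from the two taps
        lfsr = (lfsr >> 1) | (bit << 14)    # shift and feed back (15 bits)
        out.append(1 if (lfsr & 1) else -1) # map the output bit to +/-1
    return out

noise = pn_sequence(32767)                  # one full period, 2^15 - 1 samples
print(sum(noise))                           # near zero: the sequence is balanced
```

Over one full period a maximal-length sequence contains one more +1 than −1, so the mean is essentially zero, which is exactly the property the sign-bit trick above is meant to preserve.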
When using the step-size suggested in the MATLAB simulation
section, you should notice that the error converges very
quickly. Try an extremely small
μ so that you can actually watch the amplitude of the
error signal decrease towards zero.
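The effect of the step-size can be demonstrated numerically. The sketch below (Python, with the same FIR stand-in "unknown" system assumed earlier rather than the lab's elliptic IIR filter) runs the identification loop at three step-sizes: a very small μ converges slowly, the suggested μ = 0.02 converges quickly, and a μ well above the stability bound (roughly 2/(N·Pₓ) for input power Pₓ) makes the coefficients blow up.

```python
# Compare LMS convergence for several step-sizes (sketch).
import random

def run_lms(mu, n_samples=2000, seed=0, N=32):
    """Return the average squared error over the last 100 samples."""
    random.seed(seed)
    h_unknown = [0.2, 0.5, -0.3, 0.1]   # stand-in unknown system (assumption)
    h, x_buf, err = [0.0] * N, [0.0] * N, []
    for _ in range(n_samples):
        x = random.gauss(0.0, 1.0)
        x_buf = [x] + x_buf[:-1]
        d = sum(hu * xi for hu, xi in zip(h_unknown, x_buf))
        y = sum(hi * xi for hi, xi in zip(h, x_buf))
        e = d - y
        h = [hi + mu * e * xi for hi, xi in zip(h, x_buf)]
        err.append(e * e)
    return sum(err[-100:]) / 100

slow = run_lms(0.001)   # tiny step-size: still converging after 2000 samples
fast = run_lms(0.02)    # suggested step-size: essentially converged
big  = run_lms(0.2)     # beyond the stability bound: diverges
print(slow, fast, big)
```

With unit-power white input and N = 32, the bound is about μ < 0.0625, so μ = 0.2 diverges while the other two runs remain stable.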
Extensions
If your project requires some modifications to the
implementation here, refer to Haykin and consider some of the
following questions regarding such modifications:
How would the system change for different applications
(noise cancellation, equalization, etc.)?
What happens to the error when the step-size is too large
or too small?
How does the length of an adaptive FIR filter affect
convergence?
What types of coefficient update relations are possible
besides the described LMS algorithm?
S. Haykin, Adaptive Filter Theory, 3rd edition. Prentice Hall, 1996.