# Adaptive Equalization


Course: Principles of Digital Communications, by Tuan Do-Hong.

Module by: Ha Ta-Hong, Tuan Do-Hong. Edited by: Ha Ta-Hong, Tuan Do-Hong. Translated by: Ha Ta-Hong, Tuan Do-Hong.

Another type of equalization, capable of tracking a slowly time-varying channel response, is known as adaptive equalization. It can be implemented to perform tap-weight adjustments periodically or continually. Periodic adjustments are accomplished by periodically transmitting a preamble or short training sequence of digital data known by the receiver. Continual adjustments are accomplished by replacing the known training sequence with a sequence of data symbols estimated from the equalizer output and treated as known data. When performed continually and automatically in this way, the adaptive procedure is referred to as decision directed.

If the probability of error exceeds one percent, the decision-directed equalizer might not converge. A common solution to this problem is to initialize the equalizer with an alternate process, such as a preamble, to provide good channel-error performance, and then switch to decision-directed mode.
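In decision-directed mode, the error is formed by slicing the equalizer output to the nearest symbol and treating that decision as the known data. A minimal sketch of this decision device for BPSK (the symbol alphabet {−1, +1} is an illustrative assumption, as is the function name):

```python
def decision_directed_error(z_hat):
    """Slice the equalizer output to the nearest BPSK symbol and use
    that decision in place of a known training symbol."""
    decision = 1.0 if z_hat >= 0 else -1.0   # nearest symbol in {-1, +1}
    return decision - z_hat                  # error e(k) for the tap-weight update
```

As long as most decisions are correct (error probability well below the one-percent figure above), this error behaves like the training-sequence error and the equalizer keeps tracking the channel.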

The simultaneous equations described in equation (3) of the module “Transversal Equalizer” do not include the effects of channel noise. To obtain a stable solution for the filter weights, either the data must be averaged to obtain stable signal statistics, or the noisy solutions obtained from the noisy data must be averaged. The most robust algorithm that averages noisy solutions is the least-mean-square (LMS) algorithm. Each iteration of this algorithm uses a noisy estimate of the error gradient to adjust the weights in the direction that reduces the average mean-square error.

The noisy gradient is simply the product $e(k)\,\mathbf{r}_x$ of an error scalar $e(k)$ and the data vector $\mathbf{r}_x$.

$e(k) = z(k) - \hat{z}(k)$ (1)

where $z(k)$ and $\hat{z}(k)$ are the desired output signal (a sample free of ISI) and its estimate at time $k$, respectively.

$\hat{z}(k) = \mathbf{c}^{T}\mathbf{r}_x = \sum_{n=-N}^{N} x(k-n)\,c_n$ (2)

where $\mathbf{c}^{T}$ is the transpose of the weight vector at time $k$.

The iterative process that updates the set of weights is:

$\mathbf{c}(k+1) = \mathbf{c}(k) + \Delta\,e(k)\,\mathbf{r}_x$ (3)

where $\mathbf{c}(k)$ is the vector of filter weights at time $k$, and $\Delta$ is a small term that limits the coefficient step size and thus controls both the rate of convergence of the algorithm and the variance of the steady-state solution. Stability is assured if $\Delta$ is smaller than the reciprocal of the energy of the data in the filter. Thus we want $\Delta$ to be large for fast convergence, but not so large that the algorithm becomes unstable, and small enough to keep the steady-state variance low.
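Equations (1)–(3) can be sketched as a training-mode LMS loop. The channel, symbol alphabet, equalizer length, and step size below are all illustrative assumptions, not values from the module:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: BPSK symbols through a mildly dispersive FIR channel.
channel = np.array([0.1, 1.0, 0.2])          # assumed channel impulse response
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, channel)[: len(symbols)]

num_taps = 11                # equalizer length, 2N + 1 with N = 5
delta = 0.01                 # step size, well below 1 / (data energy in filter)
delay = num_taps // 2 + 1    # channel delay (1) + equalizer center (5)

c = np.zeros(num_taps)       # weight vector c(k)
c[num_taps // 2] = 1.0       # center-spike initialization

errors = []
for k in range(num_taps - 1, len(received)):
    r_x = received[k - num_taps + 1 : k + 1][::-1]  # data vector r_x in the filter
    z_hat = c @ r_x                                 # equation (2): c^T r_x
    e = symbols[k - delay] - z_hat                  # equation (1), known training data
    c = c + delta * e * r_x                         # equation (3): LMS weight update
    errors.append(e)

mse_early = np.mean(np.square(errors[:200]))
mse_late = np.mean(np.square(errors[-200:]))
```

Here the data energy in the filter is roughly `num_taps * 1.05 ≈ 11.6`, so the stability bound on $\Delta$ is about 0.09; the chosen 0.01 trades some convergence speed for a low steady-state variance. Switching to decision-directed mode amounts to replacing `symbols[k - delay]` with the sliced decision on `z_hat`.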
