Parks and McClellan formulated the basic Chebyshev FIR filter design
problem by specifying the desired amplitude response A(ω) and the
transition band edges, then minimizing the weighted Chebyshev error over
the pass and stop bands. For the basic lowpass filter illustrated in
Figure 1, the pass band edge ωp and the stop band edge ωs
are specified, the maximum pass band error is related to the maximum stop
band error by δp = Kδs, and the two are minimized together.
Notice that if there is no transition band, i.e. ωp = ωs,
then δp + δs = 1 and no minimization is possible. While this is not
the case for a least squares approximation, a transition band is necessary
for the Chebyshev approximation problem to be well-posed. The effect of
a small transition band is large pass and stop band ripple, as illustrated
in Figure 3b.
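In practice, this design problem can be explored with `scipy.signal.remez`, which implements the Parks-McClellan algorithm. The sketch below designs a length-15 lowpass filter; the band edges and the weight ratio K are illustrative choices, not values taken from the text. Note that a heavier weight on a band shrinks that band's ripple, so weighting the stop band by K yields δp = Kδs.

```python
import numpy as np
from scipy import signal

# Illustrative spec: length-15 lowpass, band edges in units of fs = 1,
# so frequencies run from 0 to 0.5. K is the target ripple ratio.
K = 2.0                                    # target delta_p = K * delta_s
h = signal.remez(15,                       # filter length L = 15
                 [0.0, 0.175, 0.25, 0.5],  # pass band, then stop band
                 [1.0, 0.0],               # desired amplitude per band
                 weight=[1.0, K])          # heavier stop band weight

# Measure the two ripples on a dense frequency grid.
w, H = signal.freqz(h, worN=4096)
f = w / (2 * np.pi)
dp = np.max(np.abs(np.abs(H[f <= 0.175]) - 1.0))  # pass band deviation
ds = np.max(np.abs(H[f >= 0.25]))                 # stop band deviation
```

The resulting taps are symmetric (linear phase), and the measured ripple ratio dp/ds comes out close to K, as the weighted Chebyshev formulation predicts.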
The alternation theorem states that the optimal approximation for this
problem will have an error function with R+1 extremal points with
alternating signs. Equivalently, there exist R+1 frequencies such that,
if the Chebyshev error at those frequencies is equal in magnitude and
alternates in sign, it is minimized over the pass band and stop band.
Note that there are nine extremal points in the length-15 example
shown in Figure 1, counting those at the band edges in addition
to those interior to the pass and stop bands. For this case,
R = (L+1)/2 = 8, which agrees with the example.
Parks and McClellan applied the Remez exchange algorithm [50] to this
filter design problem by writing R+1 equations using Equation 37 from FIR Digital Filters and
Equation 1 from Design of IIR Filters by Frequency Transformations, evaluated at the R+1 extremal frequencies, with the R unknown
cosine parameters a(n) and the unknown ripple value δ. In
matrix form this becomes
\[
\begin{bmatrix}
A_d(\omega_0) \\ A_d(\omega_1) \\ A_d(\omega_2) \\ A_d(\omega_3) \\ \vdots \\ A_d(\omega_R)
\end{bmatrix}
=
\begin{bmatrix}
\cos(\omega_0 \cdot 0) & \cos(\omega_0 \cdot 1) & \cdots & \cos(\omega_0 (R-1)) & 1 \\
\cos(\omega_1 \cdot 0) & \cos(\omega_1 \cdot 1) & \cdots & \cos(\omega_1 (R-1)) & -1 \\
\cos(\omega_2 \cdot 0) & \cos(\omega_2 \cdot 1) & \cdots & \cos(\omega_2 (R-1)) & 1 \\
\cos(\omega_3 \cdot 0) & \cos(\omega_3 \cdot 1) & \cdots & \cos(\omega_3 (R-1)) & -1 \\
\vdots & \vdots & & \vdots & \vdots \\
\cos(\omega_R \cdot 0) & \cos(\omega_R \cdot 1) & \cdots & \cos(\omega_R (R-1)) & \pm 1
\end{bmatrix}
\begin{bmatrix}
a(0) \\ a(1) \\ a(2) \\ \vdots \\ a(R-1) \\ \delta
\end{bmatrix}.
\tag{12}
\]
These equations are solved for a(n) and δ using an initial guess
as to the location of the extremal frequencies ωi. This design is
optimal, but only over the guessed frequencies, and we want optimality over
all of the pass and stop bands. Therefore, the amplitude response of the
filter is calculated over a dense set of frequency samples using
Equation 34 from FIR Digital Filters, a new set of estimates of the extremal frequencies is found
from the local minima and maxima, and these replace the initial
guesses (they are exchanged). This process is iterated until convergence,
which is guaranteed, and the optimal filter is obtained.
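One pass of this solve can be sketched with numpy. The band edges and guessed extremal frequencies below are hypothetical; the point is that solving Equation 12 forces the error at the guessed frequencies to equal ±δ with alternating sign.

```python
import numpy as np

# Hypothetical length-15 lowpass, so R = 8 cosine coefficients a(0)..a(7).
R = 8
wp, ws = 0.35 * np.pi, 0.5 * np.pi

# Guessed extremal frequencies: 5 in the pass band, 4 in the stop band.
wi = np.concatenate([np.linspace(0, wp, 5), np.linspace(ws, np.pi, 4)])
Ad = np.where(wi <= wp, 1.0, 0.0)          # ideal amplitude at each point

# Left block: cos(wi * n) for n = 0..R-1; last column: alternating +/-1.
M = np.hstack([np.cos(np.outer(wi, np.arange(R))),
               ((-1.0) ** np.arange(R + 1))[:, None]])
sol = np.linalg.solve(M, Ad)               # one (R+1)x(R+1) system
a, delta = sol[:R], sol[R]

# By construction the error at the guessed frequencies is -/+ delta:
A = np.cos(np.outer(wi, np.arange(R))) @ a
err = A - Ad
```

The solved amplitude interpolates Ad(ωi) ± δ at the guessed frequencies, but nothing yet constrains the error between them, which is why the exchange iteration is needed.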
The detailed steps of the Parks-McClellan algorithm are:
1. Specify the ideal amplitude Ad(ω), the pass and stop band edges
ωp and ωs, the error weight K where δp = Kδs, and the length L.
2. Choose R+1 initial guesses for the extremal frequencies ωi in
the bands of approximation, Ω.
This is often done uniformly over the pass and stop bands, including
ω = 0, ωp, ωs, and π.
3. Calculate the cosine matrix at the current ωi and solve Equation 12 for
a(n) and δ, which are optimal over these current extremal frequencies
ωi.
4. Using the a(n), or the equivalent h(n), from step 3, evaluate A(ω)
over a dense set of frequencies. This amplitude response will interpolate
A(ωi) = Ad(ωi) ± δ at the extremal frequencies.
5. Find R+1 new extremal frequencies where the error has a local maximum
or minimum and alternates in sign. This includes the band edges.
6. If the largest error equals the δ found in step 3, convergence has
occurred and the optimal filter has been designed; otherwise, exchange
the old extremal frequencies ωi for the new ones from step 5 and
return to step 3 for the next iteration.
This iterative algorithm is guaranteed to converge to the unique optimal
solution from almost any set of starting points in step 2.
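The steps above can be sketched as a minimal numpy exchange loop. The filter length, band edges, grid density, and extremal-selection heuristic below are illustrative simplifications (equal weights, K = 1), not the production Parks-McClellan implementation.

```python
import numpy as np

L = 15                          # illustrative length-15 lowpass
R = (L + 1) // 2                # R = 8 cosine coefficients a(0)..a(R-1)
wp, ws = 0.35 * np.pi, 0.5 * np.pi

# Steps 1-2: dense grid over the bands and uniform initial extremals.
gp, gs = np.linspace(0, wp, 500), np.linspace(ws, np.pi, 500)
grid = np.concatenate([gp, gs])
Ad = np.where(grid <= wp, 1.0, 0.0)
idx = np.linspace(0, len(grid) - 1, R + 1).astype(int)

def local_maxima(mag, lo, hi):
    """Indices in [lo, hi) where mag has a local maximum within one band."""
    return [i for i in range(lo, hi)
            if (i == lo or mag[i] >= mag[i - 1])
            and (i == hi - 1 or mag[i] >= mag[i + 1])]

for _ in range(25):
    wi = grid[idx]
    # Step 3: solve Equation 12 at the current extremal frequencies.
    M = np.hstack([np.cos(np.outer(wi, np.arange(R))),
                   ((-1.0) ** np.arange(R + 1))[:, None]])
    sol = np.linalg.solve(M, np.where(wi <= wp, 1.0, 0.0))
    a, delta = sol[:R], sol[R]
    # Step 4: amplitude and error over the dense grid.
    E = np.cos(np.outer(grid, np.arange(R))) @ a - Ad
    mag = np.abs(E)
    # Step 5: candidates = per-band local maxima of |E| (band edges included),
    # reduced to an alternating-sign sequence keeping the larger errors.
    cand = local_maxima(mag, 0, len(gp)) + local_maxima(mag, len(gp), len(grid))
    keep = []
    for i in cand:
        if keep and E[i] * E[keep[-1]] > 0:
            if mag[i] > mag[keep[-1]]:
                keep[-1] = i          # same sign: keep the bigger error
        else:
            keep.append(i)
    while len(keep) > R + 1:          # drop the weaker end point if too many
        keep.pop(0 if mag[keep[0]] < mag[keep[-1]] else -1)
    idx = np.array(keep)              # step 6: exchange
    if mag.max() - abs(delta) < 1e-6:
        break                         # largest grid error matches delta

# Map a(n) back to the symmetric impulse response h(n) for odd L.
h = np.concatenate([a[:0:-1] / 2, [a[0]], a[1:] / 2])
```

At convergence the largest error over the grid equals |δ|, and the taps are symmetric, consistent with a linear-phase equiripple design.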
This iterative procedure is called a multiple exchange algorithm because
all of the extremal frequencies are updated at each iteration. If only the
frequency with the largest error is updated at each iteration, it is called a
single exchange algorithm, which also converges, but much more slowly. Some
modifications of the Parks-McClellan method or the Remez exchange algorithm
will not converge as a multiple exchange, but will as a single exchange.
The alternation theorem states that there will be a minimum of R+1
extremal frequencies, even for multiband designs with arbitrary
Ad(ω). If Ad(ω) is piecewise constant with T
transition bands, one can derive the maximum possible number of extremal
frequencies: it is R+2T. This comes from the maximum number
of maxima and minima that a function of the form Equation 2 or
Equation 3 can have, plus two at the edges of each transition band. For a
simple lowpass filter with one pass band, one transition band, and one
stop band, there will be a minimum of R+1 extremal frequencies and a
maximum of R+2. For a bandpass filter, the maximum is R+4. If a design has more than the minimum number of extremal frequencies, it is
called an extra ripple design. If it has the maximum number, it is
called a maximum ripple design.
It is interesting to note that at each iteration the approximation is
optimal over the current set of extremal frequencies, and δ increases
over the previous iteration. At convergence, δ has increased to
the maximum error over Ω, and that is the minimum Chebyshev error.
At each iteration, the exchange of a proper set of extremal frequencies
with alternating error signs is always possible. One can show that
there will never be too few, and if there are too many, one uses those
corresponding to the largest errors.
In step 4 it is suggested that the amplitude response A(ω) be
calculated over a dense grid in the pass and stop bands, and in step 5
the local extremes are found by searching over this dense grid. There
are more accurate methods that use bisection and/or Newton's
method to find the extremal points.
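A simple refinement in the same spirit (a three-point parabolic fit rather than bisection or Newton's method, swapped in here for brevity) locates an extremum between grid samples:

```python
import numpy as np

def refine_peak(w, y, i):
    """Refine a grid local extremum of y (sampled at uniformly spaced w)
    by fitting a parabola through the three samples around index i and
    returning the abscissa of its vertex."""
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    d = y0 - 2.0 * y1 + y2
    if d == 0.0:                 # flat triple: nothing to refine
        return w[i]
    return w[i] + 0.5 * (y0 - y2) / d * (w[i + 1] - w[i])

# Demo: the vertex of an exact parabola is recovered exactly,
# even though no grid sample lands on it.
w = np.linspace(-0.5, 0.5, 11)
y = 1.0 - (w - 0.123) ** 2
peak = refine_peak(w, y, int(np.argmax(y)))
```

For smooth error functions this gives sub-grid accuracy at negligible cost, which is why practical implementations refine the grid search this way.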
In step 3 it is suggested that the simultaneous equations of Equation 12 be
solved. Parks and McClellan [49] use a more efficient and
numerically robust method of evaluating δ using a form of Cramer's
rule. With that δ, an interpolation method can be used to find
a(n). This is faster and allows longer filters to be designed than with
the linear-algebra-based approach described here.
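The closed form for δ commonly given in the Parks-McClellan literature (written here from standard references with unit weighting absorbed into W, so verify against [49]) is

\[
\delta \;=\; \frac{\displaystyle\sum_{k=0}^{R} b_k\, A_d(\omega_k)}
                 {\displaystyle\sum_{k=0}^{R} \frac{(-1)^k\, b_k}{W(\omega_k)}},
\qquad
b_k \;=\; \prod_{\substack{i=0 \\ i \ne k}}^{R} \frac{1}{\cos\omega_k - \cos\omega_i}.
\]

Once δ is known, A(ω) is known at all R+1 extremal frequencies, and barycentric Lagrange interpolation recovers a(n) without forming or inverting the full matrix.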
For the lowpass filter, this formulation always has an extremal frequency
at both the pass and stop band edges, ωp and ωs, and at
ω = 0 and/or ω = π. The extra ripple filter has R+2
extremal frequencies, including both zero and π. If the algorithm is
started with an incorrect number of extremal frequencies in the stop or
pass band, the iterations will correct this. It is interesting and
informative to plot the frequency response of the filters designed at each
iteration of this algorithm and observe how the correction takes place.
The Parks-McClellan algorithm starts with fixed pass and stop band edges
and then minimizes a weighted form of the pass and stop band ripple. In
some cases it may be more appropriate to fix one of the ripples and
minimize the other, or to fix both ripples and minimize the transition band
width. Indeed, Schüssler, Hofstetter, Tufts, and others [31], [27], [28], [29]
formulated some of these ideas before Parks and McClellan developed their
algorithm. The DSP group at Rice has developed some modifications to
these methods, and they are presented below.