# Constrained Approximation and Mixed Criteria

Module by: C. Sidney Burrus

## Trade-off of Error Measures and Design Specifications

In many filter design problems, more than one criterion is important. For example, both the $L_2$ and the $L_\infty$ error may be of interest in one filter. Often one is posed as a constraint and the other as an optimized variable. Indeed, because $L_2$ approximation minimizes the error energy and because Parseval's theorem states that an optimal $L_2$ frequency domain approximation is also an optimal $L_2$ time domain approximation, an $L_\infty$-constrained minimum $L_2$ error approximation seems a good practical approach. To see how this might have advantages, it is informative to examine the relationship of the $L_2$ error to the $L_\infty$ error as the constraint is varied from tight to loose [66], [2] in Figure 1. From this one can see just how sensitive one error is to the other and how the traditional designs are extremes on this diagram.

Another trade-off is the error in a Chebyshev design as a function of the transition band location. There are certain locations of transition band or band edges that give much lower ripple size than others. Rabiner has examined that relation [56], [55].

## Constrained Least Squares Design

There are problems where the peak error or Chebyshev error is important. This can be minimized directly using the Remez exchange algorithm but, in many cases, is better controlled by use of a peak error constraint on the basic least squared error formulation of the problem [2], [6], [66], [5]. An efficient algorithm for minimizing the constrained least squared error uses Lagrange multipliers [41], [40] and the Kuhn-Tucker conditions [66], [65].

Similar to the Chebyshev design problem, there are two formulations of the problem: one where there is a well defined transition band separating the desired signal spectrum (passband) from the noise or interfering signal spectrum (stopband) and the second where there is a well defined frequency that separates the pass and stopband but no well defined transition band.

The first case would include situations with signals residing in specified bands separated by “guard bands" such as commercial radio and TV transmissions. It also includes cases where due to multirate sampling, certain well defined bands are aliased into other well defined bands. The Parks-McClellan and Shpak-Antoniou Chebyshev designs address this case for the Chebyshev error. Adams' method [2], [1], [3], [6], [61], [5] described below applies to the constrained least squares design with a specified transition band.

The second case would include signals with known spectral support with additive white or broad-band noise. In these cases there is no obvious transition band or “don't care” band. The Hoffstetter-Oppenheim-Siegel method and the method of the section Chebyshev Approximations using the Exchange Algorithms address this case for a Chebyshev design. The method in the section below applies to the constrained least squares design [66] without a specified transition band.

### The Lagrangian

To pose the constrained least squared error optimization problem, we use a Lagrange multiplier formulation. First define the Lagrangian as

$$L = P \int_0^{\pi} \left( A(\omega) - A_d(\omega) \right)^2 d\omega + \sum_i \mu_i \left\{ A(\omega_i) - \left[ A_d(\omega_i) \pm T(\omega_i) \right] \right\}$$
(1)

where the $\mu_i$ are the necessary number of Lagrange multipliers and $P$ is a scale factor that can be chosen for simplicity later. The first term in Equation 1 is the integral squared error of the frequency response to be minimized and the second term will be zero when the equality constraints are satisfied at the frequencies $\omega_i$. The function $T(\omega)$ is the constraint function in that $A(\omega)$ must satisfy

$$A_d(\omega) + T(\omega) \ge A(\omega) \ge A_d(\omega) - T(\omega).$$
(2)

Necessary conditions for the minimization of the integral squared error are that the derivative of the Lagrangian with respect to the filter parameters $a(n)$ defined in Equation 49 from FIR Digital Filters and to the Lagrange multipliers $\mu_i$ be zero [69].

The derivatives of the Lagrangian with respect to $a(n)$ are

$$\frac{dL}{da(n)} = P \int_0^{\pi} 2 \left( A(\omega) - A_d(\omega) \right) \frac{dA}{da}\, d\omega + \sum_i \mu_i \left. \frac{dA}{da} \right|_{\omega_i}$$
(3)

where from Equation 49 from FIR Digital Filters we have for $n = 1, 2, \ldots, M$

$$\frac{dA(\omega)}{da(n)} = \cos(\omega n)$$
(4)

and for $n = 0$

$$\frac{dA(\omega)}{da(0)} = K.$$
(5)

For $n = 1, 2, \ldots, M$ this gives

$$\frac{dL}{da(n)} = 2P \left[ \int_0^{\pi} A(\omega) \cos(\omega n)\, d\omega - \int_0^{\pi} A_d(\omega) \cos(\omega n)\, d\omega \right] + \sum_i \mu_i \cos(\omega_i n)$$
(6)

and for $n = 0$ gives

$$\frac{dL}{da(0)} = 2PK \left[ \int_0^{\pi} A(\omega)\, d\omega - \int_0^{\pi} A_d(\omega)\, d\omega \right] + \sum_i \mu_i K.$$
(7)

Using Equation 50 from FIR Digital Filters for $n = 1, 2, \ldots, M$, we have

$$\frac{dL}{da(n)} = \pi P \left[ a(n) - a_d(n) \right] + \sum_i \mu_i \cos(\omega_i n) = 0$$
(8)

and for $n = 0$

$$\frac{dL}{da(0)} = 2\pi P K^2 \left[ a(0) - a_d(0) \right] + K \sum_i \mu_i = 0.$$
(9)

Choosing $P = 1/\pi$ gives

$$a(n) = a_d(n) - \sum_i \mu_i \cos(\omega_i n)$$
(10)

and

$$a(0) = a_d(0) - \frac{1}{2K} \sum_i \mu_i$$
(11)

Writing Equation 10 and Equation 11 in matrix form gives

$$a = a_d - H \mu.$$
(12)

where $H$ is a matrix with elements

$$h(n, i) = \cos(\omega_i n)$$
(13)

except for the first row which is

$$h(0, i) = \frac{1}{2K}$$
(14)

because of the normalization of the $a(0)$ term. The $a_d(n)$ are the cosine coefficients for the unconstrained approximation to the ideal filter, which result from truncating the inverse DTFT of $A_d(\omega)$.

The derivative of the Lagrangian in Equation 1 with respect to the Lagrange multipliers $\mu_i$, when set to zero, gives

$$A(\omega_i) = A_d(\omega_i) \pm T(\omega_i) = A_c(\omega_i)$$
(15)

which is simply a statement of the equality constraints.

In terms of the filter's cosine coefficients $a(n)$ from Equation 49 from FIR Digital Filters, this can be written

$$A_c(\omega_i) = \sum_n a(n) \cos(\omega_i n) + K a(0)$$
(16)

and as matrices

$$A_c = G a$$
(17)

where $A_c$ is the vector of frequency response values, which are the desired response plus or minus the constraints evaluated at the frequencies in the constraint set. The frequency response must interpolate these values. The matrix $G$ has elements

$$g(i, n) = \cos(\omega_i n)$$
(18)

except for the first column which is

$$g(i, 0) = K.$$
(19)

Notice that if $K = 1/\sqrt{2}$, the first rows and columns are such that $G^T = H$.

The two equations Equation 12 and Equation 17 that must be satisfied can be written as a single matrix equation of the form

$$\begin{bmatrix} I & H \\ G & 0 \end{bmatrix} \begin{bmatrix} a \\ \mu \end{bmatrix} = \begin{bmatrix} a_d \\ A_c \end{bmatrix}$$
(20)

or, if $K = 1/\sqrt{2}$, as

$$\begin{bmatrix} I & G^T \\ G & 0 \end{bmatrix} \begin{bmatrix} a \\ \mu \end{bmatrix} = \begin{bmatrix} a_d \\ A_c \end{bmatrix}$$
(21)

which have as solutions

$$\mu = (G H)^{-1} (G a_d - A_c), \qquad a = a_d - H \mu$$
(22)

The filter corresponding to the cosine coefficients $a(n)$ minimizes the $L_2$ error norm subject to the equality conditions in Equation 17.

Notice that the term of the form $G a_d$ in Equation 22 is the frequency response of the optimal unconstrained filter evaluated at the constraint set frequencies. Equation 22 could, therefore, be written

$$\mu = (G H)^{-1} (A_u - A_c)$$
(23)
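
As a small illustration (not the program from the end of the book), the following Matlab sketch forms the matrices of Equation 13 through Equation 19 and evaluates Equation 22 for a hypothetical constraint set; the half-length, the constraint frequencies, the constrained values, and the placeholder $a_d$ are all assumed values.

```matlab
% Hedged sketch of the constrained least squares solution, Equations 12-23.
M  = 20;                              % assumed half-length: a(0)...a(M)
K  = 1/sqrt(2);                       % normalization of the a(0) term (so that H = G')
n  = 0:M;
wi = [0.55 0.60 0.65]*pi;             % hypothetical constraint frequencies
Ac = [0.02; -0.02; 0.02];             % hypothetical constrained values Ad(wi) +/- T(wi)
ad = randn(M+1, 1);                   % placeholder for the unconstrained L2 coefficients

G = cos(wi(:)*n);   G(:,1) = K;       % Equation 18 and Equation 19
H = cos(n(:)*wi);   H(1,:) = 1/(2*K); % Equation 13 and Equation 14

mu = (G*H) \ (G*ad - Ac);             % Equation 22: Lagrange multipliers
a  = ad - H*mu;                       % Equation 12: constrained cosine coefficients
```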

### The Constrained Weighted Least Squares Design of FIR Filters

Combining the weighted least squared error formulation with the constrained least squared error gives the general formulation of this class of problems.

We now modify the Lagrangian in Equation 1 to allow a weighted squared error giving

$$L = \frac{1}{\pi} \int_0^{\pi} W(\omega) \left( A(\omega) - A_d(\omega) \right)^2 d\omega + \sum_i \mu_i \left\{ A(\omega_i) - \left[ A_d(\omega_i) \pm T(\omega_i) \right] \right\}$$
(24)

with a corresponding derivative of

$$\frac{dL}{da(n)} = \frac{2}{\pi} \int_0^{\pi} W(\omega) \left( A(\omega) - A_d(\omega) \right) \frac{dA}{da}\, d\omega + \sum_i \mu_i \left. \frac{dA}{da} \right|_{\omega_i}$$
(25)

The integral cannot be carried out analytically for a general weighting function, but if the weight function is constant over each subband, Equation 47 from Least Squared Error Design of FIR Filters can be written

$$\frac{dL}{da(n)} = \frac{2}{\pi} \sum_k \int_{\omega_k}^{\omega_{k+1}} W_k \left( \sum_{m=1}^{M} a(m) \cos(\omega m) + K a(0) - A_d(\omega) \right) \cos(\omega n)\, d\omega + \sum_i \mu_i \left. \frac{dA}{da} \right|_{\omega_i}$$
(26)

which after rearranging is

$$= \sum_{m=1}^{M} \left[ \frac{2}{\pi} \sum_k W_k \int_{\omega_k}^{\omega_{k+1}} \cos(\omega m) \cos(\omega n)\, d\omega \right] a(m)$$
(27)
$$- \frac{2}{\pi} \sum_k W_k \int_{\omega_k}^{\omega_{k+1}} A_d(\omega) \cos(\omega n)\, d\omega + \sum_i \mu_i \cos(\omega_i n) = 0$$
(28)

where the integral in the first term can now be done analytically. In matrix notation Equation 49 from Least Squared Error Design of FIR Filters is

$$R a - a_{dw} + H \mu = 0$$
(29)

This is a similar form to that in the multiband paper where the matrix $R$ gives the effects of weighting, with elements

$$r(n, m) = \frac{2}{\pi} \sum_k W_k \int_{\omega_k}^{\omega_{k+1}} \cos(\omega m) \cos(\omega n)\, d\omega$$
(30)

except for the first row, which should be divided by $2K$ because of the normalizing of the $a(0)$ term in Equation 49 from FIR Digital Filters and Equation 14, and the first column, which should be multiplied by $K$ because of Equation 51 from FIR Digital Filters and Equation 19. The matrix $R$ is the sum of a Toeplitz matrix and a Hankel matrix, a fact that might be used to advantage. The vector $a_{dw}$ of modified filter parameters has elements

$$a_{dw}(n) = \frac{2}{\pi} \sum_k W_k \int_{\omega_k}^{\omega_{k+1}} A_d(\omega) \cos(\omega n)\, d\omega$$
(31)

and the matrix $H$ is the same as used in Equation 12 and defined in Equation 13. Equation 50 from Least Squared Error Design of FIR Filters and Equation 17 can be written together as a matrix equation

$$\begin{bmatrix} R & H \\ G & 0 \end{bmatrix} \begin{bmatrix} a \\ \mu \end{bmatrix} = \begin{bmatrix} a_{dw} \\ A_c \end{bmatrix}$$
(32)

The solutions to Equation 50 from Least Squared Error Design of FIR Filters and Equation 17 or to Equation 32 are

$$\mu = (G R^{-1} H)^{-1} (G R^{-1} a_{dw} - A_c)$$
(33)
$$a = R^{-1} (a_{dw} - H \mu)$$
(34)

which are ideally suited to a language like Matlab and are implemented in the programs at the end of this book.

Since the solution of $R a_u = a_{dw}$ is the optimal unconstrained weighted least squares filter, we can write Equation 33 and Equation 34 in the form

$$\mu = (G R^{-1} H)^{-1} (G a_u - A_c) = (G R^{-1} H)^{-1} (A_u - A_c)$$
(35)
$$a = a_u - R^{-1} H \mu$$
(36)
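
A grid-based Matlab sketch of this weighted, constrained solution is given below. It is not the book's program: the matrices $R$ and $a_{dw}$ are formed numerically on a dense frequency grid rather than from the analytic integrals of Equation 30 and Equation 31, and the band edges, weights, and constraint set are assumed values.

```matlab
% Hedged grid-based sketch of the constrained weighted least squares solution,
% Equations 33-36; all specifications here are hypothetical.
M = 15;  K = 1/sqrt(2);  n = 0:M;
w  = linspace(0, pi, 512)';
Ad = double(w <= 0.5*pi);                          % hypothetical ideal lowpass
Wt = 1*(w <= 0.5*pi) + 10*(w > 0.55*pi);           % piecewise-constant weights W_k
C  = [K*ones(size(w)) cos(w*n(2:end))];            % A(w) = C*a on the grid
R   = C'*diag(Wt.^2)*C;                            % grid analog of the matrix R
adw = C'*diag(Wt.^2)*Ad;                           % grid analog of the vector adw
ci = [240 280 320];                                % hypothetical constraint grid indices
G  = C(ci,:);                                      % constraint rows, Equation 18-19 form
Ac = Ad(ci) + [0.02; -0.02; 0.02];                 % constrained values Ad +/- T
H  = G';                                           % grid analog of H
au = R \ adw;                                      % unconstrained weighted LS filter
mu = (G*(R\H)) \ (G*au - Ac);                      % Equation 35
a  = au - R\(H*mu);                                % Equation 36
```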

### The Exchange Algorithms

This Lagrange multiplier formulation, together with the Kuhn-Tucker conditions, is used in an iterative multiple exchange algorithm similar to the Remez exchange algorithm to give the complete design method.

One version of this exchange algorithm applies to the problem posed by Adams with specified pass and stopband edges and with zero error weighting in the transition band. This problem has the structure of a quadratic programming problem and could be solved using general QP methods but the multiple exchange algorithm suggested here is probably faster.

The second version of this exchange algorithm applies to the problem where there is no explicitly specified transition band. This problem is not strictly a quadratic programming problem and our exchange algorithm has no proof of convergence (the HOS algorithm also has no proof of convergence). However, in practice, this program has proven to be robust and converges for a wide variety of lengths, constraints, weights, and band edges. The performance is completely independent of the normalizing parameter $K$. Notice that the inversion of the $R$ matrix is done once and does not have to be done each iteration. The details of the program are included in the filter design paper and in the Matlab program at the end of this book.
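
To make the structure of such a multiple exchange concrete, a much-simplified Matlab sketch is given below. It is not the program from the book: the cutoff, constraint level, grid density, and iteration limit are assumptions, and the Kuhn-Tucker sign test the complete algorithm uses to drop constraints is omitted; constraints are simply placed at the error extrema that violate the bound and the equality-constrained least squares problem is re-solved.

```matlab
% Much-simplified sketch of a multiple-exchange CLS design on a dense grid
% (unweighted lowpass, no explicit transition band); all values are hypothetical.
M = 15;  K = 1/sqrt(2);  n = 0:M;
w  = linspace(0, pi, 512)';
Ad = double(w <= 0.6*pi);                       % ideal lowpass, band edge f = 0.3
T  = 0.03*ones(size(w));                        % allowed deviation from Ad
C  = [K*ones(size(w)) cos(w*n(2:end))];         % A(w) = C*a on the grid
CC = C'*C;
ad = CC \ (C'*Ad);                              % unconstrained least squares start
a  = ad;
for iter = 1:30
    err = C*a - Ad;  ae = abs(err);
    pk  = find(ae(2:end-1) >= ae(1:end-2) & ae(2:end-1) >= ae(3:end)) + 1;
    cset = pk(ae(pk) > T(pk) + 1e-8);           % error extrema violating the constraint
    if isempty(cset), break, end
    G  = C(cset,:);                             % response rows at the constraint frequencies
    Ac = Ad(cset) + T(cset).*sign(err(cset));   % constrained values Ad +/- T
    mu = (G*(CC\G')) \ (G*ad - Ac);             % grid analog of Equation 22
    a  = ad - CC\(G'*mu);                       % grid analog of Equation 12
end
```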

As mentioned earlier, this design problem might be addressed by general constrained quadratic programming methods [23], [31], [47], [51], [49], [50], [52], [76], [75], [79].

### Examples and Observations on CLS Designs

Here we show that the CLS FIR filter design approach is probably the best general FIR filter design method. For example, a length-31 linear phase lowpass FIR filter designed for a band edge of 0.3 and the constraint that the response in the stopband cannot be greater than 0.03 is illustrated in Figure 2.

This filter was designed using the Matlab 'fircls1()' function.
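
A call of roughly the following form reproduces this kind of design with the Signal Processing Toolbox; the passband deviation and the frequency normalization used here are assumptions, as noted in the comments.

```matlab
% Hedged example: length-31 (order 30) lowpass with stopband deviation <= 0.03.
% The passband deviation of 0.02 is an assumed value (the text specifies only the
% stopband constraint), and the cutoff of 0.6 assumes the band edge 0.3 is given
% as a fraction of the sampling frequency (1.0 is the Nyquist frequency in fircls1).
b = fircls1(30, 0.6, 0.02, 0.03);
freqz(b, 1, 1024);                  % examine the frequency response
```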

## L^p Approximation and the Iterative Reweighted Least Squares Method

We now consider the general $L_p$ approximation which contains the least squares $L_2$ and the Chebyshev $L_\infty$ cases. This approach is described in [8], [11], [12] using the iterative reweighted least squared (IRLS) algorithm and looks attractive in that it can use different $p$ in different frequency bands. This would allow, for example, a least squared error approximation in the passband and a Chebyshev approximation in the stopband. The IRLS method can also be used for complex Chebyshev approximations [73] and constrained $L_2$ approximation.

### Iterative Reweighted Least Squares Filter Design Methods

There are cases where it is desirable to design an FIR filter that will minimize the $L_p$ error norm. The error is defined by

$$q = \int_{\Omega} | A(\omega) - A_d(\omega) |^p\, d\omega$$
(37)

but we usually work with $Q_2$. For large $p$, the results are essentially the same as the Chebyshev filter, and this gives a continuum of designs between $L_2$ and $L_\infty$. It also allows the very interesting and important possibility of allowing $p(\omega)$ to be a function of frequency. This means one could have different error criteria in different frequency bands. It can be modified to give the same effects as a constraint. This approach is discussed in [12]. It can be applied to complex approximation and to two-dimensional filter design [8], [9].

The least squared error and the minimum Chebyshev error criteria are the two most commonly used linear-phase FIR filter design methods [46]. There are many situations where better total performance would be obtained with a mixture of these two error measures or some compromise design that would give a trade-off between the two. We show how to design a filter with an $L_2$ approximation in the passband and a Chebyshev approximation in the stopband. We also show that by formulating the $L_p$ problem we can solve the constrained $L_2$ approximation problem [2].

This section first explores the minimization of the $p$th power of the error as a general approximation method and then shows how this allows $L_2$ and $L_\infty$ approximations to be used simultaneously in different frequency bands of one filter and how the method can be used to impose constraints on the frequency response. There are no analytical methods to find this approximation; therefore, an iterative method is used over samples of the error in the frequency domain. The methods developed here [8], [11] are based on what is called an iterative reweighted least squared (IRLS) error algorithm [70], [58], [14] and they can solve certain FIR filter design problems that neither the Remez exchange algorithm nor analytical $L_2$ methods can.

The idea of using an IRLS algorithm to achieve a Chebyshev or $L_\infty$ approximation seems to have been first developed by Lawson [32] and extended to $L_p$ by Rice and Usow [59], [57]. The basic IRLS method for $L_p$ was given by Karlovitz [30] and extended by Chalmers, et al. [16], Bani and Chalmers [13], and Watson [78]. Independently, Fletcher, Grant and Hebden [24] developed a similar form of IRLS but based on Newton's method, and Kahng [29] did likewise as an extension of Lawson's algorithm. Others analyzed and extended this work [22], [44], [14], [78]. Special analysis has been made for $1 \le p < 2$ by [74], [77], [58], [37], [44], [70], [80] and for $p = \infty$ by [24], [13], [58], [42], [4], [38]. Relations to the Remez exchange algorithm [17], [48] were suggested by [13], to homotopy [64] by [60], and to Karmarkar's linear programming algorithm [69] by [58], [68]. Applications of Lawson's algorithm to complex Chebyshev approximation in FIR filter design have been made in [25], [18], [21], [73] and to 2-D filter design in [15]. Reference [72] indicates further results may be forthcoming. Application to array design can be found in [71] and to statistics in [14].

This paper unifies and extends the IRLS techniques and applies them to the design of FIR digital filters. It develops a framework that relates all of the above referenced work and shows them to be variations of a basic IRLS method modified so as to control convergence. In particular, we generalize the work of Rice and Usow on Lawson's algorithm and explain why its asymptotic convergence is slow.

The main contribution here is a new robust IRLS method [8], [11] that combines an improved convergence acceleration scheme and a Newton based method. This gives a very efficient and versatile filter design algorithm that performs significantly better than the Rice-Usow-Lawson algorithm or any of the other IRLS schemes. Both the initial and asymptotic convergence behavior of the new algorithm are examined and the reason for occasional slow convergence of this and all other IRLS methods is discovered.

We then show that the new IRLS method allows the use of $p$ as a function of frequency to achieve different error criteria in the pass and stopbands of a filter. Therefore, this algorithm can be applied to solve the constrained $L_p$ approximation problem. Initial results of applications to the complex and two-dimensional filter design problems are presented.

Although the traditional IRLS methods were sometimes slower than competing approaches, the results of this paper and the availability of fast modern desktop computers make them practical now and allow exploitation of their greater flexibility and generality.

#### Minimum Squared Error Approximations

Various approximation methods can be developed by considering different definitions of norm or error measure. Commonly used definitions are $L_1$, $L_2$, and Chebyshev or $L_\infty$. Using the $L_2$ norm gives the scalar error to minimize

$$q = \sum_{k=0}^{L-1} | A(\omega_k) - A_d(\omega_k) |^2$$
(38)

or in matrix notation using Equation 38, the error or residual vector is defined by

$$\epsilon = C a - A_d$$
(39)

giving the scalar error of Equation 38 as

$$q = \epsilon^T \epsilon.$$
(40)

This can be minimized by solution of the normal equations [36], [26], [69]

$$C^T C a = C^T A_d.$$
(41)

The weighted squared error is defined by

$$q = \sum_{k=0}^{L-1} w_k^2\, | A(\omega_k) - A_d(\omega_k) |^2.$$
(42)

or, in matrix notation using Equation 39 and Equation 40, Equation 42 becomes

$$q = \epsilon^T W^T W \epsilon$$
(43)

which can be minimized by solving

$$W C a = W A_d$$
(44)

with the normal equations

$$C^T W^T W C a = C^T W^T W A_d$$
(45)

where $W$ is an $L$ by $L$ diagonal matrix with the weights $w_k$ from Equation 42 along the diagonal. A more general formulation of the approximation simply requires $W^T W$ to be positive definite. Some authors define the weighted error in Equation 42 using $w_k$ rather than $w_k^2$. We use the latter to be consistent with the least squared error algorithms in Matlab [43].

Solving Equation 45 is a direct method of designing an FIR filter using a weighted least squared error approximation. To minimize the sum of the squared error and get approximately the same result as minimizing the integral of the squared error, one must choose the number of frequency samples $L$ to be 3 to 10 or more times the length of the filter being designed.
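
A minimal Matlab sketch of this direct design on a dense grid is shown below; the band edges, the weighting, and the filter length are hypothetical choices, not a specification from the text.

```matlab
% Hedged sketch of the direct weighted least squares design of Equation 45.
M = 15;  K = 1/sqrt(2);  n = 0:M;                % a(0)...a(M), length N = 2M+1 = 31
L = 10*(2*M + 1);                                % number of frequency samples
w = linspace(0, pi, L)';
Ad = double(w <= 0.5*pi);                        % ideal lowpass (hypothetical edge)
wk = 1*(w <= 0.5*pi) + 10*(w > 0.55*pi);         % weights; zero in the transition region
C  = [K*ones(L,1) cos(w*n(2:end))];              % A(w) = C*a
W  = diag(wk);
a  = (C'*(W'*W)*C) \ (C'*(W'*W)*Ad);             % normal equations, Equation 45
plot(w/pi, C*a);                                 % check the amplitude response
```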

#### Iterative Algorithms to Minimize the Error

There is no simple direct method for finding the optimal approximation for any error power other than two. However, if the weighting coefficients $w_k$, the elements of $W$ in Equation 45, could be set equal to the elements of $|A - A_d|$, minimizing Equation 42 would minimize the fourth power of $|A - A_d|$. This cannot be done in one step because we need the solution to find the weights! We can, however, pose an iterative algorithm which will first solve the problem in Equation 41 with no weights, then calculate the error vector $\epsilon$ from Equation 39 which will then be used to calculate the weights in Equation 45. At each stage of the iteration, the weights are updated from the previous error and the problem is solved again. This process of successive approximations is called the iterative reweighted least squared error algorithm (IRLS).

The basic IRLS equations can also be derived by simply taking the gradient of the $p$-error with respect to the filter coefficients $h$ or $a$ and setting it equal to zero [24], [29]. These equations form the basis for the iterative algorithm.

If the algorithm is a contraction mapping [39], the successive approximations will converge and the limit is the solution of the minimum $L_4$ approximation problem. If a general problem can be posed [67], [27], [45] as the solution of an equation in the form

$$x = f(x),$$
(46)

a successive approximation algorithm can be proposed which iteratively calculates $x$ using

$$x_{m+1} = f(x_m)$$
(47)

starting with some $x_0$. The function $f(\cdot)$ maps $x_m$ into $x_{m+1}$ and, if

$$\lim_{m \to \infty} x_m = x_0$$

where $x_0 = f(x_0)$, then $x_0$ is the fixed point of the mapping and a solution to Equation 46. The trick is to find a mapping that solves the desired problem, converges, and converges fast.

By setting the weights in Equation 42 equal to

$$w(k) = | A(\omega_k) - A_d(\omega_k) |^{(p-2)/2},$$
(48)

the fixed point of a convergent algorithm minimizes

$$q = \sum_{k=0}^{L-1} | A(\omega_k) - A_d(\omega_k) |^p.$$
(49)

It has been shown [59] that weights always exist such that minimizing Equation 42 also minimizes Equation 49. The problem is to find those weights efficiently.

#### Basic Iterative Reweighted Least Squares

The basic IRLS algorithm is started by initializing the weight matrix defined in Equation 42 and Equation 43 for unit weights with $W_0 = I$. Using these weights to start, the $m$th iteration solves Equation 45 for the filter coefficients with

$$a_m = [C^T W_m^T W_m C]^{-1} C^T W_m^T W_m A_d$$
(50)

This is a formal statement of the operation. In practice one should not invert a matrix; one should use a sophisticated numerical method [20] to solve the overdetermined equations in Equation 38. The error or residual vector Equation 39 for the $m$th iteration is found by

$$\epsilon_m = C a_m - A_d$$
(51)

A new weighting vector is created from this error vector using Equation 48 by

$$w_{m+1} = | \epsilon_m |^{(p-2)/2}$$
(52)

whose elements are the diagonal elements of the new weight matrix

$$W_{m+1} = \mathrm{diag}[\, w_{m+1} \,].$$
(53)

Using this weight matrix, we solve for the next vector of filter coefficients by going back to Equation 50 and this defines the basic iterative process of the IRLS algorithm.
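
In Matlab-like terms the basic iteration of Equation 50 through Equation 53 is only a few lines. The following is a bare sketch under assumed specifications, not the program in the appendix, and, as discussed next, this plain form tends to diverge for $p$ much larger than 3.

```matlab
% Hedged sketch of the basic (unaccelerated) IRLS iteration, Equations 50-53.
p = 4;  M = 15;  K = 1/sqrt(2);  n = 0:M;
L = 10*(2*M + 1);  w = linspace(0, pi, L)';
Ad = double(w <= 0.5*pi);                        % hypothetical ideal lowpass
C  = [K*ones(L,1) cos(w*n(2:end))];
a  = (C'*C) \ (C'*Ad);                           % start from the unweighted L2 solution
for m = 1:20
    e  = C*a - Ad;                               % Equation 51: residual on the grid
    wk = abs(e).^((p - 2)/2);                    % Equation 52: new weight vector
    W  = diag(wk);                               % Equation 53
    a  = (C'*(W'*W)*C) \ (C'*(W'*W)*Ad);         % Equation 50
end
```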

It can easily be shown that the $a$ that minimizes Equation 49 is a fixed point of this iterative map. Unfortunately, applied directly, this basic IRLS algorithm does not converge and/or it has numerical problems for most practical cases [14]. There are three aspects that must be addressed. First, the IRLS algorithm must theoretically converge. Second, the solution of Equation 50 must be numerically stable. Finally, even if the algorithm converges and is numerically stable, it must converge fast enough to be practical.

Both theory and experience indicate there are different convergence problems connected with several different ranges and values of $p$. In the range $2 \le p < 3$, virtually all methods converge [24], [14], [44]. In the range $3 \le p < \infty$, the algorithm diverges and the various methods discussed in this paper must be used. As $p$ becomes large compared to 2, the weights carry a larger contribution to the total minimization than the underlying least squared error minimization, the improvement at each iteration becomes smaller, and the likelihood of divergence becomes larger. For $p = \infty$ we can use to advantage the fact that the optimal approximation solution to Equation 49 is unique but the weights in Equation 42 that give that solution are not. In other words, different matrices $W$ give the same solution to Equation 50 but will have different convergence properties. This allows certain alterations to the weights to improve convergence without harming the optimality of the results [38]. In the range $1 < p < 2$, both convergence and numerical problems exist as, in contrast to $p > 2$, the IRLS iterations are undoing what the underlying least squares is doing. In particular, the weights near frequencies with small errors become very large. Indeed, if the error happens to be zero, the weight becomes infinite because of the negative exponent in Equation 52. For $p = 1$ the solution to the optimization problem is not even unique. The various algorithms that are presented below are based on schemes to address these problems.

### The Karlovitz Method

In order to achieve convergence, a second order update is used which only partially changes the filter coefficients $a_m$ in Equation 50 each iteration. This is done by first calculating the unweighted $L_2$ approximation filter coefficients using Equation 6 from Chebyshev or Equal Ripple Error Approximation Filters as

$$a_0 = [C^T C]^{-1} C^T A_d.$$
(54)

The error or residual vector Equation 1 from Chebyshev or Equal Ripple Error Approximation Filters for the $m$th iteration is found as Equation 97 from Sampling, Up-Sampling, Down-Sampling, and Multi-Rate by

$$\epsilon_m = C a_m - A_d$$
(55)

and the new weighting vector is created from this error vector using Equation 48 by

$$w_{m+1} = | \epsilon_m |^{(p-2)/2}$$
(56)

whose elements are the diagonal elements of the new weight matrix

$$W_{m+1} = \mathrm{diag}[\, w_{m+1} \,].$$
(57)

This weight matrix is then used to calculate a temporary filter coefficient vector by

$$\hat{a}_{m+1} = [C^T W_{m+1}^T W_{m+1} C]^{-1} C^T W_{m+1}^T W_{m+1} A_d.$$
(58)

The vector of filter coefficients that is actually used is only partially updated using a form of adjustable step size in the following second order linearly weighted sum

$$a_{m+1} = \lambda \hat{a}_{m+1} + (1 - \lambda)\, a_m$$
(59)

Using this filter coefficient vector, we solve for the next error vector by going back to Equation 55 and this defines Karlovitz's IRLS algorithm [30].

In this algorithm, $\lambda$ is a convergence parameter that takes values $0 < \lambda \le 1$. Karlovitz showed that for the proper $\lambda$, the IRLS algorithm using Equation 58 always converges to the globally optimal $L_p$ approximation for $p$ an even integer in the range $4 \le p < \infty$. At each iteration the $L_p$ error has to be minimized over $\lambda$, which requires a line search. In other words, the full Karlovitz method requires a multi-dimensional weighted least squares minimization and a one-dimensional $p$th power error minimization at each iteration. Extensions of Karlovitz's work [78] show the one-dimensional minimization is not necessary, but practice shows the number of required iterations increases considerably and robustness is lost.

Fletcher et al. [24] and later Kahng [29] independently derive the same second order iterative algorithm by applying Newton's method. That approach gives a formula for $\lambda$ as a function of $p$ and is discussed later in this paper. Although the iteration count for convergence of the Karlovitz method is good, indeed, perhaps the best of all, the minimization of $\lambda$ at each iteration causes the algorithm to be very slow in execution.

### Newton's Methods

Both the new method in section 4.3 and Lawson's method use a second order updating of the weights to obtain convergence of the basic IRLS algorithm. Fletcher et al. [24] and Kahng [29] use a linear summation for the updating, similar in form to Equation 59, but apply it to the filter coefficients in the manner of Karlovitz rather than to the weights as Lawson did. Indeed, using our development of Karlovitz's method, we see that Kahng's method and Fletcher, Grant, and Hebden's method are simply a particular choice of $\lambda$ as a function of $p$ in Karlovitz's method. They derive

$$\lambda = \frac{1}{p - 1}$$
(60)

by using Newton's method to minimize the error in Equation 49 to give for Equation 59

$$a_m = \left( \hat{a}_m + (p - 2)\, a_{m-1} \right) / (p - 1).$$
(61)

This defines Kahng's method which he says always converges [45]. He also notes that the summation methods in the sections Calculation of the Fourier Transform and Fourier Series using the FFT, Sampling Functions-- the Shah Function, and Down-Sampling, Subsampling, or Decimation do not have the possible restarting problem that Lawson's method theoretically does. Because Kahng's algorithm is a form of Newton's method, its asymptotic convergence is very good but the initial convergence is poor and very sensitive to starting values.

#### A New Robust IRLS Method

A modification and generalization of an acceleration method suggested independently by Ekblom [22] and by Kahng [29] is developed here and combined with the Newton's method of Fletcher, Grant, and Hebden and of Kahng to give a robust, fast, and accurate IRLS algorithm [8], [11]. It overcomes the poor initial performance of the Newton's methods and the poor final performance of the RUL algorithms.

Rather than starting the iterations of the IRLS algorithms with the actual desired value of $p$, after the initial $L_2$ approximation the new algorithm starts with $p = K \cdot 2$ where $K$ is a parameter between one and approximately two, chosen for the particular problem specifications. After the first iteration, the value of $p$ is increased to $p = K^2 \cdot 2$. It is increased by a factor of $K$ at each iteration until it reaches the actual desired value. This keeps the value of $p$ being approximated just ahead of the value achieved. This is similar to a homotopy where we vary the value of $p$ from 2 to its final value. A small value of $K$ gives very reliable convergence because the approximation is achieved at each iteration but requires a large number of iterations for $p$ to reach its final value. A large value of $K$ gives faster convergence for most filter specifications but fails for some. The rule that is used to choose $p_m$ at the $m$th iteration is

$$p_m = \min(p,\, K p_{m-1}).$$
(62)

Each iteration of our new variable $p$ method is implemented by the basic algorithm described as Karlovitz's method but using the Newton's method based value of $\lambda$ from Fletcher or Kahng in Equation 60. Both Ekblom and Kahng only used $K = 2$, which is too large in almost all cases.

We also tried the generalized acceleration scheme with the basic Karlovitz method and the RUL algorithm. Although it improved the initial performance of the Karlovitz method, the slowness of each iteration still made this method unattractive. Use with the RUL algorithm gave only a minor improvement of initial performance and no improvement of the poor final convergence.

Our new algorithm uses three distinct concepts:

• The basic IRLS, which is a straightforward algorithm with linear convergence [14] when it converges.
• The second order or Newton's modification which increases the number of cases where initial convergence occurs and gives quadratic asymptotic convergence [24], [29].
• The controlled increasing of $p$ from one iteration to the next is a modification which gives excellent initial convergence and allows adaptation for "difficult" cases.

The best total algorithm, therefore, combines the increasing of $p$ given in Equation 62, the updating of the filter coefficients using Equation 59, and the Newton's choice of $\lambda$ in Equation 60. By slowly increasing $p$, the error surface slowly changes from the parabolic shape of $L_2$, which Newton's method is based on, to the more complicated surface of $L_p$. The question is how fast to change and, from experience with many examples, we have learned that this depends on the filter design specifications.
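
A compact sketch of the combined update is given below. It is a time-domain simplification under assumed specifications, not the program in the appendix (which updates $A(\omega)$ in the frequency domain so that $p$ can vary between bands); the desired $p$ and the homotopy factor are assumed values.

```matlab
% Hedged sketch of the accelerated IRLS: p increased per Equation 62, the partial
% update of Equation 59, and the Newton step size of Equation 60. Here Kp plays
% the role of K in Equation 62 and Kn is the a(0) normalization constant.
pdes = 100;  Kp = 1.4;                           % desired p and homotopy factor (assumed)
M = 15;  Kn = 1/sqrt(2);  n = 0:M;
L = 10*(2*M + 1);  w = linspace(0, pi, L)';
Ad = double(w <= 0.5*pi);                        % hypothetical ideal lowpass
C  = [Kn*ones(L,1) cos(w*n(2:end))];
a  = (C'*C) \ (C'*Ad);                           % unweighted L2 start
p  = 2;
for m = 1:60
    p    = min(pdes, Kp*p);                      % Equation 62
    lam  = 1/(p - 1);                            % Equation 60
    e    = C*a - Ad;
    W    = diag(abs(e).^((p - 2)/2));
    ahat = (C'*(W'*W)*C) \ (C'*(W'*W)*Ad);       % temporary coefficients, Equation 58
    anew = lam*ahat + (1 - lam)*a;               % partial update, Equation 59
    done = (p == pdes) && (norm(anew - a) < 1e-8);
    a = anew;
    if done, break, end
end
```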

A Matlab program that implements this basic IRLS algorithm is given in the appendix of this paper. It uses an updating of $A(\omega_k)$ in the frequency domain rather than of $a(n)$ in the time domain to allow the modifications necessary for using different $p$ in different bands, as will be developed later in this paper.

For an example design with length $L = 31$, passband edge $f_p = 0.4$, stopband edge $f_s = 0.44$, and $p = 2$, the program does not have to iterate and gives the response in Figure 3.

For the same specifications except $p = 4$ we get Figure 4,

and for $p = 100$ we get Figure 5.

#### Different Error Criteria in Different Bands

Probably the most important use of the $L_p$ approximation problem posed here is its use for designing filters with different error criteria in different frequency bands. This is possible because the IRLS algorithm allows an error power $p(\omega)$ that is a function of frequency, which can allow an $L_2$ measure in the passband and a Chebyshev error measure in the stopband or any other form. This is important if an $L_2$ approximation is needed in the passband, because Parseval's theorem shows that the time domain properties of the filtered signal will be well preserved, but, because of unknown properties of the noise or interference, the stopband response must be less than some specified value.

The new algorithm described in "A New Robust IRLS Method" was modified so that the iterative updating is done to $A(\omega)$ rather than to $a(n)$. Because the Fourier transform is linear, the updating of Equation 59 can also be achieved by

$$A_{m+1}(\omega) = \lambda \hat{A}_{m+1}(\omega) + (1 - \lambda)\, A_m(\omega).$$
(63)

The Matlab program listed in the appendix uses this form. This type of updating in the frequency domain allows different $p$ to be used in different bands of $A(\omega)$ and different update parameters $\lambda$ to be used in the appropriate bands. In addition, it allows a different constant $K$ weighting to be used for the different bands. The error for this problem is changed from Equation 38 to be

$$q = \sum_{k=0}^{k_0} | A(\omega_k) - A_d(\omega_k) |^2 + K \sum_{k=k_0+1}^{L-1} | A(\omega_k) - A_d(\omega_k) |^p$$
(64)
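
Inside an IRLS loop this mixed error simply changes the weight update: each frequency sample gets its own error power and band weight. The fragment below assumes the grid $w$ and the current residual $e = Ca - A_d$ from an IRLS sketch like the one given earlier; the band edges, the stopband $p$ of 100, and $K_s$ are hypothetical, and the complete method also uses the frequency-domain update of Equation 63 with band-dependent $\lambda$.

```matlab
% Hedged fragment: per-band error power in the IRLS weight update (Equation 64).
% Assumes w and e = C*a - Ad from an IRLS loop such as the earlier sketch.
Ks = 1;                                          % constant stopband weight K of Equation 64
pk = 2 + 98*(w > 0.55*pi);                       % p = 2 in the passband, p = 100 in the stopband
bw = (w <= 0.5*pi) + Ks*(w > 0.55*pi);           % band weights; zero in the transition region
wk = sqrt(bw) .* abs(e).^((pk - 2)/2);           % replaces the weight update of Equation 52
W  = diag(wk);
```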

Figure 6 shows the frequency response of a filter designed with a passband $p = 2$, a stopband $p = 4$, and a stopband weight of $K = 1$.

Figure 7 gives the frequency response for the same specifications but with $p = 100$, and Figure 8 adds a constant weight to the stopband.

#### The Constrained Approximation

In some design situations, neither a pure $L_2$ nor an $L_\infty$ or Chebyshev approximation is appropriate. If one evaluates both the squared error and the Chebyshev error of a particular filter, it is easily seen that for an optimal least squares solution, a considerable reduction of the Chebyshev error can be obtained by allowing a small increase in the squared error. For the optimal Chebyshev solution the opposite is true. A considerable reduction of the squared error can be obtained by allowing a small increase in the Chebyshev error. This suggests a better filter might be obtained by some combination of $L_2$ and $L_\infty$ approximation. This problem is stated and addressed by Adams [2] and by Lang [33], [34].

We have applied the IRLS method to the constrained least squares problem by adding an error based weighting function to unity in the stopband, only in the frequency range where the response in the previous iteration exceeds the constraint. The frequency response of an example is the same as that illustrated in (Reference), as obtained using the CLS algorithm. The IRLS approach to this problem is currently being evaluated and compared to the approach used by Adams. The initial results are encouraging.

#### Application to the Complex Approximation and the 2D Filter Design Problem

Although described above in terms of a one dimensional linear phase FIR filter, the method can just as easily be applied to the complex approximation problem and to the multidimensional filter design problem. We have obtained encouraging initial results from applications of our new IRLS algorithm to the optimal design of FIR filters with a nonlinear phase response. By using a large $p$ we are able to design essentially Chebyshev filters where the Remez algorithm is difficult to apply reliably.

Our new IRLS design algorithm was applied to the two examples considered by Chen and Parks [19] and by Schulist [62], [63] and Preuss [54], [53]. One is a lowpass filter and the other a bandpass filter, both approximating a constant group delay over their passbands. Examination of magnitude frequency response plots, imaginary vs. real part plots, and group delay frequency response plots for the filters designed by the IRLS method showed close agreement with published results [9]. The use of an $L_p$ approximation may give more desirable results than a true Chebyshev approximation. Our results on the complex approximation problem are preliminary and we are doing further investigations on convergence properties of the algorithm and on the characteristics of $L_p$ approximations in this context.

Application of the new IRLS method to the design of 2D FIR filters has also given encouraging results. Here again, it is difficult to apply the Remez exchange algorithm directly to the multi-dimensional approximation problem. Application of the IRLS to this problem is currently being investigated.

We designed 5×5, 7×7, 9×9, 41×41, and 71×71 filters to specifications used in [35], [28], [15], [7]. Our preliminary observations from these examples indicate the new IRLS method is faster and/or gives lower Chebyshev errors than any of the other methods [10]. Values of $K$ in the 1.1 to 1.2 range were required for convergence. As for the complex approximation problem, further research is being done on convergence properties of the algorithm and on the characteristics of $L_p$ approximations in this context.

#### Section Conclusions

We have proposed applying the iterative reweighted least squared error approach to the FIR digital filter design problem. We have shown how a large number of existing methods can be cast as variations on one basic successive approximation algorithm called Iterative Reweighted Least Squares. From this formulation we were able to understand the convergence characteristics of all of them and see why Lawson's method has experimentally been found to have slow convergence.

We have created a new IRLS algorithm by combining an improved acceleration scheme with Fletcher's and Kahng's Newton type methods to give a very good design method with good initial and final convergence properties. It is a significant improvement over the Rice-Usow-Lawson method.

The main contribution of the paper was showing how to use these algorithms with different $p$ in different frequency bands to give a filter with different pass and stopband characteristics, how to solve the constrained $L_p$ problem, and how the approach is used in complex approximation and in 2D filter design.

## Minimum Phase Design

Here we design optimal approximations that can be “lifted" to give a positive function that when viewed as a magnitude squared, can be factored to give a minimum phase optimal design. However, the factoring can be a problem for long filters.

## Window Function Design of FIR Filters

One should not use Hamming, Hanning, Blackman, or Bartlett windows for the design of FIR filters. They are appropriate for segmenting long data strings into shorter blocks to minimize the effects of blocking, but they do not design filters with any control over the transition band and do not design filters that are optimal in any meaningful sense.

The Kaiser window does have the ability to control the transition band. It also gives a fairly good approximation to a least squares approximation modified to reduce the Gibbs effect. However, the design is also not optimal in any meaningful sense and does not allow individual control of the widths of multiple transition bands. The spline transition function method gives the same control as the Kaiser window but does have a criterion of optimality and does allow independent control over individual transition bands. No window method allows any separate weighting of the error in different bands.
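
For completeness, a Kaiser-window design of the kind referred to here takes only a couple of lines with the Signal Processing Toolbox; the band edges and the ripple and attenuation targets below are assumed values.

```matlab
% Hedged example of a Kaiser-window lowpass design using kaiserord and fir1.
fs    = 1;                               % sampling rate (assumed)
fcuts = [0.20 0.25];                     % passband and stopband edges, cycles/sample
mags  = [1 0];                           % desired amplitude in each band
devs  = [0.01 0.001];                    % allowed deviation in each band
[nn, Wn, beta, ftype] = kaiserord(fcuts, mags, devs, fs);
h = fir1(nn, Wn, ftype, kaiser(nn+1, beta), 'noscale');
```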

## References

1. Adams, John W. (1991, August). A New Optimal Window. IEEE Transactions on Signal Processing, 39(8), 1753–1769.
2. Adams, John W. (1991, April). FIR Digital Filters with Least–Squares Stopbands Subject to Peak–Gain Constraints. IEEE Transactions on Circuits and Systems, 39(4), 376–388.
3. Adams, John W. (1992, May). Constrained Least-Squares Digital Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (p. 565–567). ISCAS-92, San Diego, CA
4. Algazi, V. Ralph and Suk, Minsoo. (1975, December). On the Frequency Weighted Least-Square Design of Finite Duration Filters. IEEE Transactions on Circuits and Systems, 22(12), 943–953.
5. Adams, John W. and Sullivan, James L. (1998, February). Peak-Constrained Least-Squares Optimization. IEEE Transactions on Signal Processing, 46(2), 306–321.
6. Adams, J. W. and Sullivan, J. L. and Hashemi, R. and Ghadimi, C. and Franklin, J. and Tucker, B. (1993, May). New Approaches to Constrained Optimization of Digital Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (p. 80–83). ISCAS-93, Chicago
7. Algazi, V. R. and Suk, M. and Rim, C. S. (1986, June). Design of Almost Minimax FIR Filters in One and Two Dimensions by WLS Techniques. IEEE Transactions on Circuits and Systems, CAS–33, 590–596.
8. Burrus, C. S. and Barreto, J. A. (1992, May). Least -Power Error Design of FIR Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (Vol. 2, p. 545–548). ISCAS-92, San Diego, CA
9. Barreto, J. A. and Burrus, C. S. (1994, April 19–22). Complex Approximation using Iterative Reweighted Least Squares for FIR Digital Filters. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. III:545–548). IEEE ICASSP-94, Adelaide, Australia
10. Barreto, J. A. and Burrus, C. S. (1994, November 13-16). Iterative Reweighted Least Squares and the Design of Two-Dimensional FIR Digital Filters. In Proceedings of the IEEE International Conference on Image Processing. (Vol. I, p. I:775–779). IEEE ICIP-94, Austin, Texas
11. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1992, September 13–16). Reweighted Least Squares Design of FIR Filters. In Paper Summaries for the IEEE Signal Processing Society's Fifth DSP Workshop. (p. 3.1.1). Starved Rock Lodge, Utica, IL
12. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1994, November). Iterative Reweighted Least Squares Design of FIR Filters. IEEE Transactions on Signal Processing, 42(11), 2926–2936.
13. Bani, Mohamed S. and Chalmers, Bruce L. (1984). Best Approximation in via Iterative Hilbert Space Procedures. Journal of Approximation Theory, 42, 173–180.
14. Byrd, Richard H. and Pyne, David A. (1979, June). Convergence of the Iteratively Reweighted Least Squares Algorithm for Robust Regression. (313). Technical report. Dept. of Mathematical Sciences, The Johns Hopkins University.
15. Chi, Chong-Yung and Chiou, Shyu-Li. (1992, May). A New Self–Initiated WLS Approximation Method for the Design of Two–dimensional Equiripple FIR Digital Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (p. 1436–1439). San Diego, CA
16. Chalmers, B. L. and Egger, A. G. and Taylor, G. D. (1983). Convex Approximation. Journal of Approximation Theory, 37, 326–334.
17. Cheney, E. W. (1966). Introduction to Approximation Theory. New York: McGraw-Hill.
18. Chit, Nassim N. and Mason, John S. (1991, January). Complex Chebyshev Approximation for FIR Digital Filters. IEEE Transactions on Signal Processing, 39(1), 49–54.
19. Chen, X. and Parks, T. W. (1987, February). Design of FIR Filters in the Complex Domain. IEEE Transactions on Acoustics, Speech and Signal Processing, 35, 144–153.
20. Dongarra, J. J. and Bunch, J. R. and Moler, C. B. and Stewart, G. W. (1979). LINPACK User's Guide. Philadelphia, PA: SIAM.
21. de Queiroz Cobra, Daniel Távora. (1986, January). FIR Digital Filter Design by the Method of Successive Projections. Technical report. MIT.
22. Ekblom, Hakan. (1973). Calculation of Linear Best –Approximations. BIT, 13(3), 292–300.
23. Er, M. H. and Siew, C. K. (1995, March). Design of FIR Filters Using Quadratic Programming Approach. IEEE Transaction on Circuits and Systems – II, 42(3), 217–220.
24. Fletcher, R. and Grant, J. A. and Hebden, M. D. (1971). The Calculation of Linear Best Approximations. Computer Journal, 14, 276–279.
25. Fisher, John D. (1973). Design of Finite Impulse Response Digital Filters. Ph. D. Thesis. Rice University.
26. Gill, Philip E. and Murray, Walter and Wright, Margaret H. (1981). Practical Optimization. London: Academic Press.
27. Garcia, C. B. and Zangwill, W. I. (1981). Pathways to Solutions, Fixed Points, and Equilibria. Englewood Cliffs, NJ: Prentice-Hall.
28. Harris, D. B. and Merserau, R. M. (1977). A Comparison of Algorithms for Minimax Design of Two–Dimensional Linear–Phase FIR Digital Filters. IEEE Transactions on Acoustics, Speech, and Signal Processing, 25(6),
29. Kahng, S. W. (1972, April). Best Approximation. Mathematics of Computation, 26(118), 505–508.
30. Karlovitz, L. A. (1970). Construction of Nearest Points in the , even, and norms. I. Journal of Approximation Theory, 3, 123–127.
31. Lang, Mathias C. (1997, April). Design of Nonlinear Phase FIR Digital Filters using Quadratic Programming. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. Munich
32. Lawson, C. L. (1961). Contribution to the Theory of Linear Least Maximum Approximations. Ph. D. Thesis. University of California at Los Angeles.
33. Lang, Markus and Bamberger, Joachim. (1993, April). Nonlinear Phase FIR Filter Design with Minimum LS Error and Additional Constraints. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. (p. III:57–60). Minneapolis, ICASSP-93
34. Lang, Markus and Bamberger, Joachim. (1994, March). Nonlinear Phase FIR Filter Design According to the Norm with Constraints for the Complex Error. [Paper reprinted in July issue to correct typesetting errors]. Signal Processing, 36(1), 31–40.
35. Lodge, J. H. and Fahmy, M. (1980). An Efficient Optimization Technique for the Design of Two–Dimensional Linear–Phase Digital Filters. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(3),
36. Lawson, C. L. and Hanson, R. J. (1974). Solving Least Squares Problems. Englewood Cliffs, NJ: Prentice-Hall.
37. Li, Yuying. (1991, June). A Globally Convergent Method for Problems. (TR 91-1212). Technical report. Ithaca, NY 14853-7501: Computer Science Department, Cornell University.
38. Lim, Y. C. and Lee, J. -H. and Chen, C. K. and Yang, R. -H. (1992, March). A Weighted Least Squares Algorithm for Quasi–Equripple FIR and IIR Digital Filter Design. IEEE Transactions on Signal Processing, 40(3), 551–558.
39. Luenberger, D. G. (1969). Optimization by Vector Space Methods. New York: John Wiley & Sons.
40. Medlin, G. W. and Adams, J. W. (1989). A New Technique for Maximally Linear Differentiators. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. 825–828).
41. Medlin, G. W. and Adams, J. W. and Leondes, C. T. (1988, October). Lagrange Multiplier Approach to the Design of FIR Filters for Multirate Applications. IEEE Transactions on Circuits and Systems, 35, 1210–1219.
42. Mason, J. S. and Chit, N. N. (1987). New Approach to the Design of FIR Digital Filters. IEE Proceedings, Part G, 134(4), 167–180.
43. Moler, Cleve and Little, John and Bangert, Steve. (1989). Matlab User's Guide. South Natick, MA: The MathWorks, Inc.
44. Merle, G. and Späth, H. (1974). Computational Experiences with Discrete –Approximation. Computing, 12, 315–321.
45. Ortega, J. M. and Rheinboldt, W. C. (1970). Iterative Solutions of Nonlinear Equations in Several Variables. New York: Academic Press.
46. Parks, T. W. and Burrus, C. S. (1987). Digital Filter Design. New York: John Wiley & Sons.
47. Potchinkov, Alexander W. (1997). Design of Optimal Linear Phase FIR Filters by a Semi-Infinite Programming Technique. Signal Processing, 58, 165–180.
48. Powell, M. J. D. (1981). Approximation Theory and Methods. Cambridge, England: Cambridge University Press.
49. Potchinkov, Alexander W. and Reemtsen, Rembert M. (1994). FIR Filter Design in the Complex Plane by a Semi-Infinite Programming Technique I, the Method. AEÜ, 48(3), 135–144.
50. Potchinkov, Alexander W. and Reemtsen, Rembert M. (1994). FIR Filter Design in the Complex Plane by a Semi-Infinite Programming Technique II, Examples. AEÜ, 48(4), 200–209.
51. Potchinkov, Alexander W. and Reemtsen, Rembert M. (1995). Design of FIR Filters in the Complex Plane by Convex Optimization. Signal Processing, 46, 127–146.
52. Potchinkov, Alexander W. and Reemtsen, Rembert M. (1997). The Simultaneous Approximation of Magnitude and Phase by FIR Digital Filters I, A New Approach. International Journal on Circuit Theory and Application, 25, 167–177.
53. Preuss, Klaus. (1987, April). A Novel Approach for Complex Chebyshev Approximation with FIR Filters using the Remez Exchange Algorithm. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (Vol. 2, p. 872–875). Dallas, Tx
54. Preuss, Klaus. (1989, May). On the Design of FIR Filters by Complex Approximation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(5), 702–712.
55. Parks, T. W. and Rabiner, L. R. and McClellan, J. H. (1973, February). On the Transition Width of Finite Impulse-Response Digital Filters. IEEE Transactions on Audio and Electroacoustics, 21(1), 1–4.
56. Rabiner, L. R. and Gold, B. (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
57. Rice, J. R. (1969). The Approximation of Functions. (Vol. 2). Reading, MA: Addison-Wesley.
58. Ruzinsky, S. A. and Olsen, E. T. (1989, February). and Minimization Via a Variant of Karmarkar's Algorithm. IEEE Transactions on ASSP, 37(2), 245–253.
59. Rice, John R. and Usow, Karl H. (1968). The Lawson Algorithm and Extensions. Math. Comp., 22, 118–127.
60. Stonick, Virginia L. and Alexander, S. T. (1991). A Relationship Between the Recursive Least Squares Update and Homotopy Continuation Methods. [preprint].
61. Sullivan, J. L. and Adams, J. W. (1994, May). A New Nonlinear Optimization Algorithm for Asymmetric FIR Digital filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (p. 541–544). ISCAS-94, London
62. Schulist, Matthias. (1990). Improvements of a Complex FIR Filter Design Algorithm. Signal Processing, 20, 81–90.
63. Schulist, Matthias. (1992, April). Complex Approximation with Additional Constraints. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (Vol. 5, p. 101–104).
64. Sieradski, Allan J. (1992). An Introduction to Topology and Homotopy. Boston: PWS–Kent.
65. Selesnick, Ivan W. and Lang, Marcus and Burrus, C. Sidney. (1995, May 8–12). Constrained Least Square Design of FIR Filters without Specified Transition Bands. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (Vol. 2, p. 1260–1263). IEEE ICASSP-95, Detroit
66. Selesnick, Ivan W. and Lang, Markus and Burrus, C. Sidney. (1996, August). Constrained Least Square Design of FIR Filters without Explicitly Specified Transition Bands. IEEE Transactions on Signal Processing, 44(8), 1879–1892.
67. Schafer, R. W. and Mersereau, R. M. and Richards, M. A. (1981). Constrained Iterative Restoration Algorithms. Proceedings of the IEEE, 69(4), 432–450.
68. Stone, Richard E. and Tovey, Craig A. (1991, June). The Simplex and Projective Scaling Algorithms as Iteratively Reweighted Least Squares Methods. SIAM Review, 33(2), 220–237.
69. Strang, Gilbert. (1986). Introduction to Applied Mathematics. Wellesley, MA: Wellesley-Cambridge Press.
70. Schroeder, Jim and Yarlagadda, Rao and Hershey, John. (1991, August). Normed Minimization with Applications to Linear Predictive Modeling for Sinusoidal Frequency Estimation. Signal Processing, 24(2), 193–216.
71. Tseng, Ching–Yih and Griffiths, Lloyd J. (1992, November). A Simple Algorithm to Achieve Desired Patterns for Arbitrary Arrays. IEEE Transaction on Signal Processing, 40(11), 2737–2746.
72. Tseng, Ching-Yih and Griffiths, Lloyd J. (1994, January). Are Equiripple Digital FIR Filters Always Optimal with Minimax Error Criterion? IEEE Signal Processing Letters, 1(1), 5–8.
73. Tseng, Ching-Yih. (1992, May). A Numerical Algorithm for Complex Chebyshev FIR Filter Design. In Proceedings of the IEEE International Symposium on Circuits and Systems. (Vol. 3, p. 549–552). San Diego, CA
74. Usow, Karl H. (1967). On Approximation 1: Computation for Continuous Functions and Continuous Dependence. SIAM Journal on Numerical Analysis, 4(1), 70–88.
75. Vandenberghe, Lieven and Boyd, Stephen. (1996, March). Semidefinite Programming. SIAM Review, 38(1), 49–95.
76. Vandenberghe, Lieven and Boyd, Stephen. (to appear 1998). Connections Between Semi-Infinite and Semidefinite Programming. In Reemtsen, R. and Rueckmann, J.-J. (Eds.), Semi-Infinite Programming. Kluwer.
77. Watson, G. A. (1981). An Algorithm for Linear Approximation of Continuous Functions. IMA Journal of Numerical Analysis, 1, 157–167.
78. Watson, G. A. (1988, October). Convex Approximation. Journal of Approximation Theory, 55(1), 1–11.
79. Wu, Shao-Po and Boyd, Stephen and Vandenberghe, Lieven. (1997). FIR Filter Design via Spectral Factorization and Convex Optimization. In Datta, Biswa (Ed.), Applied Computational Control, Signal and Communications. Birkhauser.
80. Yu, Wen–Shyong and Fong, I–Kong and Chang, Kuang–Chiung. (1992, August). An –Approximation Based Method for Synthesizing FIR Filters. IEEE Transactions on Circuits and Systems II, 39(8), 578–561.
