A test module for Burrus

Approximation

The problem of approximating one function by another, or of approximating measured data by the output of a mathematical or computer model, is extraordinarily useful and ubiquitous. In this note, we present a very powerful algorithm most often called "Iterative Reweighted Least Squares" (IRLS). Because minimizing the squared error in an approximation can often be done analytically or with a finite number of numerical calculations, it is the basis of many other approaches. We will pose the problem as the solution of a set of simultaneous linear equations.

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1N} \\
a_{21} & a_{22} & a_{23} & & \\
a_{31} & a_{32} & a_{33} & & \\
\vdots & & & \ddots & \\
a_{M1} & & & & a_{MN}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_N \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_M \end{bmatrix}
$$
(1)

or, in matrix notation

$$A x = b$$
(2)

where we are given an M by N real matrix A and an M by 1 vector b, and want to find the N by 1 vector x. Only if A is square and non-singular (full rank) is there a unique solution. Otherwise, an approximate solution is sought according to some criterion of approximation.

If b does not lie in the range space of A (the space spanned by the columns of A), there is no exact solution to Equation 2. An approximation problem can then be posed by minimizing the norm, or some other measure, of an equation error vector defined by

$$e = A x - b.$$
(3)

Least Squared Error Approximation

A generalized solution (or an optimal approximate solution) to Equation 2 is usually considered to be an x that minimizes some norm or other measure of e. If that problem does not have a unique solution, further conditions, such as also minimizing the norm of x, are imposed. The l_2 (root-mean-squared or Euclidean) norm squared is e^T e, and its minimization has an analytical solution. The squared error is defined as

$$||e||_2^2 = \sum_i e_i^2 = e^T e$$
(4)

which, when minimized, results in an exact or approximate solution of Equation 2:

• If A has M = N (square and nonsingular), then the exact solution is
$$x = A^{-1} b$$
(5)
• If A has M > N (over-specified), then the solution with the least squared equation error is
$$x = [A^T A]^{-1} A^T b$$
(6)
• If A has M < N (under-specified), then the solution with the least norm is
$$x = A^T [A A^T]^{-1} b$$
(7)

These formulas assume A has full rank, but generalized solutions exist using the Moore-Penrose pseudoinverse [1], [9], [13].
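As a quick numerical check of Equations 6 and 7 (a hypothetical NumPy sketch, not part of the original module; the matrices are arbitrary), each case can be compared against numpy.linalg.pinv, which implements the Moore-Penrose pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-specified case: M > N, least squared error solution (Equation 6)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x_over = np.linalg.inv(A.T @ A) @ A.T @ b
assert np.allclose(x_over, np.linalg.pinv(A) @ b)   # agrees with pseudoinverse

# Under-specified case: M < N, minimum norm solution (Equation 7)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
x_under = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(x_under, np.linalg.pinv(A) @ b)  # agrees with pseudoinverse
assert np.allclose(A @ x_under, b)                  # equations satisfied exactly
```

For full-rank A, both formulas coincide with the pseudoinverse solution; the pseudoinverse also covers the rank-deficient cases the formulas exclude.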

Weighted Least Squared Error Approximation

In addition to these cases with "analytical" solutions, we can pose a more general problem by asking for an optimal approximation with a weighted norm [9], [10] to emphasize or de-emphasize certain components or ranges of equations. Here we minimize

$$||We||_2^2 = \sum_i w_i^2 e_i^2 = e^T W^T W e$$
(8)

where W is a diagonal matrix with the weights, w_i, along its diagonal. The approximate solutions are

• If A has M > N (over-specified), then the minimum weighted error solution is
$$x = [A^T W^T W A]^{-1} A^T W^T W b$$
(9)
• If A has M < N (under-specified), then the minimum weighted norm solution is
$$x = [W^T W]^{-1} A^T \left[ A \, [W^T W]^{-1} A^T \right]^{-1} b$$
(10)

These solutions to the weighted approximation problem are useful in their own right but also serve as the foundation for the Iterative Reweighted Least Squares (IRLS) algorithm developed next.
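A small numerical illustration of the weighted over-specified case (Equation 9) may help. This is a hypothetical Python/NumPy sketch with an arbitrary 3-equation, 2-unknown system, not part of the original module:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])

# Unweighted least squares solution (Equation 6)
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Weight the third equation heavily (Equation 9)
W = np.diag([1.0, 1.0, 10.0])
x_w = np.linalg.solve(A.T @ W.T @ W @ A, A.T @ W.T @ W @ b)

# The heavily weighted equation is now satisfied much more closely
e_ls = A @ x_ls - b
e_w = A @ x_w - b
assert abs(e_w[2]) < abs(e_ls[2]) / 10
```

Increasing the weight on one equation drives its residual toward zero at the expense of the others; this is exactly the mechanism the IRLS algorithm below exploits, with the weights chosen from the errors themselves.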

Approximation with Other Norms and Error Measures

Most of the discussion about approximate solutions to Ax = b concerns the result of minimizing the l_2 equation error ||Ax - b||_2 and/or the l_2 norm of the solution ||x||_2, because in some cases that can be done by analytic formulas and also because the l_2 norm has an energy interpretation. However, both the l_1 and the l_∞ norms [18] have well-known, important applications [20], [12], and the more general l_p error is remarkably flexible [3], [7]. Donoho has shown [21] that l_1 optimization gives essentially the same sparsity as the true sparsity measure in l_0.

The definition of the l_p norm of a vector e is:

$$||e||_p = \left( \sum_i |e_i|^p \right)^{1/p}$$
(11)

which has the same solution for the minimum as

$$||e||_p^p = \sum_i |e_i|^p$$
(12)

but is often easier to use.

In some cases, one uses a different norm for the minimization of the equation error than for the minimization of the solution norm. In other cases, one minimizes a weighted error to emphasize some equations relative to others [9]. A modification allows minimizing according to one norm for one set of equations and another norm for a different set. A more general error measure than a norm can also be used, such as a polynomial error [7], which does not satisfy the scaling requirement of a norm but is convex. One could even use the so-called l_p norm for 0 < p < 1, which is not even convex but is an interesting tool for obtaining sparse solutions.

Note from the figure how the l_10 norm puts a large penalty on large errors. This gives a Chebyshev-like solution. The l_0.2 norm puts a large penalty on small errors, making them tend to zero. This (and the l_1 norm) give a sparse solution.
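The penalty behavior described above can be verified numerically. The following hypothetical Python sketch (illustrative values, not from the original module) evaluates the measure of Equation 12 for a small error vector:

```python
import numpy as np

e = np.array([0.1, 0.5, 2.0])

def lp_power(e, p):
    """The l_p measure raised to the p-th power, as in Equation 12."""
    return np.sum(np.abs(e) ** p)

# For p = 10 the largest component completely dominates the measure,
# so minimization concentrates on the worst error (Chebyshev-like).
assert np.abs(e).max() ** 10 / lp_power(e, 10) > 0.99

# For p = 0.2 the small components contribute almost as much as the
# large one, so minimization drives them toward exactly zero (sparsity).
contributions = np.abs(e) ** 0.2 / lp_power(e, 0.2)
assert contributions.min() > 0.2
```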

The Lp Norm Approximation

The IRLS (iterative reweighted least squares) algorithm allows an iterative algorithm to be built from the analytical solutions of the weighted least squares problem, with an iterative reweighting that converges to the optimal l_p approximation [10], [35].

The Overdetermined System with more Equations than Unknowns

If one poses the l_p approximation problem for solving an overdetermined set of equations (case 2 from Chapter 3), it comes from defining the equation error vector

$$e = A x - b$$
(13)

and minimizing the p-norm defined in Equation 11 or, equivalently, Equation 12, neither of which we can minimize directly. However, we do have formulas [9] for the minimum of the weighted squared error

$$||We||_2^2 = \sum_n w_n^2 |e_n|^2$$
(14)

one of which is

$$x = [A^T W^T W A]^{-1} A^T W^T W b$$
(15)

where W is a diagonal matrix of the error weights, w_n.

Iteratively Reweighted Least Squares (IRLS)

To find the minimum l_p approximate solution, we propose the iterative reweighted least squares (IRLS) error algorithm, which starts with unity weighting, W = I, solves for an initial x with Equation 15, calculates a new error from Equation 13, which is then used to set a new weighting matrix W:

$$W = \operatorname{diag}\!\left( |e_n|^{(p-2)/2} \right)$$
(16)

to be used in the next iteration of Equation 15. Using this, we find a new solution x and repeat until convergence (if it happens!). To converge, the iteration should be a contraction map converging to a fixed point, which is the solution. A simple Matlab program that implements this algorithm is:

% m-file IRLS0.m to find the optimal solution to Ax=b
%  minimizing the L_p norm ||Ax-b||_p, using IRLS.
%  csb 11/10/2012
function x = IRLS0(A,b,p,KK)
if nargin < 4, KK = 10; end;
x = pinv(A)*b;                            % Initial L_2 solution
E = [];
for k = 1:KK                              % Iterate
   e  = A*x - b;                          % Error vector
   w  = abs(e).^((p-2)/2);                % Error weights for IRLS
   W  = diag(w/sum(w));                   % Normalize weight matrix
   WA = W*A;                              % Apply weights
   x  = (WA'*WA)\(WA'*W)*b;               % Weighted L_2 solution
   ee = norm(e,p);  E = [E ee];           % Error at each iteration
end
plot(E)


Program 1.
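For readers working outside Matlab, the same iteration can be sketched in Python/NumPy (an illustrative translation of Program 1, not part of the original module; the test problem is arbitrary):

```python
import numpy as np

def irls0(A, b, p, KK=10):
    """Basic IRLS for min ||Ax - b||_p (a sketch of Program 1)."""
    x = np.linalg.pinv(A) @ b                 # initial L_2 solution
    for _ in range(KK):
        e = A @ x - b                         # error vector
        w = np.abs(e) ** ((p - 2) / 2)        # IRLS error weights
        W = np.diag(w / w.sum())              # normalized weight matrix
        WA = W @ A                            # apply weights
        x = np.linalg.solve(WA.T @ WA, WA.T @ W @ b)  # weighted L_2 solution
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
p = 2.5                                       # inside the reliable range 1.5 < p < 3
x = irls0(A, b, p)

# The l_p error should not exceed that of the initial l_2 solution
x2 = np.linalg.pinv(A) @ b
assert np.linalg.norm(A @ x - b, p) <= np.linalg.norm(A @ x2 - b, p) + 1e-9
```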

This core idea has been repeatedly proposed and developed in different application areas over the past 50 years, with varying success [10], [35], [3]. Used in this basic form, it reliably converges for 1.5 < p < 3. In 1970 [29], a modification was made to partially update the solution at each iteration using

$$x^{(k)} = q\, \hat{x}^{(k)} + (1-q)\, x^{(k-1)}$$
(17)

where x̂ is the new weighted least squares solution of Equation 15, which is used to only partially update the previous value x^(k-1). The first application of this partial update optimized the value of q at each iteration to give more robust convergence, but it slowed the total algorithm considerably.

A second improvement used a specific update factor of

$$q = \frac{1}{p-1}$$
(18)

to significantly increase the speed of convergence. With this particular factor, the algorithm becomes a form of Newton's method, which has quadratic convergence [28], [7], but initial convergence was less robust.

A third modification applied homotopy [6], [43], [41], [45], [28], [23] by starting with a value of p equal to 2 and increasing it each iteration (or every few iterations) until it reaches the desired value or, in the case of p < 2, decreasing it. This improved the initial performance of the algorithm and significantly increased both the range of p that allows convergence and the speed of calculation. Some of the history and details can be found applied to digital filter design in [3], [7].

A Matlab program that implements these ideas applied to our pseudoinverse problem with more equations than unknowns is:

% m-file IRLS1.m to find the optimal solution to Ax=b
%  minimizing the L_p norm ||Ax-b||_p, using IRLS.
%  Newton iterative update of solution, x, for  M > N.
%  For 2<p<infty, use homotopy parameter K = 1.01 to 2
%  For 0<p<2, use K = approx 0.7 - 0.9
%  csb 10/20/2012
function x = IRLS1(A,b,p,K,KK)
if nargin < 5, KK = 10; end;
if nargin < 4, K  = 2;  end;
if nargin < 3, p  = 10; end;
pk = 2;                                   % Initial homotopy value
x  = pinv(A)*b;                           % Initial L_2 solution
E = [];
for k = 1:KK                              % Iterate
   if p >= 2, pk = min([p, K*pk]);        % Homotopy change of p
   else pk = max([p, K*pk]); end
   e  = A*x - b;                          % Error vector
   w  = abs(e).^((pk-2)/2);               % Error weights for IRLS
   W  = diag(w/sum(w));                   % Normalize weight matrix
   WA = W*A;                              % Apply weights
   x1 = (WA'*WA)\(WA'*W)*b;               % Weighted L_2 solution
   q  = 1/(pk-1);                         % Newton's parameter
   if p > 2, x = q*x1 + (1-q)*x; nn = p;  % Partial update for p > 2
   else x = x1; nn = 2; end               % No partial update for p < 2
   ee = norm(e,nn);  E = [E ee];          % Error at each iteration
end
plot(E)


Program 2.
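The homotopy and Newton-update ideas of Program 2 can likewise be sketched in Python/NumPy (an illustrative translation, not part of the original module; the homotopy parameter K and the test problem are arbitrary choices within the ranges suggested in the Matlab comments):

```python
import numpy as np

def irls1(A, b, p, K=1.5, KK=20):
    """IRLS with homotopy on p and Newton partial update (sketch of Program 2)."""
    pk = 2.0                                   # homotopy starts at p = 2
    x = np.linalg.pinv(A) @ b                  # initial L_2 solution
    for _ in range(KK):
        pk = min(p, K * pk) if p >= 2 else max(p, K * pk)  # homotopy change of p
        e = A @ x - b                          # error vector
        w = np.abs(e) ** ((pk - 2) / 2)        # IRLS error weights
        W = np.diag(w / w.sum())               # normalized weight matrix
        WA = W @ A
        x1 = np.linalg.solve(WA.T @ WA, WA.T @ W @ b)  # weighted L_2 solution
        q = 1 / (pk - 1)                       # Newton update factor, Equation 18
        x = q * x1 + (1 - q) * x if p > 2 else x1      # partial update for p > 2
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x10 = irls1(A, b, p=10)                        # well outside the basic range
x2 = np.linalg.pinv(A) @ b
assert np.linalg.norm(A @ x10 - b, 10) < np.linalg.norm(A @ x2 - b, 10)
```

With the homotopy and Newton update, a value such as p = 10, for which the basic iteration of Program 1 typically diverges, converges to a lower l_10 error than the starting l_2 solution.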

This can be modified to use different values of p in different bands of equations, or to apply weighting only when the error exceeds a certain threshold, to achieve a constrained LS approximation [3], [7], [53].

This is presented as applied to the overdetermined system (Case 2a and 2b) but can also be applied to other cases. A particularly important application of this section is to the design of digital filters.

The Underdetermined System with more Unknowns than Equations

If one poses the l_p approximation problem for solving an underdetermined set of equations (case 3 from Chapter 3), it comes from defining the solution norm as

$$||x||_p = \left( \sum_n |x(n)|^p \right)^{1/p}$$
(19)

and finding the x that minimizes this p-norm while satisfying Ax = b.

It has been shown that this is equivalent to solving a least weighted norm problem for specific weights:

$$||x||_p = \left( \sum_n w(n)^2 \, |x(n)|^2 \right)^{1/2}$$
(20)

The development follows the same arguments as in the previous section but using the formula [44], [9]

$$x = [W^T W]^{-1} A^T \left[ A \, [W^T W]^{-1} A^T \right]^{-1} b$$
(21)

with the weights, w(n), being the diagonal of the matrix W, in the iterative algorithm to give the minimum weighted solution norm in the same way that Equation 15 gives the minimum weighted equation error.

A Matlab program that implements these ideas applied to our pseudoinverse problem with more unknowns than equations is:

% m-file IRLS2.m to find the optimal solution to Ax=b
%  minimizing the L_p norm ||x||_p, using IRLS.
%  Newton iterative update of solution, x, for  M < N.
%  For 2<p<infty, use homotopy parameter K = 1.01 to 2
%  For 0<p<2, use K = approx 0.7 to 0.9
%  csb 10/20/2012
function x = IRLS2(A,b,p,K,KK)
if nargin < 5, KK = 10;  end;
if nargin < 4, K  = .8;  end;
if nargin < 3, p  = 1.1; end;
pk = 2;                                   % Initial homotopy value
x  = pinv(A)*b;                           % Initial L_2 solution
E = [];
for k = 1:KK
   if p >= 2, pk = min([p, K*pk]);        % Homotopy update of p
   else pk = max([p, K*pk]); end
   W  = diag(abs(x).^((2-pk)/2)+0.00001); % Norm weights for IRLS
   AW = A*W;                              % Apply new weights
   x1 = W*AW'*((AW*AW')\b);               % Weighted L_2 solution
   q  = 1/(pk-1);                         % Newton's parameter
   if p >= 2, x = q*x1 + (1-q)*x; nn = p; % Newton's partial update for p > 2
   else x = x1; nn = 1; end               % No partial update for p < 2
   ee = norm(x,nn);  E = [E ee];          % Norm at each iteration
end;
plot(E)


Program 3.
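A hypothetical Python/NumPy sketch of Program 3 (illustrative translation, arbitrary test problem) shows that the iterates continue to satisfy Ax = b exactly while the weights reshape the solution norm:

```python
import numpy as np

def irls2(A, b, p=1.1, K=0.8, KK=20):
    """IRLS minimum l_p norm solution of underdetermined Ax = b (sketch of Program 3)."""
    pk = 2.0                                    # initial homotopy value
    x = np.linalg.pinv(A) @ b                   # initial minimum l_2 norm solution
    for _ in range(KK):
        pk = min(p, K * pk) if p >= 2 else max(p, K * pk)  # homotopy update of p
        W = np.diag(np.abs(x) ** ((2 - pk) / 2) + 1e-5)    # solution-norm weights
        AW = A @ W                              # apply new weights
        x1 = W @ AW.T @ np.linalg.solve(AW @ AW.T, b)      # weighted l_2 solution
        q = 1 / (pk - 1)                        # Newton's parameter
        x = q * x1 + (1 - q) * x if p >= 2 else x1
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 8))                 # more unknowns than equations
b = rng.standard_normal(3)
x = irls2(A, b, p=1.1)
assert np.allclose(A @ x, b, atol=1e-6)         # equations still satisfied exactly
# For p near 1 the solution tends to concentrate its energy in fewer
# components than the minimum l_2 norm (pinv) solution.
```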

This approach is useful in sparse signal processing and for frame representation. Our work was originally done in the context of filter design but others have done similar things in sparsity analysis [27], [20], [57].

The methods developed here [3], [6] are based on what is called an iterative reweighted least squares (IRLS) error algorithm [48], [39], [11], and they can solve certain FIR filter design problems that neither the Remez exchange algorithm nor analytical L_2 methods can.

History

The idea of using an IRLS algorithm to achieve a Chebyshev or L_∞ approximation was first developed by Lawson [30] and extended to L_p by Rice and Usow [40], [38]. The basic IRLS method for L_p was given by Karlovitz [29] and extended by Chalmers, et al. [17], Bani and Chalmers [8], and Watson [56]. Independently, Fletcher, Grant and Hebden [24] developed a similar form of IRLS based on Newton's method, and Kahng [28] did likewise as an extension of Lawson's algorithm. Others analyzed and extended this work [23], [34], [11], [56]. Special analysis has been made for 1 ≤ p < 2 by [51], [55], [39], [31], [34], [48], [58] and for p = ∞ by [24], [8], [39], [33], [2], [32]. Relations to the Remez exchange algorithm [18], [37] were suggested by [8], to homotopy [45] by [42], and to Karmarkar's linear programming algorithm [47] by [39], [46]. Applications of Lawson's algorithm to complex Chebyshev approximation in FIR filter design have been made in [25], [19], [22], [50], [4] and to 2-D filter design in [16], [5]. Application to array design can be found in [49] and to statistics in [11].

The papers [7], [53] unify and extend the IRLS techniques and apply them to the design of FIR digital filters. They develop a framework that relates all of the above-referenced work and shows it to be variations of a basic IRLS method modified so as to control convergence. In particular, [7] generalizes the work of Rice and Usow on Lawson's algorithm and explains why its asymptotic convergence is slow.

The main contribution of these recent papers is a new robust IRLS method for filter design [3], [6], [53] that combines an improved convergence acceleration scheme with a Newton-based method. This gives a very efficient and versatile filter design algorithm that performs significantly better than the Rice-Usow-Lawson algorithm or any of the other IRLS schemes. Both the initial and asymptotic convergence behavior of the new algorithm are examined [53], and the reason for occasional slow convergence of this and all other IRLS methods is presented.

We then show that the new IRLS method allows the use of p as a function of frequency to achieve different error criteria in the pass and stop bands of a filter. Therefore, this algorithm can be applied to solve the constrained L_p approximation problem. Initial results of applications to the complex and two-dimensional filter design problems are also presented.

When used for the range 1 < p < 2, the new IRLS algorithm works well in outlier suppression in data and in finding sparsity for data compression.

Although the traditional IRLS methods were sometimes slower than competing approaches, the results of this paper and the availability of fast modern desktop computers make them practical now and allow exploitation of their greater flexibility and generality.

Convergence

Both theory and experience indicate there are different convergence problems connected with several different ranges and values of p. In the range 2 ≤ p < 3, virtually all methods converge [26], [11]. In the range 3 ≤ p < ∞, the basic algorithm diverges and the various methods discussed in this paper must be used. As p becomes large compared to 2, the weights carry a larger contribution to the total minimization than the underlying least squared error minimization, the improvement at each iteration becomes smaller, and the likelihood of divergence becomes larger. For p = ∞, the optimal approximation solution to Equation 2 is unique, but the weights in Equation 16 that give that solution are not. In other words, different matrices W give the same solution to Equation 2 but will have different convergence properties. This allows certain alterations to the weights to improve convergence without harming the optimality of the results [32]. In the range 1 < p < 2, both convergence and numerical problems exist since, in contrast to p > 2, the IRLS iterations are undoing what the underlying least squares is doing. In particular, the weights near frequencies with small errors become very large. Indeed, if the error happens to be zero, the weight becomes infinite because of the negative exponent in Equation 16. For p = 1 the solution to the optimization problem is not even unique. The various algorithms presented above are based on schemes to address these problems.

For some applications, the convergence is not robust. In these cases, the use of an adaptive step size can considerably improve convergence, but at the expense of speed [54]. For large and small p, the weight functions w_i are not unique even though the solution is unique [53]. This allows flexibility that prevents the occurrence of very large or very small weights [32].

One of the most versatile modifications of the basic IRLS algorithm is the use of different powers p for different equations in Equation 2. For filters, this allows an l_2 approximation in the passband and, simultaneously, an l_∞ approximation in the stopband [7]. Another powerful modification is to use an error measure such as

$$q = \sum_i \left( |e_i|^2 + K |e_i|^{100} \right)$$
(22)

in the IRLS algorithm. The first term dominates for |e_i| < 1 and the second term dominates for |e_i| > 1, which gives a constrained least squares approximation, a very useful result [7], [15], [14], [53]. There are other modifications which can be made for specific applications where no alternative methods exist, such as 2-D approximation [5] and cases where the error measure is not a norm, nor even convex. Obviously, there are huge convergence problems, but it is a good starting point. Still another application is to the design of recursive or IIR filters. It is possible to formulate these filters as a matrix multiplication and pose an iterative algorithm to find an optimal solution [52], [53], [36].
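The dominance behavior of the polynomial error measure in Equation 22 is easy to verify numerically (a hypothetical Python sketch with illustrative values, not part of the original module):

```python
import numpy as np

def q_measure(e, K=1.0):
    """Polynomial error measure of Equation 22 (illustrative implementation)."""
    e = np.abs(np.asarray(e, dtype=float))
    return np.sum(e ** 2 + K * e ** 100)

# For errors below 1 the quadratic term dominates, so q behaves like
# an ordinary squared error ...
e_small = [0.5, 0.3]
assert np.isclose(q_measure(e_small), 0.5**2 + 0.3**2, rtol=1e-6)

# ... while any error above 1 is penalized enormously, acting like a
# soft constraint that keeps every |e_i| near or below 1.
assert q_measure([1.1]) > 1e3
```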

Summary

The individual ideas used in an IRLS program are:

• The basic IRLS algorithm of Equation 12, Equation 14, Equation 15, Equation 16, and Program 1 [30].
• Adding a partial update to the iterations as in Equation 17, with optimization of the amount of update [29].
• Choosing an update to give a Newton's method as in Equation 18 and Program 2 [24], [28].
• Applying homotopy by increasing p each iteration as in Program 2 [28], [23], [7].
• Adaptive step size on partial updating and/or homotopy [54], [53].
• Allowing p to vary over the equations, allowing a mixture of approximation criteria [4], [7].
• Using a polynomial error measure as in Equation 22 to give a near constrained least squares approximation [7].
• Using weighting envelopes to stabilize convergence for very large p [53].
• Applying it to complex and 2-D approximation, to Chebyshev and l_1 approximation, to filter design, to outlier suppression in regression, and to creating sparse solutions [4], [5].

Iteratively Reweighted Least Squares (IRLS) approximation is a powerful and flexible tool for many engineering and applied problems.

References

1. Albert, Arthur. (1972). Regression and the Moore-Penrose Pseudoinverse. New York: Academic Press.
2. Algazi, V. Ralph and Suk, Minsoo. (1975, December). On the Frequency Weighted Least-Square Design of Finite Duration Filters. IEEE Transactions on Circuits and Systems, 22(12), 943–953.
3. Burrus, C. S. and Barreto, J. A. (1992, May). Least -Power Error Design of FIR Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (Vol. 2, p. 545–548). ISCAS-92, San Diego, CA
4. Barreto, J. A. and Burrus, C. S. (1994, April 19–22). Complex Approximation using Iterative Reweighted Least Squares for FIR Digital Filters. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. III:545–548). IEEE ICASSP-94, Adelaide, Australia
5. Barreto, J. A. and Burrus, C. S. (1994, November 13-16). Iterative Reweighted Least Squares and the Design of Two-Dimensional FIR Digital Filters. In Proceedings of the IEEE International Conference on Image Processing. (Vol. I, p. I:775–779). IEEE ICIP-94, Austin, Texas
6. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1992, September 13–16). Reweighted Least Squares Design of FIR Filters. In Paper Summaries for the IEEE Signal Processing Society's Fifth DSP Workshop. (p. 3.1.1). Starved Rock Lodge, Utica, IL
7. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1994, November). Iterative Reweighted Least Squares Design of FIR Filters. IEEE Transactions on Signal Processing, 42(11), 2926–2936.
8. Bani, Mohamed S. and Chalmers, Bruce L. (1984). Best Approximation in via Iterative Hilbert Space Procedures. Journal of Approximation Theory, 42, 173–180.
9. Ben-Israel, Adi and Greville, T. N. E. (1974). Generalized Inverses: Theory and Applications. [Second edition, Springer, 2003]. New York: Wiley and Sons.
10. Björck, Åke. (1996). Numerical Methods for Least Squares Problems. Philadelphia: Blaisdell, Dover, SIAM.
11. Byrd, Richard H. and Pyne, David A. (1979, June). Convergence of the Iteratively Reweighted Least Squares Algorithm for Robust Regression. (313). Technical report. Dept. of Mathematical Sciences, The Johns Hopkins University.
12. Burrus, C. Sidney. (2008). Digital Signal Processing and Digital Filter Design. [http://cnx.org/content/col10598/latest/]. Connexions, cnx.org.
13. Burrus, C. S. (1994, 2012). Vector Space Methods in Signal and System Theory. [also in Connexions: http://cnx.org/content/col10636/latest/]. Unpublished notes. Houston, TX: ECE Dept., Rice University.
14. Burrus, C. Sidney. (1998, September 8-11). Constrained Least Squares Design of FIR Filters using Iterative Reweighted Least Squares. In Proceedings of EUSIPCO-98. (p. 281–282). Rhodes, Greece
15. Burrus, C. Sidney. (1998, August 9-12). Convergence of Constrained Optimal Design of FIR Filters using Iterative Reweighted Least Squares. In Proceedings of the IEEE DSP Workshop - 1998. [paper #113]. Bryce Canyon, Utah
16. Chi, Chong-Yung and Chiou, Shyu-Li. (1992, May). A New Self–Initiated WLS Approximation Method for the Design of Two–dimensional Equiripple FIR Digital Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (p. 1436–1439). San Diego, CA
17. Chalmers, B. L. and Egger, A. G. and Taylor, G. D. (1983). Convex Approximation. Journal of Approximation Theory, 37, 326–334.
18. Cheney, E. W. (1966). Introduction to Approximation Theory. [Second edition, AMS Chelsea, 2000]. New York: McGraw-Hill.
19. Chit, Nassim N. and Mason, John S. (1991, January). Complex Chebyshev Approximation for FIR Digital Filters. IEEE Transactions on Signal Processing, 39(1), 49–54.
20. Daubechies, Ingrid and DeVore, Ronald and Fornasier, Massimo and Gunturk, C. Sinan. (2010, January). Iteratively Reweighted Least Squares Minimization for Sparse Recovery. Communications on Pure and Applied Mathematics, 63(1), 1–38.
21. Donoho, David L. (2006, June). For Most Large Underdetermined Systems of Linear Equations the Minimal -norm Solution is also the Sparsest Solution. [http://stats.stanford.edu/ donoho/Reports/2004/l1l0EquivCorrected.pdf]. Communications on Pure and Applied Mathematics, 59(6), 797–829.
22. de Queiroz Cobra, Daniel Távora. (1986, January). FIR Digital Filter Design by the Method of Successive Projections. Masters thesis. MIT.
23. Ekblom, Hakan. (1973). Calculation of Linear Best –Approximations. BIT, 13(3), 292–300.
24. Fletcher, R. and Grant, J. A. and Hebden, M. D. (1971). The Calculation of Linear Best Approximations. Computer Journal, 14, 276–279.
25. Fisher, John D. (1973). Design of Finite Impulse Response Digital Filters. Ph. D. Thesis. Rice University.
26. Fletcher, R. (1987). Practical Methods of Optimization. (Second). New York: John Wiley & Sons.
27. Gorodnitsky, Irina F. and Rao, Bhaskar D. (1997, March). Sparse Signal Reconstruction from Limited Data using FOCUSS: a Re-weighted Minimum Norm Algorithm. IEEE Transactions on Signal Processing, 45(3),
28. Kahng, S. W. (1972, April). Best Approximation. Mathematics of Computation, 26(118), 505–508.
29. Karlovitz, L. A. (1970). Construction of Nearest Points in the , even, and norms. I. Journal of Approximation Theory, 3, 123–127.
30. Lawson, C. L. (1961). Contribution to the Theory of Linear Least Maximum Approximations. Ph. D. Thesis. University of California at Los Angeles.
31. Li, Yuying. (1991, June). A Globally Convergent Method for Problems. (TR 91-1212). Technical report. Ithaca, NY 14853-7501: Computer Science Department, Cornell University.
32. Lim, Y. C. and Lee, J. -H. and Chen, C. K. and Yang, R. -H. (1992, March). A Weighted Least Squares Algorithm for Quasi–Equripple FIR and IIR Digital Filter Design. IEEE Transactions on Signal Processing, 40(3), 551–558.
33. Mason, J. S. and Chit, N. N. (1987). New Approach to the Design of FIR Digital Filters. IEE Proceedings, Part G, 134(4), 167–180.
34. Merle, G. and Späth, H. (1974). Computational Experiences with Discrete –Approximation. Computing, 12, 315–321.
35. Osborne, M. R. (1985). Finite Algorithms in Optimization and Data Analysis. New York: Wiley.
36. Parks, T. W. and Burrus, C. S. (1987). Digital Filter Design. [most is in Connexions: http://cnx.org/content/col10598/latest/]. New York: John Wiley & Sons.
37. Powell, M. J. D. (1981). Approximation Theory and Methods. Cambridge, England: Cambridge University Press.
38. Rice, J. R. (1969). The Approximation of Functions. (Vol. 2). Reading, MA: Addison-Wesley.
39. Ruzinsky, S. A. and Olsen, E. T. (1989, February). and Minimization Via a Variant of Karmarkar's Algorithm. IEEE Transactions on ASSP, 37(2), 245–253.
40. Rice, John R. and Usow, Karl H. (1968). The Lawson Algorithm and Extensions. Math. Comp., 22, 118–127.
41. Stonick, Virginia L. and Alexander, S. T. (1991, Feburary). A Relationship Between the Recursive Least Squares Update and Homotopy Continuation Methods. IEEE Transactions on Signal Processing, 39(2), 530–532.
42. Stonick, Virginia L. and Alexander, S. T. (1991). A Relationship Between the Recursive Least Squares Update and Homotopy Continuation Methods. [preprint].
43. Stonick, Virginia L. and Alexander, S. T. (1992, September). Globally Optimal Rational Approximation Using Homotopy Continuation Methods. IEEE Transactions on Signal Processing, 40(9), 2358–2361.
44. Selesnick, Ivan W. (2009). Least Squares Solutions to Linear System of Equations. [to be published in Connexions].
45. Sieradski, Allan J. (1992). An Introduction to Topology and Homotopy. Boston: PWS–Kent.
46. Stone, Richard E. and Tovey, Craig A. (1991, June). The Simplex and Projective Scaling Algorithms as Iteratively Reweighted Least Squares Methods. SIAM Review, 33(2), 220–237.
47. Strang, Gilbert. (1986). Introduction to Applied Mathematics. Wellesley, MA: Wellesley-Cambridge Press.
48. Schroeder, Jim and Yarlagadda, Rao and Hershey, John. (1991, August). Normed Minimization with Applications to Linear Predictive Modeling for Sinusoidal Frequency Estimation. Signal Processing, 24(2), 193–216.
49. Tseng, Ching–Yih and Griffiths, Lloyd J. (1992, November). A Simple Algorithm to Achieve Desired Patterns for Arbitrary Arrays. IEEE Transaction on Signal Processing, 40(11), 2737–2746.
50. Tseng, Ching-Yih. (1992, May). A Numerical Algorithm for Complex Chebyshev FIR Filter Design. In Proceedings of the IEEE International Symposium on Circuits and Systems. (Vol. 3, p. 549–552). San Diego, CA
51. Usow, Karl H. (1967). On Approximation 1: Computation for Continuous Functions and Continuous Dependence. SIAM Journal on Numerical Analysis, 4(1), 70–88.
52. Vargas, Ricardo Arturo. (2008, May). Iterative Design of Digital Filters. [in Connexions: http://cnx.org/content/col11383/latest/]. Ph. D. Thesis. Rice University, ECE Department, Houston, TX 77251.
53. Vargas, Ricardo and Burrus, C. Sidney. (2012). Iterative Design of Digital Filters. [arXiv:1207.4526v1 [cs.IT] July 19, 2012]. arXiv.
54. Vargas, Ricardo A. and Burrus, C. Sidney. (1999, March 14–19). Adaptive Iterataive Reweighted Least Squares Design of Filters. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (Vol. III). IEEE ICASSP-99, Phoenix
55. Watson, G. A. (1981). An Algorithm for Linear Approximation of Continuous Functions. IMA Journal of Numerical Analysis, 1, 157–167.
56. Watson, G. A. (1988, October). Convex Approximation. Journal of Approximation Theory, 55(1), 1–11.
57. Yagle, Andrew E. (2008). Non-Iterative Reweighted-Norm Least-Squares Local Minimization for Sparse Solutions to Underdetermined Linear Systems of Equations. [preprint at http://web.eecs.umich.edu/ aey/sparse/sparse11.pdf].
58. Yu, Wen–Shyong and Fong, I–Kong and Chang, Kuang–Chiung. (1992, August). An –Approximation Based Method for Synthesizing FIR Filters. IEEE Transactions on Circuits and Systems II, 39(8), 578–561.
