# The IRLS algorithm

Module by: Ricardo Vargas

Iterative Reweighted Least Squares (IRLS) algorithms define a family of iterative methods that solve an otherwise complicated numerical optimization problem by breaking it into a series of weighted least squares (WLS) problems, each one easier in principle than the original problem. At iteration $i$ one must solve a weighted least squares problem of the form

$$\min_{h_i} \left\| w(h_{i-1})\, f(h_i) \right\|_2 \tag{1}$$

where $w(\cdot)$ is a specific weighting function and $f(\cdot)$ is a function of the filter. Obviously a large class of problems could be written in this form (large in the sense that both $w(\cdot)$ and $f(\cdot)$ can be defined arbitrarily). One case worth considering is the linear approximation problem defined by

$$\min_{h} \left\| D - Ch \right\| \tag{2}$$

where $D \in \mathbb{R}^{M}$ and $C \in \mathbb{R}^{M \times N}$ are given, and $\|\cdot\|$ is an arbitrary measure. One could write $f(\cdot)$ in Equation 1 as

$$f(h) = D - Ch \tag{3}$$

and attempt to find a suitable function $w(\cdot)$ to minimize the arbitrary norm $\|\cdot\|$ in Equation 2. In vector notation, at iteration $i$ one can write Equation 1 as follows,

$$\min_{h_i} \left\| w(h_{i-1}) \left( D - C h_i \right) \right\|_2 \tag{4}$$

One can show (see the Appendix for proof) that the solution of Equation 4 for any iteration is given by

$$h = \left( C^T W C \right)^{-1} C^T W D \tag{5}$$

with $W = \operatorname{diag}(w^2)$ (where $w$ is the weighting vector). To solve problem Equation 4 above, one could use the following algorithm:

1. Set initial weights $w_0$.
2. At the $i$-th iteration find $h_i = (C^T W_{i-1} C)^{-1} C^T W_{i-1} D$.
3. Update $W_i$ as a function of $h_i$ (i.e. $W_i = W(h_i)$).
4. Iterate steps 2 and 3 until a certain stopping criterion is reached.

This method will be referred to in this work as the basic IRLS algorithm.
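As a concrete sketch, the four steps above can be written in Python/NumPy for the $l_p$ linear approximation problem of Equation 2. Everything here beyond those steps is an illustrative assumption, not part of the original text: in particular the weight update $w = |D - Ch_i|^{(p-2)/2}$ (so that $W = \operatorname{diag}(w^2) = \operatorname{diag}(|D - Ch_i|^{p-2})$, the classical choice for $l_p$ approximation), the stopping rule, and all names.

```python
import numpy as np

def basic_irls(C, D, p=2.5, iters=50, eps=1e-8):
    """Sketch of the basic IRLS algorithm for min_h ||D - C h||_p.

    Assumed (not from the original text): the classical l_p weight
    rule w = |residual|^((p-2)/2), so W = diag(w^2) = diag(|r|^(p-2)).
    """
    M, N = C.shape
    w = np.ones(M)                        # step 1: initial weights w_0
    h = np.zeros(N)
    for _ in range(iters):
        W = np.diag(w**2)                 # W = diag(w^2)
        # step 2: h_i = (C^T W C)^{-1} C^T W D, solved as a linear system
        h_new = np.linalg.solve(C.T @ W @ C, C.T @ W @ D)
        # step 4: stop once the iterates settle
        if np.linalg.norm(h_new - h) < eps * (1 + np.linalg.norm(h)):
            h = h_new
            break
        h = h_new
        # step 3: update the weights as a function of h_i
        r = np.abs(D - C @ h)
        w = np.maximum(r, eps) ** ((p - 2) / 2)
    return h
```

Note that with $p = 2$ the weights stay constant and the iteration reduces to ordinary least squares after one step; for general $p$ this plain iteration is not guaranteed to converge, which is exactly the fixed-point issue discussed below.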

An IRLS algorithm is said to converge if the algorithm produces a sequence of points $h_i$ such that

$$\lim_{i \to \infty} h_i = h^* \tag{6}$$

where $h^*$ is a fixed point defined by

$$h^* = \left( C^T W^* C \right)^{-1} C^T W^* D \tag{7}$$

with $W^* = W(h^*)$. In principle one would want $h^*$ to coincide with the solution $h$ of the original problem in Equation 2.

IRLS algorithms have been used in different areas of science and engineering. Their attractiveness stems from the idea of simplifying a difficult problem into a sequence of weighted least squares problems that can be solved efficiently with programs such as Matlab or LAPACK. However (as mentioned above) success is determined by the existence of a weighting function that leads to a fixed point that happens to be at least a local solution of the problem in question. This might not be the case for any given problem. In the case of $l_p$ optimization one can justify the use of IRLS methods by means of the following theorem:

## Theorem 1: Weight Function Existence theorem

Let $\{g_k(\omega)\}$ be a Chebyshev set and define

$$H(h;\omega) = \sum_{k=0}^{M} h_k\, g_k(\omega) \tag{8}$$

where $h = (h_0, h_1, \ldots, h_M)^T$. Then, given $D(\omega)$ continuous on $[0,\pi]$ and $1 < q < p$, the following are identical sets:

• $\{h \mid H(h;\omega)$ is a best weighted $L_p$ approximation to $D(\omega)$ on $[0,\pi]\}$.
• $\{h \mid H(h;\omega)$ is a best weighted $L_q$ approximation to $D(\omega)$ on $[0,\pi]\}$.

Furthermore, the theorem above is valid if the interval $[0,\pi]$ is replaced by a finite point set $\Omega \subset [0,\pi]$ (this theorem is credited to Motzkin and Walsh [2], [1]).

Theorem 1 is fundamental since it establishes that weights exist so that the solution of an $L_p$ problem is indeed the solution of a weighted $L_q$ problem (for arbitrary $p, q > 1$). Furthermore the results of Theorem 1 remain valid for $l_p$ and $l_q$. For our purposes, this theorem establishes the existence of a weighting function so that the solution of a weighted $l_2$ problem is indeed the solution of an $l_p$ problem; the challenge then is to find the corresponding weighting function. The remainder of this document explores this task for a number of relevant filter design problems and provides a consistent computational framework.
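The fixed-point property of Equation 7, and the existence of suitable weights asserted by Theorem 1, can be illustrated numerically. The sketch below is an assumption-laden example, not from the original text: it uses a small random problem, $p = 2.5$, and the classical $l_p$ weight rule $w = |D - Ch|^{(p-2)/2}$. It runs the basic iteration until it settles and then verifies that a single weighted $l_2$ solve with the converged weights $W^* = W(h^*)$ reproduces $h^*$:

```python
import numpy as np

# Illustrative check of the fixed point in Equation 7 (assumed setup:
# random data, p = 2.5, weight rule w = |residual|^((p-2)/2)).
rng = np.random.default_rng(0)
C = rng.standard_normal((20, 4))
D = rng.standard_normal(20)
p = 2.5

h = np.linalg.lstsq(C, D, rcond=None)[0]      # start from the l_2 solution
for _ in range(200):                          # basic IRLS iteration
    w = np.maximum(np.abs(D - C @ h), 1e-10) ** ((p - 2) / 2)
    W = np.diag(w**2)
    h = np.linalg.solve(C.T @ W @ C, C.T @ W @ D)

# At a fixed point, one more weighted l_2 solve with W* = W(h*)
# reproduces h*, i.e. h* = (C^T W* C)^{-1} C^T W* D.
w_star = np.maximum(np.abs(D - C @ h), 1e-10) ** ((p - 2) / 2)
W_star = np.diag(w_star**2)
h_check = np.linalg.solve(C.T @ W_star @ C, C.T @ W_star @ D)
print(np.allclose(h, h_check, atol=1e-6))
```

Here the weighted $l_2$ solution with the fixed-point weights is the $l_{2.5}$ solution, exactly the relationship that Theorem 1 guarantees weights exist for; finding such weights in general is the task taken up in the remainder of the document.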

## References

1. Motzkin, T. S. and Walsh, J. L. (1959, May). Polynomials of Best Approximation on a Real Finite Point Set I. Trans. American Mathematical Society, 91(2), 231-245.
2. Walsh, J. L. and Motzkin, T. S. (1959, October). Polynomials of Best Approximation on an Interval. Proceedings of the National Academy of Sciences, USA, 45, 1523-1528.
