# The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients

Module by: C. Sidney Burrus

We will now look more closely at the basic scaling function and wavelet to see when they exist and what their properties are [5], [21], [13], [15], [14], [19], [4]. Using the same approach that is used in the theory of differential equations, we will examine the properties of φ(t) by considering the equation of which it is a solution. The basic recursion (Reference) that comes from the multiresolution formulation is

$$\varphi(t) = \sum_n h(n)\,\sqrt{2}\,\varphi(2t - n)$$
(1)

with h(n) being the scaling coefficients and φ(t) being the scaling function which satisfies this equation, which is sometimes called the refinement equation, the dilation equation, or the multiresolution analysis (MRA) equation.
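As a quick numerical sanity check of Equation 1 (not part of the original module), the following NumPy sketch uses the Haar coefficients h(n) = {1/√2, 1/√2}, which appear later in this chapter, together with the Haar scaling function coded directly:

```python
import numpy as np

# Haar scaling coefficients: sum h(n) = sqrt(2)
h = np.array([1.0, 1.0]) / np.sqrt(2)

def phi_haar(t):
    # Haar scaling function: 1 on [0, 1), 0 elsewhere
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

# check phi(t) = sum_n h(n) sqrt(2) phi(2t - n) on a grid
t = np.linspace(-1, 2, 601)
rhs = sum(h[n] * np.sqrt(2) * phi_haar(2 * t - n) for n in range(len(h)))
assert np.allclose(phi_haar(t), rhs)
```

The same check works for any candidate φ(t) whose values can be evaluated, which makes it a convenient unit test when experimenting with scaling functions.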

In order to state the properties accurately, some care has to be taken in specifying just what classes of functions are being considered or are allowed. We will attempt to walk a fine line to present enough detail to be correct but not so much as to obscure the main ideas and results. A few of these ideas were presented in Section: Signal Spaces and a few more will be given in the next section. A more complete discussion can be found in [33], in the introductions to [34], [36], [11], or in any book on functional analysis.

## Tools and Definitions

### Signal Classes

There are three classes of signals that we will be using. The most basic is called L²(ℝ), which contains all functions which have a finite, well-defined integral of the square: f ∈ L² implies ∫|f(t)|² dt = E < ∞. This class is important because it is a generalization of normal Euclidean geometry and because it gives a simple representation of the energy in a signal.

The next most basic class is L¹(ℝ), which requires a finite integral of the absolute value of the function: f ∈ L¹ implies ∫|f(t)| dt = K < ∞. This class is important because one may interchange infinite summations and integrations with these functions, although not necessarily with L² functions. These classes of function spaces can be generalized to those with ∫|f(t)|^p dt = K < ∞ and designated L^p.

A more general class of signals than any L^p space contains what are called distributions. These are generalized functions which are not defined by their having “values" but by the value of an “inner product" with a normal function. An example of a distribution would be the Dirac delta function δ(t), which is defined by the property: f(T) = ∫ f(t) δ(t−T) dt.

Another detail to keep in mind is that the integrals used in these definitions are Lebesgue integrals which are somewhat more general than the basic Riemann integral. The value of a Lebesgue integral is not affected by values of the function over any countable set of values of its argument (or, more generally, a set of measure zero). A function defined as one on the rationals and zero on the irrationals would have a zero Lebesgue integral. As a result of this, properties derived using measure theory and Lebesgue integrals are sometimes said to be true “almost everywhere," meaning they may not be true over a set of measure zero.

Many of these ideas of function spaces, distributions, Lebesgue measure, etc. came out of the early study of Fourier series and transforms. It is interesting that they are also important in the theory of wavelets. As with Fourier theory, one can often ignore the signal space classes and can use distributions as if they were functions, but there are some cases where these ideas are crucial. For an introductory reading of this book or of the literature, one can usually skip over the signal space designation or assume Riemann integrals. However, when a contradiction or paradox seems to arise, its resolution will probably require these details.

### Fourier Transforms

We will need the Fourier transform of φ(t) which, if it exists, is defined to be

$$\Phi(\omega) = \int_{-\infty}^{\infty} \varphi(t)\, e^{-i\omega t}\, dt$$
(2)

and the discrete-time Fourier transform (DTFT) [23] of h(n), defined to be

$$H(\omega) = \sum_{n=-\infty}^{\infty} h(n)\, e^{-i\omega n}$$
(3)

where i = √−1 and n is an integer (n ∈ ℤ). If convolution with h(n) is viewed as a digital filter, as defined in Section: Analysis - From Fine Scale to Coarse Scale, then the DTFT of h(n) is the filter's frequency response [23], [24], which is 2π-periodic.
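The DTFT is easy to evaluate numerically. This NumPy sketch (an illustration, not from the original module) checks the Haar coefficients against two values that recur below, H(0) = √2 and H(π) = 0:

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar scaling coefficients

def dtft(h, w):
    # H(w) = sum_n h(n) exp(-i n w) for a finite-support h starting at n = 0
    n = np.arange(len(h))
    return np.sum(h[:, None] * np.exp(-1j * np.outer(n, w)), axis=0)

w = np.array([0.0, np.pi])
H = dtft(h, w)
assert np.isclose(H[0], np.sqrt(2))   # H(0) = sqrt(2): DC gain
assert np.isclose(abs(H[1]), 0.0)     # H(pi) = 0: lowpass behavior
```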

If Φ(ω) exists, the defining recursive equation Equation 1 becomes

$$\Phi(\omega) = \frac{1}{\sqrt{2}}\, H(\omega/2)\, \Phi(\omega/2)$$
(4)

which after iteration becomes

$$\Phi(\omega) = \prod_{k=1}^{\infty} \frac{1}{\sqrt{2}}\, H\!\left(\frac{\omega}{2^k}\right)\, \Phi(0).$$
(5)

if ∑_n h(n) = √2 and Φ(0) is well defined. This may be a distribution or it may be a smooth function depending on H(ω) and, therefore, h(n) [33], [4]. Although Equation 1 and Equation 5 are equivalent term-by-term, the requirement of Φ(0) being well defined and the nature of the limits in the appropriate function spaces may make one preferable over the other. Notice how the zeros of H(ω) determine the zeros of Φ(ω).
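The truncated infinite product of Equation 5 can be computed directly. This sketch (NumPy assumed) uses the Haar coefficients, for which the box-function scaling function has the known closed-form transform Φ(ω) = e^(−iω/2) sin(ω/2)/(ω/2):

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar

def H(w):
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * n * w))

def Phi(w, K=30):
    # truncated product prod_{k=1}^{K} H(w / 2^k) / sqrt(2), taking Phi(0) = 1
    val = 1.0 + 0j
    for k in range(1, K + 1):
        val *= H(w / 2**k) / np.sqrt(2)
    return val

# compare with the closed-form transform of the unit box at a test frequency
w0 = 2.0
exact = np.exp(-1j * w0 / 2) * np.sin(w0 / 2) / (w0 / 2)
assert np.isclose(Phi(w0), exact, atol=1e-6)
```

With 30 factors the product has already converged to the exact transform to well below single-precision accuracy; the convergence rate is what the later theorems are really about.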

### Refinement and Transition Matrices

There are two matrices that are particularly important in determining the properties of wavelet systems. The first is the refinement matrix M, which is obtained from the basic recursion equation Equation 1 by evaluating φ(t) at the integers [22], [6], [7], [29], [30]. This looks like a convolution matrix with the even (or odd) rows removed. Two particular submatrices that are used later in Section 22 to evaluate φ(t) on the dyadic rationals are illustrated for N = 6 by

$$\sqrt{2}\begin{pmatrix} h_0 & 0 & 0 & 0 & 0 & 0 \\ h_2 & h_1 & h_0 & 0 & 0 & 0 \\ h_4 & h_3 & h_2 & h_1 & h_0 & 0 \\ 0 & h_5 & h_4 & h_3 & h_2 & h_1 \\ 0 & 0 & 0 & h_5 & h_4 & h_3 \\ 0 & 0 & 0 & 0 & 0 & h_5 \end{pmatrix} \begin{pmatrix} \varphi_0 \\ \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \varphi_4 \\ \varphi_5 \end{pmatrix} = \begin{pmatrix} \varphi_0 \\ \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \varphi_4 \\ \varphi_5 \end{pmatrix}$$
(6)

which we write in matrix form as

$$M_0\, \underline{\varphi} = \underline{\varphi}$$
(7)

with M₀ being the 6×6 matrix of the h(n) and φ̲ being the 6×1 vector of integer samples of φ(t). In other words, the vector φ̲ with entries φ(k) is the eigenvector of M₀ for an eigenvalue of unity.

The second submatrix is a shifted version illustrated by

$$\sqrt{2}\begin{pmatrix} h_1 & h_0 & 0 & 0 & 0 & 0 \\ h_3 & h_2 & h_1 & h_0 & 0 & 0 \\ h_5 & h_4 & h_3 & h_2 & h_1 & h_0 \\ 0 & 0 & h_5 & h_4 & h_3 & h_2 \\ 0 & 0 & 0 & 0 & h_5 & h_4 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \varphi_0 \\ \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \varphi_4 \\ \varphi_5 \end{pmatrix} = \begin{pmatrix} \varphi_{1/2} \\ \varphi_{3/2} \\ \varphi_{5/2} \\ \varphi_{7/2} \\ \varphi_{9/2} \\ \varphi_{11/2} \end{pmatrix}$$
(8)

with the matrix being denoted M₁. The general refinement matrix M is the infinite matrix of which M₀ and M₁ are partitions. If the matrix H is the convolution matrix for h(n), we can denote the M matrix by [↓2]H to indicate the down-sampled convolution matrix H. Clearly, for φ(t) to be defined on the dyadic rationals, M₀ must have a unity eigenvalue.
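As a numerical sketch of the unity-eigenvalue property (not part of the original text), the following builds M₀ entrywise as M₀[i, j] = √2 h(2i − j) for the length-4 Daubechies coefficients and recovers the well-known integer samples φ(1) = (1+√3)/2, φ(2) = (1−√3)/2:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4

N = len(h)
# refinement matrix M0[i, j] = sqrt(2) h(2i - j), i, j = 0..N-1
M0 = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        k = 2 * i - j
        if 0 <= k < N:
            M0[i, j] = np.sqrt(2) * h[k]

# the eigenvector for the unity eigenvalue gives phi on the integers
vals, vecs = np.linalg.eig(M0)
idx = np.argmin(abs(vals - 1))
phi = np.real(vecs[:, idx])
phi /= phi.sum()        # normalize so that sum_k phi(k) = 1

assert np.isclose(vals[idx], 1.0)
assert np.allclose(phi, [0, (1 + s3) / 2, (1 - s3) / 2, 0], atol=1e-8)
```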

A third, less obvious but perhaps more important, matrix is called the transition matrix T; it is built up from the autocorrelation matrix of h(n). The transition matrix is constructed by

$$T = [\downarrow 2]\, H H^{T}.$$
(9)

This matrix (sometimes called the Lawton matrix) was used by Lawton (who originally called it the Wavelet-Galerkin matrix) [14] to derive necessary and sufficient conditions for an orthogonal wavelet basis. As we will see later in this chapter, its eigenvalues are also important in determining the properties of φ(t)φ(t) and the associated wavelet system.

## Necessary Conditions

Theorem 1 If φ(t) ∈ L¹ is a solution to the basic recursion equation Equation 1 and if ∫φ(t) dt ≠ 0, then

$$\sum_n h(n) = \sqrt{2}.$$
(10)

The proof of this theorem requires only an interchange in the order of a summation and integration (allowed in L¹) but no assumption of orthogonality of the basis functions or any other properties of φ(t) other than a nonzero integral. The proof of this theorem and several of the others stated here are contained in Appendix A.

This theorem shows that, unlike linear constant coefficient differential equations, not just any set of coefficients will support a solution. The coefficients must satisfy the linear equation Equation 10. This is the weakest condition on the h(n)h(n).

Theorem 2 If φ(t) is an L¹ solution to the basic recursion equation Equation 1 with ∫φ(t) dt = 1, and

$$\sum_{\ell} \varphi(t - \ell) = \sum_{\ell} \varphi(\ell) = 1$$
(11)

with Φ(π + 2πk) ≠ 0 for some k, then

$$\sum_n h(2n) = \sum_n h(2n+1)$$
(12)

where Equation 11 may have to be a distributional sum. Conversely, if Equation 12 is satisfied, then Equation 11 is true.

Equation 12 is called the fundamental condition, and it is weaker than requiring orthogonality but stronger than Equation 10. It is simply a result of requiring that the equations resulting from evaluating Equation 1 on the integers be consistent. Equation 11 is called a partition of unity (or the Strang condition or the Schoenberg condition).

A similar theorem by Cavaretta, Dahmen and Micchelli [1] and by Jia [10] states that if φ ∈ L^p and the integer translates of φ(t) form a Riesz basis for the space they span, then ∑_n h(2n) = ∑_n h(2n+1).

Theorem 3 If φ(t) is an L² ∩ L¹ solution to Equation 1 and if the integer translates of φ(t) are orthogonal as defined by

$$\int \varphi(t)\, \varphi(t-k)\, dt = E\, \delta(k) = \begin{cases} E & \text{if } k = 0 \\ 0 & \text{otherwise,} \end{cases}$$
(13)

then

$$\sum_n h(n)\, h(n-2k) = \delta(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{otherwise.} \end{cases}$$
(14)

Notice that this does not depend on a particular normalization of φ(t).

If φ(t) is normalized by dividing by the square root of its energy E, then the integer translates of φ(t) are orthonormal, as defined by

$$\int \varphi(t)\, \varphi(t-k)\, dt = \delta(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{otherwise.} \end{cases}$$
(15)

This theorem shows that in order for the solutions of Equation 1 to be orthogonal under integer translation, it is necessary that the coefficients of the recursive equation be orthogonal themselves after decimating or downsampling by two. If φ(t) and/or h(n) are complex functions, complex conjugation must be used in Equation 13, Equation 14, and Equation 15.

Coefficients h(n) that satisfy Equation 14 are called a quadrature mirror filter (QMF) or conjugate mirror filter (CMF), and the condition Equation 14 is called the quadratic condition for obvious reasons.

Corollary 1 Under the assumptions of Theorem Paragraph 30, the norm of h(n) is automatically unity:

$$\sum_n |h(n)|^2 = 1$$
(16)

Not only must the sum of h(n) equal √2, but for orthogonality of the solution, the sum of the squares of h(n) must be one, both independent of any normalization of φ(t). This unity normalization of h(n) is the result of the √2 term in Equation 1.

Corollary 2 Under the assumptions of Theorem Paragraph 30,

$$\sum_n h(2n) = \sum_n h(2n+1) = \frac{1}{\sqrt{2}}$$
(17)

This result is derived in the Appendix by showing that not only must the sum of h(n) equal √2, but for orthogonality of the solution, the individual sums of the even and odd terms in h(n) must be 1/√2, independent of any normalization of φ(t). Although stated here as necessary for orthogonality, the results hold under weaker non-orthogonal conditions as is stated in Theorem Paragraph 25.

Theorem 4 If φ(t) has compact support on 0 ≤ t ≤ N−1 and if the φ(t−k) are linearly independent, then h(n) also has compact support over 0 ≤ n ≤ N−1:

$$h(n) = 0 \qquad \text{for } n < 0 \text{ and } n > N-1$$
(18)

Thus N is the length of the h(n) sequence.

If the translates are not independent (or some equivalent restriction), one can have h(n) with infinite support while φ(t) has finite support [26].

These theorems state that if φ(t) has compact support and is orthogonal over integer translates, N/2 bilinear or quadratic equations Equation 14 must be satisfied in addition to the one linear equation Equation 10. The support or length of h(n) is N, which must be an even number. The number of degrees of freedom in choosing these N coefficients is then N/2 − 1. This freedom will be used in the design of a wavelet system developed in Chapter: Regularity, Moments, and Wavelet System Design and elsewhere.
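The necessary conditions collected so far (Equations 10, 14, 16, and 17) are easy to verify numerically for a candidate coefficient set. A NumPy sketch, using the length-4 Daubechies coefficients purely as a test sequence:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4

assert np.isclose(h.sum(), np.sqrt(2))              # Eq. 10: sum h(n) = sqrt(2)
assert np.isclose(h[0::2].sum(), 1 / np.sqrt(2))    # Eq. 17: even-index sum
assert np.isclose(h[1::2].sum(), 1 / np.sqrt(2))    # Eq. 17: odd-index sum
assert np.isclose((h * h).sum(), 1.0)               # Eq. 16: unit norm
# Eq. 14: double-shift orthogonality sum_n h(n) h(n - 2k) = delta(k)
for k in range(1, len(h) // 2):
    assert np.isclose(np.dot(h[2 * k:], h[:-2 * k]), 0.0)
```

With N = 4, there is N/2 − 1 = 1 remaining degree of freedom, which Daubechies spends on a vanishing moment.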

## Frequency Domain Necessary Conditions

We turn next to frequency domain versions of the necessary conditions for the existence of φ(t). Some care must be taken in specifying the space of functions that the Fourier transform operates on and the space that the transform resides in. We do not go into those details in this book but the reader can consult [33].

Theorem 5 If φ(t) is an L¹ solution of the basic recursion equation Equation 1, then the following equivalent conditions must be true:

$$\sum_n h(n) = H(0) = \sqrt{2}$$
(19)

This follows directly from Equation 3 and states that the basic existence requirement Equation 10 is equivalent to requiring that the FIR filter's frequency response at DC (ω = 0) be √2.

Theorem 6 For h(n) ∈ ℓ¹,

$$\sum_n h(2n) = \sum_n h(2n+1) \qquad \text{if and only if} \qquad H(\pi) = 0$$
(20)

which says the frequency response of the FIR filter with impulse response h(n) is zero at the so-called Nyquist frequency (ω = π). This follows from Equation 4 and (Reference), and supports the fact that h(n) is a lowpass digital filter. This is also equivalent to the M and T matrices having a unity eigenvalue.

Theorem 7 If φ(t) is a solution to Equation 1 in L² ∩ L¹ and Φ(ω) is a solution of Equation 4 such that Φ(0) ≠ 0, then

$$\int \varphi(t)\, \varphi(t-k)\, dt = \delta(k) \qquad \text{if and only if} \qquad \sum_{\ell} |\Phi(\omega + 2\pi\ell)|^2 = 1$$
(21)

This is a frequency domain equivalent to the time domain definition of orthogonality of the scaling function [21], [20], [4]. It allows applying the orthonormal conditions to frequency domain arguments. It also gives insight into just what time domain orthogonality requires in the frequency domain.

Theorem 8 For any h(n) ∈ ℓ¹,

$$\sum_n h(n)\, h(n-2k) = \delta(k) \qquad \text{if and only if} \qquad |H(\omega)|^2 + |H(\omega+\pi)|^2 = 2$$
(22)

This theorem [13], [9], [4] gives equivalent time and frequency domain conditions on the scaling coefficients and states that the orthogonality requirement Equation 14 is equivalent to the FIR filter with h(n) as coefficients being what is called a Quadrature Mirror Filter (QMF) [27]. Note that Equation 22, Equation 19, and Equation 20 require |H(π/2)| = 1 and that the filter is a “half band" filter.
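The half-band identity of Equation 22 can likewise be checked numerically on a frequency grid. A NumPy sketch with the length-4 Daubechies coefficients:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4

def H(w):
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * n * w))

w = np.linspace(0, np.pi, 101)
power = np.array([abs(H(x))**2 + abs(H(x + np.pi))**2 for x in w])
assert np.allclose(power, 2.0)              # Eq. 22: half-band/QMF condition
assert np.isclose(abs(H(np.pi / 2)), 1.0)   # |H(pi/2)| = 1 follows
```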

## Sufficient Conditions

The above are necessary conditions for φ(t) to exist and the following are sufficient. There are many forms these could and do take but we present the following as examples and give references for more detail [5], [21], [13], [15], [14], [19], [4], [18], [17], [16].

Theorem 9 If ∑_n h(n) = √2 and h(n) has finite support or decays fast enough so that ∑_n |h(n)| (1+|n|)^ε < ∞ for some ε > 0, then a unique (within a scalar multiple) φ(t) (perhaps a distribution) exists that satisfies Equation 1 and whose distributional Fourier transform satisfies Equation 5.

This [5], [13], [12] can be obtained in the frequency domain by considering the convergence of Equation 5. It has recently been obtained using a much more powerful approach in the time domain by Lawton [16].

Because this theorem uses the weakest possible condition, the results are weak. The scaling function obtained from only requiring ∑_n h(n) = √2 may be so poorly behaved as to be impossible to calculate or use. The worst cases will not support a multiresolution analysis or provide a useful expansion system.

Theorem 10 If ∑_n h(2n) = ∑_n h(2n+1) = 1/√2 and h(n) has finite support or decays fast enough so that ∑_n |h(n)| (1+|n|)^ε < ∞ for some ε > 0, then a φ(t) (perhaps a distribution) that satisfies Equation 1 exists, is unique, and is well-defined on the dyadic rationals. In addition, the distributional sum

$$\sum_k \varphi(t - k) = 1$$
(23)

holds.

This condition, called the fundamental condition [29], [18], gives a slightly tighter result than Theorem Paragraph 54. While the scaling function still may be a distribution not in L¹ or L², it is better behaved than required by Theorem Paragraph 54 in being defined on the dense set of dyadic rationals. This theorem is equivalent to requiring H(π) = 0, which from the product formula Equation 5 gives a better behaved Φ(ω). It also guarantees a unity eigenvalue for M and T but not that other eigenvalues do not exist with magnitudes larger than one.

The next several theorems use the transition matrix T defined in Equation 9, which is a down-sampled autocorrelation matrix.

Theorem 11 If the transition matrix T has all its eigenvalues on or inside the unit circle of the complex plane, and any eigenvalues on the unit circle, even if multiple, have a complete set of eigenvectors, then φ(t) ∈ L².

If T has unity magnitude eigenvalues, the successive approximation algorithm (cascade algorithm) Equation 71 converges weakly to φ(t) ∈ L² [12].

Theorem 12 If the transition matrix T has a simple unity eigenvalue with all other eigenvalues having magnitude less than one, then φ(t) ∈ L².

Here the successive approximation algorithm (cascade algorithm) converges strongly to φ(t) ∈ L². This is developed in [29].

If in addition to requiring Equation 10, we require the quadratic coefficient conditions Equation 14, a tighter result occurs which gives φ(t) ∈ L²(ℝ) and a multiresolution tight frame system.

Theorem 13 (Lawton) If h(n) has finite support or decays fast enough, and if ∑_n h(n) = √2 and ∑_n h(n) h(n−2k) = δ(k), then φ(t) ∈ L²(ℝ) exists and generates a wavelet system that is a tight frame in L².

This important result from Lawton [13], [15] gives the sufficient conditions for φ(t) to exist and generate wavelet tight frames. The proof uses an iteration of the basic recursion equation Equation 1 as a successive approximation similar to Picard's method for differential equations. Indeed, this method is used to calculate φ(t) in Section 22. It is interesting to note that the scaling function may be very rough, even “fractal" in nature. This may be desirable if the signal being analyzed is also rough.
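A minimal sketch of this successive approximation (the cascade algorithm of Section 22), implemented in NumPy by upsampling by two and convolving with h(n) at each iteration:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4

# cascade algorithm: phi_{j+1}(t) = sum_n h(n) sqrt(2) phi_j(2t - n)
c = np.array([1.0])                 # coarse starting approximation
for _ in range(10):
    up = np.zeros(2 * len(c) - 1)
    up[::2] = c                     # upsample by two
    c = np.sqrt(2) * np.convolve(up, h)

# c now approximates phi on a dyadic grid of spacing 2^-10;
# its Riemann sum should equal the normalized integral of phi
dt = 2.0 ** -10
assert np.isclose(dt * c.sum(), 1.0, atol=1e-6)
```

Plotting `c` reveals the rough, “fractal"-looking Daubechies scaling function the text alludes to.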

Although this theorem guarantees that φ(t) generates a tight frame, in most practical situations, the resulting system is an orthonormal basis [15]. The conditions in the following theorems are generally satisfied.

Theorem 14 (Lawton) If h(n) has compact support, ∑_n h(n) = √2, and ∑_n h(n) h(n−2k) = δ(k), then the φ(t−k) form an orthogonal set if and only if the transition matrix T has a simple unity eigenvalue.

This powerful result allows a simple evaluation of h(n) to see if it can support a wavelet expansion system [13], [15], [14]. An equivalent result using the frequency response of the FIR digital filter formed from h(n) was given by Cohen.
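Lawton's eigenvalue test is straightforward to carry out numerically. The sketch below builds the transition matrix entrywise as T[k, l] = a(2k − l) from the autocorrelation a(j) of h(n) — one concrete realization of Equation 9, assuming this indexing convention — and inspects its eigenvalues:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4
N = len(h)

# autocorrelation a(j) = sum_m h(m + j) h(m), stored at index j + (N - 1)
a = np.correlate(h, h, mode='full')

# transition (Lawton) matrix T[k, l] = a(2k - l), with k, l = -(N-2)..(N-2)
K = N - 2
idx = range(-K, K + 1)
T = np.array([[a[2 * k - l + (N - 1)] if abs(2 * k - l) <= N - 1 else 0.0
               for l in idx] for k in idx])

vals = np.sort(abs(np.linalg.eigvals(T)))[::-1]
# Lawton's condition: a simple unity eigenvalue, all others strictly inside
assert np.isclose(vals[0], 1.0)
assert vals[1] < 1.0
```

For these coefficients the unity eigenvalue is simple, confirming that the Daubechies-4 translates are orthonormal rather than merely a tight frame.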

Theorem 15 (Cohen) If H(ω) is the DTFT of h(n) with compact support, ∑_n h(n) = √2, and ∑_n h(n) h(n−2k) = δ(k), and if H(ω) ≠ 0 for −π/3 ≤ ω ≤ π/3, then the φ(t−k) satisfying Equation 1 generate an orthonormal basis in L².

A slightly weaker version of this frequency domain sufficient condition is easier to prove [21], [20] and to extend to the M-band case for the case of no zeros allowed in −π/2 ≤ ω ≤ π/2 [4]. There are other sufficient conditions that, together with those in Theorem Paragraph 66, will guarantee an orthonormal basis. Daubechies' vanishing moments will guarantee an orthogonal basis.

Theorems (Reference), (Reference), and (Reference) show that h(n) has the characteristics of a lowpass FIR digital filter. We will later see that the FIR filter made up of the wavelet coefficients is a highpass filter, and the filter bank view developed in Chapter: Filter Banks and the Discrete Wavelet Transform and Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets further explains this view.

Theorem 16 If h(n) has finite support and if φ(t) ∈ L¹, then φ(t) has finite support [12].

If φ(t) is not restricted to L¹, it may have infinite support even if h(n) has finite support.

These theorems give a good picture of the relationship between the recursive equation coefficients h(n) and the scaling function φ(t) as a solution of Equation 1. More properties and characteristics are presented in Section 15.

### Wavelet System Design

One of the main purposes for presenting the rather theoretical results of this chapter is to set up the conditions for designing wavelet systems. One approach is to require the minimum sufficient conditions as constraints in an optimization or approximation, then use the remaining degrees of freedom to choose h(n) that will give the best signal representation, decomposition, or compression. In some cases, the sufficient conditions are overly restrictive and it is worthwhile to use the necessary conditions and then check the design to see if it is satisfactory. In many cases, wavelet systems are designed by a frequency domain design of H(ω) using digital filter design techniques with wavelet based constraints.

## The Wavelet

Although this chapter is primarily about the scaling function, some basic wavelet properties are included here.

Theorem 17 If the scaling coefficients h(n) satisfy the conditions for existence and orthogonality of the scaling function and the wavelet is defined by (Reference), then the integer translates of this wavelet span W₀, the orthogonal complement of V₀, both being in V₁; i.e., the wavelet is orthogonal to the scaling function at the same scale,

$$\int \varphi(t-n)\, \psi(t-m)\, dt = 0,$$
(24)

if and only if the coefficients h₁(n) are given by

$$h_1(n) = \pm(-1)^n\, h(N - n)$$
(25)

where N is an arbitrary odd integer chosen to conveniently position h₁(n).

An outline proof is in Appendix A.
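A numerical sketch of Equation 25 and the resulting orthogonality, taking N = 3 and the length-4 Daubechies coefficients as the example sequence (NumPy assumed):

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daubechies-4

# wavelet coefficients h1(n) = (-1)^n h(N - n) with N = 3 (odd)
n = np.arange(len(h))
h1 = (-1.0) ** n * h[::-1]          # h(3 - n) just reverses the sequence

assert np.isclose(h1.sum(), 0.0)                    # H1(0) = 0
assert np.isclose(np.dot(h, h1), 0.0)               # cross-orthogonality, k = 0
assert np.isclose(np.dot(h[2:], h1[:-2]), 0.0)      # shifted by 2k, k = 1
assert np.isclose(np.dot(h[:-2], h1[2:]), 0.0)      # shifted by 2k, k = -1
```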

Theorem 18 If the scaling coefficients h(n) satisfy the conditions for existence and orthogonality of the scaling function and the wavelet is defined by (Reference), then the integer translates of this wavelet span W₀, the orthogonal complement of V₀, both being in V₁; i.e., the wavelet is orthogonal to the scaling function at the same scale. If

$$\int \varphi(t-n)\, \psi(t-m)\, dt = 0$$
(26)

then

$$\sum_n h(n)\, h_1(n - 2k) = 0$$
(27)

which is derived in Appendix A, (Reference).

The translation orthogonality and scaling function-wavelet orthogonality conditions in Equation 14 and Equation 27 can be combined to give

$$\sum_n h_\ell(n)\, h_m(n-2k) = \delta(k)\, \delta(\ell - m)$$
(28)

if h₀(n) is defined as h(n).

Theorem 19 If h(n) satisfies the linear and quadratic admissibility conditions of Equation 10 and Equation 14, then

$$\sum_n h_1(n) = H_1(0) = 0,$$
(29)
$$|H_1(\omega)| = |H(\omega + \pi)|,$$
(30)
$$|H(\omega)|^2 + |H_1(\omega)|^2 = 2,$$
(31)

and

$$\int \psi(t)\, dt = 0.$$
(32)

The wavelet is usually scaled so that its norm is unity.

The results in this section have not included the effects of integer shifts of the scaling function or wavelet coefficients h(n) or h₁(n). In a particular situation, these sequences may be shifted to make the corresponding FIR filter causal.

## Alternate Normalizations

An alternate normalization of the scaling coefficients is used by some authors. In some ways, it is a cleaner form than that used here, but it does not state the basic recursion as a normalized expansion, and it does not result in a unity norm for h(n). The alternate normalization uses the basic multiresolution recursive equation with no √2:

$$\varphi(t) = \sum_n h(n)\, \varphi(2t - n).$$
(33)

Some of the relationships and results using this normalization are:

$$\sum_n h(n) = 2 \qquad \sum_n |h(n)|^2 = 2 \qquad \sum_n h(n)\, h(n-2k) = 2\,\delta(k)$$
$$\sum_n h(2n) = \sum_n h(2n+1) = 1 \qquad H(0) = 2 \qquad |H(\omega)|^2 + |H(\omega+\pi)|^2 = 4$$
(34)

A still different normalization occasionally used has a factor of 2 in Equation 33 rather than √2 or unity, giving ∑_n h(n) = 1. Other obvious modifications of the results in other places in this book can be worked out. Take care in using scaling coefficients h(n) from the literature, as some must be multiplied or divided by √2 to be consistent with this book.
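A small sketch of the bookkeeping between normalizations; g(n) = √2 h(n) converts the √2-normalized coefficients used in this book to the alternate form of Equation 34:

```python
import numpy as np

s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # sum = sqrt(2)

g = np.sqrt(2) * h       # alternate normalization: sum g(n) = 2
assert np.isclose(g.sum(), 2.0)
assert np.isclose((g * g).sum(), 2.0)
assert np.isclose(np.dot(g[2:], g[:-2]), 0.0)   # sum g(n) g(n-2k) = 2 delta(k)
assert np.isclose(g[0::2].sum(), 1.0)
assert np.isclose(g[1::2].sum(), 1.0)
```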

## Example Scaling Functions and Wavelets

Several of the modern wavelets had never been seen or described before the 1980s. This section looks at some of the most common wavelet systems.

### Haar Wavelets

The oldest and most basic of the wavelet systems having most of our desired properties is constructed from the Haar basis functions. If one chooses a length N = 2 scaling coefficient set, after satisfying the necessary conditions in Equation 10 and Equation 14, there are no remaining degrees of freedom. The unique (within normalization) coefficients are

$$h(n) = \left\{ \frac{1}{\sqrt{2}},\ \frac{1}{\sqrt{2}} \right\}$$
(35)

and the resulting normalized scaling function is

$$\varphi(t) = \begin{cases} 1 & \text{for } 0 < t < 1 \\ 0 & \text{otherwise.} \end{cases}$$
(36)

The wavelet is, therefore,

$$\psi(t) = \begin{cases} 1 & \text{for } 0 < t < 1/2 \\ -1 & \text{for } 1/2 < t < 1 \\ 0 & \text{otherwise.} \end{cases}$$
(37)

Their satisfying the multiresolution equation (Reference) is illustrated in Figure: Haar and Triangle Scaling Functions. Haar showed that translates and scalings of these functions form an orthonormal basis for L²(ℝ). We can easily see that the Haar functions are also a compact support orthonormal wavelet system that satisfies Daubechies' conditions. Although they are as regular as can be achieved for N = 2, they are not even continuous. The orthogonality and nesting of spanned subspaces are easily seen because the translates have no overlap in the time domain. It is instructive to apply the various properties of Section 9 and Section 15 to these functions and see how they are satisfied. They are illustrated in the example in Figure: Haar Scaling Functions and Wavelets that Span Vj through Figure: Haar Function Approximation in Vj.
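Because the Haar functions are piecewise constant, their orthogonality relations can be confirmed by direct numerical quadrature on [0, 1):

```python
import numpy as np

t = np.linspace(0, 1, 200001)[:-1]          # uniform grid on [0, 1)
dt = t[1] - t[0]
phi = np.ones_like(t)                        # Haar scaling function
psi = np.where(t < 0.5, 1.0, -1.0)           # Haar wavelet

assert np.isclose((phi * psi).sum() * dt, 0.0, atol=1e-4)   # orthogonality
assert np.isclose((phi * phi).sum() * dt, 1.0, atol=1e-4)   # unit energy
assert np.isclose((psi * psi).sum() * dt, 1.0, atol=1e-4)   # unit energy
```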

### Sinc Wavelets

The next best known (perhaps the best known) basis set is that formed by the sinc functions. The sinc functions are usually presented in the context of the Shannon sampling theorem, but we can look at translates of the sinc function as an orthonormal set of basis functions (or, in some cases, a tight frame). They, likewise, usually form an orthonormal wavelet system satisfying the various required conditions of a multiresolution system.

The sinc function is defined as

$$\operatorname{sinc}(t) = \frac{\sin(t)}{t}$$
(38)

where sinc(0) = 1. This is a very versatile and useful function because its Fourier transform is a simple rectangle function and the Fourier transform of a rectangle function is a sinc function. In order to be a scaling function, the sinc must satisfy (Reference) as

$$\operatorname{sinc}(Kt) = \sum_n h(n)\, \operatorname{sinc}(2Kt - Kn)$$
(39)

for the appropriate scaling coefficients h(n) and some K. If we construct the scaling function from the generalized sampling function as presented in (Reference), the sinc function becomes

$$\operatorname{sinc}(Kt) = \sum_n \operatorname{sinc}(KTn)\, \operatorname{sinc}\!\left(\frac{\pi R}{T}\, t - \pi R\, n\right).$$
(40)

In order for these two equations to be true, the sampling period must be T = 1/2 and the parameter

$$K = \pi R$$
(41)

which gives the scaling coefficients as

$$h(n) = \operatorname{sinc}\!\left(\frac{\pi}{2}\, R\, n\right).$$
(42)

We see that φ(t) = sinc(Kt) is a scaling function with infinite support and its corresponding scaling coefficients are samples of a sinc function. If R = 1, then K = π and the scaling function generates an orthogonal wavelet system. For R > 1, the wavelet system is a tight frame, the expansion set is not orthogonal or a basis, and R is the amount of redundancy in the system as discussed in this chapter. For the orthogonal sinc scaling function, the wavelet is simply expressed by

$$\psi(t) = 2\,\varphi(2t) - \varphi(t).$$
(43)

The sinc scaling function and wavelet do not have compact support, but they do illustrate an infinitely differentiable set of functions that result from an infinitely long h(n). The orthogonality and multiresolution characteristics of the orthogonal sinc basis are best seen in the frequency domain where there is no overlap of the spectra. Indeed, the Haar and sinc systems are Fourier duals of each other. The sinc generating scaling function and wavelet are shown in Figure 1.
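The sinc refinement relation Equation 39 is just Shannon reconstruction of sinc(πt) from its samples at t = n/2, and can be checked numerically with a truncated sum (here |n| ≤ 2000, so the check is only approximate):

```python
import numpy as np

def sinc_pi(x):
    # the document's sinc: sin(x)/x with sinc(0) = 1
    # (numpy's np.sinc(z) is sin(pi z)/(pi z), hence the rescaling)
    return np.sinc(x / np.pi)

# orthogonal case R = 1: K = pi, T = 1/2, h(n) = sinc(pi n / 2)
nn = np.arange(-2000, 2001)
h = sinc_pi(np.pi * nn / 2)

# check sinc(K t) = sum_n h(n) sinc(2 K t - K n) on a grid
t = np.linspace(-3, 3, 61)
rhs = sum(hn * sinc_pi(2 * np.pi * t - np.pi * n) for hn, n in zip(h, nn))
assert np.allclose(sinc_pi(np.pi * t), rhs, atol=1e-3)
```

The slow 1/n decay of h(n) is why several thousand terms are needed; it reflects the sinc system's infinite support.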

### Spline and Battle-Lemarié Wavelet Systems

The triangle scaling function illustrated in Figure: Haar and Triangle Scaling Functions is a special case of a more general family of spline scaling functions. The scaling coefficient system h(n) = {1/(2√2), 1/√2, 1/(2√2), 0} gives rise to the piecewise linear, continuous triangle scaling function. This function is a first-order spline, being a concatenation of two first-order polynomials so as to be continuous at the junctions or “knots". A quadratic spline is generated from h = {1/4, 3/4, 3/4, 1/4}/√2 as three sections of second-order polynomials connected to give continuous first-order derivatives at the junctions. The cubic spline is generated from h(n) = √2 {1/16, 1/4, 3/8, 1/4, 1/16}. This generalizes to an arbitrary Nth-order spline with continuous (N−1)th-order derivatives and with compact support of N+1. These functions have excellent mathematical properties, but they are not orthogonal over integer translation. If orthogonalized, their support becomes infinite (but rapidly decaying) and they generate the “Battle-Lemarié wavelet system" [4], [29], [2], [3]. Figure 2 illustrates the first-order spline scaling function, which is the triangle function, along with the second-, third-, and fourth-order spline scaling functions.
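The triangle coefficients can be checked directly against the recursion, since the hat function on [0, 2] has a simple closed form (the trailing zero coefficient is dropped here):

```python
import numpy as np

def hat(t):
    # triangle (first-order spline) scaling function supported on [0, 2]
    return np.maximum(0.0, 1.0 - np.abs(t - 1.0))

h = np.array([1 / (2 * np.sqrt(2)), 1 / np.sqrt(2), 1 / (2 * np.sqrt(2))])
assert np.isclose(h.sum(), np.sqrt(2))        # linear condition, Eq. 10

# check hat(t) = sum_n h(n) sqrt(2) hat(2t - n) on a grid
t = np.linspace(-1, 3, 801)
rhs = sum(h[n] * np.sqrt(2) * hat(2 * t - n) for n in range(3))
assert np.allclose(hat(t), rhs)
```

Note that the quadratic condition Equation 14 fails for these coefficients, which is exactly the statement that splines are not orthogonal over integer translation.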

## Further Properties of the Scaling Function and Wavelet

The scaling function and wavelet have some remarkable properties that should be examined in order to understand wavelet analysis and to gain some intuition for these systems. Likewise, the scaling and wavelet coefficients have important properties that should be considered.

We now look further at the properties of the scaling function and the wavelet in terms of the basic defining equations and restrictions. We also consider the relationship of the scaling function and wavelet to the equation coefficients. A multiplicity or rank of two is used here but the more general multiplicity-M case is easily derived from these (See Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets and Appendix B). Derivations or proofs for some of these properties are included in Appendix B.

The basic recursive equation for the scaling function, defined in Equation 1 as

φ(t) = ∑_n h(n) √2 φ(2t − n),
(44)

is homogeneous, so its solution is unique only within a normalization factor. In most cases, both the scaling function and wavelet are normalized to unit energy or unit norm. In the properties discussed here, we normalize the energy as E = ∫ |φ(t)|² dt = 1. Other normalizations can easily be used if desired.

### General Properties not Requiring Orthogonality

There are several properties that are simply a result of the multiresolution equation Equation 44 and, therefore, hold for orthogonal and biorthogonal systems.

Property 1 The normalization of φ(t) is arbitrary and is given in Equation 13 as E. Here we usually set E = 1 so that the basis functions are orthonormal and coefficients can easily be calculated with inner products.

∫ |φ(t)|² dt = E = 1
(45)

Property 2 Not only can the scaling function be written as a weighted sum of functions in the next higher scale space as stated in the basic recursion equation Equation 44, but it can also be expressed in higher resolution spaces:

φ(t) = ∑_n h^(j)(n) 2^(j/2) φ(2^j t − n)
(46)

where h^(1)(n) = h(n) and for j ≥ 1

h^(j+1)(n) = ∑_k h^(j)(k) h(n − 2k).
(47)
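The recursion in Equation 47 is easy to verify numerically. The following pure-Python sketch (our own illustration, not the Matlab programs of Appendix C; the function name `iterate_h` is hypothetical) builds h^(j) by repeated application of the recursion. For the Haar system, h^(2)(n) must be {1/2, 1/2, 1/2, 1/2}, and in general ∑_n h^(j)(n) = (√2)^j.

```python
import math

def iterate_h(h, j):
    """Compute h^(j)(n) via Equation 47, starting from h^(1) = h:
    h^(j+1)(n) = sum_k h^(j)(k) h(n - 2k)."""
    hj = list(h)
    for _ in range(j - 1):
        length = 2 * (len(hj) - 1) + len(h)   # support grows each stage
        nxt = [0.0] * length
        for k, hk in enumerate(hj):
            for m, hm in enumerate(h):
                nxt[2 * k + m] += hk * hm     # term h^(j)(k) h(n - 2k)
        hj = nxt
    return hj
```

For the Haar coefficients this gives h^(2)(n) = 1/2 for n = 0, ..., 3, consistent with φ(t) = ∑_n h^(2)(n) 2 φ(4t − n) for the box function.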

Property 3 A formula for the sum of dyadic samples of φ(t)φ(t)

∑_k φ(k/2^J) = 2^J
(48)

Property 4 A “partition of unity" follows from Equation 48 for J=0J=0

∑_m φ(m) = 1
(49)

Property 5 A generalized partition of unity holds for all t if φ(t) is continuous

∑_m φ(t − m) = 1
(50)
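For the Haar system, where φ(t) is the characteristic function of [0, 1), Properties 3–5 can be checked directly. A small pure-Python sketch (illustrative only; not the Appendix C programs):

```python
def haar_phi(t):
    """Haar scaling function: the characteristic function of [0, 1)."""
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def dyadic_sum(J):
    """Left side of Equation 48 for the Haar system."""
    return sum(haar_phi(k / 2**J) for k in range(-8 * 2**J, 8 * 2**J))

def unity_sum(t):
    """Left side of Equation 50: sum over integer translates."""
    return sum(haar_phi(t - m) for m in range(-8, 8))
```

Here `dyadic_sum(J)` returns 2^J because exactly the samples with 0 ≤ k < 2^J fall inside the support, and `unity_sum(t)` returns 1 for any t because exactly one translate covers each point.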

Property 6 A frequency domain statement of the basic recursion equation Equation 44

Φ(ω) = (1/√2) H(ω/2) Φ(ω/2)
(51)

Property 7 The successive-approximation form in the frequency domain is often easier to analyze than the time-domain version in Equation 44. The convergence properties of this infinite product are very important.

Φ(ω) = ∏_{k=1}^{∞} [(1/√2) H(ω/2^k)] Φ(0)
(52)

This formula is derived in Equation 74.

### Properties that Depend on Orthogonality

The following properties depend on the orthogonality of the scaling and wavelet functions.

Property 8 The square of the integral of φ(t) is equal to the integral of the square of φ(t), or A₀² = E.

(∫ φ(t) dt)² = ∫ φ(t)² dt
(53)

Property 9 The integral of the wavelet is necessarily zero

∫ ψ(t) dt = 0
(54)

The norm of the wavelet is usually normalized to one such that ∫ |ψ(t)|² dt = 1.

Property 10 Not only are integer translates of the wavelet orthogonal; different scales are also orthogonal.

∫ 2^(j/2) ψ(2^j t − k) 2^(i/2) ψ(2^i t − ℓ) dt = δ(k − ℓ) δ(j − i)
(55)

where the norm of ψ(t) is one.

Property 11 The scaling function and wavelet are orthogonal over both scale and translation.

∫ 2^(j/2) ψ(2^j t − k) 2^(i/2) φ(2^i t − ℓ) dt = 0
(56)

for all integers i, j, k, and ℓ where j ≥ i.
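Properties 9–11 can be checked numerically for the Haar system, whose scaling function and wavelet are piecewise constant. A sketch (ours, not from Appendix C); a midpoint-rule inner product is exact here, up to rounding, because the integrands are piecewise constant on the integration grid:

```python
import math

def haar_phi(t):
    """Haar scaling function: characteristic function of [0, 1)."""
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def haar_psi(t):
    """Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def inner(f, g, a=-2.0, b=3.0, n=20000):
    """Midpoint-rule approximation of the integral of f(t) g(t)."""
    dt = (b - a) / n
    return sum(f(a + (i + 0.5) * dt) * g(a + (i + 0.5) * dt)
               for i in range(n)) * dt
```

With these, ∫ψ dt = 0 (Property 9), translates and scales of ψ are orthogonal (Property 10), and ψ is orthogonal to φ (Property 11).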

Property 12 A frequency domain statement of the orthogonality requirements in Equation 13. It is also a statement of the equivalence of energy measures in the time and frequency domains, as in Parseval's theorem, which holds for an orthogonal basis set.

∑_k |Φ(ω + 2πk)|² = ∫ |Φ(ω)|² dω = ∫ |φ(t)|² dt = 1
(57)

Property 13 The scaling coefficients can be calculated from the orthogonal or tight frame scaling functions by

h(n) = √2 ∫ φ(t) φ(2t − n) dt.
(58)

Property 14 The wavelet coefficients can be calculated from the orthogonal or tight frame scaling functions by

h₁(n) = √2 ∫ ψ(t) φ(2t − n) dt.
(59)

Derivations of some of these properties can be found in Appendix B. The properties in Equations 1, 10, 14, 53, 51, 52, and 57 are independent of any normalization of φ(t); normalization affects the others. Those in Equations 1, 10, 48, 49, 51, 52, and 57 do not require orthogonality of integer translates of φ(t); those in Equations 14, 16, 17, 22, 20, 53, and 58 do require orthogonality. No properties require compact support. Many of the derivations interchange the order of summations or of summation and integration, and the conditions for those interchanges must be met.

## Parameterization of the Scaling Coefficients

The case where φ(t) and h(n) have compact support is very important. It aids the time-localization properties of the DWT and often reduces the computational requirements of calculating the DWT. If h(n) has compact support, then the filters described in Chapter: Filter Banks and the Discrete Wavelet Transform are simple FIR filters. We have stated that N, the length of the sequence h(n), must be even and that h(n) must satisfy the linear constraint of Equation 10 and the N/2 bilinear constraints of Equation 14. This leaves N/2 − 1 degrees of freedom in choosing h(n) that will still guarantee the existence of φ(t) and a set of essentially orthogonal basis functions generated from φ(t).

### Length-2 Scaling Coefficient Vector

For a length-2 h(n), there are no degrees of freedom left after satisfying the required conditions in Equation 10 and Equation 14. These requirements are

h(0) + h(1) = √2
(60)

and

h²(0) + h²(1) = 1
(61)

which are uniquely satisfied by

h_D2 = {h(0), h(1)} = {1/√2, 1/√2}.
(62)

These are the Haar scaling function coefficients, which are also the length-2 Daubechies coefficients [4] used as an example in Chapter: A multiresolution formulation of Wavelet Systems and discussed later in this book.

### Length-4 Scaling Coefficient Vector

For the length-4 coefficient sequence, there is one degree of freedom or one parameter that gives all the coefficients that satisfy the required conditions:

h(0) + h(1) + h(2) + h(3) = √2,
(63)
h²(0) + h²(1) + h²(2) + h²(3) = 1
(64)

and

h(0) h(2) + h(1) h(3) = 0
(65)

Letting the parameter be the angle αα, the coefficients become

h(0) = (1 − cos(α) + sin(α))/(2√2)
h(1) = (1 + cos(α) + sin(α))/(2√2)
h(2) = (1 + cos(α) − sin(α))/(2√2)
h(3) = (1 − cos(α) − sin(α))/(2√2).
(66)

These equations also give the length-2 Haar coefficients Equation 62 for α = 0, π/2, 3π/2 and a degenerate condition for α = π. We get the Daubechies coefficients (discussed later in this book) for α = π/3. These Daubechies-4 coefficients have a particularly clean form,

h_D4 = { (1+√3)/(4√2), (3+√3)/(4√2), (3−√3)/(4√2), (1−√3)/(4√2) }
(67)
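A short Python sketch of Equation 66 (the function name `h4` is ours) confirms that α = π/3 reproduces h_D4 of Equation 67 and that the parameterized coefficients satisfy Equations 63–65:

```python
import math

def h4(alpha):
    """Length-4 scaling coefficients from the one-parameter
    family of Equation 66."""
    c, s = math.cos(alpha), math.sin(alpha)
    r = 2.0 * math.sqrt(2.0)
    return [(1 - c + s) / r, (1 + c + s) / r,
            (1 + c - s) / r, (1 - c - s) / r]
```

The linear, quadratic, and orthogonality constraints hold for every α, which is exactly what "one degree of freedom" means here.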

### Length-6 Scaling Coefficient Vector

For a length-6 coefficient sequence h(n), the two parameters are defined as α and β and the resulting coefficients are

h(0) = [(1 + cos(α) + sin(α))(1 − cos(β) − sin(β)) + 2 sin(β) cos(α)]/(4√2)
h(1) = [(1 − cos(α) + sin(α))(1 + cos(β) − sin(β)) − 2 sin(β) cos(α)]/(4√2)
h(2) = [1 + cos(α − β) + sin(α − β)]/(2√2)
h(3) = [1 + cos(α − β) − sin(α − β)]/(2√2)
h(4) = 1/√2 − h(0) − h(2)
h(5) = 1/√2 − h(1) − h(3)
(68)

Here the Haar coefficients are generated for any α = β, and the length-4 coefficients Equation 66 result if β = 0 with α being the free parameter. The length-4 Daubechies coefficients are calculated for α = π/3 and β = 0. The length-6 Daubechies coefficients result from α = 1.35980373244182 and β = −0.78210638474440.
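Equation 68 can likewise be sketched and checked in Python (the function name `h6` is ours): α = β yields the Haar coefficients, sitting in positions 2 and 3, and β = 0 yields the length-4 family of Equation 66 shifted by one position.

```python
import math

def h6(alpha, beta):
    """Length-6 scaling coefficients from the two-parameter
    family of Equation 68."""
    r2 = math.sqrt(2.0)
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cab, sab = math.cos(alpha - beta), math.sin(alpha - beta)
    h0 = ((1 + ca + sa) * (1 - cb - sb) + 2 * sb * ca) / (4 * r2)
    h1 = ((1 - ca + sa) * (1 + cb - sb) - 2 * sb * ca) / (4 * r2)
    h2 = (1 + cab + sab) / (2 * r2)
    h3 = (1 + cab - sab) / (2 * r2)
    # endpoint coefficients enforce the linear constraint sum h(n) = sqrt(2)
    h4 = 1 / r2 - h0 - h2
    h5 = 1 / r2 - h1 - h3
    return [h0, h1, h2, h3, h4, h5]
```

Note that h(4) and h(5) are defined so that the even- and odd-indexed coefficients each sum to 1/√2, which builds the linear constraint into the parameterization.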

The inverses of these formulas, which give α and β from an allowable h(n), are

α = arctan( [2(h(0)² + h(1)²) − 1 + (h(2) + h(3))/√2] / [2(h(1)h(2) − h(0)h(3)) + √2 (h(0) − h(1))] )
(69)
β = α − arctan( (h(2) − h(3)) / (h(2) + h(3) − 1/√2) )
(70)

As α and β range over −π to π, all possible h(n) are generated. This allows informative experimentation to better see what these compactly supported wavelets look like. This parameterization is implemented in the Matlab programs in Appendix C and in the Aware, Inc. software, UltraWave [31].

Since the scaling functions and wavelets are used with integer translations, the location of their support is not important, only its size. Some authors shift h(n), h₁(n), φ(t), and ψ(t) to be approximately centered around the origin. This is achieved by having the initial nonzero scaling coefficient start at n = −N/2 + 1 rather than zero. We prefer to have the origin at n = t = 0.

Matlab programs that calculate h(n) for N = 2, 4, 6 are furnished in Appendix C. They calculate h(n) from α and β according to Equation 62, Equation 66, and Equation 68. They also work backwards to calculate α and β from an allowable h(n) using Equation 70. A program is also included that calculates the Daubechies coefficients for any length using the spectral factorization techniques in [4] and Chapter: Regularity, Moments, and Wavelet System Design of this book.

Longer h(n) sequences are more difficult to parameterize but can be handled with the techniques of Pollen [25] and Wells [35] or the lattice factorization by Vaidyanathan [32] developed in Chapter: Filter Banks and Transmultiplexers. Selesnick derived explicit formulas for N = 8 using the symbolic software system Maple and set up the formulation for longer lengths [28]. It is over the space of these independent parameters that one can find optimal wavelets for a particular problem or class of signals [4], [8].

## Calculating the Basic Scaling Function and Wavelet

Although in most practical applications one never explicitly uses the scaling function or wavelet (one uses the scaling and wavelet coefficients), it is enlightening to consider methods to calculate φ(t) and ψ(t). There are two approaches that we will discuss. The first is a form of successive approximations that is used theoretically to prove existence and uniqueness of φ(t) and can also be used to actually calculate it. This can be done in the time domain to find φ(t) or in the frequency domain to find the Fourier transform of φ(t), denoted Φ(ω). The second method solves for the exact values of φ(t) on the integers by solving a set of simultaneous equations. From these values, it is possible to exactly calculate values at the half integers, then at the quarter integers, and so on, giving values of φ(t) on what are called the dyadic rationals.

### Successive Approximations or the Cascade Algorithm

In order to solve the basic recursion equation Equation 1, we propose an iterative algorithm that will generate successive approximations to φ(t)φ(t). If the algorithm converges to a fixed point, then that fixed point is a solution to Equation 1. The iterations are defined by

φ^(k+1)(t) = ∑_{n=0}^{N−1} h(n) √2 φ^(k)(2t − n)
(71)

for the k-th iteration, where an initial φ^(0)(t) must be given. Because this applies the same operation over and over to the output of the previous application, it is sometimes called the cascade algorithm.
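A minimal pure-Python version of the cascade iteration of Equation 71 can be sketched as follows (an illustration, not the Appendix C program; φ is sampled on a dyadic grid over the support [0, N − 1], and the function name `cascade` is ours):

```python
import math

def cascade(h, num_iter=8, P=5):
    """Iterate Equation 71: phi_{k+1}(t) = sum_n h(n) sqrt(2) phi_k(2t - n),
    with phi sampled at t = m / 2**P on the support [0, N-1]."""
    N = len(h)
    M = (N - 1) * 2**P + 1
    # initial guess: the characteristic function of [0, 1)
    phi = [1.0 if m < 2**P else 0.0 for m in range(M)]
    r2 = math.sqrt(2.0)
    for _ in range(num_iter):
        nxt = [0.0] * M
        for m in range(M):
            acc = 0.0
            for n in range(N):
                j = 2 * m - n * 2**P      # sample index of phi(2t - n)
                if 0 <= j < M:
                    acc += h[n] * r2 * phi[j]
            nxt[m] = acc
        phi = nxt
    return phi
```

For the Haar coefficients the box-function initial guess is already a fixed point, so the iteration leaves it unchanged, which is an easy sanity check before trying less trivial h(n).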

Using definitions Equation 2 and Equation 3, the frequency domain form becomes

Φ^(k+1)(ω) = (1/√2) H(ω/2) Φ^(k)(ω/2)
(72)

and the limit can be written as an infinite product in the form

Φ^(∞)(ω) = ∏_{k=1}^{∞} [(1/√2) H(ω/2^k)] Φ^(∞)(0).
(73)

If this limit exists, the Fourier transform of the scaling function is

Φ(ω) = ∏_{k=1}^{∞} [(1/√2) H(ω/2^k)] Φ(0).
(74)

The limit does not depend on the shape of the initial φ^(0)(t), but only on Φ^(k)(0) = ∫ φ^(k)(t) dt = A₀, which is invariant over the iterations. This only makes sense if the limit of Φ(ω) is well defined, as when it is continuous at ω = 0.

The Matlab program in Appendix C implements the algorithm in Equation 71, which converges reliably to φ(t), even when it is very discontinuous. From this scaling function, the wavelet can be generated from (Reference). It is interesting to try this algorithm, plotting the function at each iteration, on both admissible h(n) that satisfy Equation 10 and Equation 14 and on inadmissible h(n). The calculation of a scaling function for N = 4 is shown at each iteration in Figure 3.

Because of the iterative form of this algorithm, applying the same process over and over, it is sometimes called the cascade algorithm [29], [30].

### Iterating the Filter Bank

An interesting method for calculating the scaling function also uses an iterative procedure, consisting of the stages of the filter structure of Chapter: Filter Banks and the Discrete Wavelet Transform, which calculates wavelet expansion coefficients (DWT values) at one scale from those at another. The scaling-function/wavelet expansion of the scaling function itself is a single nonzero coefficient at the scale j = 1. Passing this single coefficient through the synthesis filter structure of Figure: Two-Stage Two-Band Synthesis Tree and (Reference) results in a fine-scale output that, for large j, is essentially a set of samples of the scaling function.

### Successive approximation in the frequency domain

The Fourier transform of the scaling function defined in Equation 2 is an important tool for studying and developing wavelet theory. It could be approximately calculated by taking the DFT of the samples of φ(t), but a more direct approach is available using the infinite product in Equation 74. From this formulation we can see how the zeros of H(ω) determine the zeros of Φ(ω). The existence conditions in Theorem 5 require H(π) = 0 or, more generally, H(ω) = 0 for ω = (2k + 1)π. Equation 74 gives the relation of these zeros of H(ω) to the zeros of Φ(ω). For the factor with index k = 1, H(ω/2) = 0 at ω = 2(2l + 1)π for integer l; for k = 2, H(ω/4) = 0 at ω = 4(2l + 1)π; H(ω/8) = 0 at ω = 8(2l + 1)π, and so on. Because Equation 74 is a product of stretched versions of H(ω), these zeros of H(ω/2^j) are the zeros of the Fourier transform of φ(t). Recall from Theorem 15 that H(ω) has no zeros in −π/3 < ω < π/3. All of this gives a picture of the shape of Φ(ω) and the location of its zeros. From an asymptotic analysis of Φ(ω) as ω → ∞, one can study the smoothness of φ(t).

A Matlab program that calculates Φ(ω) using this frequency-domain successive-approximation approach suggested by Equation 74 is given in Appendix C. Studying this program gives further insight into the structure of Φ(ω). Rather than starting the calculation in Equation 74 at the index j = 1, it starts at the largest j = J and works backwards. If we calculate a length-N DFT consistent with j = J using the FFT, then the samples of H(ω/2^j) for j = J − 1 are simply every other sample of the case for j = J. The next stage for j = J − 2 is done likewise, and if the original N is chosen as a power of two, the process is continued down to j = 1 without calculating any more FFTs. This results in a very efficient algorithm. The details are in the program itself.

This algorithm is so efficient that using it plus an inverse FFT might be a good way to calculate φ(t) itself. Examples of the algorithm are illustrated in Figure 4, where the transform is plotted at each step of the iteration.
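The truncated infinite product of Equation 74 is equally simple to sketch directly (again illustrative, not the FFT-based Appendix C program; function names are ours). For the Haar coefficients, the zero of H at ω = π forces Φ(2π) = 0, and |Φ(π)| = 2/π, the expected magnitude of the Fourier transform of a unit-width box:

```python
import cmath
import math

def H(h, w):
    """Frequency response H(omega) = sum_n h(n) e^{-i omega n}."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

def Phi(h, w, K=30):
    """Truncated form of the infinite product in Equation 74,
    taking Phi(0) = 1."""
    p = 1.0 + 0.0j
    for k in range(1, K + 1):
        p *= H(h, w / 2**k) / math.sqrt(2.0)
    return p
```

Truncating at K terms leaves the factors H(ω/2^k)/√2 for k > K, which are all very close to H(0)/√2 = 1, so the product converges quickly for moderate K.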

### The Dyadic Expansion of the Scaling Function

The next method for evaluating the scaling function uses a completely different approach. It starts by calculating the values of the scaling function at integer values of t, which can be done exactly (within our ability to solve simultaneous linear equations). Consider the basic recursion equation Equation 1 for integer values of t = k:

φ(k) = ∑_n h(n) √2 φ(2k − n),
(75)

and assume h(n) is nonzero only for 0 ≤ n ≤ N − 1.

This is the refinement matrix illustrated in Equation 6 for N = 6, which we write in matrix form as

M₀ φ̲ = φ̲.
(76)

In other words, the vector φ̲ of values φ(k) is an eigenvector of M₀ for an eigenvalue of unity. The simple sum ∑_n h(n) = √2 in Equation 10 does not guarantee that M₀ always has such an eigenvalue, but ∑_n h(2n) = ∑_n h(2n + 1) in Equation 12 does guarantee a unity eigenvalue. This means that if Equation 12 is not satisfied, φ(t) is not defined on the dyadic rationals and is, therefore, probably not a very nice signal.

Our problem is now to find that eigenvector. Note from Equation 6 that either φ(0) = φ(N − 1) = 0 or h(0) = h(N − 1) = 1/√2. For the Haar wavelet system, the second is true, but for longer systems this would mean all the other h(n) would have to be zero because of Equation 10, which is not only uninteresting but produces a very poorly behaved φ(t). Therefore, a scaling function with N > 2 and compact support will always be zero at the extremes of its support. This means that we can look for the eigenvector of the smaller 4 by 4 matrix obtained by eliminating the first and last rows and columns of M₀.

From Equation 76 we form [M₀ − I] φ̲ = 0̲, which shows that [M₀ − I] is singular, meaning its rows are not independent. We remove the last row and assume the remaining rows are independent; if that is not true, we remove another row. We then replace the removed row with a row of ones in order to implement the normalizing equation

∑_k φ(k) = 1
(77)

This augmented matrix, [M₀ − I] with one row replaced by a row of ones, when multiplied by φ̲, gives a vector of all zeros except for a one in the position of the replaced row. This system of equations is not singular and is solved for φ̲, which gives φ(k), the scaling function evaluated at the integers.
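The eigenvector computation of Equations 76–77 can be sketched in pure Python with a small Gaussian elimination (an illustration with our own function name; as in the text, we assume only the last dependent row of [M₀ − I] needs replacing). For the length-4 Daubechies coefficients, the interior integer samples are known exactly: φ(1) = (1 + √3)/2 and φ(2) = (1 − √3)/2.

```python
import math

def integer_samples(h):
    """Solve (M0 - I) phi = 0 with sum(phi) = 1 for the interior
    integer samples phi(1), ..., phi(N-2)  (Equations 76-77),
    using phi(0) = phi(N-1) = 0 and M0[k][m] = sqrt(2) h(2k - m)."""
    N = len(h)
    r2 = math.sqrt(2.0)
    n_int = N - 2                        # interior integers 1 .. N-2
    A = [[(r2 * h[2 * (k + 1) - (m + 1)]
           if 0 <= 2 * (k + 1) - (m + 1) < N else 0.0)
          - (1.0 if k == m else 0.0)
          for m in range(n_int)] for k in range(n_int)]
    b = [0.0] * n_int
    A[-1] = [1.0] * n_int                # replace a dependent row ...
    b[-1] = 1.0                          # ... with the normalization (77)
    # Gaussian elimination with partial pivoting
    for col in range(n_int):
        piv = max(range(col, n_int), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n_int):
            f = A[r][col] / A[col][col]
            for c in range(col, n_int):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    phi = [0.0] * n_int
    for r in range(n_int - 1, -1, -1):
        s = sum(A[r][c] * phi[c] for c in range(r + 1, n_int))
        phi[r] = (b[r] - s) / A[r][r]
    return phi                            # [phi(1), ..., phi(N-2)]
```

The matrix entries come directly from Equation 75: the coefficient of φ(m) in the equation for φ(k) is √2 h(2k − m).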

From these values of φ(t) on the integers, we can find the values at the half integers using the recursive equation Equation 1 or a modified form

φ(k/2) = ∑_n h(n) √2 φ(k − n)
(78)

This is illustrated with the matrix equation Equation 8 as

M₁ φ̲ = φ̲₂
(79)

Here, the first and last columns and the last row are not needed (because φ(0) = φ(5) = φ(11/2) = 0) and can be eliminated to save some arithmetic.

The procedure described here can be repeated to find a matrix that, when multiplied by a vector of the scaling function evaluated at the odd integers divided by k, gives the values at the odd integers divided by 2k. This modified matrix corresponds to convolving the samples of φ(t) with an up-sampled h(n). Again, convolution combined with up- and down-sampling is the basis of wavelet calculations. It is also the basis of digital filter bank theory. Figure 5 illustrates the dyadic expansion calculation of a Daubechies scaling function for N = 4 at each iteration of this method.

Not only does this dyadic expansion give an explicit method for finding the exact values of φ(t) on the dyadic rationals (t = k/2^j), but it shows how the eigenvalues of M₀ say something about φ(t). If φ(t) is continuous, its values on the dyadic rationals determine it everywhere, since the dyadic rationals are dense in the reals.

Matlab programs are included in Appendix C to implement the successive approximation and dyadic expansion approaches to evaluating the scaling function from the scaling coefficients. They were used to generate the figures in this section. It is very illuminating to experiment with different h(n) and observe the effects on φ(t) and ψ(t).

## References

1. Cavaretta, S. and Dahmen, W. and Micchelli, C. A. (1991). Stationary Subdivision. (Vol. 93). American Mathematical Society.
2. Chui, Charles K. (1992). An Introduction to Wavelets. [Volume 1 in the series: Wavelet Analysis and its Applications]. San Diego, CA: Academic Press.
3. Chui, Charles K. (1992). Wavelets: A Tutorial in Theory and Applications. [Volume 2 in the series: Wavelet Analysis and its Applications]. San Diego, CA: Academic Press.
4. Daubechies, Ingrid. (1992). Ten Lectures on Wavelets. [Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA]. Philadelphia, PA: SIAM.
5. Deslauriers, G. and Dubuc, S. (1987). Interpolation Dyadique. In Fractals, Dimensions Non Entiérs et Applications. (p. 44–45). Paris: Masson.
6. Daubechies, Ingrid and Lagarias, Jeffrey C. (1991). Two-Scale Difference Equations, Part I. Existence and Global Regularity of Solutions. [From an internal report, AT&T Bell Labs, Sept. 1988]. SIAM Journal of Mathematical Analysis, 22, 1388–1410.
7. Daubechies, Ingrid and Lagarias, Jeffrey C. (1992, July). Two-Scale Difference Equations, Part II. Local Regularity, Infinite Products of Matrices and Fractals. [From an internal report, AT&T Bell Labs, Sept. 1988]. SIAM Journal of Mathematical Analysis, 23, 1031–1079.
8. Gopinath, R. A. and Odegard, J. E. and Burrus, C. S. (1994, April). Optimal Wavelet Representation of Signals and the Wavelet Sampling Theorem. [Also a Tech. Report No. CML TR-92-05, April 1992, revised Aug. 1993.]. IEEE Transactions on Circuits and Systems II, 41(4), 262–277.
9. Heijmans, Henk J. A. M. (1993). Discrete Wavelets and Multiresolution Analysis. In Wavelets: An Elementary Treatment of Theory and Applications. (p. 49–80). Singapore: World Scientific.
10. Jia, R. Q. (1995). Subdivision Schemes in L_p Spaces. Advances in Computational Mathematics, 3, 309–341.
11. (1993). Wavelets: An Elementary Treatment of Theory and Applications. Singapore: World Scientific.
12. Lawton, W. . [Private communication].
13. Lawton, Wayne M. (1990, August). Tight Frames of Compactly Supported Affine Wavelets. [Also Aware, Inc. Tech Report AD891012]. Journal of Mathematical Physics, 31(8), 1898-1901.
14. Lawton, Wayne M. (1991, June). Multiresolution Properties of the Wavelet Galerkin Operator. Journal of Mathematical Physics, 32(6), 1440–1443.
15. Lawton, Wayne M. (1991, January). Necessary and Sufficient Conditions for Constructing Orthonormal Wavelet Bases. [Also Aware, Inc. Tech. Report AD900402, April 1990]. Journal of Mathematical Physics, 32(1), 57–61.
16. Lawton, Wayne M. (1997, submitted). Infinite Convolution Products & Refinable Distributions on Lie Groups. Transactions of the American Mathematical Society.
17. Lawton, Wayne M. and Lee, S. L. and Shen, Z. (1997, to appear). Convergence of Multidimensional Cascade Algorithm. Numerische Mathematik.
18. Lawton, Wayne M. and Lee, S. L. and Shen, Z. (1997, to appear). Stability and Orthonormality of Multivariate Refinable Functions. SIAM Journal of Mathematical Analysis.
19. Lawton, Wayne M. and Resnikoff, Howard L. (1991, February). Multidimensional Wavelet Bases. (AD910130). Aware Report. Aware, Inc.
20. Mallat, S. G. (1989, July). A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7), 674–693.
21. Mallat, S. G. (1989). Multiresolution Approximation and Wavelet Orthonormal Bases of L²(R). Transactions of the American Mathematical Society, 315, 69–87.
22. Micchelli, C. A. and Prautzsch, H. (1989). Uniform Refinement of Curves. Linear Algebra and its Applications, 114/115, 841–870.
23. Oppenheim, A. V. and Schafer, R. W. (1989). Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
24. Parks, T. W. and Burrus, C. S. (1987). Digital Filter Design. New York: John Wiley & Sons.
25. Pollen, D. (to appear). Daubechies' scaling function on [0,3]. [Also Aware, Inc. tech. report AD891020, 1989]. J. American Math. Soc.
26. Ron, Amos. (1992, September). Characterization of Linear Independence and Stability of the Shifts of a Univariate Refinable Function in Terms of its Refinement Mask. (CMS TR 93-3). Technical report. Madison: Computer Science Dept., University of Wisconsin.
27. Smith, M. J. and Barnwell, T. P. (1986, June). Exact Reconstruction Techniques for Tree-Structured Subband Coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34, 434–441.
28. Selesnick, Ivan W. (1997, May). Parameterization of Orthogonal Wavelet Systems. Technical report. Rice University, Houston, Tx.: ECE Dept. and Computational Mathematics Laboratory.
29. Strang, Gilbert and Nguyen, T. (1996). Wavelets and Filter Banks. Wellesley, MA: Wellesley–Cambridge Press.
30. Strang, G. (1996). Eigenvalues of (↓2)H and Convergence of the Cascade Algorithm. IEEE Transactions on Signal Processing, 44.
31. (1989, July). The UltraWave Explorer User's Manual. Aware, Inc.
32. Vaidyanathan, P. P. (1992). Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice-Hall.
33. Vaidyanathan, P. P. and Djokovic, Igor. (1995). Wavelet Transforms. In The Circuits and Filters Handbook. (p. 134–219). Boca Raton: CRC Press and IEEE Press.
34. Vetterli, Martin and Kovačević, Jelena. (1995). Wavelets and Subband Coding. Upper Saddle River, NJ: Prentice–Hall.
35. Wells, Jr., R. O. (1993). Parameterizing smooth compactly supported wavelets. [Also Aware tech report AD891231, Dec. 1989.]. Transactions of the American Mathematical Society, 338(2), 919–931.
36. Wickerhauser, Mladen Victor. (1995). Adapted Wavelet Analysis from Theory to Software. Wellesley, MA: A K Peters.
