# Filter Banks and Transmultiplexers

Module by: Ramesh Gopinath. E-mail the author

## Introduction

In this chapter, we develop the properties of wavelet systems in terms of the underlying filter banks associated with them. This is an expansion and elaboration of the material in Chapter: Filter Banks and the Discrete Wavelet Transform, where many of the conditions and properties developed from a signal expansion point of view in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients are now derived from the associated filter bank. The Mallat algorithm uses a special structure of filters and downsamplers/upsamplers to calculate and invert the discrete wavelet transform. Such filter structures have been studied for over three decades in digital signal processing in the context of the filter bank and transmultiplexer problems [28], [31], [37], [33], [34], [36], [21], [32], [25]. Filter bank theory, besides providing efficient computational schemes for wavelet analysis, also gives valuable insights into the construction of wavelet bases. Indeed, some of the finer aspects of wavelet theory emanate from filter bank theory.

### The Filter Bank

A filter bank is a structure that decomposes a signal into a collection of subsignals. Depending on the application, these subsignals help emphasize specific aspects of the original signal or may be easier to work with than the original signal. We have linear or non-linear filter banks depending on whether or not the subsignals depend linearly on the original signal. Filter banks were originally studied in the context of signal compression where the subsignals were used to “represent” the original signal. The subsignals (called subband signals) are downsampled so that the data rates are the same in the subbands as in the original signal—though this is not essential. Key points to remember are that the subsignals convey salient features of the original signal and are sufficient to reconstruct the original signal.

Figure 1 shows a linear filter bank that is used in signal compression (subband coding). The analysis filters $h_i$ are used to filter the input signal $x(n)$. The filtered signals are downsampled to give the subband signals. Reconstruction of the original signal is achieved by upsampling, filtering and adding up the subband signals, as shown in the right-hand part of Figure 1. The desire for perfect reconstruction (i.e., $y(n) = x(n)$) imposes a set of bilinear constraints (since all operations in Figure 1 are linear) on the analysis and synthesis filters. This also constrains the downsampling factor, $M$, to be at most the number of subband signals, say $L$. Filter bank design involves choosing filters $h_i$ and $g_i$ that satisfy perfect reconstruction and simultaneously give informative and useful subband signals. In subband speech coding, for example, a natural choice of desired frequency responses for the analysis and synthesis filters, motivated by the nonuniform sensitivity of the human ear to various frequency bands, is shown in Figure 2.

In summary, the filter bank problem involves the design of the filters $h_i(n)$ and $g_i(n)$, with the following goals:

1. Perfect Reconstruction (i.e., $y(n) = x(n)$).
2. Usefulness. Clearly this depends on the application. For the subband coding application, the filter frequency responses might approximate the ideal responses in Figure 2. In other applications the filters may have to satisfy other constraints or approximate other frequency responses.

If the signals and filters are multidimensional in Figure 1, we have the multidimensional filter bank design problem.
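To make the structure of Figure 1 concrete, here is a minimal numerical sketch of a two-band ($L = M = 2$) perfect-reconstruction filter bank built from the Haar filters. The filters, the test signal, and the downsampling phase are illustrative choices, not taken from the text.

```python
import numpy as np

# Haar analysis filters; the synthesis filters are time-reversed copies.
# All names and the phase convention below are illustrative assumptions.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass analysis filter
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass analysis filter
g0, g1 = h0[::-1], h1[::-1]              # synthesis filters g_i(n) = h_i(-n)

x = np.array([4.0, 1.0, -2.0, 3.0, 0.0, 5.0])

# Analysis: filter, then keep every other sample (downsample by M = 2).
d0 = np.convolve(x, h0)[1::2]
d1 = np.convolve(x, h1)[1::2]

# Synthesis: upsample by 2 (insert zeros), filter each branch, and add.
u0 = np.zeros(2 * len(d0)); u0[::2] = d0
u1 = np.zeros(2 * len(d1)); u1[::2] = d1
y = np.convolve(u0, g0) + np.convolve(u1, g1)

print(np.allclose(y[:len(x)], x))  # True: y(n) = x(n), perfect reconstruction
```

With this particular phase choice the reconstruction shows no residual delay; other alignments reconstruct $x(n)$ up to a fixed shift.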

### Transmultiplexer

A transmultiplexer is a structure that combines a collection of signals into a single signal at a higher rate; i.e., it is the dual of a filter bank. If the combined signal depends linearly on the constituent signals, we have a linear transmultiplexer. Transmultiplexers were originally studied in the context of converting time-domain-multiplexed (TDM) signals into frequency-domain-multiplexed (FDM) signals, with the goal of converting back to time-domain-multiplexed signals at some later point. A key point to remember is that the constituent signals should be recoverable from the combined signal. Figure 3 shows the structure of a transmultiplexer. The input signals $y_i(n)$ are upsampled, filtered, and combined (by a synthesis bank of filters) to give a composite signal $d(n)$. The signal $d(n)$ can be filtered (by an analysis bank of filters) and downsampled to give a set of signals $x_i(n)$. The goal in transmultiplexer design is a choice of filters that ensures perfect reconstruction (i.e., for all $i$, $x_i(n) = y_i(n)$). This imposes bilinear constraints on the synthesis and analysis filters. Also, the upsampling factor must be at least the number of constituent input signals, say $L$. Moreover, in classical TDM-FDM conversion the analysis and synthesis filters must approximate the ideal frequency responses in Figure 2. If the input signals, analysis filters and synthesis filters are multidimensional, we have a multidimensional transmultiplexer.
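The dual structure of Figure 3 can be sketched the same way: combine two signals through the synthesis bank, then separate them with the analysis bank. Again the Haar filters and all names are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical sketch of a two-channel transmultiplexer using Haar filters.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)
g0, g1 = h0[::-1], h1[::-1]            # synthesis filters g_i(n) = h_i(-n)

y0 = np.array([3.0, -1.0, 2.0])        # constituent input signals
y1 = np.array([0.5, 4.0, -2.0])

def up2(s):
    """Upsample by M = 2 (insert zeros between samples)."""
    u = np.zeros(2 * len(s))
    u[::2] = s
    return u

# Combine: upsample, filter with the synthesis bank, and add.
d = np.convolve(up2(y0), g0) + np.convolve(up2(y1), g1)

# Separate: filter with the analysis bank and downsample.
x0 = np.convolve(d, h0)[1::2][:len(y0)]
x1 = np.convolve(d, h1)[1::2][:len(y1)]

print(np.allclose(x0, y0) and np.allclose(x1, y1))  # True: PR holds
```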

### Perfect Reconstruction—A Closer Look

We now take a closer look at the set of bilinear constraints on the analysis and synthesis filters of a filter bank and/or transmultiplexer that ensures perfect reconstruction (PR). Assume that there are $L$ analysis filters and $L$ synthesis filters and that downsampling/upsampling is by some integer $M$. These constraints, broadly speaking, can be viewed in three useful ways, each applicable in specific situations.

1. Direct characterization - which is useful in wavelet theory (to characterize orthonormality and frame properties), in the study of a powerful class of filter banks (modulated filter banks), etc.
2. Matrix characterization - which is useful in the study of time-varying filter banks.
3. z-transform-domain (or polyphase-representation) characterization - which is useful in the design and implementation of (unitary) filter banks and wavelets.

### Direct Characterization of PR

We will first consider the direct characterization of PR, which, for both filter banks and transmultiplexers, follows from an elementary superposition argument.

Theorem 38 A filter bank is PR if and only if, for all integers $n_1$ and $n_2$,

$$\sum_{i=0}^{L-1} \sum_n h_i(Mn + n_1)\, g_i(-Mn - n_2) = \delta(n_1 - n_2).$$
(1)

A transmultiplexer is PR if and only if, for all $i, j \in \{0, 1, \ldots, L-1\}$,

$$\sum_n h_i(n)\, g_j(-Ml - n) = \delta(l)\,\delta(i - j).$$
(2)

Moreover, if the number of channels is equal to the downsampling factor (i.e., $L = M$), Equation 1 and Equation 2 are equivalent.

Consider a PR filter bank. Since an arbitrary signal is a linear superposition of impulses, it suffices to consider the input signal $x(n) = \delta(n - n_1)$ for an arbitrary integer $n_1$. Then (see Figure 1) $d_i(n) = h_i(Mn - n_1)$ and therefore $y(n_2) = \sum_i \sum_n g_i(n_2 - Mn)\, d_i(n)$. But by PR, $y(n_2) = \delta(n_2 - n_1)$. The filter bank PR property is precisely a statement of this fact:

$$y(n_2) = \sum_i \sum_n g_i(n_2 - Mn)\, d_i(n) = \sum_i \sum_n g_i(n_2 - Mn)\, h_i(Mn - n_1) = \delta(n_2 - n_1).$$
(3)

Consider a PR transmultiplexer. Once again, because of linear superposition, it suffices to consider only the input signals $x_i(n) = \delta(n)\,\delta(i - j)$ for all $i$ and $j$. Then $d(n) = g_j(n)$ (see Figure 3), and $y_i(l) = \sum_n h_i(n)\, d(Ml - n)$. But by PR, $y_i(l) = \delta(l)\,\delta(i - j)$. The transmultiplexer PR property is precisely a statement of this fact:

$$y_i(l) = \sum_n h_i(n)\, d(Ml - n) = \sum_n h_i(n)\, g_j(Ml - n) = \delta(l)\,\delta(i - j).$$
(4)

Remark: Strictly speaking, in the superposition argument proving Equation 2, one has to consider the input signals $x_i(n) = \delta(n - n_1)\,\delta(i - j)$ for arbitrary $n_1$. One readily verifies that Equation 2 must then be satisfied for all $n_1$.

The equivalence of Equation 1 and Equation 2 when $L = M$ is not obvious from the direct characterization. However, the transform-domain characterization that we shall see shortly makes this connection obvious. For a PR filter bank, the $L$ channels should contain sufficient information to reconstruct the original signal (note the summation over $i$ in Equation 1), while for a transmultiplexer, the constituent channels should satisfy biorthogonality constraints so that they can be reconstructed from the composite signal (note the biorthogonality conditions suggested by Equation 2).
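A direct numerical check of Equations 1 and 2 may help fix the indexing. The sketch below uses the Haar filters with the choice $g_i(n) = h_i(-n)$; the finite summation windows are large enough to cover the filter supports, but everything else here is an illustrative assumption rather than a general verification tool.

```python
import numpy as np

# Check the direct PR conditions (Equations 1 and 2) for the Haar filters.
M = L = 2
h = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def filt(c, n):                       # c has support {0, ..., len(c)-1}
    return c[n] if 0 <= n < len(c) else 0.0

def g(i, n):                          # unitary choice: g_i(n) = h_i(-n)
    return filt(h[i], -n)

# Equation 1: filter bank PR, checked on a window of (n1, n2) pairs.
fb_ok = all(
    abs(sum(filt(h[i], M*n + n1) * g(i, -M*n - n2)
            for i in range(L) for n in range(-5, 6)) - (n1 == n2)) < 1e-12
    for n1 in range(-3, 4) for n2 in range(-3, 4))

# Equation 2: transmultiplexer PR, checked on a window of shifts l.
tm_ok = all(
    abs(sum(filt(h[i], n) * g(j, -M*l - n) for n in range(-10, 11))
        - (l == 0) * (i == j)) < 1e-12
    for i in range(L) for j in range(L) for l in range(-3, 4))

print(fb_ok and tm_ok)  # True
```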

### Matrix characterization of PR

The second viewpoint is linear-algebraic in that it considers all signals as vectors and all filtering operations as matrix-vector multiplications [34]. In Figure 1 and Figure 3 the signals $x(n)$, $d_i(n)$ and $y(n)$ can be naturally associated with infinite vectors $x$, $d_i$ and $y$ respectively. For example, $x = [\ldots, x(-1), x(0), x(1), \ldots]$. Then the analysis filtering operation can be expressed as

$$d_i = H_i x, \quad \text{for } i \in \{0, 1, \ldots, L-1\},$$
(5)

where, for each $i$, $H_i$ is a matrix with entries appropriately drawn from the filter $h_i$. $H_i$ is a block Toeplitz matrix (since it is obtained by retaining every $M$th row of the Toeplitz matrix representing convolution by $h_i$), with every row containing $h_i$ in index-reversed order. The synthesis filtering operation can then be expressed as

$$y = \sum_i G_i^T d_i,$$
(6)

where, for each $i$, $G_i$ is a matrix with entries appropriately drawn from the filter $g_i$. $G_i$ is also a block Toeplitz matrix (since it is obtained by retaining every $M$th row of the Toeplitz matrix whose transpose represents convolution by $g_i$), with every row containing $g_i$ in its natural order. Define $d$ to be the vector obtained by interlacing the entries of each of the vectors $d_i$: $d = [\ldots, d_0(0), d_1(0), \ldots, d_{M-1}(0), d_0(1), d_1(1), \ldots]$. Also define the matrices $H$ and $G$ (in terms of $H_i$ and $G_i$) so that

$$d = Hx, \quad \text{and} \quad y = G^T d.$$
(7)

$H$ is obtained by interlacing the rows of the $H_i$, and $G$ is obtained by interlacing the rows of the $G_i$. For example, in the FIR case, if the filters are all of length $N$,

$$d = Hx \stackrel{\text{def}}{=} \begin{bmatrix} \ddots & & & & & & \\ h_0(N-1) & \cdots & h_0(N-M-1) & \cdots & h_0(0) & 0 & \cdots \\ \vdots & & \vdots & & \vdots & & \\ h_{L-1}(N-1) & \cdots & h_{L-1}(N-M-1) & \cdots & h_{L-1}(0) & 0 & \cdots \\ 0 & \cdots & 0 & h_0(N-1) & \cdots & & \\ \vdots & & \vdots & \vdots & & & \\ 0 & \cdots & 0 & h_{L-1}(N-1) & \cdots & & \\ & & & & & & \ddots \end{bmatrix} x.$$
(8)

From this development, we have the following result:

Theorem 39 A filter bank is PR iff

$$G^T H = I.$$
(9)

A transmultiplexer is PR iff

$$H G^T = I.$$
(10)

Moreover, when $L = M$, both conditions are equivalent.

One can also write the PR conditions for filter banks and transmultiplexers in the following form, which explicitly shows the formal relationship between the direct and matrix characterizations. For a PR filter bank we have

$$\sum_i G_i^T H_i = I.$$
(11)

Correspondingly for a PR transmultiplexer we have

$$H_i G_j^T = \delta(i - j)\, I.$$
(12)
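Equations 11 and 12 can be checked numerically if the infinite banded matrices are modeled as circulant blocks acting on one period of a periodic signal. The sketch below does this for the Haar pair with $g_i(n) = h_i(-n)$, so that $G_i = H_i$; the period length, the circulant truncation, and all names are illustrative assumptions.

```python
import numpy as np

# Finite (circulant) model of the matrix PR conditions for the Haar pair.
M = L = 2
P = 8                                    # one period of the signal
h = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def filt(c, n):                          # c supported on {0, ..., len(c)-1}
    return c[n] if 0 <= n < len(c) else 0.0

# Row r of H_i contains h_i in index-reversed order around position M*r;
# circular indexing stands in for the doubly infinite extent.
His = [np.array([[sum(filt(h[i], M * r - m + t * P) for t in (0, 1))
                  for m in range(P)] for r in range(P // M)])
       for i in range(L)]
Gis = His                                # the unitary choice makes G_i = H_i

ok = (np.allclose(sum(G.T @ Hm for G, Hm in zip(Gis, His)), np.eye(P))  # Eq. 11
      and all(np.allclose(His[i] @ Gis[j].T, (i == j) * np.eye(P // M))  # Eq. 12
              for i in range(L) for j in range(L)))
print(ok)  # True
```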

### Polyphase (Transform-Domain) Characterization of PR

We finally look at the analysis and synthesis filter banks from a polyphase representation viewpoint. Here subsequences of the input and output signals and the filters are represented in the z-transform domain. Indeed let the z-transforms of the signals and filters be expressed in terms of the z-transforms of their subsequences as follows:

$$X(z) = \sum_{k=0}^{M-1} z^k X_k(z^M)$$
(13)
$$Y(z) = \sum_{k=0}^{M-1} z^k Y_k(z^M)$$
(14)
$$H_i(z) = \sum_{k=0}^{M-1} z^{-k} H_{i,k}(z^M)$$
(15)
$$G_i(z) = \sum_{k=0}^{M-1} z^k G_{i,k}(z^M)$$
(16)

Then, along each branch of the analysis bank we have

$$D_i(z) = \left(\downarrow M\right)\left\{ H_i(z) X(z) \right\} = \left(\downarrow M\right)\left\{ \sum_{k=0}^{M-1} z^{-k} H_{i,k}(z^M) \sum_{l=0}^{M-1} z^l X_l(z^M) \right\} = \left(\downarrow M\right)\left\{ \sum_{k,l=0}^{M-1} z^{l-k} H_{i,k}(z^M) X_l(z^M) \right\} = \sum_{k,l=0}^{M-1} \delta(l - k) H_{i,k}(z) X_l(z) = \sum_{k=0}^{M-1} H_{i,k}(z) X_k(z).$$
(17)

Similarly, from the synthesis bank, we have

$$Y(z) = \sum_{i=0}^{L-1} D_i(z^M) G_i(z) = \sum_{i=0}^{L-1} D_i(z^M) \sum_{k=0}^{M-1} z^k G_{i,k}(z^M) = \sum_{k=0}^{M-1} z^k \sum_{i=0}^{L-1} G_{i,k}(z^M) D_i(z^M),$$
(18)

and therefore (from Equation 14)

$$Y_k(z) = \sum_{i=0}^{L-1} G_{i,k}(z) D_i(z).$$
(19)

For $i \in \{0, 1, \ldots, L-1\}$ and $k \in \{0, 1, \ldots, M-1\}$, define the polyphase component matrices by $(H_p(z))_{i,k} = H_{i,k}(z)$ and $(G_p(z))_{i,k} = G_{i,k}(z)$. Let $X_p(z)$ and $Y_p(z)$ denote the z-transforms of the polyphase signals $x_p(n)$ and $y_p(n)$, and let $D_p(z)$ be the vector whose components are $D_i(z)$. Equation 17 and Equation 19 can then be written compactly as

$$D_p(z) = H_p(z) X_p(z),$$
(20)
$$Y_p(z) = G_p^T(z) D_p(z),$$
(21)

and

$$Y_p(z) = G_p^T(z) H_p(z) X_p(z).$$
(22)

Thus, the analysis filter bank is represented by the multi-input (the polyphase components of $X(z)$), multi-output (the signals $D_i(z)$) linear shift-invariant system $H_p(z)$ that takes in $X_p(z)$ and gives out $D_p(z)$. Similarly, the synthesis filter bank can be interpreted as a multi-input (the signals $D_i(z)$), multi-output (the polyphase components of $Y(z)$) system $G_p^T(z)$, which maps $D_p(z)$ to $Y_p(z)$. Clearly we have PR iff $Y_p(z) = X_p(z)$, which occurs precisely when $G_p^T(z) H_p(z) = I$.

For the transmultiplexer problem, let $Y_p(z)$ and $X_p(z)$ be vectorized versions of the input and output signals respectively, and let $D_p(z)$ be the generalized polyphase representation of the signal $D(z)$. Now $D_p(z) = G_p^T(z) Y_p(z)$ and $X_p(z) = H_p(z) D_p(z)$. Hence $X_p(z) = H_p(z) G_p^T(z) Y_p(z)$, and PR requires $H_p(z) G_p^T(z) = I$.

Theorem 40 A filter bank has the PR property if and only if

$$G_p^T(z) H_p(z) = I.$$
(23)

A transmultiplexer has the PR property if and only if

$$H_p(z) G_p^T(z) = I,$$
(24)

where $H_p(z)$ and $G_p(z)$ are as defined above.

Remark: If $G_p^T(z) H_p(z) = I$, then $H_p(z)$ must have at least as many rows as columns (i.e., $L \ge M$ is necessary for a filter bank to be PR). If $H_p(z) G_p^T(z) = I$, then $H_p(z)$ must have at least as many columns as rows (i.e., $M \ge L$ is necessary for a transmultiplexer to be PR). If $L = M$, then $G_p^T(z) H_p(z) = I = H_p^T(z) G_p(z)$, and hence a filter bank is PR iff the corresponding transmultiplexer is PR. This equivalence is trivial in the polyphase representation, while it is not in the direct and matrix representations.

Notice that the PR property of a filter bank or transmultiplexer is unchanged if all the analysis and synthesis filters are shifted by the same amount in opposite directions. Also, any one analysis/synthesis filter pair can be shifted by multiples of $M$ in opposite directions without affecting the PR property. Using these two properties (and assuming all the filters are FIR), we can assume without loss of generality that the analysis filters are supported in $[0, N-1]$ (for some integer $N$). This also implies that $H_p(z)$ is a polynomial in $z^{-1}$, a fact that we will use in the parameterization of an important class of filter banks: unitary filter banks.

All through the discussion of the PR property of filter banks, we have deliberately not said anything about the length of the filters. The bilinear PR constraints are completely independent of filter lengths and hold for arbitrary sequences. However, if the sequences are infinite, one requires that the infinite summations over $n$ in Equation 1 and Equation 2 converge. Assuming that these filter sequences are in $\ell^2(\mathbb{Z})$ is clearly sufficient to ensure this, since the inner products are then well-defined.
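As a sketch of Theorem 40, the following code verifies $G_p^T(z) H_p(z) = I$ for the Daubechies length-4 filter pair with $g_i(n) = h_i(-n)$, in which case $G_p(z) = H_p(z^{-1})$ and the condition reads $H_p^T(z^{-1}) H_p(z) = I$. Polynomials in $z^{-1}$ are held as coefficient arrays; the helper names are illustrative.

```python
import numpy as np

# Daubechies length-4 filters (M = L = 2).
s = np.sqrt(3)
h0 = np.array([1 + s, 3 + s, 3 - s, 1 - s]) / (4 * np.sqrt(2))
h1 = ((-1) ** np.arange(4)) * h0[::-1]      # h1(n) = (-1)^n h0(3 - n)
M = 2
H = [h0, h1]

# Polyphase components H_{i,k}(z) = sum_n h_i(Mn + k) z^{-n}  (Equation 15).
Hp = [[hi[k::M] for k in range(M)] for hi in H]

def cross(a, b):
    """Coefficients of A(1/z) B(z): correlation of the coefficient arrays."""
    return np.convolve(a[::-1], b)

# (H_p^T(1/z) H_p(z))_{k,l} = sum_i H_{i,k}(1/z) H_{i,l}(z); as a polynomial
# this should equal delta(k - l), i.e., all other coefficients vanish.
ok = True
for k in range(M):
    for l in range(M):
        p = sum(cross(Hp[i][k], Hp[i][l]) for i in range(2))
        target = np.zeros_like(p)
        target[len(p) // 2] = (k == l)
        ok = ok and np.allclose(p, target)
print(ok)  # True
```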

## Unitary Filter Banks

From Equation 7 it follows that a filter bank is guaranteed to be PR if the analysis filters are chosen such that $H$ is left-unitary, i.e., $H^T H = I$. In this case, the synthesis matrix $G = H$ (from Equation 9), and therefore $G_i = H_i$ for all $i$. Recall that the rows of $G_i$ contain $g_i$ in natural order while the rows of $H_i$ contain $h_i$ in index-reversed order. Therefore, for such a filter bank, since $G_i = H_i$, the synthesis filters are reflections of the analysis filters about the origin; i.e., $g_i(n) = h_i(-n)$. Filter banks where the analysis and synthesis filters satisfy this reflection relationship are called unitary (or orthogonal) filter banks, for the simple reason that $H$ is left-unitary. In a similar fashion, it is easy to see that if $H$ is right-unitary (i.e., $H H^T = I$), then the transmultiplexer associated with this set of analysis filters is PR with $g_i(n) = h_i(-n)$. This defines unitary transmultiplexers.

We now examine how the three ways of viewing PR filter banks and transmultiplexers simplify when we focus on unitary ones. Since $g_i(n) = h_i(-n)$, the direct characterization becomes the following:

Theorem 41 A filter bank is unitary iff

$$\sum_i \sum_n h_i(Mn + n_1)\, h_i(Mn + n_2) = \delta(n_1 - n_2).$$
(25)

A transmultiplexer is unitary iff

$$\sum_n h_i(n)\, h_j(Ml + n) = \delta(l)\,\delta(i - j).$$
(26)

If the number of channels is equal to the downsampling factor, then a filter bank is unitary iff the corresponding transmultiplexer is unitary.

The matrix characterization of unitary filter banks/transmultiplexers should be clear from the above discussion:

Theorem 42 A filter bank is unitary iff $H^T H = I$, and a transmultiplexer is unitary iff $H H^T = I$.

The z-transform domain characterization of unitary filter banks and transmultiplexers is given below:

Theorem 43 A filter bank is unitary iff $H_p^T(z^{-1}) H_p(z) = I$, and a transmultiplexer is unitary iff $H_p(z) H_p^T(z^{-1}) = I$.

In this book (as in most of the work in the literature) one primarily considers the situation where the number of channels equals the downsampling factor. For such a unitary filter bank (transmultiplexer), Equation 11 and Equation 12 become:

$$\sum_i H_i^T H_i = I,$$
(27)

and

$$H_i H_j^T = \delta(i - j)\, I.$$
(28)

The matrices $H_i$ are pairwise orthogonal and form a resolution of the identity matrix. In other words, for each $i$, $H_i^T H_i$ is an orthogonal projection matrix, and the filter bank gives an orthogonal decomposition of a given signal. Recall that for a matrix $P$ to be an orthogonal projection matrix, $P^2 = P$ and $P \ge 0$; in our case, indeed, we do have $H_i^T H_i \ge 0$ and $H_i^T H_i H_i^T H_i = H_i^T H_i$.
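The projection property can be observed numerically on a circulant model of the Haar analysis operators; the period length and names below are illustrative assumptions.

```python
import numpy as np

# Each P_i = H_i^T H_i should be an orthogonal projection (idempotent,
# symmetric, nonnegative), and the P_i should sum to the identity.
M = 2
period = 8
h = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

His = []
for hi in h:
    Hi = np.zeros((period // M, period))
    for r in range(period // M):
        for n, v in enumerate(hi):
            Hi[r, (M * r + n) % period] += v   # circulant stand-in
    His.append(Hi)

projs = [Hi.T @ Hi for Hi in His]
ok = (all(np.allclose(Pi @ Pi, Pi) and np.allclose(Pi, Pi.T)
          and np.all(np.linalg.eigvalsh(Pi) > -1e-12) for Pi in projs)
      and np.allclose(sum(projs), np.eye(period)))
print(ok)  # True
```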

Unitarity is a very useful constraint since it leads to orthogonal decompositions. Besides, for a unitary filter bank, one does not have to design both the analysis and synthesis filters, since $g_i(n) = h_i(-n)$. But perhaps the most important property of unitary filter banks and transmultiplexers is that they can be parameterized. As we have already seen, filter bank design is a nonlinear optimization (of some goodness criterion) problem subject to PR constraints. If the PR constraints are unitary, then a parameterization of unitary filters leads to an unconstrained optimization problem. Besides, for designing wavelets with high-order vanishing moments, nonlinear equations can be formulated and solved in this parameter space. A similar parameterization of nonunitary PR filter banks and transmultiplexers seems impossible, and it is not too difficult to see intuitively why. Consider the following analogy: a PR filter bank is akin to a left-invertible matrix, and a PR transmultiplexer to a right-invertible matrix. If $L = M$, the PR filter bank is akin to an invertible matrix. A unitary filter bank is akin to a left-unitary matrix, a unitary transmultiplexer to a right-unitary matrix, and when $L = M$, either of them to a unitary matrix. Left-unitary, right-unitary and, in particular, unitary matrices can be parameterized using Givens rotations or Householder transformations [14]. However, left-invertible, right-invertible and, in particular, invertible matrices have no general parameterization. Thus unitariness allows an explicit parameterization of filter banks and transmultiplexers that PR alone precludes. The analogy is even more apt: there are two parameterizations of unitary filter banks and transmultiplexers that correspond to Givens rotations and Householder transformations, respectively.
All our discussions on filter banks and transmultiplexers carry over naturally, with very small notational changes, to the multi-dimensional case where downsampling is by some integer matrix [9]. However, the parameterization result we now proceed to develop is not known in the multi-dimensional case. In the two-dimensional case, an implicit, and perhaps not too practical (from a filter-design point of view), parameterization of unitary filter banks is described in [1].

Consider a unitary filter bank with finite-impulse-response filters (i.e., for all $i$, $h_i$ is a finite sequence). Recall that, without loss of generality, the filters can be shifted so that $H_p(z)$ is a polynomial in $z^{-1}$. In this case $G_p(z) = H_p(z^{-1})$ is a polynomial in $z$. Let

$$H_p(z) = \sum_{k=0}^{K-1} h_p(k)\, z^{-k}.$$
(29)

That is, $H_p(z)$ is a matrix polynomial in $z^{-1}$ with coefficients $h_p(k)$ and degree $K-1$. Since $H_p^T(z^{-1}) H_p(z) = I$, from Equation 29 we must have $h_p^T(0)\, h_p(K-1) = 0$, as it is the coefficient of $z^{-(K-1)}$ in the product $H_p^T(z^{-1}) H_p(z)$. Therefore $h_p(0)$ is singular. Let $P_{K-1}$ be the unique projection matrix onto the range of $h_p(K-1)$ (say of dimension $\delta_{K-1}$). Then $h_p^T(0) P_{K-1} = 0 = P_{K-1} h_p(0)$. Also $P_{K-1} h_p(K-1) = h_p(K-1)$, and hence $(I - P_{K-1})\, h_p(K-1) = 0$. Now $\left[I - P_{K-1} + z P_{K-1}\right] H_p(z)$ is a matrix polynomial of degree at most $K-2$. If $h_p(0)$ and $h_p(K-1)$ are nonzero (an assumption one makes without loss of generality), the degree is precisely $K-2$. Moreover, it is unitary, since $I - P_{K-1} + z P_{K-1}$ is unitary. Repeated application of this procedure $(K-1)$ times gives a degree-zero (constant) unitary matrix $V_0$. The discussion above shows that an arbitrary unitary polynomial matrix of degree $K-1$ can be expressed algorithmically and uniquely as described in the following theorem:

Theorem 44 For a polynomial matrix $H_p(z)$, unitary on the unit circle (i.e., $H_p^T(z^{-1}) H_p(z) = I$) and of polynomial degree $K-1$, there exists a unique set of projection matrices $P_k$ (each of rank some integer $\delta_k$), such that

$$H_p(z) = \left[\prod_{k=K-1}^{1} \left( I - P_k + z^{-1} P_k \right)\right] V_0.$$
(30)

Remark: Since the projection $P_k$ is of rank $\delta_k$, it can be written as $v_1 v_1^T + \ldots + v_{\delta_k} v_{\delta_k}^T$ for a nonunique set of orthonormal vectors $v_i$. Using the fact that

$$I - v_j v_j^T - v_{j-1} v_{j-1}^T + z^{-1}\left( v_j v_j^T + v_{j-1} v_{j-1}^T \right) = \prod_{i=j-1}^{j} \left( I - v_i v_i^T + z^{-1} v_i v_i^T \right),$$
(31)

defining $\Delta = \sum_k \delta_k$ and collecting all the $v_j$'s that define the $P_k$'s into a single pool (and reindexing), we get the following factorization:

$$H_p(z) = \left[\prod_{k=\Delta}^{1} \left( I - v_k v_k^T + z^{-1} v_k v_k^T \right)\right] V_0.$$
(32)

If $H_p(z)$ is the analysis bank of a filter bank, then notice that $\Delta$ (from Equation 32) is the number of storage elements required to implement the analysis bank. The minimum number of storage elements needed to implement any transfer function is called its McMillan degree, and in this case $\Delta$ is indeed the McMillan degree [32]. Recall that $P_{K-1}$ was chosen to be the projection matrix onto the range of $h_p(K-1)$. Instead, we could have chosen $P_{K-1}$ to be the projection onto the nullspace of $h_p(0)$ (which contains the range of $h_p(K-1)$) or onto any space sandwiched between the two. Each choice leads to a different sequence of factors $P_k$ and corresponding $\delta_k$ (except when the range and nullspace in question coincide at some stage of the order-reduction process). However, $\Delta$, the McMillan degree, is constant.

Equation 32 can be used as a starting point for filter bank design. It parameterizes all unitary filter banks with McMillan degree $\Delta$. If $\Delta = K$, then all unitary filter banks with filters of length $N \le MK$ are parameterized using a collection of $K-1$ unit vectors $v_k$ and a unitary matrix $V_0$. Each unit vector has $(M-1)$ free parameters, while the unitary matrix has $M(M-1)/2$ free parameters, for a total of $(K-1)(M-1) + \binom{M}{2}$ free parameters for $H_p(z)$. The filter bank design problem is to choose these free parameters to optimize the “usefulness” criterion of the filter bank.
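Equation 32 can also be exercised in the synthesis direction: pick arbitrary unit vectors $v_k$ and an orthogonal $V_0$, multiply out the factors, and confirm that the resulting polynomial matrix is unitary on the unit circle. The sizes and the random seed below are illustrative assumptions.

```python
import numpy as np

# Synthesize H_p(z) = prod_k (I - v v^T + z^{-1} v v^T) V0 and confirm
# H_p^T(1/z) H_p(z) = I via the polynomial coefficient matrices.
rng = np.random.default_rng(7)
M, K = 3, 4                                  # illustrative: 3 bands, degree K-1

vs = [rng.standard_normal(M) for _ in range(K - 1)]
vs = [v / np.linalg.norm(v) for v in vs]     # unit-norm parameter vectors
V0, _ = np.linalg.qr(rng.standard_normal((M, M)))  # random orthogonal matrix

coeffs = [V0]                                # coefficient of z^0
for v in vs:
    Pv = np.outer(v, v)
    new = [np.zeros((M, M)) for _ in range(len(coeffs) + 1)]
    for m, C in enumerate(coeffs):
        new[m] += (np.eye(M) - Pv) @ C       # z^0 part of the factor
        new[m + 1] += Pv @ C                 # z^{-1} part of the factor
    coeffs = new

# Unitarity: sum_m h_p(m)^T h_p(m + tau) = delta(tau) I for every lag tau.
ok = all(
    np.allclose(sum(coeffs[m].T @ coeffs[m + t]
                    for m in range(len(coeffs) - t)),
                np.eye(M) if t == 0 else 0)
    for t in range(len(coeffs)))
print(ok)  # True
```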

If $L > M$ and $H_p(z)$ is left-unitary, a similar analysis leads to exactly the same factorization as before, except that $V_0$ is a left-unitary matrix. In this case, the number of free parameters is given by $(K-1)(L-1) + \binom{L}{2} - \binom{M}{2}$. For a transmultiplexer with $L < M$, one can use the same factorization above for $H_p^T(z)$ (which is left-unitary). Even for a filter bank or transmultiplexer with $L = M$, factorizations of left-/right-unitary $H_p(z)$ are useful for the following reason. Assume that a subset of the analysis filters has been predesigned (for example, in wavelet theory one sometimes independently designs $h_0$ to be a $K$-regular scaling filter, as in Chapter: Regularity, Moments, and Wavelet System Design). The submatrix of $H_p(z)$ corresponding to this subset of filters is right-unitary, hence its transpose can be parameterized as above with a collection of vectors $v_i$ and a left-unitary $V_0$. Each choice for the remaining columns of $V_0$ gives a choice for the remaining filters in the filter bank. In fact, all possible completions of the original subset with fixed McMillan degree are obtained this way.

Orthogonal filter banks are sometimes referred to as lossless filter banks because the collective energy of the subband signals is the same as that of the original signal. If $U$ is an orthogonal matrix, then the signals $x(n)$ and $Ux(n)$ have the same energy. If $P$ is an orthogonal projection matrix, then

$$\|x\|^2 = \|Px\|^2 + \|(I - P)x\|^2.$$
(33)

For any given $X(z)$, the signals $X(z)$ and $z^{-1} X(z)$ have the same energy. Using the above facts, we find that for any projection matrix $P$,

$$D_p(z) = \left[ I - P + z^{-1} P \right] X_p(z) \stackrel{\text{def}}{=} T(z) X_p(z)$$
(34)

has the same energy as $X_p(z)$. This is equivalent to the fact that $T(z)$ is unitary on the unit circle (one can verify this directly). Therefore (from Equation 30) it follows that the subband signals have the same energy as the original signal.

In order to make the free parameters explicit for filter design, we now describe $V_0$ and the $v_i$ using angle parameters. First consider $v_i$, with $\|v_i\| = 1$. Clearly, $v_i$ has $(M-1)$ degrees of freedom. One way to parameterize $v_i$ using $(M-1)$ angle parameters $\theta_{i,k}$, $k \in \{0, 1, \ldots, M-2\}$, is to define the components of $v_i$ as follows:

$$(v_i)_j = \begin{cases} \left[\prod_{l=0}^{j-1} \sin(\theta_{i,l})\right] \cos(\theta_{i,j}) & \text{for } j \in \{0, 1, \ldots, M-2\} \\[4pt] \prod_{l=0}^{M-2} \sin(\theta_{i,l}) & \text{for } j = M-1. \end{cases}$$
(35)
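A quick sketch of this angle parameterization: build $v_i$ from $M-1$ arbitrary angles and confirm the unit norm. The angle values are illustrative.

```python
import numpy as np

# Build a unit vector from M-1 angle parameters (Equation 35).
M = 4
theta = np.array([0.3, 1.1, -0.7])           # M-1 free angles

v = np.empty(M)
for j in range(M - 1):
    # product of the first j sines times cos(theta_j); empty product is 1
    v[j] = np.prod(np.sin(theta[:j])) * np.cos(theta[j])
v[M - 1] = np.prod(np.sin(theta))            # product of all M-1 sines

print(np.isclose(np.linalg.norm(v), 1.0))  # True: unit norm by construction
```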

As for $V_0$, being an $M \times M$ orthogonal matrix, it has $\binom{M}{2}$ degrees of freedom. There are two well-known parameterizations of constant orthogonal matrices, one based on Givens rotations (well known in QR factorization, etc. [5]) and another based on Householder reflections. In the Householder parameterization,

$$V_0 = \prod_{i=0}^{M-1} \left( I - 2 v_i v_i^T \right),$$
(36)

where the $v_i$ are unit-norm vectors with the first $i$ components of $v_i$ equal to zero. Each matrix factor $I - 2 v_i v_i^T$, when multiplied by a vector $q$, reflects $q$ about the plane perpendicular to $v_i$; hence the name Householder reflections. Since the first $i$ components of $v_i$ are zero and $\|v_i\| = 1$, $v_i$ has $M - i - 1$ degrees of freedom. Each being a unit vector, the $v_i$ can be parameterized as before using $M - i - 1$ angles. Therefore, the total number of degrees of freedom is

$$\sum_{i=0}^{M-1} (M - 1 - i) = \sum_{i=0}^{M-1} i = \binom{M}{2}.$$
(37)

In summary, any orthogonal matrix can be factored into a cascade of $M$ reflections about the planes perpendicular to the vectors $v_i$.
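The Householder parameterization can be sketched directly: assemble $V_0$ from $M$ reflections whose vectors have the prescribed zero pattern, then check orthogonality. The random choices are illustrative.

```python
import numpy as np

# Assemble an orthogonal V0 from M Householder reflections (Equation 36),
# where the i-th reflection vector has its first i components zero.
rng = np.random.default_rng(1)
M = 4

V0 = np.eye(M)
for i in range(M):
    v = np.zeros(M)
    v[i:] = rng.standard_normal(M - i)       # first i components are zero
    v /= np.linalg.norm(v)                   # unit norm
    V0 = V0 @ (np.eye(M) - 2 * np.outer(v, v))   # reflection about v's plane

print(np.allclose(V0.T @ V0, np.eye(M)))  # True: V0 is orthogonal
```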

Notice the similarity between the Householder reflection factors for $V_0$ and the factors of $H_p(z)$ in Equation 32. Based on this similarity, the factorization of unitary matrices and vectors in this section is called the Householder factorization. Analogous to the Givens factorization for constant unitary matrices, one can also obtain a factorization of the unitary matrices $H_p(z)$ and unitary vectors $V(z)$ [6]. However, from the points of view of filter bank theory and wavelet theory, the Householder factorization is simpler to understand and implement, except when $M = 2$.

Perhaps the simplest and most popular way to represent a $2 \times 2$ unitary matrix is by a rotation parameter (not by a Householder reflection parameter). Therefore, the simplest way to represent a unitary $2 \times 2$ matrix $H_p(z)$ is with a lattice parameterization using Givens rotations. Since two-channel unitary filter banks play an important role in the theory and design of unitary modulated filter banks (which we will address shortly), we present the lattice parameterization [35]. The lattice parameterization is also obtained by the order-reduction procedure we saw while deriving the Householder-type factorization in Equation 30.

Theorem 45 Every unitary $2 \times 2$ matrix $H_p(z)$ (in particular, the polyphase matrix of a two-channel FIR unitary filter bank) is of the form

$$H_p(z) = \begin{bmatrix} 1 & 0 \\ 0 & \pm 1 \end{bmatrix} R(\theta_{K-1})\, Z\, R(\theta_{K-2})\, Z \cdots Z\, R(\theta_1)\, Z\, R(\theta_0),$$
(38)

where

$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \quad \text{and} \quad Z = \begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}.$$
(39)

Equation 38 is the unitary lattice parameterization of $H_p(z)$. The filters $H_0(z)$ and $H_1(z)$ are given by

$$\begin{bmatrix}H_0(z)\\H_1(z)\end{bmatrix}=H_p(z^2)\begin{bmatrix}1\\z^{-1}\end{bmatrix}.$$
(40)

By changing the sign of the filter $h_1(n)$, if necessary, one can always write $H_p(z)$ in the form

$$H_p(z)=R(\theta_{K-1})\,Z\,R(\theta_{K-2})\,Z\cdots Z\,R(\theta_0).$$
(41)

Now, if $H_{0,j}^R(z)$ is the reflection of $H_{0,j}(z)$ (i.e., $H_{0,j}^R(z)=z^{-K+1}H_{0,j}(z^{-1})$), then (from the algebraic form of $R(\theta)$)

$$\begin{bmatrix}H_{0,0}(z)&H_{0,1}(z)\\H_{1,0}(z)&H_{1,1}(z)\end{bmatrix}=\begin{bmatrix}H_{0,0}(z)&H_{0,1}(z)\\-H_{0,1}^R(z)&H_{0,0}^R(z)\end{bmatrix}.$$
(42)

With these parameterizations, filter banks can be designed as an unconstrained optimization problem. The parameterizations described are important for another reason: it turns out that the most efficient implementation of unitary filter banks (in terms of the number of arithmetic operations) uses the Householder parameterization. With arbitrary filter banks, one can organize the computations so as to capitalize on the rate-change operations of upsampling and downsampling. For example, one need not compute values that are thrown away by downsampling. The gain from using the parameterization of unitary filter banks is over and above this obvious gain (for example, see pages 330-331 and 386-387 in [32]). Besides, with small modifications these parameterizations allow unitariness to be preserved even under filter coefficient quantization, which has implications for fixed-point implementation of these filter banks in hardware digital signal processors [32].
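To make the rate-change saving concrete, here is a small Python sketch (our own illustration, not part of the original text): the decimated analysis output is assembled directly from the polyphase components of the filter, so no sample that the downsampler would discard is ever computed.

```python
import numpy as np

def analysis_direct(h, x, M):
    # Filter, then downsample: M-1 of every M computed samples are discarded.
    return np.convolve(h, x)[::M]

def analysis_polyphase(h, x, M):
    # Compute only the retained samples y[n] = sum_k h[k] x[nM - k],
    # organized by the polyphase components h_l[j] = h[jM + l].
    y_len = (len(h) + len(x) - 2) // M + 1
    y = np.zeros(y_len)
    for l in range(M):
        hl = h[l::M]
        for n in range(y_len):
            for j in range(len(hl)):
                idx = (n - j) * M - l
                if 0 <= idx < len(x):
                    y[n] += hl[j] * x[idx]
    return y

rng = np.random.default_rng(0)
h, x = rng.standard_normal(4), rng.standard_normal(16)
assert np.allclose(analysis_direct(h, x, 2), analysis_polyphase(h, x, 2))
```

The two routines perform the same arithmetic per retained sample here; the point is that the polyphase organization never forms the discarded samples, which is where a practical implementation saves work.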

## Unitary Filter Banks—Some Illustrative Examples

A few concrete examples of $M$-band unitary filter banks and their parameterizations should clarify our discussion.

First consider the two-band filter bank associated with Daubechies' four-coefficient scaling function and wavelet that we saw in Section: Parameterization of the Scaling Coefficients. Recall that the lowpass filter (the scaling filter) is given by

| $n$ | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| $h_0(n)$ | $\frac{1+\sqrt3}{4\sqrt2}$ | $\frac{3+\sqrt3}{4\sqrt2}$ | $\frac{3-\sqrt3}{4\sqrt2}$ | $\frac{1-\sqrt3}{4\sqrt2}$ |

The highpass filter (wavelet filter) is given by $h_1(n)=(-1)^n h_0(3-n)$, and both Equation 1 and Equation 2 are satisfied with $g_i(n)=h_i(-n)$. The matrix representation of the analysis bank of this filter bank is given by

$$d=Hx=\begin{bmatrix}\frac{1-\sqrt3}{4\sqrt2}&\frac{3-\sqrt3}{4\sqrt2}&\frac{3+\sqrt3}{4\sqrt2}&\frac{1+\sqrt3}{4\sqrt2}&0&0&\cdots\\-\frac{1+\sqrt3}{4\sqrt2}&\frac{3+\sqrt3}{4\sqrt2}&-\frac{3-\sqrt3}{4\sqrt2}&\frac{1-\sqrt3}{4\sqrt2}&0&0&\cdots\\0&0&\frac{1-\sqrt3}{4\sqrt2}&\frac{3-\sqrt3}{4\sqrt2}&\frac{3+\sqrt3}{4\sqrt2}&\frac{1+\sqrt3}{4\sqrt2}&\cdots\\0&0&-\frac{1+\sqrt3}{4\sqrt2}&\frac{3+\sqrt3}{4\sqrt2}&-\frac{3-\sqrt3}{4\sqrt2}&\frac{1-\sqrt3}{4\sqrt2}&\cdots\\\vdots&&&&&&\ddots\end{bmatrix}x.$$
(43)

One readily verifies that $H^TH=I$ and $HH^T=I$. The polyphase representation of this filter bank is given by

$$H_p(z)=\frac{1}{4\sqrt2}\begin{bmatrix}(1+\sqrt3)+z^{-1}(3-\sqrt3)&(3+\sqrt3)+z^{-1}(1-\sqrt3)\\(1-\sqrt3)+z^{-1}(3+\sqrt3)&(-3+\sqrt3)-z^{-1}(1+\sqrt3)\end{bmatrix},$$
(44)

and one can show that $H_p^T(z^{-1})H_p(z)=I$ and $H_p(z)H_p^T(z^{-1})=I$. The Householder factorization of $H_p(z)$ is given by

$$H_p(z)=\left[I-v_1v_1^T+z^{-1}v_1v_1^T\right]V_0,$$
(45)

where

$$v_1=\begin{bmatrix}\sin(\pi/12)\\\cos(\pi/12)\end{bmatrix}\quad\text{and}\quad V_0=\begin{bmatrix}1/\sqrt2&1/\sqrt2\\1/\sqrt2&-1/\sqrt2\end{bmatrix}.$$
(46)
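The factorization can be checked numerically. The sketch below (our own verification code, not part of the text) matches the coefficient matrices of the polyphase representation in Equation 44 against the Householder factors of Equations 45 and 46:

```python
import numpy as np

s3, s2 = np.sqrt(3.0), np.sqrt(2.0)

# Daubechies length-4 polyphase matrix: Hp(z) = Hp0 + z^{-1} Hp1  (Equation 44)
Hp0 = np.array([[1 + s3, 3 + s3], [1 - s3, -3 + s3]]) / (4 * s2)
Hp1 = np.array([[3 - s3, 1 - s3], [3 + s3, -(1 + s3)]]) / (4 * s2)

# Householder factorization (Equations 45-46)
v1 = np.array([np.sin(np.pi / 12), np.cos(np.pi / 12)])
P = np.outer(v1, v1)                      # rank-one projection v1 v1^T
V0 = np.array([[1, 1], [1, -1]]) / s2

# (I - P + z^{-1} P) V0  =>  constant term (I - P) V0, z^{-1} term P V0
assert np.allclose((np.eye(2) - P) @ V0, Hp0)
assert np.allclose(P @ V0, Hp1)

# Unitarity on the unit circle, Hp^T(z^{-1}) Hp(z) = I, in coefficient form:
assert np.allclose(Hp0.T @ Hp0 + Hp1.T @ Hp1, np.eye(2))
assert np.allclose(Hp0.T @ Hp1, np.zeros((2, 2)))
```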

Incidentally, all two-band unitary filter banks associated with wavelet tight frames have the same value of $V_0$. Therefore, all filter banks associated with two-band wavelet tight frames are completely specified by a set of orthogonal vectors $v_i$, $K-1$ of them if $h_0$ is of length $2K$. Indeed, for the six-coefficient Daubechies wavelets (see Section: Parameterization of the Scaling Coefficients), the parameterization of $H_p(z)$ is associated with the following two unitary vectors (since $K=3$): $v_1^T=\begin{bmatrix}-0.3842&0.9232\end{bmatrix}$ and $v_2^T=\begin{bmatrix}-0.1053&-0.9944\end{bmatrix}$.

The Givens rotation based factorization of $H_p(z)$ for the four-coefficient Daubechies filters is given by

$$H_p(z)=\begin{bmatrix}\cos\theta_0&z^{-1}\sin\theta_0\\-\sin\theta_0&z^{-1}\cos\theta_0\end{bmatrix}\begin{bmatrix}\cos\theta_1&\sin\theta_1\\-\sin\theta_1&\cos\theta_1\end{bmatrix},$$
(47)

where $\theta_0=\frac{\pi}{3}$ and $\theta_1=-\frac{\pi}{12}$. The fact that the filter bank is associated with wavelets is precisely because $\theta_0+\theta_1=\frac{\pi}{4}$. More generally, for a filter bank with filters of length $2K$ to be associated with wavelets, $\sum_{k=0}^{K-1}\theta_k=\frac{\pi}{4}$. This is expected, since we have seen (from the Householder factorization) that a length-$2K$ filter bank associated with wavelets has $K-1$ parameters $v_k$. Our second example belongs to a class of unitary filter banks called modulated filter banks, described in a following section. A Type 1 modulated filter bank with filters of length $N=2M$ and associated with a wavelet orthonormal basis is defined by

$$h_i(n)=\frac{1}{\sqrt{2M}}\left[\sin\left(\frac{\pi(i+1)(n+0.5)}{M}-\frac{(2i+1)\pi}{4}\right)-\sin\left(\frac{\pi i(n+0.5)}{M}-\frac{(2i+1)\pi}{4}\right)\right],$$
(48)

where $i\in\{0,\dots,M-1\}$ and $n\in\{0,\dots,2M-1\}$ [13], [12]. Consider a three-band example with length-six filters. In this case $K=2$, and therefore one has one projection $P_1$ and the matrix $V_0$. The projection is one-dimensional and given by the Householder parameter

$$v_1^T=\frac{1}{\sqrt6}\begin{bmatrix}1&-2&1\end{bmatrix}\quad\text{and}\quad V_0=\frac{1}{\sqrt3}\begin{bmatrix}1&1&1\\-\frac{\sqrt3+1}{2}&1&\frac{\sqrt3-1}{2}\\\frac{\sqrt3-1}{2}&1&-\frac{\sqrt3+1}{2}\end{bmatrix}.$$
(49)
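The definition in Equation 48 can be exercised directly. The following sketch (our own code; the function name is an assumption) builds the three-band, length-six filters and confirms the unitarity conditions on the polyphase coefficient matrices, along with the lowpass sum $\sum_n h_0(n)=\sqrt M$:

```python
import numpy as np

def type1_mfb(M):
    # Type 1 modulated filter bank filters of length N = 2M (Equation 48).
    n = np.arange(2 * M)
    h = np.zeros((M, 2 * M))
    for i in range(M):
        ph = (2 * i + 1) * np.pi / 4
        h[i] = (np.sin(np.pi * (i + 1) * (n + 0.5) / M - ph)
                - np.sin(np.pi * i * (n + 0.5) / M - ph)) / np.sqrt(2 * M)
    return h

M = 3
h = type1_mfb(M)
Hp0, Hp1 = h[:, :M], h[:, M:]        # Hp(z) = Hp0 + z^{-1} Hp1
# Unitarity: Hp(z) Hp^T(z^{-1}) = I in coefficient form
assert np.allclose(Hp0 @ Hp0.T + Hp1 @ Hp1.T, np.eye(M))
assert np.allclose(Hp0 @ Hp1.T, np.zeros((M, M)))
# Lowpass filter sums to sqrt(M), as a wavelet tight frame requires
assert np.isclose(h[0].sum(), np.sqrt(M))
```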

The third example is another Type 1 modulated filter bank with $M=4$ and $N=8$. The filters are given in Equation 48. $H_p(z)$ has the factorization

$$H_p(z)=\left[I-P_1+z^{-1}P_1\right]V_0,$$
(50)

where $P_1$ is a two-dimensional projection $P_1=v_1v_1^T+v_2v_2^T$ (notice the arbitrary choice of $v_1$ and $v_2$) given by

$$v_1=\begin{bmatrix}0.41636433418450\\-0.78450701561376\\0.32495344564406\\0.32495344564406\end{bmatrix},\qquad v_2=\begin{bmatrix}0.00000000000000\\-0.14088210492943\\0.50902478635868\\-0.84914427477499\end{bmatrix}$$
(51)

and

$$V_0=\frac12\begin{bmatrix}1&1&1&1\\-\sqrt2&0&\sqrt2&0\\0&\sqrt2&0&-\sqrt2\\1&-1&1&-1\end{bmatrix}.$$
(52)

Notice that there are infinitely many choices of $v_1$ and $v_2$ that give rise to the same projection $P_1$.

## M-band Wavelet Tight Frames

In Section 8, Theorem 7, while discussing the properties of $M$-band wavelet systems, we saw that the lowpass filter $h_0$ ($h$ in the notation used there) must satisfy the linear constraint $\sum_n h_0(n)=\sqrt M$. Otherwise, a scaling function with nonzero integral could not exist. It turns out that this is precisely the only condition that an FIR unitary filter bank has to satisfy in order for it to generate an $M$-band wavelet system [19], [7]. Indeed, if this linear constraint is not satisfied, the filter bank does not generate a wavelet system. This single linear constraint (for unitary filter banks) also implies that $\sum_n h_i(n)=0$ for $i\in\{1,2,\dots,M-1\}$ (because of Equation 1). We now give the precise result connecting FIR unitary filter banks and wavelet tight frames.

Theorem 46 Given an FIR unitary filter bank with $\sum_n h_0(n)=\sqrt M$, there exists a unique, compactly supported scaling function $\psi_0(t)\in L^2(\mathbb R)$ (with support in $[0,\frac{N-1}{M-1}]$, assuming $h_0$ is supported in $[0,N-1]$) determined by the scaling recursion

$$\psi_0(t)=\sqrt M\sum_k h_0(k)\,\psi_0(Mt-k).$$
(53)

Define the wavelets, $\psi_i(t)$,

$$\psi_i(t)=\sqrt M\sum_k h_i(k)\,\psi_0(Mt-k),\qquad i\in\{1,2,\dots,M-1\},$$
(54)

and the functions $\psi_{i,j,k}(t)$,

$$\psi_{i,j,k}(t)=M^{j/2}\,\psi_i(M^jt-k).$$
(55)

Then $\{\psi_{i,j,k}\}$ forms a tight frame for $L^2(\mathbb R)$. That is, for all $f\in L^2(\mathbb R)$,

$$f(t)=\sum_{i=1}^{M-1}\sum_{j,k=-\infty}^{\infty}\langle f,\psi_{i,j,k}\rangle\,\psi_{i,j,k}(t).$$
(56)

Also,

$$f(t)=\sum_k\langle f,\psi_{0,0,k}\rangle\,\psi_{0,0,k}(t)+\sum_{i=1}^{M-1}\sum_{j=0}^{\infty}\sum_{k=-\infty}^{\infty}\langle f,\psi_{i,j,k}\rangle\,\psi_{i,j,k}(t).$$
(57)

Remark: A similar result relates general FIR (not necessarily unitary) filter banks and $M$-band wavelet frames (Reference), (Reference), (Reference).

Starting with Equation 53, one can calculate the scaling function using either successive approximation or interpolation on the $M$-adic rationals; i.e., exactly as in the two-band case in Chapter (Reference). Equation 54 then gives the wavelets in terms of the scaling function. As in the two-band case, the functions $\psi_i(t)$ so constructed invariably turn out to be highly irregular and sometimes fractal. The solution, once again, is to require that several moments of the scaling function (or equivalently the moments of the scaling filter $h_0$) are zero. This motivates the definition of $K$-regular $M$-band scaling filters: a unitary scaling filter $h_0$ is said to be $K$-regular if its $Z$-transform can be written in the form

$$H_0(z)=\left[\frac{1+z^{-1}+\cdots+z^{-(M-1)}}{M}\right]^K Q(z),$$
(58)

for the maximal possible $K$. By default, every unitary scaling filter $h_0$ is one-regular (because $\sum_n h_0(n)=\sqrt M$; see (Reference) for equivalent characterizations of $K$-regularity). Each of the $K$ identical factors in Equation 58 adds an extra linear constraint on $h_0$ (actually, it is one linear constraint on each of the $M$ polyphase subsequences of $h_0$; see (Reference)).

There is no simple relationship between the smoothness of the scaling function and $K$-regularity. However, the smoothness of the maximally regular scaling filter, $h_0$, with fixed filter length $N$, tends to be an increasing function of $N$. Perhaps one can argue that $K$-regularity is an important concept independent of the smoothness of the associated wavelet system. $K$-regularity implies that the moments of the wavelets vanish up to order $K-1$, and therefore, functions can be better approximated by using just the scaling function and its translates at a given scale. Formulae exist for $M$-band maximally regular $K$-regular scaling filters (i.e., only the sequence $h_0$) [29]. Using the Householder parameterization, one can then design the remaining filters in the filter bank.
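The successive-approximation computation of the scaling function mentioned above can be sketched in a few lines (our own illustration; the function name and the choice of grid normalization are assumptions). Each step refines the iterate by one application of the recursion in Equation 53, here specialized to the Daubechies length-4 filter with $M=2$:

```python
import numpy as np

def scaling_function(h0, M, J=8):
    # Iterate the scaling recursion (Equation 53) on successively finer
    # grids; after J steps the result approximates phi on the grid M**-J.
    phi = np.array([1.0])
    for j in range(J):
        up = np.zeros(M**j * (len(h0) - 1) + 1)
        up[::M**j] = np.sqrt(M) * np.asarray(h0)   # h0 upsampled by M**j
        phi = np.convolve(up, phi)
    return phi

s3 = np.sqrt(3.0)
h0 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # M = 2
phi = scaling_function(h0, M=2)
# The samples integrate to 1 on the grid of spacing 2**-8
assert np.isclose(phi.sum() * 2.0**-8, 1.0)
```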

The linear constraints on $h_0$ that constitute $K$-regularity become nonexplicit nonlinear constraints on the Householder parameterization of the associated filter bank. However, one-regularity can be explicitly incorporated, and this gives a parameterization of all $M$-band compactly supported wavelet tight frames. To see this, consider the following two factorizations of $H_p(z)$ of a unitary filter bank:

$$H_p(z)=\left[\prod_{k=K-1}^{1}\left[I-P_k+z^{-1}P_k\right]\right]V_0,$$
(59)

and

$$H_p^T(z)=\left[\prod_{k=K-1}^{1}\left[I-Q_k+z^{-1}Q_k\right]\right]W_0.$$
(60)

Since $H_p(1)=V_0$ and $H_p^T(1)=W_0$, we have $V_0=W_0^T$. The first column of $W_0$ is the unit vector $\left[H_{0,0}(1),H_{0,1}(1),\dots,H_{0,M-1}(1)\right]^T$. Therefore,

$$\sum_{k=0}^{M-1}\left|H_{0,k}(1)\right|^2=1.$$
(61)

But since $\sum_n h_0(n)=H_0(1)=\sqrt M$,

$$\sum_{k=0}^{M-1}H_{0,k}(1)=\sqrt M.$$
(62)

Therefore, for all $k$, $H_{0,k}(1)=\frac{1}{\sqrt M}$. Hence the first row of $V_0$ is $\left[1/\sqrt M,1/\sqrt M,\dots,1/\sqrt M\right]$. In other words, a unitary filter bank gives rise to a WTF iff the first row of $V_0$ in the Householder parameterization is the vector with all entries $1/\sqrt M$.
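This condition is easy to check for the Daubechies-4 example above (a sketch of our own, not from the text): the first row of its $V_0$ is $[1/\sqrt2,\,1/\sqrt2]$, and equivalently each polyphase component of $h_0$ sums to $1/\sqrt2$.

```python
import numpy as np

# V0 from the Householder factorization of the Daubechies-4 polyphase matrix
V0 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
M = 2
# WTF condition: first row of V0 equals [1/sqrt(M), ..., 1/sqrt(M)]
assert np.allclose(V0[0], np.full(M, 1 / np.sqrt(M)))

# The same condition via Hp(1): each polyphase component H_{0,k}(1) = 1/sqrt(M)
s3 = np.sqrt(3.0)
h0 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
H0k_at_1 = np.array([h0[k::M].sum() for k in range(M)])
assert np.allclose(H0k_at_1, 1 / np.sqrt(M))
```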

Alternatively, consider the Givens factorization of $H_p(z)$ for a two-channel unitary filter bank:

$$H_p(z)=\left[\prod_{i=K-1}^{1}\begin{bmatrix}\cos\theta_i&z^{-1}\sin\theta_i\\-\sin\theta_i&z^{-1}\cos\theta_i\end{bmatrix}\right]\begin{bmatrix}\cos\theta_0&\sin\theta_0\\-\sin\theta_0&\cos\theta_0\end{bmatrix}.$$
(63)

Since for a WTF we require

$$\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\\-\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}=\begin{bmatrix}H_{0,0}(1)&H_{1,0}(1)\\H_{0,1}(1)&H_{1,1}(1)\end{bmatrix}=\begin{bmatrix}\cos\Theta&\sin\Theta\\-\sin\Theta&\cos\Theta\end{bmatrix},$$
(64)

we have $\Theta=\sum_{k=0}^{K-1}\theta_k=\frac{\pi}{4}$. This is the condition for the lattice parameterization to be associated with wavelets.
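As a quick numerical check of this condition (our own sketch; the random angle values are arbitrary), one can generate lattice angles summing to $\pi/4$, multiply out the product of Equation 41, extract $h_0$ via Equation 40, and confirm that $\sum_n h_0(n)=\sqrt2$:

```python
import numpy as np

def polymat_mul(A, B):
    # Multiply two matrix polynomials in z^{-1}; shape (deg+1, 2, 2).
    C = np.zeros((A.shape[0] + B.shape[0] - 1, 2, 2))
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            C[i + j] += Ai @ Bj
    return C

def R(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

rng = np.random.default_rng(0)
thetas = rng.uniform(-1, 1, 3)
thetas[0] = np.pi / 4 - thetas[1:].sum()      # enforce sum of angles = pi/4
# Hp(z) = R(th2) Z R(th1) Z R(th0), with Z = diag(1, z^{-1})  (Equation 41)
Z = np.zeros((2, 2, 2)); Z[0, 0, 0] = 1.0; Z[1, 1, 1] = 1.0
Hp = R(thetas[-1])[None]
for t in thetas[-2::-1]:
    Hp = polymat_mul(polymat_mul(Hp, Z), R(t)[None])
# Filters via Equation 40: H0(z) = H00(z^2) + z^{-1} H01(z^2)
h0 = np.zeros(2 * Hp.shape[0])
h0[0::2] = Hp[:, 0, 0]
h0[1::2] = Hp[:, 0, 1]
assert np.isclose(h0.sum(), np.sqrt(2))       # the wavelet (WTF) condition
assert np.isclose((h0 ** 2).sum(), 1.0)       # unitarity: unit-norm filter
```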

## Modulated Filter Banks

Filter bank design typically entails optimization of the filter coefficients to maximize some goodness measure subject to the perfect reconstruction (PR) constraint. Being a constrained (or, for parameterized unitary filter bank design, unconstrained) nonlinear programming problem, numerical optimization leads to local minima, and the problem is exacerbated when there are a large number of filter coefficients. To alleviate this problem one can impose structural constraints on the filters. For example, if Figure 2 is the desired ideal response, one can impose the constraint that all analysis (synthesis) filters are obtained by modulation of a single “prototype” analysis (synthesis) filter. This is the basic idea behind modulated filter banks [21], [18], [32], [22], [13], [8], [16], [20]. In what follows, we only consider the case where the number of filters is equal to the downsampling factor; i.e., $L=M$.

The frequency responses in Figure 2 can be obtained by shifting the response of an ideal lowpass filter (supported in $[-\frac{\pi}{2M},\frac{\pi}{2M}]$) by $(i+\frac12)\frac{\pi}{M}$, $i\in\{0,\dots,M-1\}$. This can be achieved by modulating with a cosine (or sine) of appropriate frequency and arbitrary phase. However, some choices of phase may be incompatible with perfect reconstruction. A general choice of phase (and hence modulation) that covers all modulated filter banks of this type is given by the following definition of the analysis and synthesis filters:

$$h_i(n)=h(n)\cos\left(\frac{\pi}{M}\left(i+\frac12\right)\left(n-\frac{\alpha}{2}\right)\right),\qquad i\in\mathcal R(M)$$
(65)
$$g_i(n)=g(n)\cos\left(\frac{\pi}{M}\left(i+\frac12\right)\left(n+\frac{\alpha}{2}\right)\right),\qquad i\in\mathcal R(M)$$
(66)

Here $\alpha$ is an integer parameter called the modulation phase. One can now substitute these forms for the filters into Equation 1 to get explicit PR constraints on the prototype filters $h$ and $g$. This is a straightforward algebraic exercise, since the summation over $i$ in Equation 1 is a trigonometric sum that is easily computed. It turns out that the PR conditions depend only on the parity of the modulation phase $\alpha$. Hence, without loss of generality, we choose $\alpha\in\{M-1,M-2\}$; other choices can be incorporated as a preshift into the prototype filters $h$ and $g$.
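Equations 65 and 66 translate directly into code. In this sketch (the function name and the choice $M=4$ are our assumptions) we modulate the Type 1 prototype that appears later in Equation 77 and check that the resulting analysis filters have unit norm, as a unitary bank requires:

```python
import numpy as np

def modulate(prototype, M, alpha, synthesis=False):
    # Analysis filters, Equation 65; synthesis (Equation 66) flips the
    # sign of the alpha/2 shift.
    n = np.arange(len(prototype))
    shift = alpha / 2 if synthesis else -alpha / 2
    return np.array([prototype * np.cos(np.pi / M * (i + 0.5) * (n + shift))
                     for i in range(M)])

M = 4
n = np.arange(2 * M)
h = np.sqrt(2.0 / M) * np.sin(np.pi * (2 * n + 1) / (4 * M))   # Equation 77
hi = modulate(h, M, alpha=M - 1)                               # Type 1
assert hi.shape == (M, 2 * M)
assert np.allclose((hi ** 2).sum(axis=1), 1.0)                 # unit-norm rows
```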

Thus there are two types of MFBs depending on the choice of modulation phase:

$$\alpha=\begin{cases}M-1&\text{Type 1 Filter Bank}\\M-2&\text{Type 2 Filter Bank}\end{cases}$$
(67)

The PR constraints on $h$ and $g$ are quite messy to write down without more notational machinery, but the basic nature of the constraints can be easily understood pictorially. Let the $M$ polyphase components of $h$ and $g$ respectively be partitioned into pairs as suggested in Item 4. Each polyphase pair from $h$ and the associated polyphase pair from $g$ (i.e., those four sequences) satisfy the PR conditions for a two-channel filter bank. In other words, these subsequences could be used as analysis and synthesis filters, respectively, in a two-channel PR filter bank. As seen in Item 4, some polyphase components are not paired. The constraints that PR imposes on these sequences will be explicitly described soon. Meanwhile, notice that the PR constraints on the coefficients decouple into roughly $M/2$ independent sets of constraints (since there are roughly $M/2$ PR pairs in Item 4). To quantify this, define $J$:

$$J=\begin{cases}\frac{M}{2}&\text{Type 1, }M\text{ even}\\\frac{M-1}{2}&\text{Type 1, }M\text{ odd}\\\frac{M-2}{2}&\text{Type 2, }M\text{ even}\\\frac{M-1}{2}&\text{Type 2, }M\text{ odd.}\end{cases}$$
(68)

In other words, the MFB PR constraint decomposes into a set of $J$ two-channel PR constraints and a few additional conditions on the unpaired polyphase components of $h$ and $g$.

We first define the $M$ polyphase components of the analysis and synthesis prototype filters, viz., $P_l(z)$ and $Q_l(z)$ respectively. We split these sequences further into their even and odd components, giving $P_{l,0}(z)$, $P_{l,1}(z)$, $Q_{l,0}(z)$ and $Q_{l,1}(z)$ respectively. More precisely, let

$$H(z)=\sum_{l=0}^{M-1}z^{-l}P_l(z^M)=\sum_{l=0}^{M-1}z^{-l}\left(P_{l,0}(z^{2M})+z^{-M}P_{l,1}(z^{2M})\right),$$
(69)
$$G(z)=\sum_{l=0}^{M-1}z^{l}Q_l(z^M)=\sum_{l=0}^{M-1}z^{l}\left(Q_{l,0}(z^{2M})+z^{M}Q_{l,1}(z^{2M})\right),$$
(70)

and let

$$\mathbf P_l(z)=\begin{bmatrix}P_{l,0}(z)&P_{l,1}(z)\\P_{\alpha-l,0}(z)&-P_{\alpha-l,1}(z)\end{bmatrix}$$
(71)

with $\mathbf Q_l(z)$ defined similarly. Let $I$ denote the $2\times 2$ identity matrix.

Theorem 47 (Modulated Filter Banks PR Theorem) A modulated filter bank (Type 1 or Type 2, as defined in Equation 65 and Equation 66) is PR iff for $l\in\mathcal R(J)$

$$\mathbf P_l(z)\,\mathbf Q_l^T(z)=\frac{2}{M}I,$$
(72)

and, furthermore, if $\alpha$ is even, $P_{\alpha/2,0}(z)Q_{\alpha/2,0}(z)=\frac1M$. In the Type 2 case, we further require $P_{M-1}(z)Q_{M-1}(z)=\frac2M$.

The result says that $P_l$, $P_{\alpha-l}$, $Q_l$ and $Q_{\alpha-l}$ form the analysis and synthesis filters of a two-channel PR filter bank (Equation 1 in the Z-transform domain).

Modulated filter bank design involves choosing $h$ and $g$ to optimize some goodness criterion subject to the constraints in the theorem above.

### Unitary Modulated Filter Bank

In a unitary filter bank, the filters satisfy $g_i(n)=h_i(-n)$. From Equation 65 and Equation 66, it is clear that in a modulated filter bank, if $g(n)=h(-n)$, then $g_i(n)=h_i(-n)$. Imposing this restriction (that the analysis and synthesis prototype filters are reflections of each other) gives the PR conditions for unitary modulated filter banks. That $g(n)=h(-n)$ means that $P_l(z)=Q_l(z^{-1})$ and therefore $\mathbf Q_l(z)=\mathbf P_l(z^{-1})$. Indeed, for PR, we require

$$\mathbf P_l(z)\,\mathbf P_l^T(z^{-1})=\frac{2}{M}I.$$
(73)

This condition is equivalent to requiring that $P_l$ and $P_{\alpha-l}$ are the analysis filters of a two-channel unitary filter bank. Equivalently, for $l\in\mathcal R(M)$, $P_{l,0}$ and $P_{l,1}$ are power-complementary.

Corollary 6 (Unitary MFB PR Theorem) A modulated filter bank (Type 1 or Type 2) is unitary iff for $l\in\mathcal R(J)$, $P_{l,0}(z)$ and $P_{l,1}(z)$ are power complementary:

$$P_{l,0}(z)P_{l,0}(z^{-1})+P_{l,1}(z)P_{l,1}(z^{-1})=\frac{2}{M},\qquad l\in\mathcal R(M).$$
(74)

Furthermore, when $\alpha$ is even, $P_{\alpha/2,0}(z)P_{\alpha/2,0}(z^{-1})=\frac1M$ (i.e., $P_{\alpha/2,0}(z)$ has to be $\frac{1}{\sqrt M}z^k$ for some integer $k$). In the Type 2 case, we further require $P_{M-1}(z)P_{M-1}(z^{-1})=\frac2M$ (i.e., $P_{M-1}(z)$ has to be $\sqrt{\frac2M}\,z^k$ for some integer $k$).
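For a length-$2M$ prototype, each polyphase component is $P_l(z)=h(l)+z^{-1}h(l+M)$, so its even and odd parts are the constants $h(l)$ and $h(l+M)$ and power complementarity reduces to $h(l)^2+h(l+M)^2=\frac2M$. A sketch checking this for the Type 1 prototype of Equation 77 (our own verification, with $M=4$ chosen arbitrarily):

```python
import numpy as np

M = 4
n = np.arange(2 * M)
h = np.sqrt(2.0 / M) * np.sin(np.pi * (2 * n + 1) / (4 * M))  # Equation 77

# P_l(z) = h[l] + z^{-1} h[l+M], so P_{l,0} = h[l], P_{l,1} = h[l+M]
for l in range(M):
    # power complementarity (Equation 74): sum of squares equals 2/M
    assert np.isclose(h[l] ** 2 + h[l + M] ** 2, 2.0 / M)
```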

Unitary modulated filter bank design entails the choice of $h$, the analysis prototype filter. There are $J$ associated two-channel unitary filter banks, each of which can be parameterized using the lattice parameterization. Besides, depending on whether the filter bank is Type 2 and/or $\alpha$ is even, one has to choose the locations of the delays.

For the prototype filter of a unitary MFB to be linear phase, it is necessary that

$$P_{\alpha-l}(z)=z^{-2k+1}P_l(z^{-1}),$$
(75)

for some integer $k$. In this case, the prototype filter (if FIR) is of length $2Mk$ and symmetric about $(Mk-\frac12)$ in the Type 1 case, and of length $2Mk-1$ and symmetric about $(Mk-1)$ in the Type 2 case (for both Class A and Class B MFBs). In the FIR case, one can obtain linear-phase prototype filters by using the lattice parameterization [35] of two-channel unitary filter banks. Filter banks with FIR linear-phase prototype filters will be said to be canonical. In this case, $P_l(z)$ is typically a filter of length $2k$ for all $l$. For canonical modulated filter banks, one has to check power complementarity only for $l\in\mathcal R(J)$.

## Modulated Wavelet Tight Frames

For all $M$, there exist $M$-band modulated WTFs. The single linear constraint on $h_0$ becomes a set of $J$ linear constraints, one on each of the $J$ two-channel unitary lattices associated with the MFB.

Theorem 48 (Modulated Wavelet Tight Frames Theorem) Every compactly supported modulated WTF is associated with an FIR unitary MFB and is parameterized by $J$ unitary lattices such that the sums of the angles in the lattices satisfy (for $l\in\mathcal R(J)$) Equation 76:

$$\sum_k\theta_{l,k}\overset{\mathrm{def}}{=}\Theta_l=\frac{\pi}{4}+\frac{\pi}{2M}\left(\frac{\alpha}{2}-l\right).$$
(76)

If a canonical MFB has $Jk$ parameters, the corresponding WTF has $J(k-1)$ parameters.

Notice that even though the PR conditions for MFBs depend on whether the bank is Type 1 or Type 2, the MWTF conditions are identical. Now consider a Type 1 or Type 2 MFB with one angle parameter per lattice; i.e., $N=2M$ (Type 1) or $N=2M-1$ (Type 2). This angle parameter is specified by the MWTF theorem above if we want associated wavelets. This choice of angle parameters leads to a particularly simple form for the prototype filter.

In the Type 1 case [13], [8],

$$h(n)=\sqrt{\frac{2}{M}}\,\sin\left(\frac{\pi}{4M}(2n+1)\right),$$
(77)

and therefore

$$h_i(n)=\frac{1}{\sqrt{2M}}\left[\sin\left(\frac{\pi(i+1)(n+0.5)}{M}-\frac{(2i+1)\pi}{4}\right)-\sin\left(\frac{\pi i(n+0.5)}{M}-\frac{(2i+1)\pi}{4}\right)\right].$$
(78)
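The step from Equation 77 to Equation 78 is just the product-to-sum identity applied to the cosine modulation of Equation 65 with $\alpha=M-1$. A small sketch confirming the two forms agree numerically (our own check; $M=5$ is an arbitrary choice):

```python
import numpy as np

M, alpha = 5, 5 - 1
n = np.arange(2 * M)
h = np.sqrt(2.0 / M) * np.sin(np.pi * (2 * n + 1) / (4 * M))   # Equation 77
for i in range(M):
    ph = (2 * i + 1) * np.pi / 4
    # Equation 78
    hi = (np.sin(np.pi * (i + 1) * (n + 0.5) / M - ph)
          - np.sin(np.pi * i * (n + 0.5) / M - ph)) / np.sqrt(2 * M)
    # Equation 65 with the Type 1 phase alpha = M - 1
    hi_mod = h * np.cos(np.pi / M * (i + 0.5) * (n - alpha / 2))
    assert np.allclose(hi, hi_mod)
```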

In the Type 2 case [13],

$$h(n)=\sqrt{\frac{2}{M}}\,\sin\left(\frac{\pi}{2M}(n+1)\right),$$
(79)

and hence

$$h_i(n)=\frac{1}{\sqrt{2M}}\left[\sin\left(\frac{\pi(i+1)(n+1)}{M}-\frac{(2i+1)\pi}{4}\right)-\sin\left(\frac{\pi i(n+1)}{M}-\frac{(2i+1)\pi}{4}\right)\right].$$
(80)

## Linear Phase Filter Banks

In some applications, it is desirable to have filter banks with linear-phase filters [30]. The linear-phase constraint (like the modulation constraint studied earlier) reduces the number of free parameters in the design of a filter bank. Unitary linear-phase filter banks have been studied recently [30], [10]. In this section we develop algebraic characterizations of certain types of linear-phase filter banks that can be used as a starting point for designing such filter banks.

In this section, we assume that the desired frequency responses are as in Equation 2. For simplicity, we also assume that the number of channels, $M$, is an even integer and that the filters are FIR. It should be possible to extend the results that follow to the case of odd $M$ in a straightforward manner.

Consider an $M$-channel FIR filter bank with filters whose passbands approximate ideal filters. Several transformations relate the $M$ ideal filter responses. We have already seen one example, where all the ideal filters are obtained by modulation of a prototype filter. We now look at other types of transformations that relate the filters. Specifically, the ideal frequency response of $h_{M-1-i}$ can be obtained by shifting the response of $h_i$ by $\pi$. This corresponds either to the restriction that

$$h_{M-1-i}(n)=(-1)^n h_i(n);\qquad H_{M-1-i}(z)=H_i(-z);\qquad H_{M-1-i}(\omega)=H_i(\omega+\pi),$$
(81)

or to the restriction that

$$h_{M-1-i}(n)=(-1)^n h_i(N-1-n);\qquad H_{M-1-i}(z)=H_i^R(-z);\qquad H_{M-1-i}(\omega)=H_i(\omega+\pi),$$
(82)

where $N$ is the filter length and, for a polynomial $H(z)$, $H^R(z)$ denotes its reflection polynomial (i.e., the polynomial with coefficients in reversed order). The former will be called pairwise-shift (or PS) symmetry (it is also known as pairwise-mirror image symmetry [23]), while the latter will be called pairwise-conjugated-shift (or PCS) symmetry (also known as pairwise-symmetry [23]). Both of these symmetries relate pairs of filters in the filter bank. Another type of symmetry occurs when the filters themselves are symmetric or linear-phase. The only type of linear-phase symmetry we will consider is of the form

$$h_i(n)=\pm h_i(N-1-n);\qquad H_i(z)=\pm H_i^R(z),$$
(83)

where the filters are all of fixed length $N$, and the symmetry is about $\frac{N-1}{2}$. For an $M$-channel linear-phase filter bank (with $M$ an even integer), $M/2$ filters each are even-symmetric and odd-symmetric respectively (Reference).
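The time-domain and $z$-domain statements of the PS and PCS symmetries can be checked against each other numerically. A sketch of our own (the filter is random, and the evaluation point on the unit circle is arbitrary):

```python
import numpy as np

def Hz(h, z):
    # H(z) = sum_n h(n) z^{-n}
    return sum(c * z ** (-k) for k, c in enumerate(h))

rng = np.random.default_rng(1)
N = 8
hi = rng.standard_normal(N)
n = np.arange(N)
z = np.exp(1j * 0.7)          # arbitrary test point on the unit circle

# PS symmetry (Equation 81): (-1)^n h_i(n)  <=>  H_i(-z)
ps = (-1.0) ** n * hi
assert np.allclose(Hz(ps, z), Hz(hi, -z))

# PCS symmetry (Equation 82): (-1)^n h_i(N-1-n)  <=>  H_i^R(-z),
# where H^R(z) = z^{-(N-1)} H(z^{-1}) is the reflection polynomial
pcs = (-1.0) ** n * hi[::-1]
assert np.allclose(Hz(pcs, z), (-z) ** (-(N - 1)) * Hz(hi, -1 / z))
```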

We now look at the structural restrictions on $H_p(z)$, the polyphase component matrix of the analysis bank, that these three types of symmetries impose. Let $J$ denote the exchange matrix with ones on the antidiagonal. Postmultiplying a matrix $A$ by $J$ reverses the order of the columns of $A$, and premultiplying by $J$ reverses the order of the rows of $A$. Let $V$ denote the sign-alternating matrix, the diagonal matrix of alternating $\pm1$'s. Postmultiplying by $V$ alternates the signs of the columns of $A$, while premultiplying alternates the signs of the rows of $A$. The polyphase components of $H(z)$ are related to the polyphase components of $H^R(z)$ by reflection and reversal of the ordering of the components. Indeed, if $H(z)$ is of length $Mm$ and $H(z)=\sum_{l=0}^{M-1}z^{-l}H_l(z^M)$, then

$$H^R(z)=z^{-Mm+1}\sum_{l=0}^{M-1}z^{l}H_l(z^{-M})=\sum_{l=0}^{M-1}z^{-(M-1-l)}\left(z^{-mM+M}H_l(z^{-M})\right)=\sum_{l=0}^{M-1}z^{-l}H^R_{M-1-l}(z^M).$$
(84)

Therefore

$$(H^R)_l(z)=(H_{M-1-l})^R(z)$$
(85)

and for linear-phase $H(z)$, since $H^R(z)=\pm H(z)$,

$$H_l(z)=\pm(H^R)_{M-1-l}(z).$$
(86)

Lemma 2 For even $M$, $H_p(z)$ is of the form:

• PS Symmetry:
$$H_p(z)=\begin{bmatrix}W_0(z)&W_1(z)\\JW_0(z)V&(-1)^{M/2}JW_1(z)V\end{bmatrix}=\begin{bmatrix}I&0\\0&J\end{bmatrix}\begin{bmatrix}W_0(z)&W_1(z)\\W_0(z)V&(-1)^{M/2}W_1(z)V\end{bmatrix}$$
(87)
• PCS Symmetry:
$$H_p(z)=\begin{bmatrix}W_0(z)&W_1(z)J\\JW_1^R(z)V&(-1)^{M/2}JW_0^R(z)JV\end{bmatrix}=\begin{bmatrix}I&0\\0&J\end{bmatrix}\begin{bmatrix}W_0(z)&W_1(z)J\\W_1^R(z)V&(-1)^{M/2}W_0^R(z)JV\end{bmatrix}$$
(88)
• Linear Phase:
$$H_p(z)=\begin{bmatrix}W_0(z)&D_0W_0^R(z)J\\W_1(z)&D_1W_1^R(z)J\end{bmatrix}=\begin{bmatrix}W_0(z)&D_0W_0^R(z)\\W_1(z)&D_1W_1^R(z)\end{bmatrix}\begin{bmatrix}I&0\\0&J\end{bmatrix}$$
(89)
or
$$H_p(z)=Q\begin{bmatrix}W_0(z)&W_0^R(z)J\\W_1(z)&-W_1^R(z)J\end{bmatrix}=Q\begin{bmatrix}W_0(z)&W_0^R(z)\\W_1(z)&-W_1^R(z)\end{bmatrix}\begin{bmatrix}I&0\\0&J\end{bmatrix}$$
(90)
• Linear Phase and PCS:
$$H_p(z)=\begin{bmatrix}W_0(z)&DW_0^R(z)J\\JDW_0(z)V&(-1)^{M/2}JW_0^R(z)JV\end{bmatrix}=\begin{bmatrix}I&0\\0&J\end{bmatrix}\begin{bmatrix}W_0(z)&DW_0^R(z)J\\DW_0(z)V&(-1)^{M/2}W_0^R(z)JV\end{bmatrix}$$
(91)
• Linear Phase and PS:
$$H_p(z)=\begin{bmatrix}W_0(z)&DW_0^R(z)J\\JW_0(z)V&(-1)^{M/2}JDW_0^R(z)JV\end{bmatrix}=\begin{bmatrix}I&0\\0&JD\end{bmatrix}\begin{bmatrix}W_0(z)&DW_0^R(z)J\\DW_0(z)V&(-1)^{M/2}W_0^R(z)JV\end{bmatrix}$$
(92)

Thus, in order to generate $H_p(z)$ for all symmetries other than PS, we need a mechanism that generates a pair of matrices and their reflections (i.e., $W_0(z)$, $W_1(z)$, $W_0^R(z)$ and $W_1^R(z)$). In the scalar case, there are two well-known lattice structures that generate such pairs. The first is the orthogonal lattice [35], while the second is the linear-prediction lattice [26]. A $K$th-order orthogonal lattice is generated by the product

$$\left[\prod_{i=K}^{1}\begin{bmatrix}a_i&z^{-1}b_i\\-b_i&z^{-1}a_i\end{bmatrix}\right]\begin{bmatrix}a_0&b_0\\-b_0&a_0\end{bmatrix}\overset{\mathrm{def}}{=}\begin{bmatrix}Y_0(z)&Y_1(z)\\-Y_1^R(z)&Y_0^R(z)\end{bmatrix}\overset{\mathrm{def}}{=}X(z).$$
(93)

This lattice is always invertible (unless $a_i$ and $b_i$ are both zero!), and the inverse is anticausal, since

$$\begin{bmatrix}a_i&z^{-1}b_i\\-b_i&z^{-1}a_i\end{bmatrix}^{-1}=\frac{1}{a_i^2+b_i^2}\begin{bmatrix}a_i&-b_i\\zb_i&za_i\end{bmatrix}.$$
(94)
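A one-section numerical check of this inverse (our own sketch; the values of $a_i$, $b_i$ and the unit-circle test point are arbitrary):

```python
import numpy as np

# One section of the orthogonal lattice and its anticausal inverse
# (Equation 94), evaluated at a point on the unit circle.
a, b = 0.8, 0.6
z = np.exp(1j * 1.3)
sec = np.array([[a, b / z], [-b, a / z]])
inv = np.array([[a, -b], [b * z, a * z]]) / (a ** 2 + b ** 2)
assert np.allclose(sec @ inv, np.eye(2))
assert np.allclose(inv @ sec, np.eye(2))
```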

As we have seen, this lattice plays a fundamental role in the theory of two-channel FIR unitary modulated filter banks. The hyperbolic lattice of order $K$ generates the product

$$\left[\prod_{i=K}^{1}\begin{bmatrix}a_i&z^{-1}b_i\\b_i&z^{-1}a_i\end{bmatrix}\right]\begin{bmatrix}a_0&b_0\\b_0&a_0\end{bmatrix}\overset{\mathrm{def}}{=}\begin{bmatrix}Y_0(z)&Y_1(z)\\Y_1^R(z)&Y_0^R(z)\end{bmatrix}\overset{\mathrm{def}}{=}X(z),$$
(95)

where $Y_0(z)$ and $Y_1(z)$ are of order $K$. This lattice is invertible only when $a_i^2\neq b_i^2$ (or, equivalently, $(a_i+b_i)/2$ and $(a_i-b_i)/2$ are nonzero), in which case the inverse is noncausal, since

$$\begin{bmatrix}a_i&z^{-1}b_i\\b_i&z^{-1}a_i\end{bmatrix}^{-1}=\frac{1}{a_i^2-b_i^2}\begin{bmatrix}a_i&-b_i\\-zb_i&za_i\end{bmatrix}.$$
(96)

Since the matrix $\begin{bmatrix}a_i&b_i\\b_i&a_i\end{bmatrix}$ can be orthogonal iff $(a_i,b_i)=(\pm1,0)$ or $(a_i,b_i)=(0,\pm1)$, the $2\times2$ matrix generated by the hyperbolic lattice can never be unitary.

Formally, it is clear that if we replace the scalars $a_i$ and $b_i$ with square matrices of size $M/2\times M/2$, then we can generate matrix versions of these two lattices, which can then be used to generate filter banks with the symmetries we have considered. We will shortly see that both lattices can generate unitary matrices, and this will lead to a parameterization of FIR unitary $H_p(z)$ for PCS, linear-phase, and PCS-plus-linear-phase symmetries. We prefer to call the generalization of the orthogonal lattice the antisymmetric lattice, and the generalization of the hyperbolic lattice the symmetric lattice, which should be obvious from the form of the product. The reason for this is that the antisymmetric lattice may not generate a unitary matrix transfer function (in the scalar case, the $2\times2$ transfer function generated is always unitary). The antisymmetric lattice is defined by the product

$$X(z)\overset{\mathrm{def}}{=}\left[\prod_{i=K}^{1}\begin{bmatrix}A_i&z^{-1}B_i\\-B_i&z^{-1}A_i\end{bmatrix}\right]\begin{bmatrix}A_0&B_0\\-B_0&A_0\end{bmatrix},$$
(97)

where $A_i$ and $B_i$ are constant square matrices of size $M/2\times M/2$. It is readily verified that $X(z)$ is of the form

$$X(z)=\begin{bmatrix}Y_0(z)&Y_1(z)\\-Y_1^R(z)&Y_0^R(z)\end{bmatrix}.$$
(98)

Given $X(z)$, its invertibility is equivalent to the invertibility of the constant matrices

$$\begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}, \quad \text{since} \quad \begin{bmatrix} A_i & z^{-1} B_i \\ -B_i & z^{-1} A_i \end{bmatrix} = \begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & z^{-1} I \end{bmatrix},$$
(99)

which, in turn, is related to the invertibility of the complex matrices $C_i = (A_i + \imath B_i)$ and $D_i = (A_i - \imath B_i)$, since

$$\frac{1}{2} \begin{bmatrix} I & I \\ \imath I & -\imath I \end{bmatrix} \begin{bmatrix} C_i & 0 \\ 0 & D_i \end{bmatrix} \begin{bmatrix} I & -\imath I \\ I & \imath I \end{bmatrix} = \begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}.$$
(100)

Moreover, the orthogonality of the matrix $\begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}$ is equivalent to the unitariness of the complex matrix $C_i$ (since $D_i$ is just its complex conjugate). Since an arbitrary complex matrix of size $M/2 \times M/2$ is determined by precisely $2(M/2)^2$ real parameters, each of the matrices $\begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}$ has that many degrees of freedom. Clearly, when these matrices are orthogonal, $X(z)$ is unitary (on the unit circle) and $X^T(z^{-1}) X(z) = I$. For unitary $X(z)$ the converse is also true, as will shortly be proved.
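Both the factorization in Equation 100 and the orthogonality correspondence can be checked numerically. The sketch below is an illustration only (the size $n = M/2 = 3$ and the random seed are arbitrary choices): it verifies the identity for random real $A_i$, $B_i$, and confirms that a unitary $C_i$ yields an orthogonal block matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # n = M/2 (arbitrary illustrative size)
I, Z = np.eye(n), np.zeros((n, n))

# Random real A, B and the associated complex matrices C, D of Eq. (100).
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C, D = A + 1j * B, A - 1j * B

left = np.block([[I, I], [1j * I, -1j * I]])
right = np.block([[I, -1j * I], [I, 1j * I]])
middle = np.block([[C, Z], [Z, D]])
X = np.block([[A, B], [-B, A]])
assert np.allclose(0.5 * left @ middle @ right, X)   # Eq. (100)

# If C is unitary, then [[A, B], [-B, A]] is orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Xo = np.block([[Q.real, Q.imag], [-Q.imag, Q.real]])
assert np.allclose(Xo.T @ Xo, np.eye(2 * n))
print("Eq. (100) and the orthogonality correspondence verified")
```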

The symmetric lattice is defined by the product

$$X(z) \stackrel{\rm def}{=} \prod_{i=1}^{K} \begin{bmatrix} A_i & z^{-1} B_i \\ B_i & z^{-1} A_i \end{bmatrix} \begin{bmatrix} A_0 & B_0 \\ B_0 & A_0 \end{bmatrix}$$
(101)

Once again, $A_i$ and $B_i$ are constant square matrices, and it is readily verified that $X(z)$, written as the product above, is of the form

$$X(z) = \begin{bmatrix} Y_0(z) & Y_1(z) \\ Y_1^R(z) & Y_0^R(z) \end{bmatrix}$$
(102)

The invertibility of $X(z)$ is equivalent to the invertibility of

$$\begin{bmatrix} A_i & B_i \\ B_i & A_i \end{bmatrix}, \quad \text{since} \quad \begin{bmatrix} A_i & z^{-1} B_i \\ B_i & z^{-1} A_i \end{bmatrix} = \begin{bmatrix} A_i & B_i \\ B_i & A_i \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & z^{-1} I \end{bmatrix},$$
(103)

which, in turn, is equivalent to the invertibility of $C_i = (A_i + B_i)$ and $D_i = (A_i - B_i)$, since

$$\frac{1}{2} \begin{bmatrix} I & I \\ I & -I \end{bmatrix} \begin{bmatrix} C_i & 0 \\ 0 & D_i \end{bmatrix} \begin{bmatrix} I & I \\ I & -I \end{bmatrix} = \begin{bmatrix} A_i & B_i \\ B_i & A_i \end{bmatrix}.$$
(104)

The orthogonality of the constant matrix $\begin{bmatrix} A_i & B_i \\ B_i & A_i \end{bmatrix}$ is equivalent to the orthogonality of the real matrices $C_i$ and $D_i$; since each real orthogonal matrix of size $M/2 \times M/2$ is determined by $\binom{M/2}{2}$ parameters, the constant orthogonal matrices have $2\binom{M/2}{2}$ degrees of freedom. Clearly, when the matrices are orthogonal, $X^T(z^{-1}) X(z) = I$. For the symmetric lattice, too, the converse is true.
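To illustrate this parameterization, the sketch below (sizes, order, and seed are arbitrary illustrative choices) assembles a symmetric lattice from orthogonal $C_i$ and $D_i$ via $A_i = (C_i + D_i)/2$, $B_i = (C_i - D_i)/2$, and checks that the resulting $X(z)$ is unitary on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 2, 3                          # n = M/2 and lattice order (arbitrary)

def random_orthogonal(n):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

# Constant matrices (A_i, B_i) built from orthogonal C_i, D_i (cf. Eq. (104)).
pairs = []
for _ in range(K + 1):
    Ci, Di = random_orthogonal(n), random_orthogonal(n)
    pairs.append(((Ci + Di) / 2, (Ci - Di) / 2))

z = np.exp(1j * 1.3)                 # evaluation point on the unit circle
X = np.eye(2 * n, dtype=complex)
for A, B in pairs[1:]:
    X = X @ np.block([[A, B / z], [B, A / z]])     # one lattice factor at z
A0, B0 = pairs[0]
X = X @ np.block([[A0, B0], [B0, A0]])

# Since the coefficients are real, X^T(z^{-1}) X(z) = X(z)^H X(z) = I on |z| = 1.
assert np.allclose(X.conj().T @ X, np.eye(2 * n))
print("symmetric lattice with orthogonal building blocks is paraunitary")
```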

We now give a theorem that leads to a parameterization of unitary filter banks with the symmetries we have considered (for a proof, see [10]).

Theorem 49 Let $X(z)$ be a unitary $M \times M$ polynomial matrix of degree $K$. Depending on whether $X(z)$ is of the form in Equation 98 or Equation 102, it is generated by an order-$K$ antisymmetric or symmetric lattice, respectively.

### Characterization of Unitary $H_p(z)$ — PS Symmetry

The form of $H_p(z)$ for PS symmetry in Equation 87 can be simplified by a permutation. Let $P$ be the permutation matrix that exchanges the first column with the last column, the third column with the third from the last, etc. That is,

$$P = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 0 & 1 \\ 0 & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 0 & 0 & \cdots & 1 & 0 & 0 \\ \vdots & & & & & & \vdots \\ 0 & 0 & 1 & \cdots & 0 & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 1 & 0 \\ 1 & 0 & 0 & \cdots & 0 & 0 & 0 \end{bmatrix}.$$
(105)
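A small helper can make the structure of $P$ concrete. The construction below is one plausible reading of Equation 105 (exchange column $j$ with column $M+1-j$ for each odd $j \leq M/2$, leaving the other columns fixed); the function name and the choice $M = 6$ are illustrative assumptions, not from the text:

```python
import numpy as np

def ps_permutation(M):
    # One plausible reading of Eq. (105): swap column j with column
    # M+1-j for each odd j <= M/2 (1-based); other columns unchanged.
    perm = list(range(1, M + 1))
    for j in range(1, M // 2 + 1, 2):
        perm[j - 1], perm[M - j] = perm[M - j], perm[j - 1]
    P = np.zeros((M, M), dtype=int)
    for row, col in enumerate(perm):
        P[row, col - 1] = 1
    return P

P = ps_permutation(6)
assert (P @ P == np.eye(6, dtype=int)).all()   # P is an involution
print(P)
```

Being a product of disjoint transpositions, this $P$ is its own inverse, which the assertion checks.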

Then the matrix $\begin{bmatrix} W_0(z) & W_1(z) \\ W_0(z) V & (-1)^{M/2} W_1(z) V \end{bmatrix}$ in Equation 87 can be rewritten as $\frac{1}{\sqrt{2}} \begin{bmatrix} W_0'(z) & W_1'(z) \\ -W_0'(z) & W_1'(z) \end{bmatrix} P$, and therefore

$$H_p(z) = \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} W_0(z) & W_1(z) \\ W_0(z) V & (-1)^{M/2} W_1(z) V \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} W_0'(z) & W_1'(z) \\ -W_0'(z) & W_1'(z) \end{bmatrix} P = \frac{1}{\sqrt{2}} \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} I & I \\ -I & I \end{bmatrix} \begin{bmatrix} W_0'(z) & 0 \\ 0 & W_1'(z) \end{bmatrix} P = \frac{1}{\sqrt{2}} \begin{bmatrix} I & I \\ -J & J \end{bmatrix} \begin{bmatrix} W_0'(z) & 0 \\ 0 & W_1'(z) \end{bmatrix} P.$$
(106)

For PS symmetry, one has the following parameterization of unitary filter banks.

Theorem 50 (Unitary PS Symmetry) $H_p(z)$ of order $K$ forms a unitary PR filter bank with PS symmetry iff there exist unitary, order-$K$, $M/2 \times M/2$ matrices $W_0'(z)$ and $W_1'(z)$ such that

$$H_p(z) = \frac{1}{\sqrt{2}} \begin{bmatrix} I & I \\ -J & J \end{bmatrix} \begin{bmatrix} W_0'(z) & 0 \\ 0 & W_1'(z) \end{bmatrix} P.$$
(107)

A unitary $H_p$ with PS symmetry is determined by precisely $2(M/2-1)(L_0+L_1) + 2\binom{M/2}{2}$ parameters, where $L_0 \geq K$ and $L_1 \geq K$ are the McMillan degrees of $W_0'(z)$ and $W_1'(z)$, respectively.
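As a minimal check of Theorem 50, the sketch below builds $H_p$ from Equation 107 with order-zero (constant orthogonal) $W_0'$ and $W_1'$ and, for simplicity, $P = I$, then verifies that $H_p$ is orthogonal; all sizes and seeds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3                                      # n = M/2, so M = 6 (illustrative)
Z = np.zeros((n, n))

def random_orthogonal(n):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

W0, W1 = random_orthogonal(n), random_orthogonal(n)  # order-0 W0'(z), W1'(z)
I, J = np.eye(n), np.fliplr(np.eye(n))
P = np.eye(2 * n)                          # identity permutation, for simplicity

# Hp from Eq. (107); the (1/sqrt(2)) [[I, I], [-J, J]] butterfly is orthogonal.
Hp = (np.block([[I, I], [-J, J]]) / np.sqrt(2)) @ np.block([[W0, Z], [Z, W1]]) @ P

assert np.allclose(Hp.T @ Hp, np.eye(2 * n))
print("Hp from Eq. (107) is unitary for constant orthogonal W0', W1'")
```

With polynomial $W_0'(z)$, $W_1'(z)$ the same product would have to be checked pointwise on the unit circle, as in the lattice examples above.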

### Characterization of Unitary $H_p(z)$ — PCS Symmetry

In this case

$$H_p(z) = \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} W_0(z) & W_1(z) \\ J W_1^R(z) V & (-1)^{M/2} W_0^R(z) J V \end{bmatrix} \stackrel{\rm def}{=} \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} W_0' & W_1' J \\ -(W_1')^R & (W_0')^R J \end{bmatrix} P = \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \begin{bmatrix} W_0' & W_1' \\ -(W_1')^R & (W_0')^R \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} P.$$