Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases

Module by: C. Sidney Burrus

Summary: Development of ideas of vector expansion

Most people with technical backgrounds are familiar with the ideas of expansion vectors or basis vectors and of orthogonality; however, the related concepts of biorthogonality or of frames and tight frames are less familiar but also important. In the study of wavelet systems, we find that frames and tight frames are needed and should be understood, at least at a superficial level. One can find details in [12], [2], [1], [8]. Another perhaps unfamiliar concept is that of an unconditional basis used by Donoho, Daubechies, and others [3], [10], [2] to explain why wavelets are good for signal compression, detection, and denoising [6], [5]. In this chapter, we will very briefly define and discuss these ideas. At this point, you may want to skip these sections and perhaps refer to them later when they are specifically needed.

Bases, Orthogonal Bases, and Biorthogonal Bases

A set of vectors or functions $f_k(t)$ spans a vector space $\mathcal{F}$ (or $\mathcal{F}$ is the span of the set) if any element of that space can be expressed as a linear combination of members of that set, meaning: given the finite or infinite set of functions $f_k(t)$, we define $\mathrm{Span}_k\{f_k\} = \mathcal{F}$ as the vector space with all elements of the space of the form

$$ g(t) = \sum_{k} a_k\, f_k(t) $$
(1)

with $k \in \mathbb{Z}$ and $t, a_k \in \mathbb{R}$. An inner product is usually defined for this space and is denoted $\langle f(t), g(t)\rangle$. A norm is defined and is denoted by $\|f\| = \sqrt{\langle f, f\rangle}$.

We say that the set $f_k(t)$ is a basis set or a basis for a given space $\mathcal{F}$ if the set of $\{a_k\}$ in Equation 1 is unique for any particular $g(t) \in \mathcal{F}$. The set is called an orthogonal basis if $\langle f_k(t), f_\ell(t)\rangle = 0$ for all $k \neq \ell$. If we are in three-dimensional Euclidean space, orthogonal basis vectors are coordinate vectors that are at right (90°) angles to each other. We say the set is an orthonormal basis if $\langle f_k(t), f_\ell(t)\rangle = \delta(k-\ell)$, i.e., if, in addition to being orthogonal, the basis vectors are normalized to unity norm: $\|f_k(t)\| = 1$ for all $k$.

From these definitions it is clear that if we have an orthonormal basis, we can express any element in the vector space, $g(t) \in \mathcal{F}$, written as Equation 1 by

$$ g(t) = \sum_{k} \langle g(t), f_k(t)\rangle\, f_k(t) $$
(2)

since by taking the inner product of $f_k(t)$ with both sides of Equation 1, we get

$$ a_k = \langle g(t), f_k(t)\rangle $$
(3)

where this inner product of the signal $g(t)$ with the basis vector $f_k(t)$ “picks out” the corresponding coefficient $a_k$. This expansion formulation or representation is extremely valuable. It expresses Equation 2 as an identity operator in the sense that the inner product operates on $g(t)$ to produce a set of coefficients that, when used to linearly combine the basis vectors, gives back the original signal $g(t)$. It is the foundation of Parseval's theorem, which says the norm or energy can be partitioned in terms of the expansion coefficients $a_k$. It is why the interpretation, storage, transmission, approximation, compression, and manipulation of the coefficients can be very useful. Indeed, Equation 2 is the form of all Fourier-type methods.

Although the advantages of an orthonormal basis are clear, there are cases where the basis system dictated by the problem is not and cannot (or should not) be made orthogonal. For these cases, one can still have the expression of Equation 1 and one similar to Equation 2 by using a dual basis set $\tilde{f}_k(t)$ whose elements are not orthogonal to each other, but each is orthogonal to every element of the expansion set except the corresponding one

$$ \langle f_\ell(t), \tilde{f}_k(t)\rangle = \delta(\ell - k) $$
(4)

Because this type of “orthogonality” requires two sets of vectors, the expansion set and the dual set, the system is called biorthogonal. Using Equation 4 with the expansion in Equation 1 gives

$$ g(t) = \sum_{k} \langle g(t), \tilde{f}_k(t)\rangle\, f_k(t) $$
(5)

Although a biorthogonal system is more complicated in that it requires not only the original expansion set but also the calculation and storage of a dual set of vectors, it is very general and allows a larger class of expansions. There may, however, be greater numerical problems with a biorthogonal system if some of the basis vectors are strongly correlated.

The calculation of the expansion coefficients using an inner product in Equation 3 is called the analysis part of the complete process, and the calculation of the signal from the coefficients and expansion vectors in Equation 1 is called the synthesis part.

In finite dimensions, analysis and synthesis operations are simply matrix–vector multiplications. If the expansion vectors in Equation 1 are a basis, the synthesis matrix has these basis vectors as columns and the matrix is square and nonsingular. If the matrix is orthogonal, its rows and columns are orthogonal, its inverse is its transpose, and the identity operator is simply the matrix multiplied by its transpose. If it is not orthogonal, then the identity is the matrix multiplied by its inverse, and the dual basis consists of the rows of the inverse. If the matrix is singular, then its columns are not independent and, therefore, do not form a basis.

Matrix Examples

Using a four-dimensional space with matrices to illustrate the ideas of this chapter, the synthesis formula $g(t) = \sum_k a_k f_k(t)$ becomes

$$ \begin{bmatrix} g(0) \\ g(1) \\ g(2) \\ g(3) \end{bmatrix} = a_0 \begin{bmatrix} f_0(0) \\ f_0(1) \\ f_0(2) \\ f_0(3) \end{bmatrix} + a_1 \begin{bmatrix} f_1(0) \\ f_1(1) \\ f_1(2) \\ f_1(3) \end{bmatrix} + a_2 \begin{bmatrix} f_2(0) \\ f_2(1) \\ f_2(2) \\ f_2(3) \end{bmatrix} + a_3 \begin{bmatrix} f_3(0) \\ f_3(1) \\ f_3(2) \\ f_3(3) \end{bmatrix} $$
(6)

which can be compactly written in matrix form as

$$ \begin{bmatrix} g(0) \\ g(1) \\ g(2) \\ g(3) \end{bmatrix} = \begin{bmatrix} f_0(0) & f_1(0) & f_2(0) & f_3(0) \\ f_0(1) & f_1(1) & f_2(1) & f_3(1) \\ f_0(2) & f_1(2) & f_2(2) & f_3(2) \\ f_0(3) & f_1(3) & f_2(3) & f_3(3) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} $$
(7)

The synthesis or expansion Equation 1 or Equation 7 becomes

$$ \mathbf{g} = \mathbf{F}\,\mathbf{a}, $$
(8)

with the left-hand column vector $\mathbf{g}$ being the signal vector, the matrix $\mathbf{F}$ formed with the basis vectors $f_k$ as columns, and the right-hand vector $\mathbf{a}$ containing the four expansion coefficients $a_k$.

The equation for calculating the $k$th expansion coefficient in Equation 6 is

$$ a_k = \langle g(t), \tilde{f}_k(t)\rangle = \tilde{\mathbf{f}}_k^{\,T}\,\mathbf{g} $$
(9)

which can be written in vector form as

$$ \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} \tilde{f}_0(0) & \tilde{f}_0(1) & \tilde{f}_0(2) & \tilde{f}_0(3) \\ \tilde{f}_1(0) & \tilde{f}_1(1) & \tilde{f}_1(2) & \tilde{f}_1(3) \\ \tilde{f}_2(0) & \tilde{f}_2(1) & \tilde{f}_2(2) & \tilde{f}_2(3) \\ \tilde{f}_3(0) & \tilde{f}_3(1) & \tilde{f}_3(2) & \tilde{f}_3(3) \end{bmatrix} \begin{bmatrix} g(0) \\ g(1) \\ g(2) \\ g(3) \end{bmatrix} $$
(10)

where each $a_k$ is an inner product of the $k$th row of $\tilde{\mathbf{F}}^{T}$ with $\mathbf{g}$, and the analysis or coefficient Equation 3 or Equation 10 becomes

$$ \mathbf{a} = \tilde{\mathbf{F}}^{T}\,\mathbf{g} $$
(11)

which together are Equation 2 or

$$ \mathbf{g} = \mathbf{F}\,\tilde{\mathbf{F}}^{T}\,\mathbf{g}. $$
(12)

Therefore,

$$ \tilde{\mathbf{F}}^{T} = \mathbf{F}^{-1} $$
(13)

is how the dual basis in Equation 4 is found.

If the columns of $\mathbf{F}$ are orthogonal and normalized, then

$$ \mathbf{F}\,\mathbf{F}^{T} = \mathbf{I}. $$
(14)

This means the basis and dual basis are the same, and Equation 12 and Equation 13 become

$$ \mathbf{g} = \mathbf{F}\,\mathbf{F}^{T}\,\mathbf{g} $$
(15)

and

$$ \tilde{\mathbf{F}}^{T} = \mathbf{F}^{T} $$
(16)

which are both simpler and more numerically stable than Equation 13.
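To make Equations 8, 11, 13, and 16 concrete, here is a minimal sketch in Python with NumPy (the module itself contains no code, and the particular matrix and signal are arbitrary illustrations): it finds the dual of a nonorthogonal basis from the rows of the matrix inverse, checks that analysis followed by synthesis reproduces the signal, and shows that an orthonormal basis needs only the transpose.

```python
import numpy as np

# Columns of F are the (nonorthogonal) basis vectors f_k; the values are arbitrary.
F = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0, 2.0]])

g = np.array([1.0, -2.0, 3.0, 0.5])   # an arbitrary signal vector

F_dual_T = np.linalg.inv(F)           # Equation 13: dual basis vectors are the rows of F^{-1}
a = F_dual_T @ g                      # analysis (Equation 11): a = F~^T g
g_hat = F @ a                         # synthesis (Equation 8):  g = F a

assert np.allclose(g_hat, g)          # Equation 12: F F~^T acts as the identity

# For an orthonormal basis, the dual is just the transpose (Equations 15 and 16).
Q, _ = np.linalg.qr(F)                # Q has orthonormal columns
assert np.allclose(Q @ (Q.T @ g), g)
```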

The discrete Fourier transform (DFT) is an interesting example of a finite dimensional Fourier transform with orthogonal basis vectors where matrix and vector techniques can be informative as to the DFT's characteristics and properties. That can be found developed in several signal processing books.
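As a brief illustration of this remark, the following sketch (again NumPy, an assumption of this rewrite) builds the normalized DFT matrix and checks that it is unitary, so its dual basis is simply its conjugate transpose (the conjugate transpose replaces the transpose for complex vectors).

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT matrix: columns are the complex exponential basis vectors, normalized to unit norm.
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Orthonormal (unitary) check: F F^H = I, so analysis is simply a = F^H g.
assert np.allclose(F @ F.conj().T, np.eye(N))

g = np.random.randn(N)
a = F.conj().T @ g              # analysis with the dual basis = conjugate transpose
assert np.allclose(F @ a, g)    # synthesis recovers the signal
```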

Fourier Series Example

The Fourier series is an excellent example of an infinite-dimensional composition (synthesis) and decomposition (analysis). The expansion formula for an even function $g(t)$ over $0 < t < 2\pi$ is

$$ g(t) = \sum_{k} a_k \cos(k t) $$
(17)

where the basis vectors (functions) are

$$ f_k(t) = \cos(k t) $$
(18)

and the expansion coefficients are obtained as

$$ a_k = \langle g(t), f_k(t)\rangle = \frac{2}{\pi}\int_0^{\pi} g(t)\cos(k t)\, dt. $$
(19)

The basis vector set is easily seen to be orthonormal by verifying

$$ \langle f_\ell(t), f_k(t)\rangle = \delta(k - \ell). $$
(20)

These basis functions span an infinite-dimensional vector space, and the convergence of Equation 17 must be examined. Indeed, it is the robustness of that convergence that is discussed under the topic of unconditional bases later in this chapter.
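As a numerical check on Equations 17–19, the sketch below uses NumPy and SciPy quadrature (tools not used in the original module) with a hypothetical even test function chosen to lie in the span of a few cosines; the computed coefficients and the truncated synthesis behave as the equations predict.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical even test function (symmetric about t = pi), chosen to lie in the span.
g = lambda t: np.cos(t) + 0.5 * np.cos(3 * t)

# Analysis, Equation 19: a_k = (2/pi) * integral over [0, pi] of g(t) cos(k t) dt
def coeff(k):
    val, _ = quad(lambda t: g(t) * np.cos(k * t), 0.0, np.pi)
    return 2.0 / np.pi * val

a = np.array([coeff(k) for k in range(1, 6)])
print(np.round(a, 6))                 # approximately [1, 0, 0.5, 0, 0]

# Synthesis, Equation 17, truncated to the computed terms.
t = np.linspace(0.1, 2 * np.pi - 0.1, 7)
g_hat = sum(a[k - 1] * np.cos(k * t) for k in range(1, 6))
print(np.allclose(g_hat, g(t), atol=1e-6))   # True
```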

Sinc Expansion Example

Another example of an infinite-dimensional orthogonal basis is Shannon's sampling expansion [9]. If $f(t)$ is bandlimited, then

$$ f(t) = \sum_{k} f(Tk)\, \frac{\sin\!\left(\frac{\pi}{T} t - \pi k\right)}{\frac{\pi}{T} t - \pi k} $$
(21)

for a sampling interval $T < \pi/W$ if the spectrum of $f(t)$ is zero for $|\omega| > W$. In this case the basis functions are the sinc functions with coefficients which are simply samples of the original function. This means the inner product of a sinc basis function with a bandlimited function will give a sample of that function. It is easy to see that the sinc basis functions are orthogonal by taking the inner product of two sinc functions, which will sample one of them at the points of value one or zero.
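A short sketch of Equation 21 (NumPy, with an assumed band limit, sampling interval, and test signal chosen only for illustration) truncates the infinite sum and reconstructs the signal at points between the samples.

```python
import numpy as np

W = 2.0 * np.pi            # assumed band limit (rad/s): spectrum is zero for |omega| > W
T = 0.8 * np.pi / W        # sampling interval satisfying T < pi / W

# Hypothetical bandlimited test signal (sum of sinusoids below the band limit).
f = lambda t: np.sin(1.3 * t) + 0.5 * np.cos(0.7 * t)

k = np.arange(-200, 201)   # truncate the infinite sum for the sketch
samples = f(T * k)         # expansion coefficients are just the samples f(Tk)

def reconstruct(t):
    # Equation 21: the kernel sin(pi t/T - pi k)/(pi t/T - pi k) is np.sinc(t/T - k)
    return np.sum(samples * np.sinc(t / T - k))

t_test = np.array([0.37, 1.91, -2.4])
approx = np.array([reconstruct(t) for t in t_test])
print(np.max(np.abs(approx - f(t_test))))   # small; limited only by truncating the sum
```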

Frames and Tight Frames

While the conditions for a set of functions being an orthonormal basis are sufficient for the representation in Equation 2 and the requirement of the set being a basis is sufficient for Equation 5, they are not necessary. To be a basis requires uniqueness of the coefficients. In other words it requires that the set be independent, meaning no element can be written as a linear combination of the others.

If the set of functions or vectors is dependent and yet does allow the expansion described in Equation 5, then the set is called a frame. Thus, a frame is a spanning set. The term frame comes from a definition that requires finite limits on an inequality bound [2], [12] of inner products.

If we want the coefficients in an expansion of a signal to represent the signal well, these coefficients should have certain properties. They are stated best in terms of energy and energy bounds. For an orthogonal basis, this takes the form of Parseval's theorem. To be a frame in a signal space, an expansion set $\varphi_k(t)$ must satisfy

$$ A\,\|g\|^2 \;\le\; \sum_{k} |\langle \varphi_k, g\rangle|^2 \;\le\; B\,\|g\|^2 $$
(22)

for some $0 < A$ and $B < \infty$ and for all signals $g(t)$ in the space. Dividing Equation 22 by $\|g\|^2$ shows that $A$ and $B$ are bounds on the normalized energy of the inner products. They “frame” the normalized coefficient energy. If

$$ A = B $$
(23)

then the expansion set is called a tight frame. This case gives

$$ A\,\|g\|^2 = \sum_{k} |\langle \varphi_k, g\rangle|^2 $$
(24)

which is a generalized Parseval's theorem for tight frames. If $A = B = 1$, the tight frame becomes an orthogonal basis. From this, it can be shown that for a tight frame [2]

$$ g(t) = A^{-1} \sum_{k} \langle \varphi_k(t), g(t)\rangle\, \varphi_k(t) $$
(25)

which is the same as the expansion using an orthonormal basis except for the $A^{-1}$ term, which is a measure of the redundancy in the expansion set.

If an expansion set is a non-tight frame, there is no strict Parseval's theorem, and the energy in the transform domain cannot be exactly partitioned. However, the closer $A$ and $B$ are, the better an approximate partitioning can be done. If $A = B$, we have a tight frame and the partitioning can be done exactly with Equation 24. Daubechies [2] shows that the tighter the frame bounds in Equation 22 are, the better conditioned the analysis and synthesis system is. In other words, if $A$ is near zero and/or $B$ is very large compared to $A$, there will be numerical problems in the analysis–synthesis calculations.
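In finite dimensions the frame bounds are easy to compute. If the frame vectors are the columns of a matrix $\mathbf{F}$, then $\sum_k |\langle \varphi_k, g\rangle|^2 = \|\mathbf{F}^T \mathbf{g}\|^2 = \mathbf{g}^T \mathbf{F}\mathbf{F}^T \mathbf{g}$, so the tightest constants $A$ and $B$ in Equation 22 are the smallest and largest eigenvalues of $\mathbf{F}\mathbf{F}^T$. The sketch below (NumPy, with an arbitrary example matrix not taken from the text) computes these bounds and the ratio $B/A$ that reflects the conditioning just discussed.

```python
import numpy as np

# Columns of F are four frame vectors in a three-dimensional space (arbitrary example).
F = np.array([[1.0,  1.0, -1.0, -1.0],
              [1.0, -1.0,  1.0, -1.0],
              [1.0,  2.0,  1.0,  1.0]])

# sum_k |<phi_k, g>|^2 = g^T (F F^T) g, so the frame bounds A and B of Equation 22
# are the smallest and largest eigenvalues of the symmetric matrix F F^T.
eigvals = np.linalg.eigvalsh(F @ F.T)
A, B = eigvals.min(), eigvals.max()
print(A, B, B / A)        # B/A near 1 means a well-conditioned (nearly tight) frame

# Spot-check the inequality of Equation 22 for a random signal.
g = np.random.randn(3)
energy = np.sum((F.T @ g) ** 2)
assert A * (g @ g) - 1e-9 <= energy <= B * (g @ g) + 1e-9
```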

Frames are an over-complete version of a basis set, and tight frames are an over-complete version of an orthogonal basis set. If one is using a frame that is neither a basis nor a tight frame, a dual frame set can be specified so that analysis and synthesis can be done as for a non-orthogonal basis. If a tight frame is being used, the mathematics is very similar to using an orthogonal basis. The Fourier type system in Equation 25 is essentially the same as Equation 2, and Equation 24 is essentially a Parseval's theorem.

The use of frames and tight frames rather than bases and orthogonal bases means a certain amount of redundancy exists. In some cases, redundancy is desirable in giving a robustness to the representation so that errors or faults are less destructive. In other cases, redundancy is an inefficiency and, therefore, undesirable. The concept of a frame originates with Duffin and Schaeffer [4] and is discussed in [12], [1], [2]. In finite dimensions, vectors can always be removed from a frame to get a basis, but in infinite dimensions, that is not always possible.

An example of a frame in finite dimensions is a matrix with more columns than rows but with independent rows. An example of a tight frame is a similar matrix with orthogonal rows. An example of a tight frame in infinite dimensions would be an over-sampled Shannon expansion. It is informative to examine this example.

Matrix Examples

An example of a frame of four expansion vectors $f_k$ in a three-dimensional space would be

$$ \begin{bmatrix} g(0) \\ g(1) \\ g(2) \end{bmatrix} = \begin{bmatrix} f_0(0) & f_1(0) & f_2(0) & f_3(0) \\ f_0(1) & f_1(1) & f_2(1) & f_3(1) \\ f_0(2) & f_1(2) & f_2(2) & f_3(2) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} $$
(26)

which corresponds to the basis shown in the square matrix in Equation 7. The corresponding analysis equation is

$$ \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} \tilde{f}_0(0) & \tilde{f}_0(1) & \tilde{f}_0(2) \\ \tilde{f}_1(0) & \tilde{f}_1(1) & \tilde{f}_1(2) \\ \tilde{f}_2(0) & \tilde{f}_2(1) & \tilde{f}_2(2) \\ \tilde{f}_3(0) & \tilde{f}_3(1) & \tilde{f}_3(2) \end{bmatrix} \begin{bmatrix} g(0) \\ g(1) \\ g(2) \end{bmatrix}. $$
(27)

which corresponds to Equation 10. One can calculate a set of dual frame vectors by temporarily appending an arbitrary independent row to Equation 26, making the matrix square, then using the first three columns of the inverse as the dual frame vectors. This clearly illustrates that the dual frame is not unique. Daubechies [2] shows how to calculate an “economical” unique dual frame.
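One standard choice is the canonical (minimum-norm) dual frame, whose vectors are the rows of the Moore-Penrose pseudoinverse of the synthesis matrix; whether this coincides with the “economical” dual of [2] is not shown here. The sketch below (NumPy, with an arbitrary illustrative frame) computes this dual, verifies reconstruction, and illustrates the non-uniqueness by perturbing the coefficients along the null space of the frame matrix.

```python
import numpy as np

# Four frame vectors (columns) in three dimensions -- an arbitrary illustrative frame.
F = np.array([[1.0, 0.0, 1.0,  1.0],
              [0.0, 1.0, 1.0, -1.0],
              [1.0, 1.0, 0.0,  1.0]])

# Canonical (minimum-norm) dual frame: rows of the pseudoinverse F^+ = F^T (F F^T)^{-1}.
F_dual_T = np.linalg.pinv(F)          # shape (4, 3); row k is the dual vector for column k of F

g = np.array([0.5, -1.0, 2.0])
a = F_dual_T @ g                      # analysis with the dual frame, as in Equation 27
assert np.allclose(F @ a, g)          # synthesis with the original frame recovers g

# The coefficients are not unique: adding any null-space vector of F also synthesizes g.
null_vec = np.linalg.svd(F)[2][-1]    # spans the one-dimensional null space of F
assert np.allclose(F @ (a + null_vec), g)
```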

The tight frame system occurs in wavelet infinite expansions as well as other finite and infinite dimensional systems. A numerical example of a frame which is a normalized tight frame with four vectors in three dimensions is

$$ \begin{bmatrix} g(0) \\ g(1) \\ g(2) \end{bmatrix} = \frac{1}{A}\, \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} $$
(28)

which includes the redundancy factor from Equation 25. Note the rows are orthogonal and the columns are normalized, which gives

$$ \mathbf{F}\,\mathbf{F}^{T} = \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 \end{bmatrix} \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 1 & 1 \\ -1 & -1 & 1 \end{bmatrix} = \frac{4}{3} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \frac{4}{3}\, \mathbf{I} $$
(29)

or

$$ \mathbf{g} = \frac{1}{A}\, \mathbf{F}\,\mathbf{F}^{T}\,\mathbf{g} $$
(30)

which is the matrix form of Equation 25. The factor of $A = 4/3$ is the measure of redundancy in this tight frame using four expansion vectors in a three-dimensional space.
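These relations are easy to verify numerically; a minimal sketch (NumPy, not part of the original module) builds the normalized frame matrix of Equation 28 and checks Equations 29 and 30.

```python
import numpy as np

# Normalized tight frame of Equation 28: four unit-norm columns in three dimensions.
F = (1.0 / np.sqrt(3.0)) * np.array([[1.0,  1.0, -1.0, -1.0],
                                     [1.0, -1.0,  1.0, -1.0],
                                     [1.0,  1.0,  1.0,  1.0]])

# Equation 29: F F^T = (4/3) I, so the frame is tight with A = 4/3.
assert np.allclose(F @ F.T, (4.0 / 3.0) * np.eye(3))

# Equation 30: g = (1/A) F F^T g for any g in the space.
A = 4.0 / 3.0
g = np.random.randn(3)
assert np.allclose((1.0 / A) * F @ (F.T @ g), g)
```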

The identity for the expansion coefficients is

$$ \mathbf{a} = \frac{1}{A}\, \mathbf{F}^{T}\,\mathbf{F}\,\mathbf{a} $$
(31)

which for the numerical example gives

$$ \mathbf{F}^{T}\mathbf{F} = \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 1 & 1 \\ -1 & -1 & 1 \end{bmatrix} \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1/3 & 1/3 & -1/3 \\ 1/3 & 1 & -1/3 & 1/3 \\ 1/3 & -1/3 & 1 & 1/3 \\ -1/3 & 1/3 & 1/3 & 1 \end{bmatrix}. $$
(32)

Although this is not a general identity operator, it is an identity operator over the three-dimensional subspace that $\mathbf{a}$ is in, and it illustrates the unity norm of the rows of $\mathbf{F}^{T}$ and columns of $\mathbf{F}$.

If the redundancy measure $A$ in Equation 25 and Equation 29 is one, the matrices must be square and the system has an orthonormal basis.

Frames are over-complete versions of non-orthogonal bases and tight frames are over-complete versions of orthonormal bases. Tight frames are important in wavelet analysis because the restrictions on the scaling function coefficients discussed in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients guarantee not that the wavelets will be a basis, but a tight frame. In practice, however, they are usually a basis.

Sinc Expansion as a Tight Frame Example

An example of an infinite-dimensional tight frame is the generalized Shannon sampling expansion for the over-sampled case [9]. If a function is over-sampled but the sinc functions remain consistent with the upper spectral limit $W$, the sampling theorem becomes

$$ g(t) = \frac{T W}{\pi} \sum_{n} g(Tn)\, \frac{\sin\!\big((t - Tn)\,W\big)}{(t - Tn)\,W} $$
(33)

or, using $R$ as the amount of over-sampling,

$$ R\,W = \frac{\pi}{T}, \qquad \text{for } R \ge 1 $$
(34)

we have

$$ g(t) = \frac{1}{R} \sum_{n} g(Tn)\, \frac{\sin\!\left(\frac{\pi}{R T}(t - Tn)\right)}{\frac{\pi}{R T}(t - Tn)} $$
(35)

where the sinc functions are no longer orthogonal. In fact, they are no longer a basis, as they are not independent. They are, however, a tight frame and, therefore, act as though they were an orthogonal basis, but now there is a “redundancy” factor $R$ as a multiplier in the formula.

Notice that as $R$ is increased from unity, Equation 35 starts as Equation 21, where each sample occurs where the sinc function is one or zero, but becomes an expansion whose shifts are still $t = Tn$ while the sinc functions become wider, so that the samples are no longer at the zeros. If the signal is over-sampled, either the expression Equation 21 or Equation 35 could be used. Both are over-sampled, but Equation 21 allows the spectrum of the signal to increase up to the limit without distortion while Equation 35 does not. The generalized sampling theorem Equation 35 has a built-in filtering action, which may or may not be an advantage.
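The sketch below (NumPy, with an assumed band limit, oversampling factor, and test signal chosen for illustration) implements the over-sampled expansion of Equation 35, where the $1/R$ factor compensates for the redundancy of the wider sinc functions.

```python
import numpy as np

W = np.pi                  # assumed band limit (rad/s): spectrum is zero for |omega| > W
R = 2.0                    # oversampling factor, R >= 1
T = np.pi / (R * W)        # Equation 34: R W = pi / T

# Hypothetical bandlimited test signal (frequencies below W).
g = lambda t: np.cos(0.4 * np.pi * t) + 0.3 * np.sin(0.8 * t)

n = np.arange(-400, 401)   # truncate the infinite sum for the sketch
samples = g(T * n)

def reconstruct(t):
    # Equation 35: the kernel sin(pi(t-Tn)/(RT)) / (pi(t-Tn)/(RT)) is np.sinc((t-Tn)/(RT))
    return (1.0 / R) * np.sum(samples * np.sinc((t - T * n) / (R * T)))

t_test = np.array([0.21, 1.7, -0.93])
err = np.abs(np.array([reconstruct(t) for t in t_test]) - g(t_test))
print(err.max())           # small; limited only by truncating the infinite sum
```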

The application of frames and tight frames to what is called a redundant discrete wavelet transform (RDWT) is discussed later in Section: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases, and their use is discussed in Section: Nonlinear Filtering or Denoising with the DWT. They are also needed for certain adaptive descriptions discussed at the end of Section: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases, where an independent subset of the expansion vectors in the frame is chosen according to some criterion to give an optimal basis.

Conditional and Unconditional Bases

A powerful point of view used by Donoho [3] gives an explanation of which basis systems are best for a particular class of signals and why the wavelet system is good for a wide variety of signal classes.

Donoho defines an unconditional basis as follows. If we have a function class $\mathcal{F}$ with a norm defined and denoted $\|\cdot\|_{\mathcal{F}}$ and a basis set $f_k$ such that every function $g \in \mathcal{F}$ has a unique representation $g = \sum_k a_k f_k$ with equality defined as a limit using the norm, we consider the infinite expansion

$$ g(t) = \sum_{k} m_k\, a_k\, f_k(t). $$
(36)

If, for all $g \in \mathcal{F}$, the infinite sum converges for all $|m_k| \le 1$, the basis is called an unconditional basis. This is very similar to unconditional or absolute convergence of a numerical series [3], [12], [10]. If the convergence depends on $m_k = 1$ for some $g(t)$, the basis is called a conditional basis.

An unconditional basis means all subsequences converge and all sequences of subsequences converge. It means convergence does not depend on the order of the terms in the summation or on the sign of the coefficients. This implies a very robust basis where the coefficients drop off rapidly for all members of the function class. That is indeed the case for wavelets which are unconditional bases for a very wide set of function classes [2], [11], [7].

Unconditional bases have a special property that makes them near-optimal for signal processing in several situations. This property has to do with the geometry of the space of expansion coefficients of a class of functions in an unconditional basis. This is described in [3].

The fundamental idea of bases or frames is representing a continuous function by a sequence of expansion coefficients. We have seen that Parseval's theorem relates the $L^2$ norm of the function to the $\ell^2$ norm of the coefficients for orthogonal bases and tight frames (Equation 24). Different function spaces are characterized by different norms on the continuous function. If we have an unconditional basis for the function space, the norm of the function in the space not only can be related to some norm of the coefficients in the basis expansion, but the absolute values of the coefficients contain sufficient information to establish the relation. So there is no condition on the sign or phase information of the expansion coefficients if we only care about the norm of the function, hence the term unconditional.

For this tutorial discussion, it is sufficient to know that there are theoretical reasons why wavelets are an excellent expansion system for a wide set of signal processing problems. Being an unconditional basis also sets the stage for efficient and effective nonlinear processing of the wavelet transform of a signal for compression, denoising, and detection which are discussed in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients.

References

  1. Daubechies, Ingrid. (1990, September). The Wavelet Transform, Time-Frequency Localization and Signal Analysis. [Also a Bell Labs Technical Report]. IEEE Transactions on Information Theory, 36(5), 961–1005.
  2. Daubechies, Ingrid. (1992). Ten Lectures on Wavelets. [Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA]. Philadelphia, PA: SIAM.
  3. Donoho, David L. (1993, December). Unconditional Bases are Optimal Bases for Data Compression and for Statistical Estimation. [Also Stanford Statistics Dept. Report TR-410, Nov. 1992]. Applied and Computational Harmonic Analysis, 1(1), 100–115.
  4. Duffin, R. J. and Schaeffer, R. C. (1952). A Class of Nonharmonic Fourier Series. Transactions of the American Mathematical Society, 72, 341–366.
  5. Guo, H. and Odegard, J. E. and Lang, M. and Gopinath, R. A. and Selesnick, I. and Burrus, C. S. (1994, July). Speckle Reduction via Wavelet Soft-Thresholding with Application to SAR based ATD/R. In Proceedings of SPIE Conference 2260. (Vol. 2260). San Diego
  6. Guo, H. and Odegard, J. E. and Lang, M. and Gopinath, R. A. and Selesnick, I. W. and Burrus, C. S. (1994, November 13-16). Wavelet Based Speckle Reduction with Application to SAR Based ATD/R. In Proceedings of the IEEE International Conference on Image Processing. (Vol. I, p. I:75–79). IEEE ICIP-94, Austin, Texas
  7. Gripenberg, Gustaf. (1993, July). Unconditional Bases of Wavelets for Sobolev Spaces. SIAM Journal on Mathematical Analysis, 24(4), 1030–1042.
  8. Heil, C. E. and Walnut, D. F. (1989, December). Continuous and Discrete Wavelet Transforms. SIAM Review, 31(4), 628–666.
  9. Marks II, R. J. (1991). Introduction to Shannon Sampling and Interpolation Theory. New York: Springer-Verlag.
  10. Meyer, Y. (1990). Ondelettes et opérateurs. Paris: Hermann.
  11. Meyer, Yves. (1993). Wavelets, Algorithms and Applications. [Translated by R. D. Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991]. Philadelphia: SIAM.
  12. Young, R. M. (1980). An Introduction to Nonharmonic Fourier Series. New York: Academic Press.
