In order to consider a more general expansion system than a basis, and to generalize the ideas
of orthogonality and of energy being calculated in the original expansion system or the
transformed system, the concept of a frame is
defined. A frame decomposition or representation is generally more robust and flexible than a
basis decomposition or representation, but it requires more computation and memory [11], [20], [4].
Sometimes a frame is called a redundant basis, or is described as representing an underdetermined or underspecified set of
equations.
If a set of vectors, $f_k$, spans a vector space (or subspace) but the vectors are not necessarily independent or orthogonal,
bounds on the energy in the transform can still be defined. A set of vectors that spans a vector space is called a
frame if two constants, $A$ and $B$, exist such that
$$0 < A\,\|x\|^2 \le \sum_k |(f_k, x)|^2 \le B\,\|x\|^2 < \infty \qquad (19)$$
and the two constants are called the frame bounds for the system. This can be written
$$0 < A\,\|x\|^2 \le \|c\|^2 \le B\,\|x\|^2 < \infty \qquad (20)$$
where $c$ is the vector of expansion coefficients, $c_k = (f_k, x)$.
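For a finite frame, the tightest frame bounds in Equation 19 can be computed directly: with the frame vectors as the columns of a matrix $F$, the sum $\sum_k |(f_k, x)|^2$ equals $x^T F F^T x$, so $A$ and $B$ are the extreme eigenvalues of the frame operator $F F^T$. A minimal numerical sketch, assuming NumPy (the frame $F$ here is a made-up example of three vectors in $\mathbb{R}^2$):

```python
import numpy as np

# Hypothetical finite frame: three vectors in R^2, as the columns of F
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# sum_k |(f_k, x)|^2 = x^T (F F^T) x, so the tightest frame bounds A and B
# are the smallest and largest eigenvalues of the frame operator F F^T.
S = F @ F.T
eigs = np.linalg.eigvalsh(S)
A, B = eigs[0], eigs[-1]

# Check the frame inequality (19) for one particular vector x
x = np.array([2.0, -1.0])
energy = np.sum((F.T @ x) ** 2)   # sum_k |(f_k, x)|^2
print(A, B, energy)
```

For this example the bounds are $A = 1$ and $B = 3$, and the energy of any $x$ lands between $A\|x\|^2$ and $B\|x\|^2$.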
If the $f_k$ are linearly independent but not orthogonal, then the frame is a nonorthogonal basis. If the
$f_k$ are not independent,
the frame is called redundant, since there are more than the minimum number of expansion vectors that
a basis would have. If the
frame bounds are equal, $A = B$, the system is called a tight frame, and it has many of the features of an orthogonal basis.
If the bounds are equal to each other and to one, $A = B = 1$, then the frame is a basis and is tight.
It is, therefore, an orthogonal basis.
So a frame is a generalization of a basis, and a tight frame is a generalization of an orthogonal basis. If $A = B$, the frame
is tight and we have a scaled Parseval's theorem:
$$A\,\|x\|^2 = \sum_k |(f_k, x)|^2 \qquad (22)$$
If $A = B > 1$, then the number of expansion vectors is more than needed for a basis, and $A$ is a measure of the redundancy
of the system (for normalized frame vectors). For example, if there are three frame vectors in a two-dimensional vector space, $A = 3/2$.
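This redundancy claim is easy to check numerically. A sketch assuming NumPy, with three made-up unit frame vectors spaced $120^\circ$ apart (both eigenvalues of the frame operator equal $A = 3/2$, so the frame is tight):

```python
import numpy as np

# Three hypothetical unit frame vectors spaced 120 degrees apart in R^2
angles = np.array([0.0, 120.0, 240.0]) * np.pi / 180.0
F = np.vstack([np.cos(angles), np.sin(angles)])  # vectors as columns

# For a tight frame both eigenvalues of F F^T equal A, and here
# A = (number of vectors)/(dimension) = 3/2 for unit-norm vectors.
eigs = np.linalg.eigvalsh(F @ F.T)
print(eigs)  # both eigenvalues are 1.5
```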
A finite-dimensional matrix version of the redundant case would have $F$ in Equation 8 with more columns
than rows but with full row rank. For example,
$$\begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{bmatrix}
\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \qquad (23)$$
has three frame vectors as the columns of $A$ but in a two-dimensional space.
The prototypical example is called the Mercedes-Benz tight frame, where three frame
vectors that are $120^\circ$ apart are used in a two-dimensional
plane and look like the Mercedes car hood ornament. These three frame vectors must be as
far apart from each other as possible to be tight, hence the $120^\circ$ separation.
But they can be rotated any amount and remain tight [20], [13] and, therefore,
are not unique.
$$\begin{bmatrix} 1 & -0.5 & -0.5 \\ 0 & 0.866 & -0.866 \end{bmatrix}
\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \qquad (24)$$
In the next section, we will use the pseudoinverse of $A$ to find the optimal $x$ for a given $b$.
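As a sketch of that next step, assuming NumPy (the right-hand side $b$ is arbitrary, chosen only for illustration), the pseudoinverse of the matrix in Equation 24 produces the minimum-norm $x$ satisfying the underdetermined equations:

```python
import numpy as np

# Matrix from Equation 24, using sqrt(3)/2 in place of the rounded 0.866
s = np.sqrt(3) / 2
A = np.array([[1.0, -0.5, -0.5],
              [0.0, s, -s]])
b = np.array([1.0, 2.0])  # arbitrary right-hand side for illustration

# A has full row rank, so pinv(A) gives the minimum-norm solution of A x = b.
x = np.linalg.pinv(A) @ b
resid = np.abs(A @ x - b).max()   # the equations are satisfied exactly

# For this tight frame with frame bound 3/2, the pseudoinverse is A^T / 1.5.
same = np.allclose(np.linalg.pinv(A), A.T / 1.5)
print(resid, same)
```

The last check reflects a convenient property of tight frames: the pseudoinverse reduces to a scaled transpose.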
So the frame bounds $A$ and $B$ in Equation 19 are an indication of the redundancy of the expansion system $f_k$ and of
how close it is to being orthogonal or tight. Indeed, Equation 19 is a sort of approximate Parseval's theorem
[21], [5], [16], [12], [4], [20], [8], [14].
The dual frame vectors are also not
unique, but a set can be found such that Equation 9 and, therefore,
Equation 10 hold (but Equation 13 does not). A set of dual frame vectors
could be found by adding a set of arbitrary but independent rows to $F$ until it is square, inverting it, then taking the first $N$ columns to
form $\tilde{F}$, whose rows will be a set of dual frame vectors.
This method of construction shows the nonuniqueness of the dual frame
vectors. This nonuniqueness is often resolved by optimizing some other
parameter of the system [5].
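A small numerical sketch of this construction, assuming NumPy (the frame $F$ and the appended row are made-up examples; any independent row that keeps the square matrix invertible would do):

```python
import numpy as np

# Frame vectors as the columns of F: N = 2 dimensions, K = 3 vectors
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
N, K = F.shape

# Append K - N arbitrary but independent rows so the matrix is square,
# invert it, and keep the first N columns: the rows of H are dual vectors.
E = np.array([[1.0, 1.0, 0.0]])   # arbitrary row keeping G invertible
G = np.vstack([F, E])
H = np.linalg.inv(G)[:, :N]       # K x N; row k is the dual of frame vector k

# Perfect reconstruction: x = sum_k (dual_k, x) f_k for any x
x = np.array([0.7, -1.3])
x_rec = F @ (H @ x)
print(np.allclose(x_rec, x))  # True
```

The reconstruction works because the top-left $N \times N$ block of $G G^{-1} = I$ forces $F H = I$; a different choice of appended rows gives a different, equally valid dual set.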
If the matrix operations are implementing a frame decomposition and the
rows of $F$ are orthonormal, then $\tilde{F} = F^T$ and the vector set
is a tight frame [21], [5]. If the frame vectors are
normalized to $\|x_k\| = 1$, the decomposition in Equation 6 becomes
$$x = \frac{1}{A} \sum_n (x, \tilde{x}_n)\, x_n \qquad (25)$$
where the constant $A$ is a measure of the redundancy of the expansion,
which has more expansion vectors than necessary [5].
The matrix form is
$$x = \frac{1}{A}\, F F^T x \qquad (26)$$
where $F$ has more columns than rows. Examples can be found in
[1].
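Equation 26 is easy to verify numerically. A minimal sketch assuming NumPy, with a made-up tight frame of four unit vectors in $\mathbb{R}^2$, so the redundancy is $A = 4/2 = 2$:

```python
import numpy as np

# Hypothetical tight frame: four unit vectors in R^2 (columns of F), A = 2
F = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
A = 2.0

# Equation 26: x = (1/A) F F^T x for every x in the space
x = np.array([3.0, -1.0])
x_rec = (1.0 / A) * F @ (F.T @ x)
print(np.allclose(x_rec, x))  # True
```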