Continuing from last time, we have a signal $x \in \mathbb{R}^N$, an $n \times N$ measurement matrix $\Phi$, and $y = \Phi x \in \mathbb{R}^n$ is the information we draw from $x$.
We consider decoders $\Delta$ mapping $\mathbb{R}^n \to \mathbb{R}^N$. We have been discussing whether there exists a decoder with certain properties, so for this discussion (about information preservation) we can just think about optimal decoding.
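As a small, hypothetical illustration of this setup (the specific dimensions, the Gaussian choice of $\Phi$, and the pseudoinverse decoder are my own illustrative assumptions, not from the notes), the sketch below measures a sparse signal and applies a naive linear decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 20, 8                                    # ambient dimension, number of measurements

x = np.zeros(N)
x[[3, 11]] = [1.0, -2.0]                        # a 2-sparse signal in R^N

Phi = rng.standard_normal((n, N)) / np.sqrt(n)  # n x N measurement matrix
y = Phi @ x                                     # y = Phi x: the information we draw from x

# A naive decoder Delta: R^n -> R^N, here the minimum-l2-norm solution of
# Phi z = y. This is generally a poor sparse decoder; it only illustrates
# the encoder/decoder pipeline, not optimal decoding.
x_hat = np.linalg.pinv(Phi) @ y

print(np.linalg.norm(Phi @ x_hat - y))          # x_hat is consistent with the measurements
```

Any decoder consistent with the measurements produces some $\hat{x}$ with $\Phi \hat{x} = y$; the whole question of this lecture is how small $\|x - \hat{x}\|$ can be made over a signal class.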
While the previous result on sparse signal recovery is interesting, it is not very robust. What happens if our signal does not have support of size $k$? Can we still obtain meaningful results? The answer is yes. To do so we introduce more general input signal classes $K \subseteq X$ that allow fully supported signals. For example, we will consider the signal class defined by the unit ball
$$K = U(\ell_p^N) = \{x : \|x\|_X \le 1\}. \tag{1}$$
Given an encoder/decoder pair $(\Phi, \Delta)$, the worst-case error on a set $K$ for that pair will be given by

$$E(K, \Phi, \Delta)_X = \sup_{x \in K} \|x - \Delta(\Phi x)\|_X. \tag{2}$$
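The supremum in (2) cannot be computed exactly in general, but it can be lower-bounded by sampling $K$. A minimal Monte Carlo sketch (the dimensions, the choices $K = U(\ell_1^N)$ and $X = \ell_2^N$, and the pseudoinverse decoder are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, trials = 12, 5, 2000

Phi = rng.standard_normal((n, N))
pinv = np.linalg.pinv(Phi)

def decode(y):
    """A simple (suboptimal) linear decoder Delta."""
    return pinv @ y

def sample_l1_sphere():
    """A random point with l1 norm exactly 1 (boundary of U(l1^N))."""
    v = rng.standard_normal(N)
    return v / np.abs(v).sum()

# Monte Carlo lower bound on E(K, Phi, Delta)_X with X = l2: the true
# worst-case error is a sup over all of K, so sampling underestimates it.
err = max(np.linalg.norm(x - decode(Phi @ x))
          for x in (sample_l1_sphere() for _ in range(trials)))
print(err)
```

Since $\hat{x} = \Phi^{+}\Phi x$ is the orthogonal projection of $x$ onto the row space of $\Phi$, the error here never exceeds $\|x\|_2 \le \|x\|_1 = 1$; the sampled maximum sits somewhere below that.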
Finally, using minimax principles, we will define the minimum error over all encoder/decoder pairs for a signal class and for a fixed number of measurements $n$ to be

$$E_n(K)_X = \inf_{(\Phi, \Delta)\,:\,\Phi \text{ is } n \times N} E(K, \Phi, \Delta)_X. \tag{3}$$
This measure $E_n(K)_X$ is the best we could do while measuring distortion in the topology of $X$, using $n$ linear measurements, and using arbitrary decoding.
We will see that these questions are actually equivalent to a classical "$n$-width" problem. $n$-widths have seen a great deal of work over the years by a variety of mathematicians: Kolmogorov, Tikhomirov, Kashin, Gluskin, etc. There are many different flavors of $n$-widths, but we will study the Gelfand $n$-width (the least intuitive of the widths).
Let $K \subseteq X$ be compact. Given $n$, the Gelfand width (also called the dual width) is given by

$$d^n(K)_X := \inf_{Y\,:\,\operatorname{codim}(Y) = n}\; \sup\{\|x\|_X : x \in K \cap Y\}, \tag{4}$$

where by $\operatorname{codim}(Y) = n$ we mean that $Y$ has dimension $\dim(X) - n$.
In other words, we are looking for the subspace $Y$ that slices through the set $K$ so that the norms of the signals in $K \cap Y$ are as small as possible. We can now state the following theorem about $n$-widths:
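The codimension condition in (4) is easy to check numerically for a concrete $Y$: if $Y$ is the null space of a full-rank $n \times N$ matrix, then $\operatorname{codim}(Y) = n$, i.e. $\dim(Y) = N - n$. A quick sketch (the sizes and the random matrix are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 10, 4
A = rng.standard_normal((n, N))   # full row rank with probability 1

# dim(Y) for Y = N(A), via rank-nullity: dim(Y) = N - rank(A).
rank = int(np.linalg.matrix_rank(A))
dim_Y = N - rank
codim_Y = N - dim_Y               # codim(Y) = rank(A)

print(dim_Y, codim_Y)             # -> 6 4
```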
Provided that $K$ has the properties (1) $K = -K$ and (2) $K + K \subseteq CK$, then

$$d^n(K)_X \le E_n(K)_X \le C\, d^n(K)_X, \tag{5}$$

where $C$ is the same constant listed in property (2).
We start with the left-hand inequality. We want to take any encoder/decoder pair and use that to construct a $Y$. So let $\Phi, \Delta$ be an encoder/decoder pair. Then simply let $Y = N(\Phi)$, the null space of $\Phi$. Now consider an $\eta \in K \cap Y$ and note that $\Phi(\eta) = 0$ since $\eta \in Y$. Let $z = \Delta(0)$ be the decoding of $0$ (practically speaking, $z$ should be zero itself, but we avoid that assumption in this proof). Then
$$\|\eta\|_X \le \max\bigl(\|\eta - z\|_X, \|\eta + z\|_X\bigr) = \max\bigl(\|\eta - \Delta\Phi(\eta)\|_X, \|{-\eta} - \Delta\Phi(-\eta)\|_X\bigr) \le \sup_{x \in K} \|x - \Delta\Phi(x)\|_X, \tag{6}$$
where we first employ the triangle inequality, then the fact that multiplying by $-1$ does not change the norm (note $\Phi(-\eta) = 0$ as well, so $z = \Delta\Phi(-\eta)$ too), then the fact that $K = -K$. So then
$$\sup_{\eta \in K \cap Y} \|\eta\|_X \le \sup_{x \in K} \|x - \Delta\Phi(x)\|_X. \tag{7}$$
Taking the infimum over all $\Phi, \Delta$, it follows that

$$d^n(K)_X \le E_n(K)_X. \tag{8}$$
Since $\Phi$ is $n \times N$, we have $\dim(N(\Phi)) \ge N - n$.
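This inequality is rank-nullity combined with $\operatorname{rank}(\Phi) \le n$, and it is strict when $\Phi$ is rank-deficient. A quick numerical check (the repeated-row $\Phi$ is a contrived example of mine):

```python
import numpy as np

N, n = 6, 3
row = np.arange(1.0, N + 1.0)
Phi = np.vstack([row, row, 2.0 * row])   # an n x N matrix, but only rank 1

# Rank-nullity: dim(N(Phi)) = N - rank(Phi) >= N - n, with equality
# exactly when Phi has full row rank.
dim_null = N - int(np.linalg.matrix_rank(Phi))
print(dim_null)                          # -> 5, strictly larger than N - n = 3
```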
Now we prove the right-hand inequality. Assume we have a good $Y$. Suppose $Y$ has codimension $n$. Then $Y^\perp$ (the orthogonal complement of $Y$ in $\mathbb{R}^N$) has dimension $n$. Let $v_1, v_2, \dots, v_n \in \mathbb{R}^N$ be a basis for $Y^\perp$. Let $\Phi$ be the $n \times N$ matrix obtained by stacking the rows $v_1, v_2, \dots, v_n$. Then $N(\Phi) = Y$. Define $\Delta(y)$ to be any element of $K \cap F(y)$ if there is one (otherwise let $\Delta(y)$ be anything in $F(y)$), where $F(y) = \{x : \Phi x = y\}$.
Now look at the performance $\|x - \Delta\Phi(x)\|_X$ for some $x \in K$. Both $x$ and $\Delta\Phi(x) =: x'$ are elements of $K$, so $x - x'$ is in $N(\Phi)$ and in $CK$ (using $K = -K$ and $K + K \subseteq CK$). Therefore $\frac{x - x'}{C} \in K \cap N(\Phi)$. Thus,
$$\left\|\frac{x - x'}{C}\right\|_X \le \sup_{z \in Y \cap K} \|z\|_X, \tag{9}$$
and so for any $x \in K$,

$$\|x - \Delta\Phi(x)\|_X \le C \sup_{z \in Y \cap K} \|z\|_X. \tag{10}$$
Taking the infimum over all $Y$, we get that

$$E_n(K)_X \le C\, d^n(K)_X.$$
From the proof of this theorem, we see that there is a matching between the matrices $\Phi$ and the spaces $Y$ (via the null space of $\Phi$).
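This matching can be made concrete: pick any $Y$, take an orthonormal basis of $Y^\perp$, stack it as the rows of $\Phi$, and verify that $\Phi$ annihilates $Y$. A sketch (the particular random $Y$ is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 8, 3

# An arbitrary subspace Y of codimension n: span of N - n random vectors.
Y_basis, _ = np.linalg.qr(rng.standard_normal((N, N - n)))  # columns span Y

# Complete to an orthonormal basis of R^N; the last n columns span Y-perp.
Q, _ = np.linalg.qr(np.hstack([Y_basis, rng.standard_normal((N, n))]))
V = Q[:, N - n:]        # orthonormal basis v_1, ..., v_n of Y-perp

Phi = V.T               # stack the v_i as the rows of Phi: an n x N matrix

# N(Phi) = Y: Phi kills every basis vector of Y, and rank(Phi) = n.
print(np.abs(Phi @ Y_basis).max(), np.linalg.matrix_rank(Phi))
```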
An important result is that $d^n(U(\ell_q^N))_{\ell_p^N}$ is known for all $p, q$ except $p = 1, q = \infty$. A precise statement of these widths can be found in the book (Reference). A particularly important case is
$$C_1 \sqrt{\frac{\log(N/n)}{n}} \;\le\; d^n\bigl(U(\ell_1^N)\bigr)_{\ell_2^N} \;\le\; C_2 \sqrt{\frac{\log(N/n)}{n}} \tag{11}$$
for $N > 2n$. This result was first proved by Kashin with a worse logarithm in the upper inequality and later brought to the present form by Gluskin. This result solves several important problems in functional analysis and approximation.
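To get a feel for the rate in (11), the snippet below tabulates $\sqrt{\log(N/n)/n}$ for a fixed $N$ and growing $n$ (the constants $C_1, C_2$ are suppressed, and the specific values of $N$ and $n$ are arbitrary; this shows only the scaling, not the exact width):

```python
import math

N = 4096
bounds = []
for n in (8, 32, 128, 512, 1024):   # N > 2n holds in every case
    b = math.sqrt(math.log(N / n) / n)
    bounds.append(b)
    print(f"n={n:5d}  sqrt(log(N/n)/n) = {b:.4f}")
# The bound decreases as n grows: more measurements, smaller width,
# and hence (by the theorem) smaller achievable worst-case error.
```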