Given measurements $y[m]$, $m = 1, \ldots, M$, corresponding to line integrals at different offsets $r_m$ and angles $\theta_m$ (i.e. a finite set of samples of the Radon transform), which have possibly been corrupted by noise, we would like to estimate the underlying image $f(s,t)$. If the measurements are dense in $(r,\theta)$ space, the natural approach to this problem is to use filtered backprojection. Our focus here will be on setting this problem up as a finite linear inverse problem

$$y = Ax + \text{noise}, \qquad y \in \mathbb{R}^M,\ x \in \mathbb{R}^N, \tag{4}$$

so that it can be attacked with the general set of tools for solving such problems (e.g. least-squares).
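To make the setup concrete, here is a minimal sketch of solving such a system by least squares. The small random $A$ here is a hypothetical stand-in for the measurement matrix, not the Radon matrix constructed later in these notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small instance: M = 20 noisy measurements of an
# N = 5 coefficient vector through a known measurement matrix A.
M, N = 20, 5
A = rng.standard_normal((M, N))
x_true = rng.standard_normal(N)
y = A @ x_true + 0.01 * rng.standard_normal(M)

# Least-squares estimate: minimize ||y - A x||_2 over x.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.max(np.abs(x_hat - x_true)))  # small, driven by the noise level
```

With more measurements than unknowns and mild noise, the least-squares estimate recovers the coefficients up to an error on the order of the noise.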

We start by choosing a finite-dimensional space $V$ in which to perform the reconstruction that comes equipped with a set of $N$ basis vectors $\{\psi_\gamma(s,t)\}$. We will use the general index $\gamma \in \Gamma$, where $\Gamma$ is a set of size $N$, as, depending on the basis, it may be convenient to index the basis in different ways (i.e. by integers, pairs of integers over the same range, pairs of integers over different ranges, etc.).

For example, if $f(s,t)$ is non-zero only for $(s,t) \in [0,1]^2$, we might take our reconstruction space $V$ to be the set of all “pixellated” images: images that are piecewise-constant on squares of side length $1/n$ for some integer $n$. A natural basis for this space is the set of indicator functions on these squares:

$$\psi_{j,k}(s,t) = \begin{cases} 1, & s \in [j/n,\,(j+1)/n],\ t \in [k/n,\,(k+1)/n] \\ 0, & \text{otherwise} \end{cases} \tag{5}$$

Using our general index notation, we can write any $f(s,t) \in V$ as

$$f(s,t) = \sum_{\gamma \in \Gamma} x[\gamma]\, \psi_\gamma(s,t), \tag{6}$$

where $\Gamma = \{(j,k) : j,k = 0,1,\ldots,n-1\}$ with size $N = n^2$, and the $x[\gamma] \in \mathbb{R}$ are the basis expansion coefficients, which are, in this case, the pixel values. (Another natural basis for $V$ would be the two-dimensional Haar basis we encountered earlier.) The point is that knowing the discrete set of coefficients $x[\gamma]$ for all $\gamma \in \Gamma$ is the same as knowing the continuous-space function $f(s,t)$.
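As a quick illustration (a sketch of ours, with hypothetical names), synthesizing $f(s,t)$ from its coefficients in the pixel basis amounts to an array lookup:

```python
import numpy as np

def f_eval(x, s, t):
    """Evaluate f(s,t) = sum_gamma x[gamma] psi_gamma(s,t) for the pixel
    basis on [0,1]^2, where x is an n-by-n coefficient array and
    psi_{j,k} is the indicator of [j/n,(j+1)/n] x [k/n,(k+1)/n]."""
    n = x.shape[0]
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return 0.0          # f is zero outside the unit square
    j = min(int(s * n), n - 1)   # pixel index along s
    k = min(int(t * n), n - 1)   # pixel index along t
    return x[j, k]

n = 4
x = np.arange(n * n, dtype=float).reshape(n, n)  # hypothetical coefficients
print(f_eval(x, 0.1, 0.9))  # point (0.1, 0.9) lies in pixel (j, k) = (0, 3)
```

The `min(..., n - 1)` guards assign the boundary points $s = 1$ or $t = 1$ to the last pixel rather than indexing out of range.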

We can also write the measurements of an $f(s,t) \in V$ in terms of the basis functions:

$$\begin{aligned}
y[m] &= \mathcal{R}_{r_m,\theta_m}\!\left[\sum_{\gamma \in \Gamma} x[\gamma]\, \psi_\gamma(s,t)\right] \\
&= \sum_{\gamma \in \Gamma} x[\gamma]\, \mathcal{R}_{r_m,\theta_m}\!\left[\psi_\gamma(s,t)\right] && \text{(since $\mathcal{R}_{r,\theta}[\cdot]$ is linear)} \\
&= \sum_{\gamma \in \Gamma} A[m,\gamma]\, x[\gamma], && \text{where } A[m,\gamma] = \mathcal{R}_{r_m,\theta_m}\!\left[\psi_\gamma(s,t)\right],
\end{aligned} \tag{7}$$

which can be written in more compact form as

$$y = Ax. \tag{8}$$

The entries of the $M \times N$ matrix $A$ contain the results of each of the $M$ measurement functionals $\mathcal{R}_{r_m,\theta_m}[\cdot]$ applied to each of the $N$ basis functions $\psi_\gamma(s,t)$, the $N$-vector $x$ contains the expansion coefficients for $f(s,t)$ in the basis $\{\psi_\gamma\}$, and $y$ contains the $M$ measurements. This is illustrated in the figure below. As we can see, not many of the lines $\ell_m$ pass through any given pixel, meaning that the matrix $A$ will be very sparsely populated.
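One simple way to build $A$ numerically for the pixel basis is the sketch below (function names are ours). Since the Radon transform of an indicator function is the length of the line's intersection with the pixel, each entry can be approximated by sampling points along the line and crediting a step length to whichever pixel each sample lands in; this is a crude quadrature, not the exact intersection length (an exact version would use a ray tracer such as Siddon's algorithm), and a dense array is used for clarity even though in practice the sparsity of $A$ calls for a sparse format:

```python
import numpy as np

def radon_matrix(n, offsets, angles, du=1e-3):
    """Approximate A[m, gamma] = R_{r_m, theta_m}[psi_gamma] for the pixel
    basis on [0,1]^2: the length of line m inside pixel gamma = (j, k).
    The line {s cos(theta) + t sin(theta) = r} is sampled at points spaced
    du apart; each sample inside the square adds du to its pixel's entry."""
    M = len(offsets)
    A = np.zeros((M, n * n))
    u = np.arange(-1.5, 1.5, du)  # parameter along the line; covers the square
    for m, (r, th) in enumerate(zip(offsets, angles)):
        s = r * np.cos(th) - u * np.sin(th)
        t = r * np.sin(th) + u * np.cos(th)
        inside = (s >= 0) & (s < 1) & (t >= 0) & (t < 1)
        j = np.minimum((s[inside] * n).astype(int), n - 1)
        k = np.minimum((t[inside] * n).astype(int), n - 1)
        np.add.at(A[m], j * n + k, du)  # flatten (j, k) -> gamma = j*n + k
    return A

# Sanity check: the line t = 0.625 (theta = pi/2, r = 0.625) crosses each
# of the n pixels in row k = 2 for a length of about 1/n = 0.25.
n = 4
A = radon_matrix(n, offsets=[0.625], angles=[np.pi / 2])
print(np.round(A.reshape(n, n), 2))
```

Most entries of each row are exactly zero, confirming the sparsity noted above: each line touches only $O(n)$ of the $n^2$ pixels.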

Of course, the true underlying image will in general not lie in the chosen finite-dimensional subspace $V$. This means that even when there is no measurement noise, there will still be some inherent error in our calculations. But solving Equation 8 will in some sense find the member of $V$ that best explains the measurements that have been observed. If the true image can be closely approximated by a member of $V$, then we will not lose much through this discretization. A major consideration in choosing the space $V$ is how well we can use it to approximate the images we expect to encounter.
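This inherent discretization error can be measured directly. The sketch below (our own helper, not from the notes) computes the best $L_2$ approximation of a smooth image in the $n \times n$ pixel space, whose optimal coefficients are the pixel means of $f$; both the means and the error are approximated on a fine sampling grid:

```python
import numpy as np

def best_pixel_approx_error(f, n, grid=400):
    """Estimate the L2([0,1]^2) error of the best approximation of f in the
    space V of n-by-n pixel images. The optimal coefficient for pixel (j,k)
    is the mean of f over that pixel; integrals are approximated by sampling
    on a grid-by-grid lattice (grid must be divisible by n)."""
    pts = (np.arange(grid) + 0.5) / grid
    F = f(pts[:, None], pts[None, :])   # samples of f on the fine grid
    m = grid // n                       # fine samples per pixel side
    coeffs = F.reshape(n, m, n, m).mean(axis=(1, 3))   # pixel means
    # expand the pixel image back to the fine grid and take the RMS error
    approx = np.repeat(np.repeat(coeffs, m, axis=0), m, axis=1)
    return np.sqrt(((F - approx) ** 2).mean())

f = lambda s, t: np.sin(2 * np.pi * s) * np.cos(2 * np.pi * t)
for n in (4, 8, 16):
    print(n, best_pixel_approx_error(f, n))
```

For a smooth image like this one, the error shrinks roughly in proportion to the pixel size $1/n$, which quantifies the trade-off between the dimension $N = n^2$ of $V$ and the fidelity of the discretization.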