The received signal $r(t)$ is assumed to have one of $M$ forms:

$$\forall i \in \{0, \dots, M-1\},\ \forall t \in [0, T): \quad r(t) = s_i(t) + n(t) \qquad (1)$$
where the $s_i(t)$ comprise the signal set. $n(t)$ is usually assumed to be statistically independent of the transmitted signal and a white, Gaussian process having spectral height $N_0/2$. We represent the received signal with a Karhunen-Loève expansion.
$$r(t) = \sum_{j=1}^{\infty} r_j \varphi_j(t) = \sum_{j=1}^{\infty} \left( s_{i,j} + n_j \right) \varphi_j(t)$$
where $s_{i,j}$ and $n_j$ are the representations of the signal $s_i(t)$ and the noise $n(t)$, respectively. To have a Karhunen-Loève expansion, it suffices to choose the $\varphi_j(t)$ so that the $n_j$ are pairwise uncorrelated. As $n(t)$ is white, we may choose any $\varphi_j(t)$ we want! In particular, choose the $\varphi_j(t)$ to be the set of functions which yields a finite-dimensional representation for the signals $s_i(t)$
. A complete, but not necessarily orthonormal, set of functions that does this is

$$\left\{ s_0(t), \dots, s_{M-1}(t), \psi_0(t), \psi_1(t), \dots \right\}$$
where the $\psi_j(t)$ denote any complete set of functions. We form the set $\{\varphi_j(t)\}$ by applying the Gram-Schmidt procedure to this set. With this basis, $s_{i,j} = 0$ for $j \ge M$. In this case, the representation of $r(t)$ becomes
$$r_j = \begin{cases} s_{i,j} + n_j & \text{if } 0 \le j \le M-1 \\ n_j & \text{if } j \ge M \end{cases}$$
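The Gram-Schmidt step above can be sketched numerically. This is a minimal illustration with invented example signals (a constant and a ramp on $[0, 1)$); the function name `gram_schmidt` and the sampling grid are assumptions for the example, not part of the text:

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormalize a list of sampled signals (illustrative sketch).

    `signals` is a list of 1-D arrays sampled on a common grid with
    spacing `dt`; inner products approximate integrals over [0, T).
    """
    basis = []
    for s in signals:
        v = s.astype(float).copy()
        # Subtract the projections onto the basis functions found so far.
        for phi in basis:
            v -= np.sum(v * phi) * dt * phi
        norm = np.sqrt(np.sum(v * v) * dt)
        if norm > 1e-12:          # skip linearly dependent signals
            basis.append(v / norm)
    return basis

# Two example signals on [0, T) with T = 1: a constant and a ramp.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
dt = t[1] - t[0]
phis = gram_schmidt([np.ones_like(t), t], dt)
# The resulting phi_j are orthonormal: <phi_0, phi_1> ~ 0, ||phi_j|| ~ 1.
```

The procedure keeps only linearly independent directions, so if the signal set spans fewer than $M$ dimensions, fewer than $M$ basis functions come out.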
so that we may write the model evaluation problem we are attempting to solve as

$$\mathcal{M}_0: \quad r(t) = (s_{0,0} + n_0)\varphi_0(t) + \dots + (s_{0,M-1} + n_{M-1})\varphi_{M-1}(t) + \sum_{j \ge M} n_j \varphi_j(t)$$

$$\mathcal{M}_1: \quad r(t) = (s_{1,0} + n_0)\varphi_0(t) + \dots + (s_{1,M-1} + n_{M-1})\varphi_{M-1}(t) + \sum_{j \ge M} n_j \varphi_j(t)$$
We make two observations:
1. We can consider the model evaluation problem that operates on the representation of the received signal rather than the signal itself. Recall that using the representation is equivalent to using the original process. We have thus created an equivalent model evaluation problem. For the binary signal set case,

$$\mathcal{M}_0: \quad \mathbf{r} = \mathbf{s}_0 + \mathbf{n}$$

$$\mathcal{M}_1: \quad \mathbf{r} = \mathbf{s}_1 + \mathbf{n}$$

where $\mathbf{n}$ contains statistically independent Gaussian components, each of which has variance $N_0/2$.
2. Note that the components are statistically independent of each other and that, for $j \ge M$, the representation contains no signal-related information. Because these components are extraneous and will not contribute to improved performance, we can reduce the dimension of the problem to no more than $M$ by ignoring them. By rejecting these noise-only components, we are effectively filtering out "out-of-band" noise, retaining only those components related to the signals. The basis functions related to the signals define signal space, allowing us to reject the pure-noise components ideally.
As a consequence of these observations, we have a model evaluation problem of the form

$$\mathbf{r} = \begin{bmatrix} r_0 \\ \vdots \\ r_{K-1} \end{bmatrix} = \begin{bmatrix} s_{i,0} \\ \vdots \\ s_{i,K-1} \end{bmatrix} + \begin{bmatrix} n_0 \\ \vdots \\ n_{K-1} \end{bmatrix}$$
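As a quick numerical check of this vector model, the sketch below simulates $\mathbf{r} = \mathbf{s}_i + \mathbf{n}$ with independent Gaussian components of variance $N_0/2$. The two-dimensional signal vectors and the value of $N_0$ are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

N0 = 2.0                       # example noise spectral height parameter
s0 = np.array([1.0, 0.0])      # hypothetical binary signal vectors
s1 = np.array([0.0, 1.0])

# Under model M_0 the received vector is s0 plus white Gaussian noise
# whose components each have variance N0/2.
n = rng.normal(scale=np.sqrt(N0 / 2), size=s0.shape)
r = s0 + n

# Over many draws, the per-component variance of r - s0 approaches N0/2.
samples = s0 + rng.normal(scale=np.sqrt(N0 / 2), size=(100_000, 2))
var_est = (samples - s0).var(axis=0)
```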
We know how to solve this problem; we compute
$$\forall i, i \in \{0, \dots, M-1\}: \quad \Upsilon_i(\mathbf{r}) = \frac{N_0}{2} \ln \pi_i + \langle \mathbf{s}_i, \mathbf{r} \rangle - \frac{\|\mathbf{s}_i\|^2}{2}$$
and choose the largest. The components of the
signal and received vectors are given by
$$s_{i,j} = \int_0^T s_i(t) \varphi_j(t)\,dt$$

$$r_j = \int_0^T r(t) \varphi_j(t)\,dt$$
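The decision rule (compute each $\Upsilon_i$ and pick the largest) can be sketched in a few lines. The signal vectors and priors below are invented for illustration:

```python
import numpy as np

def detect(r, signals, priors, N0):
    """Return the index i maximizing the sufficient statistic
    Upsilon_i(r) = (N0/2) ln(pi_i) + <s_i, r> - ||s_i||^2 / 2."""
    stats = [
        (N0 / 2) * np.log(pi) + s @ r - (s @ s) / 2
        for s, pi in zip(signals, priors)
    ]
    return int(np.argmax(stats))

signals = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]  # example signal set
priors = [0.5, 0.5]                                      # equally likely
# A received vector close to signals[1] is assigned model 1.
i_hat = detect(np.array([0.9, -1.1]), signals, priors, N0=1.0)
```

With equal priors the $\ln \pi_i$ terms cancel, and the rule reduces to maximizing the correlation minus half the signal energy.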
Because of Parseval's Theorem, the inner product between representations equals the time-domain inner product between the represented signals.
$$\langle \mathbf{s}_i, \mathbf{r} \rangle = \int_0^T s_i(t) r(t)\,dt$$
Furthermore,
$$\|\mathbf{s}_i\|^2 = \int_0^T s_i^2(t)\,dt = E_i,$$

the energy in the $i$th signal. Thus, the sufficient statistic for the optimal detector has a closed-form time-domain expression.
$$\Upsilon_i(\mathbf{r}) = \frac{N_0}{2} \ln \pi_i + \int_0^T s_i(t) r(t)\,dt - \frac{E_i}{2} \qquad (2)$$
This form of the minimum probability of error receiver is termed a correlation receiver (see Figure 1). The received signal is correlated with each transmitted signal to obtain the sufficient statistics. These correlations project the received signal onto signal space.
An alternate structure that computes the same quantities can be derived by noting that if $f(t)$ and $g(t)$ are nonzero only over $[0, T]$, the inner product (correlation) operation can be written as a convolution followed by a sampler:

$$\int_0^T f(t) g(t)\,dt = \left[ f(t) * g(T - t) \right] \Big|_{t=T}$$
Consequently, we can restructure the "correlation" operation as a filtering-and-sampling operation. The impulse responses of the linear filters are time-reversed, delayed versions of the signals in the signal set. This structure for the minimum probability of error receiver is known as the matched-filter receiver (see Figure 2). Each type of receiver has the same performance; however, the matched-filter receiver is usually easier to construct because the correlation receiver requires an analog multiplier.
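The correlation-as-convolution identity can be verified numerically. In the sketch below (with arbitrary made-up signals on $[0, 1)$), the sample of the full discrete convolution at index `len(f) - 1` plays the role of the $t = T$ output sample:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500, endpoint=False)
dt = t[1] - t[0]
f = np.sin(2 * np.pi * 3 * t)          # arbitrary example signals on [0, T)
g = np.cos(2 * np.pi * 5 * t) + 0.3

# Direct correlation: the integral of f(t) g(t) over [0, T).
corr = np.sum(f * g) * dt

# Matched-filter view: convolve f with the time-reversed signal g(T - t)
# and sample the filter output at t = T.  In discrete time, the full
# convolution's sample at index len(f) - 1 equals sum_k f[k] g[k].
matched = np.convolve(f, g[::-1])[len(f) - 1] * dt
```

The two numbers agree to floating-point precision, which is exactly the statement that a matched filter sampled at $t = T$ computes the correlation.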
As we know, receiver performance is judged by
the probability of error, which, for equally likely signals in a
binary signal set, is given by
$$P_e = Q\!\left( \frac{\|\mathbf{s}_0 - \mathbf{s}_1\|}{2\sqrt{N_0/2}} \right) \qquad (3)$$
The computation of the probability of error and the dimensionality of the problem can be assessed by considering signal space: the representation of the signals with respect to a basis. The number of basis elements required to represent the signal set defines the dimensionality. The geometric configuration of the signals in this space is known as the signal constellation. Once this constellation is found, computing inter-signal distances is easy.
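Equation (3) can be evaluated directly once the constellation, and hence the inter-signal distance, is known. The sketch below uses $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ from the standard library; the antipodal constellation is an assumed example, not from the text:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0, 1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_binary(d, N0):
    """Probability of error for equally likely binary signals with
    inter-signal distance d = ||s0 - s1|| in white noise of spectral
    height N0/2, per equation (3): Pe = Q(d / (2 sqrt(N0/2)))."""
    return Q(d / (2 * math.sqrt(N0 / 2)))

# Example: antipodal signals s1 = -s0, each with energy E, are a
# distance d = 2 sqrt(E) apart, giving Pe = Q(sqrt(2 E / N0)).
E, N0 = 1.0, 0.5
pe = pe_binary(2 * math.sqrt(E), N0)
```

As the distance shrinks to zero, $P_e$ rises to $Q(0) = 1/2$, the performance of guessing, which matches the geometric picture: closer constellation points are harder to distinguish.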