Random Signals (version 2.4, 2003)
Nick Kingsbury, ngk10@cam.ac.uk

This module introduces random signals.
Random signals are random variables which evolve, often with
time (e.g. audio noise), but also with distance (e.g. intensity
in an image of a random texture), or sometimes another
parameter.
They can be described as usual by their cdf and either their pmf
(if the amplitude is discrete, as in a digitized signal) or
their pdf (if the amplitude is continuous, as in most analogue
signals).
However a very important additional property is how rapidly a
random signal fluctuates. Clearly a slowly varying signal such
as the waves in an ocean is very different from a rapidly
varying signal such as vibrations in a vehicle. We will see
later how to deal with these frequency-dependent
characteristics of randomness.
For the moment we shall assume that random signals are sampled
at regular intervals and that each signal is equivalent to a
sequence of samples of a given random process, as in the
following examples.
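As a minimal sketch of this idea (in Python; the Gaussian amplitude distribution, standard deviation, and seed are illustrative assumptions, not from the text), one realisation of a sampled random process is simply a sequence of samples drawn at regular intervals:

```python
import random

# One realisation of a sampled random process: zero-mean Gaussian noise
# drawn at regular sampling intervals (sigma and seed are illustrative).
def noise_samples(n, sigma=1.0, seed=42):
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma) for _ in range(n)]

samples = noise_samples(8)
print(samples)  # eight consecutive samples of this realisation
```

Re-running with the same seed reproduces the same realisation; changing the seed gives a different realisation of the same process.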
Example - Detection of a binary signal in noise
We now consider the example of detecting a binary signal after
it has passed through a channel which adds noise. The
transmitted signal is typically as shown in part (a) of the
accompanying figure.
In order to reduce the channel noise, the receiver will
include a lowpass filter. The aim of the filter is to reduce
the noise as much as possible without reducing the peak values
of the signal significantly. A good filter for this has a
half-sine impulse response of the form:
$$h(t) = \begin{cases} \dfrac{\pi}{2T_b}\,\sin\!\left(\dfrac{\pi t}{T_b}\right) & 0 \le t \le T_b \\ 0 & \text{otherwise} \end{cases}$$
where $T_b$ is the bit period.
This filter will convert the rectangular data bits into
sinusoidally shaped pulses, as shown in part (b) of the figure,
and it will also convert wide-bandwidth channel noise into the
form shown in part (c). Bandlimited noise of this form will
usually have an approximately Gaussian pdf.
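The unit-DC-gain property of this filter can be checked numerically. In this sketch (the bit period and sample count are illustrative choices), the area under the sampled half-sine $h(t)$ comes out as unity, so a rectangular $+1$ data bit convolved with $h(t)$ still reaches $+1$ where the overlap is complete:

```python
import math

Tb = 1.0   # bit period (illustrative value)
N = 1000   # samples per bit period
dt = Tb / N

# Sampled half-sine impulse response h(t) = (pi/2Tb) sin(pi t / Tb), 0 <= t <= Tb
h = [(math.pi / (2 * Tb)) * math.sin(math.pi * (k + 0.5) * dt / Tb)
     for k in range(N)]

# Unit gain at zero frequency: the area under h(t) is unity
area = sum(h) * dt
print(round(area, 4))  # 1.0

# A rectangular +1 data bit convolved with h(t): at complete overlap
# (the centre of the output pulse) the value equals the area, i.e. +1
bit = [1.0] * N
centre_value = sum(b * hk for b, hk in zip(bit, h)) * dt
print(round(centre_value, 4))  # 1.0
```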
Because this filter has an impulse response limited to just
one bit period and has unit gain at zero frequency (the area
under $h(t)$ is unity), the signal values at the center of each
bit period at the detector will still be $\pm 1$. If we choose
to sample each bit at the detector at this optimal mid point,
the pdfs of the signal plus noise at the detector will be as
shown in the figure.
Let the filtered data signal be $D(t)$ and the filtered noise
be $U(t)$; the detector signal is then
$$R(t) = D(t) + U(t)$$
If we assume that
+1 and -1 bits are
equiprobable and the noise is a symmetric zero-mean process,
the optimum detector threshold is clearly midway between these
two states, i.e. at zero. The probability of error when the
data =
+1 is then given by:
$$\Pr(\text{error} \mid D = +1) = \Pr(R(t) < 0 \mid D = +1) = F_U(-1) = \int_{-\infty}^{-1} f_U(u)\,du$$
where $F_U$ and $f_U$ are the cdf and pdf of $U$. This is the
shaded area in the figure.
Similarly the probability of error when the data =
-1 is then given by:
$$\Pr(\text{error} \mid D = -1) = \Pr(R(t) > 0 \mid D = -1) = 1 - F_U(+1) = \int_{1}^{\infty} f_U(u)\,du$$
Hence the overall probability of error is:
$$\Pr(\text{error}) = \Pr(\text{error} \mid D = +1)\Pr(D = +1) + \Pr(\text{error} \mid D = -1)\Pr(D = -1)$$
$$= \int_{-\infty}^{-1} f_U(u)\,du \;\Pr(D = +1) + \int_{1}^{\infty} f_U(u)\,du \;\Pr(D = -1)$$
Since $f_U$ is symmetric about zero,
$\int_{-\infty}^{-1} f_U(u)\,du = \int_{1}^{\infty} f_U(u)\,du$, and so
$$\Pr(\text{error}) = \int_{1}^{\infty} f_U(u)\,du \,\left(\Pr(D = +1) + \Pr(D = -1)\right) = \int_{1}^{\infty} f_U(u)\,du$$
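This collapse of the two weighted terms into a single tail integral can be verified numerically. The sketch below (the noise standard deviation is an illustrative assumption) integrates a zero-mean Gaussian pdf over both tails beyond $\pm 1$ and confirms they are equal, so the equiprobable weighting changes nothing:

```python
import math

sigma = 0.5  # illustrative noise standard deviation

def f_U(u):
    """Zero-mean Gaussian pdf with variance sigma**2."""
    return math.exp(-u * u / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def integrate(f, a, b, n=20000):
    """Simple trapezoidal rule on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

# Tails beyond -1 and +1 (8 sigma is effectively infinity here)
p_plus = integrate(f_U, -8 * sigma, -1.0)   # P(error | D = +1)
p_minus = integrate(f_U, 1.0, 8 * sigma)    # P(error | D = -1)

# Equiprobable bits: the overall error probability is the same tail integral
p_err = 0.5 * p_plus + 0.5 * p_minus
print(p_plus, p_minus, p_err)
```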
To be a little more general and to account for signal
attenuation over the channel, we shall assume that the signal
values at the detector are
$\pm v_0$ (rather than $\pm 1$) and that the filtered noise at
the detector has a zero-mean Gaussian pdf with variance
$\sigma^2$:
$$f_U(u) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-u^2 / 2\sigma^2}$$
and so
$$\Pr(\text{error}) = \int_{v_0}^{\infty} f_U(u)\,du = \int_{v_0/\sigma}^{\infty} \sigma f_U(\sigma u)\,du = Q\!\left(\frac{v_0}{\sigma}\right)$$
where
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-u^2/2}\,du$$
This integral has no analytic solution, but a good
approximation to it exists and is discussed in some detail
elsewhere.
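Although $Q(x)$ has no closed form in elementary functions, it can be written exactly in terms of the complementary error function, $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$, which Python's standard library provides (this reformulation is standard, not specific to this module):

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))  # 0.5: half the area of the zero-mean pdf lies above zero
print(Q(3.0))  # roughly 1.35e-3
```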
From $Q(v_0/\sigma)$ we may obtain the probability of error in
the binary detector, which is often expressed as the bit
error rate or BER. For example, if
$\Pr(\text{error}) = 2 \times 10^{-3}$, this would often be
expressed as a bit error rate of $2 \times 10^{-3}$, or
alternatively as 1 error in 500 bits (on average).
The argument $v_0/\sigma$ of $Q$ is the
signal-to-noise voltage ratio (SNR) at
the detector, and the BER rapidly diminishes with increasing
SNR (see the figure).
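How steeply the BER falls with SNR can be tabulated directly (a sketch using the erfc form of $Q$; the SNR values chosen are illustrative):

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# BER = Q(v0/sigma) drops rapidly as the SNR voltage ratio grows
for snr in [1, 2, 3, 4, 5]:
    print(f"v0/sigma = {snr}: BER = {Q(snr):.3e}")
```

Each unit increase in $v_0/\sigma$ reduces the BER by one to two orders of magnitude over this range.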