Perhaps the simplest error correcting code is the
repetition code.
Here, the transmitter sends the data bit several times, an odd
number of times in fact. Because the error probability $p_e$ is always less than $\frac{1}{2}$, we know that more of the received bits should be correct than in error.
Simple majority voting of the received bits (hence the reason
for the odd number) determines the transmitted bit more
accurately than sending it alone.
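Majority voting over an odd number of received copies can be sketched in a few lines; the function name below is my own, not from the text.

```python
def majority_vote(bits):
    """Decode one repetition codeword: return 1 if more than
    half the received bits are 1, else 0. Expects an odd length
    so a tie is impossible."""
    return 1 if sum(bits) > len(bits) // 2 else 0

# A single bit error among three copies is corrected.
print(majority_vote([0, 1, 0]))  # -> 0
# Two errors overwhelm the vote and the decoder outputs 1.
print(majority_vote([1, 1, 0]))  # -> 1
```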
For example, let's consider the threefold repetition code:
for every bit $b_n$ emerging from the source coder, the channel coder produces three. Thus, the bit stream $c_l$ emerging from the channel coder has a data rate three times higher than that of the original bit stream $b_n$.
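As a quick illustration of the tripling of the data rate, a minimal encoder sketch (the helper name is mine, not the text's):

```python
def repeat3(bits):
    """Threefold repetition encoder: each source bit b_n
    yields three identical channel bits."""
    out = []
    for b in bits:
        out.extend([b, b, b])
    return out

stream = repeat3([1, 0, 1])
print(stream)       # -> [1, 1, 1, 0, 0, 0, 1, 1, 1]
print(len(stream))  # three times the source length
```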
The coding table illustrates when errors can be corrected and when they can't by the majority-vote decoder.
Table 1: Coding Table
In this example, the transmitter encodes $0$ as $000$. The channel creates an error (changing a $0$ into a $1$) with probability $p_e$.
The first column lists all possible received datawords and the second the probability of each dataword being received. The last column shows the results of the majority-vote decoder. When the decoder produces $0$, it successfully corrected the errors introduced by the channel (if there were any; the top row corresponds to the case in which no errors occurred). The decoder's error probability is the sum of the probabilities of the rows in which it produces $1$.
Code   Probability         Bit
000    $(1-p_e)^3$         0
001    $p_e(1-p_e)^2$      0
010    $p_e(1-p_e)^2$      0
011    $p_e^2(1-p_e)$      1
100    $p_e(1-p_e)^2$      0
101    $p_e^2(1-p_e)$      1
110    $p_e^2(1-p_e)$      1
111    $p_e^3$             1
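The table's entries can be checked by enumerating all eight received words, assuming $000$ was sent; this is a verification sketch with an illustrative value $p_e = 0.1$ of my own choosing.

```python
from itertools import product

def word_prob(word, pe):
    """Probability that codeword 000 arrives as `word`:
    each received 1 is a flipped bit (probability pe),
    each received 0 survived intact (probability 1 - pe)."""
    errs = sum(word)
    return pe**errs * (1 - pe)**(3 - errs)

pe = 0.1  # assumed example channel error probability
for word in product((0, 1), repeat=3):
    decoded = 1 if sum(word) >= 2 else 0  # majority vote
    print("".join(map(str, word)), round(word_prob(word, pe), 4), decoded)

# The eight probabilities form a complete distribution.
assert abs(sum(word_prob(w, pe) for w in product((0, 1), repeat=3)) - 1) < 1e-12
```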
Thus, if one of the three bits is received in error, the receiver can correct the error; if more than one error occurs, the channel decoder announces the bit is $1$ instead of the transmitted value of $0$. Using this repetition code, the probability of $\hat{b}_n \neq 0$ equals $3p_e^2(1-p_e) + p_e^3$.
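A quick numerical check of this formula against the uncoded error probability, for a few illustrative values of $p_e$ (the function name is mine):

```python
def p_decode_error(pe):
    """Decoder error probability for the threefold repetition
    code: at least two of the three bits must be flipped."""
    return 3 * pe**2 * (1 - pe) + pe**3

# For every pe below 1/2, coding beats sending the bit alone.
for pe in (0.1, 0.25, 0.4):
    print(pe, p_decode_error(pe))
```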
This probability of a decoding error is always less than $p_e$, the uncoded value, so long as $p_e < \frac{1}{2}$.
Demonstrate mathematically that this claim is indeed true. Is $3p_e^2(1-p_e) + p_e^3 \le p_e$?
Dividing through by $p_e$, this question is equivalent to $3p_e(1-p_e) + p_e^2 \le 1$, or $2p_e^2 - 3p_e + 1 \ge 0$.
Because this is an upward-going parabola, we need only check where its roots are. Using the quadratic formula, we find that they are located at $\frac{1}{2}$ and $1$. Consequently, in the range $0 \le p_e \le \frac{1}{2}$ the error rate produced by coding is smaller.
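The root locations can also be seen without the quadratic formula, since the quadratic factors directly:

```latex
2p_e^2 - 3p_e + 1 = (2p_e - 1)(p_e - 1)
```

For $0 \le p_e < \frac{1}{2}$, both factors are negative, so their product is positive and the inequality $2p_e^2 - 3p_e + 1 \ge 0$ holds, with equality only at the roots $p_e = \frac{1}{2}$ and $p_e = 1$.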
"Electrical Engineering Digital Processing Systems in Braille."