Sums of Random Variables

Version 2.2. Created 2003/03/13; revised 2003/04/11.
Authors: Nick Kingsbury (ngk10@cam.ac.uk), Liqun Wang (liqun@rice.edu)
Keywords: Marginal Probability, Random Variables

This module introduces sums of random variables.
Consider the random variable Y formed as the sum of two independent random variables X_1 and X_2:

Y = X_1 + X_2

where X_1 has pdf f_1(x_1) and X_2 has pdf f_2(x_2).
We can write the joint pdf for y and x_1 by rewriting the conditional probability formula:

f(y, x_1) = f(y | x_1) f_1(x_1)
It is clear that the event 'Y takes the value y, conditional upon X_1 = x_1' is equivalent to X_2 taking the value y - x_1 (since X_2 = Y - X_1). Hence

f(y | x_1) = f_2(y - x_1)
Now f(y) may be obtained using the marginal probability formula from the discussion of probability density functions. Hence

f(y) = ∫ f(y | x_1) f_1(x_1) dx_1 = ∫ f_2(y - x_1) f_1(x_1) dx_1 = (f_2 * f_1)(y)

i.e. the pdf of the sum Y is the convolution of the pdfs of X_1 and X_2.
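The convolution result above can be checked numerically. The following sketch (an added illustration, not part of the original module, with an arbitrarily chosen grid spacing and sample size) compares a Monte Carlo estimate of the pdf of the sum of two Uniform(0,1) variables with the discrete convolution of their pdfs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo: draw Y = X1 + X2 for independent Uniform(0,1) variables
n = 200_000
y = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)

# Analytic route from the derivation: f(y) = (f2 * f1)(y), the convolution
# of the two uniform pdfs, which is the triangular pdf on [0, 2]
dx = 0.01
x = np.arange(0, 1, dx)
f1 = np.ones_like(x)             # Uniform(0,1) pdf sampled on a grid
f2 = np.ones_like(x)
f_y = np.convolve(f1, f2) * dx   # discrete approximation of the integral
grid = np.arange(len(f_y)) * dx

# Histogram of the simulated sums, normalised to a density
hist, edges = np.histogram(y, bins=50, range=(0, 2), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
approx = np.interp(centres, grid, f_y)

print("max deviation from convolution:", np.max(np.abs(hist - approx)))
```

The histogram of simulated sums should track the convolved pdf closely, peaking at y = 1 as the triangular shape predicts.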
This result may be extended to sums of three or more random variables by repeated application of the above argument for each new variable in turn. Since convolution is a commutative and associative operation, for n independent variables we get:

f(y) = f_n * (f_{n-1} * (… * (f_2 * f_1))) = f_n * f_{n-1} * … * f_2 * f_1
An example of this effect occurs when multiple dice are thrown and the scores are added together. In the two-dice example of subfigures (a), (b) and (c) of the figure in the discussion of probability distributions, we saw how the pmf approximated a triangular shape. This is just the convolution of two uniform 6-point pmfs, one for each of the two dice.
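The triangular pmf can be reproduced directly. This short sketch (an added illustration, not from the original text) convolves the single-die pmf with itself, and uses repeated convolution for the n-dice case in the same way as the n-variable formula above:

```python
from functools import reduce

import numpy as np

# pmf of one fair die: scores 1..6, each with probability 1/6
die = np.full(6, 1 / 6)

# pmf of the total of two dice: the convolution of the two single-die
# pmfs, giving the triangular shape (totals 2..12, peaking at 7)
two_dice = np.convolve(die, die)
for total, p in zip(range(2, 13), two_dice):
    print(f"P(total = {total:2d}) = {p:.4f}")

# n independent dice: repeated convolution f_n * f_{n-1} * ... * f_1
n = 4
n_dice = reduce(np.convolve, [die] * n)   # totals n..6n
```

Each extra die convolves one more uniform pmf into the result, so the pmf of the total grows smoother as n increases.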
Similarly, if two variables with Gaussian pdfs are added together, we shall show in the discussion of the summation of two or more Gaussian random variables that this produces another Gaussian pdf, whose variance is the sum of the two input variances.
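The additivity of variances is easy to check by simulation. The sketch below (an added illustration with arbitrarily chosen means and standard deviations, not part of the original module) sums samples from two independent Gaussian variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent Gaussian variables (parameters chosen arbitrarily):
# X1 ~ N(1, 2^2) and X2 ~ N(-3, 1.5^2)
x1 = rng.normal(1.0, 2.0, 500_000)
x2 = rng.normal(-3.0, 1.5, 500_000)
y = x1 + x2

# For independent variables the means add and the variances add,
# so Y should be close to N(-2, 4 + 2.25) = N(-2, 6.25)
print("mean:", y.mean())
print("var: ", y.var())
```

The sample mean and variance of Y should approach -2 and 6.25 respectively as the number of samples grows.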