Mathematical Expectation: Simple Random Variables

Module by: Paul E Pfeiffer

Summary: For simple, real valued random variables, the expectation is the probability weighted average of the values taken on. It may be viewed as the center of mass for the probability mass distribution on the line.

Introduction

The probability that real random variable X takes a value in a set M of real numbers is interpreted as the likelihood that the observed value X(ω) on any trial will lie in M. Historically, this idea of likelihood is rooted in the intuitive notion that if the experiment is repeated enough times, the probability is approximately the fraction of times the value of X will fall in M. Associated with this interpretation is the notion of the average of the values taken on. We incorporate the concept of mathematical expectation into the mathematical model as an appropriate form of such averages. We begin by studying the mathematical expectation of simple random variables, then extend the definition and properties to the general case. In the process, we note the relationship of mathematical expectation to the Lebesgue integral, which is developed in abstract measure theory. Although we do not develop this theory, which lies beyond the scope of this study, identification of this relationship provides access to a rich and powerful set of properties which have far-reaching consequences in both application and theory.

Expectation for simple random variables

The notion of mathematical expectation is closely related to the idea of a weighted mean, used extensively in the handling of numerical data. Consider the arithmetic average $\bar{x}$ of the following ten numbers: 1, 2, 2, 2, 4, 5, 5, 8, 8, 8, which is given by

\bar{x} = \frac{1}{10}(1 + 2 + 2 + 2 + 4 + 5 + 5 + 8 + 8 + 8)
(1)

Examination of the ten numbers to be added shows that five distinct values are included. One of the ten, or the fraction 1/10 of them, has the value 1, three of the ten, or the fraction 3/10 of them, have the value 2, 1/10 has the value 4, 2/10 have the value 5, and 3/10 have the value 8. Thus, we could write

\bar{x} = (0.1 \cdot 1 + 0.3 \cdot 2 + 0.1 \cdot 4 + 0.2 \cdot 5 + 0.3 \cdot 8)
(2)

The pattern in this last expression can be stated in words: Multiply each possible value by the fraction of the numbers having that value and then sum these products. The fractions are often referred to as the relative frequencies. A sum of this sort is known as a weighted average.
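
As a quick numerical check (a minimal MATLAB sketch; the dot-product style of calculation is used more formally in Example 4 below), the direct average and the relative-frequency weighted average of these ten numbers agree:

x = [1 2 2 2 4 5 5 8 8 8];          % the ten numbers
xbar = mean(x)                      % direct arithmetic average
xbar =  4.5000
t = [1 2 4 5 8];                    % distinct values
p = [0.1 0.3 0.1 0.2 0.3];          % relative frequencies
wbar = t*p'                         % weighted average agrees with xbar
wbar =  4.5000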

In general, suppose there are n numbers $\{x_1, x_2, \ldots, x_n\}$ to be averaged, with $m \le n$ distinct values $\{t_1, t_2, \ldots, t_m\}$. Suppose $f_1$ have value $t_1$, $f_2$ have value $t_2$, ..., $f_m$ have value $t_m$. The $f_i$ must add to n. If we set $p_i = f_i/n$, then the fraction $p_i$ is called the relative frequency of those numbers in the set which have the value $t_i$, $1 \le i \le m$. The average $\bar{x}$ of the n numbers may be written

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \sum_{j=1}^{m} t_j p_j
(3)

In probability theory, we have a similar averaging process in which the relative frequencies of the various possible values of the random variable are replaced by the probabilities that those values are observed on any trial.

Definition. For a simple random variable X with values $\{t_1, t_2, \ldots, t_n\}$ and corresponding probabilities $p_i = P(X = t_i)$, the mathematical expectation, designated $E[X]$, is the probability weighted average of the values taken on by X. In symbols

E[X] = \sum_{i=1}^{n} t_i P(X = t_i) = \sum_{i=1}^{n} t_i p_i
(4)

Note that the expectation is determined by the distribution. Two quite different random variables may have the same distribution, hence the same expectation. Traditionally, this average has been called the mean, or the mean value, of the random variable X.

Example 1: Some special cases

  1. Since $X = a I_E = 0 \cdot I_{E^c} + a I_E$, we have $E[a I_E] = a P(E)$.
  2. For X a constant c, $X = c I_\Omega$, so that $E[c] = c P(\Omega) = c$.
  3. If $X = \sum_{i=1}^{n} t_i I_{A_i}$ then $aX = \sum_{i=1}^{n} a t_i I_{A_i}$, so that
    E[aX] = \sum_{i=1}^{n} a t_i P(A_i) = a \sum_{i=1}^{n} t_i P(A_i) = a E[X]
    (5)
Figure 1: Moment of a probability distribution about the origin. The figure shows point masses $p_1, \ldots, p_5$ located at the points $t_1, \ldots, t_5$ on the real line; the products $t_i p_i$ appear as negative moments to the left of the origin and positive moments to the right.

Mechanical interpretation

In order to aid in visualizing an essentially abstract system, we have employed the notion of probability as mass. The distribution induced by a real random variable on the line is visualized as a unit of probability mass actually distributed along the line. We utilize the mass distribution to give an important and helpful mechanical interpretation of the expectation or mean value. In Example 6 in "Mathematical Expectation: General Random Variables", we give an alternate interpretation in terms of mean-square estimation.

Suppose the random variable X has values $\{t_i : 1 \le i \le n\}$, with $P(X = t_i) = p_i$. This produces a probability mass distribution, as shown in Figure 1, with point mass concentration in the amount of $p_i$ at the point $t_i$. The expectation is

\sum_{i} t_i p_i
(6)

Now $|t_i|$ is the distance of point mass $p_i$ from the origin, with $p_i$ to the left of the origin iff $t_i$ is negative. Mechanically, the sum of the products $t_i p_i$ is the moment of the probability mass distribution about the origin on the real line. From physical theory, this moment is known to be the same as the product of the total mass times the number which locates the center of mass. Since the total mass is one, the mean value is the location of the center of mass. If the real line is viewed as a stiff, weightless rod with point mass $p_i$ attached at each value $t_i$ of X, then the mean value $\mu_X$ is the point of balance. Often there are symmetries in the distribution which make it possible to determine the expectation without detailed calculation.

Example 2: The number of spots on a die

Let X be the number of spots which turn up on a throw of a simple six-sided die. We suppose each number is equally likely. Thus the values are the integers one through six, and each probability is 1/6. By definition

E[X] = \frac{1}{6} \cdot 1 + \frac{1}{6} \cdot 2 + \frac{1}{6} \cdot 3 + \frac{1}{6} \cdot 4 + \frac{1}{6} \cdot 5 + \frac{1}{6} \cdot 6 = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = \frac{7}{2}
(7)

Although the calculation is very simple in this case, it is really not necessary. The probability distribution places equal mass at each of the integer values one through six. The center of mass is at the midpoint.
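
A one-line MATLAB check (a sketch in the style of Example 4 below) confirms the center-of-mass argument:

X = 1:6;                 % the values one through six
PX = (1/6)*ones(1,6);    % equal probabilities
EX = X*PX'               % the midpoint of the values one through six
EX =  3.5000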

Example 3: A simple choice

A child is told she may have one of four toys. The prices are $2.50, $3.00, $2.00, and $3.50, respectively. She chooses one, with respective probabilities 0.2, 0.3, 0.2, and 0.3 of choosing the first, second, third, or fourth. What is the expected cost of her selection?

E[X] = 2.00 \cdot 0.2 + 2.50 \cdot 0.2 + 3.00 \cdot 0.3 + 3.50 \cdot 0.3 = 2.85
(8)

For a simple random variable, the mathematical expectation is determined as the dot product of the value matrix with the probability matrix. This is easily calculated using MATLAB.

Example 4: MATLAB calculation for Example 3

X = [2 2.5 3 3.5];  % Matrix of values (ordered)
PX = 0.1*[2 2 3 3]; % Matrix of probabilities
EX = dot(X,PX)      % The usual MATLAB operation
EX =  2.8500
Ex = sum(X.*PX)     % An alternate calculation
Ex =  2.8500
ex = X*PX'          % Another alternate
ex =  2.8500

Expectation and primitive form

The definition and treatment above assume X is in canonical form, in which case

X = \sum_{i=1}^{n} t_i I_{A_i}, \text{ where } A_i = \{X = t_i\}, \text{ implies } E[X] = \sum_{i=1}^{n} t_i P(A_i)
(9)

We wish to ease this restriction to canonical form.

Suppose simple random variable X is in a primitive form

X = \sum_{j=1}^{m} c_j I_{C_j}, \quad \text{where } \{C_j : 1 \le j \le m\} \text{ is a partition}
(10)

We show that

E[X] = \sum_{j=1}^{m} c_j P(C_j)
(11)

Before a formal verification, we begin with an example which exhibits the essential pattern. Establishing the general case is simply a matter of appropriate use of notation.

Example 5: Simple random variable X in primitive form

X = I_{C_1} + 2 I_{C_2} + I_{C_3} + 3 I_{C_4} + 2 I_{C_5} + 2 I_{C_6}, \quad \text{with } \{C_1, C_2, C_3, C_4, C_5, C_6\} \text{ a partition}
(12)

Inspection shows the distinct possible values of X to be 1, 2, or 3. Also,

A_1 = \{X = 1\} = C_1 \cup C_3, \quad A_2 = \{X = 2\} = C_2 \cup C_5 \cup C_6, \quad \text{and} \quad A_3 = \{X = 3\} = C_4
(13)

so that

P(A_1) = P(C_1) + P(C_3), \quad P(A_2) = P(C_2) + P(C_5) + P(C_6), \quad \text{and} \quad P(A_3) = P(C_4)
(14)

Now

E[X] = P(A_1) + 2 P(A_2) + 3 P(A_3) = P(C_1) + P(C_3) + 2[P(C_2) + P(C_5) + P(C_6)] + 3 P(C_4)
(15)
= P(C_1) + 2 P(C_2) + P(C_3) + 3 P(C_4) + 2 P(C_5) + 2 P(C_6)
(16)

To establish the general pattern, consider $X = \sum_{j=1}^{m} c_j I_{C_j}$. We identify the distinct set of values contained in the set $\{c_j : 1 \le j \le m\}$. Suppose these are $t_1 < t_2 < \cdots < t_n$. For any value $t_i$ in the range, identify the index set $J_i$ of those j such that $c_j = t_i$. Then the terms

\sum_{j \in J_i} c_j I_{C_j} = t_i \sum_{j \in J_i} I_{C_j} = t_i I_{A_i}, \quad \text{where } A_i = \bigcup_{j \in J_i} C_j
(17)

By the additivity of probability

P(A_i) = P(X = t_i) = \sum_{j \in J_i} P(C_j)
(18)

Since for each $j \in J_i$ we have $c_j = t_i$, we have

E[X] = \sum_{i=1}^{n} t_i P(A_i) = \sum_{i=1}^{n} t_i \sum_{j \in J_i} P(C_j) = \sum_{i=1}^{n} \sum_{j \in J_i} c_j P(C_j) = \sum_{j=1}^{m} c_j P(C_j)
(19)

Thus, the defining expression for expectation holds for X in a primitive form.

An alternate approach to obtaining the expectation from a primitive form is to use the csort operation to determine the distribution of X from the coefficients and probabilities of the primitive form.

Example 6: Alternate determinations of E[X]

Suppose X in a primitive form is

X = I_{C_1} + 2 I_{C_2} + I_{C_3} + 3 I_{C_4} + 2 I_{C_5} + 2 I_{C_6} + I_{C_7} + 3 I_{C_8} + 2 I_{C_9} + I_{C_{10}}
(20)

with respective probabilities

P(C_i) = 0.08,\ 0.11,\ 0.06,\ 0.13,\ 0.05,\ 0.08,\ 0.12,\ 0.07,\ 0.14,\ 0.16
(21)
c = [1 2 1 3 2 2 1 3 2 1];            % Matrix of coefficients
pc = 0.01*[8 11 6 13 5 8 12 7 14 16]; % Matrix of probabilities
EX = c*pc'
EX =   1.7800                         % Direct solution
[X,PX] = csort(c,pc);                 % Determination of dbn for X
disp([X;PX]')
    1.0000    0.4200
    2.0000    0.3800
    3.0000    0.2000
Ex = X*PX'                            %  E[X] from distribution
Ex =   1.7800

Linearity

The result on primitive forms may be used to establish the linearity of mathematical expectation for simple random variables. Because of its fundamental importance, we work through the verification in some detail.

Suppose $X = \sum_{i=1}^{n} t_i I_{A_i}$ and $Y = \sum_{j=1}^{m} u_j I_{B_j}$ (both in canonical form). Since

\sum_{i=1}^{n} I_{A_i} = \sum_{j=1}^{m} I_{B_j} = 1
(22)

we have

X + Y = \sum_{i=1}^{n} t_i I_{A_i} \sum_{j=1}^{m} I_{B_j} + \sum_{j=1}^{m} u_j I_{B_j} \sum_{i=1}^{n} I_{A_i} = \sum_{i=1}^{n} \sum_{j=1}^{m} (t_i + u_j) I_{A_i} I_{B_j}
(23)

Note that $I_{A_i} I_{B_j} = I_{A_i B_j}$ and $A_i B_j = \{X = t_i, Y = u_j\}$. The class of these sets for all possible pairs $(i, j)$ forms a partition. Thus, the last summation expresses $Z = X + Y$ in a primitive form. Because of the result on primitive forms, above, we have

E[X + Y] = \sum_{i=1}^{n} \sum_{j=1}^{m} (t_i + u_j) P(A_i B_j) = \sum_{i=1}^{n} \sum_{j=1}^{m} t_i P(A_i B_j) + \sum_{i=1}^{n} \sum_{j=1}^{m} u_j P(A_i B_j)
(24)
= \sum_{i=1}^{n} t_i \sum_{j=1}^{m} P(A_i B_j) + \sum_{j=1}^{m} u_j \sum_{i=1}^{n} P(A_i B_j)
(25)

We note that for each i and for each j

P(A_i) = \sum_{j=1}^{m} P(A_i B_j) \quad \text{and} \quad P(B_j) = \sum_{i=1}^{n} P(A_i B_j)
(26)

Hence, we may write

E[X + Y] = \sum_{i=1}^{n} t_i P(A_i) + \sum_{j=1}^{m} u_j P(B_j) = E[X] + E[Y]
(27)

Now aX and bY are simple if X and Y are, so that with the aid of Example 1 we have

E[aX + bY] = E[aX] + E[bY] = a E[X] + b E[Y]
(28)

If X, Y, Z are simple, then so are aX + bY and cZ. It follows that

E[aX + bY + cZ] = E[aX + bY] + c E[Z] = a E[X] + b E[Y] + c E[Z]
(29)

By an inductive argument, this pattern may be extended to a linear combination of any finite number of simple random variables. Thus we may assert

Linearity. The expectation of a linear combination of a finite number of simple random variables is that linear combination of the expectations of the individual random variables.
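
Linearity may also be checked numerically. The following MATLAB sketch uses a small hypothetical joint distribution (the values t, u and the joint probability matrix P below are illustrative only, not from the text) and compares E[X + Y], computed from the primitive form, with E[X] + E[Y]:

t = [1 3 5];                        % values of X (hypothetical)
u = [-2 0 4];                       % values of Y (hypothetical)
P = [0.10 0.05 0.15;                % P(i,j) = P(X = t(i), Y = u(j))
     0.20 0.10 0.05;
     0.05 0.15 0.15];
PX = sum(P,2)';                     % marginal distribution for X
PY = sum(P,1);                      % marginal distribution for Y
EX = t*PX';                         % EX = 3.1000
EY = u*PY';                         % EY = 0.7000
[uu,tt] = meshgrid(u,t);            % tt(i,j) = t(i), uu(i,j) = u(j)
EXY = sum(sum((tt + uu).*P))        % E[X+Y] from the primitive form
EXY =  3.8000                       % agrees with EX + EY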

Expectation of a simple random variable in affine form

As a direct consequence of linearity, whenever simple random variable X is in affine form, then

E[X] = E\left[ c_0 + \sum_{i=1}^{n} c_i I_{E_i} \right] = c_0 + \sum_{i=1}^{n} c_i P(E_i)
(30)

Thus, the defining expression holds for any affine combination of indicator functions, whether in canonical form or not.

Example 7: Binomial distribution (n, p)

This random variable appears as the number of successes in n Bernoulli trials with probability p of success on each component trial. It is naturally expressed in affine form

X = \sum_{i=1}^{n} I_{E_i} \quad \text{so that} \quad E[X] = \sum_{i=1}^{n} p = np
(31)

Alternately, in canonical form

X = \sum_{k=0}^{n} k I_{A_{kn}}, \quad \text{with } p_k = P(A_{kn}) = P(X = k) = C(n,k) p^k q^{n-k}, \quad q = 1 - p
(32)

so that

E[X] = \sum_{k=0}^{n} k\, C(n,k) p^k q^{n-k}, \quad q = 1 - p
(33)

Some algebraic tricks may be used to show that the second form sums to np, but there is no need of that. The computation for the affine form is much simpler.
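
A short MATLAB check (with the hypothetical choice n = 10, p = 0.3) confirms that the canonical-form sum does indeed give np:

n = 10; p = 0.3; q = 1 - p;                            % hypothetical parameters
k = 0:n;
pk = arrayfun(@(j) nchoosek(n,j), k).*p.^k.*q.^(n-k);  % p_k = C(n,k) p^k q^(n-k)
EX = k*pk'                                             % equals n*p
EX =  3.0000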

Example 8: Expected winnings

A bettor places three bets at $2.00 each. The first bet pays $10.00 with probability 0.15, the second pays $8.00 with probability 0.20, and the third pays $20.00 with probability 0.10. What is the expected gain?

SOLUTION

The net gain may be expressed

X = 10 I_A + 8 I_B + 20 I_C - 6, \quad \text{with } P(A) = 0.15,\ P(B) = 0.20,\ P(C) = 0.10
(34)

Then

E[X] = 10 \cdot 0.15 + 8 \cdot 0.20 + 20 \cdot 0.10 - 6 = -0.90
(35)

These calculations may be done in MATLAB as follows:

c = [10 8 20 -6];
p = [0.15 0.20 0.10 1.00]; % Constant a = aI_(Omega), with P(Omega) = 1
E = c*p'
E =  -0.9000

Functions of simple random variables

If X is in a primitive form (including canonical form) and g is a real function defined on the range of X, then

Z = g(X) = \sum_{j=1}^{m} g(c_j) I_{C_j} \quad \text{(a primitive form)}
(36)

so that

E[Z] = E[g(X)] = \sum_{j=1}^{m} g(c_j) P(C_j)
(37)

Alternately, we may use csort to determine the distribution for Z and work with that distribution.

Caution. If X is in affine form (but not a primitive form)

X = c_0 + \sum_{j=1}^{m} c_j I_{E_j} \quad \text{then} \quad g(X) \ne g(c_0) + \sum_{j=1}^{m} g(c_j) I_{E_j}
(38)

so that

E[g(X)] \ne g(c_0) + \sum_{j=1}^{m} g(c_j) P(E_j)
(39)
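
A small numerical illustration of this caution (the indicator event, the function g, and the probability below are hypothetical): take X = 1 + 2 I_E with P(E) = 0.4 and g(t) = t^2. Then X takes the value 1 with probability 0.6 and the value 3 with probability 0.4, so that

p = 0.4;                             % P(E) (hypothetical)
EgX = 1^2*(1 - p) + (1 + 2)^2*p      % correct E[g(X)] from the values of X
EgX =  4.2000
wrong = 1^2 + 2^2*p                  % naive g(c_0) + g(c_1)P(E) -- not E[g(X)]
wrong =  2.6000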

Example 9: Expectation of a function of X

Suppose X in a primitive form is

X = -3 I_{C_1} - I_{C_2} + 2 I_{C_3} - 3 I_{C_4} + 4 I_{C_5} - I_{C_6} + I_{C_7} + 2 I_{C_8} + 3 I_{C_9} + 2 I_{C_{10}}
(40)

with probabilities $P(C_i) = 0.08, 0.11, 0.06, 0.13, 0.05, 0.08, 0.12, 0.07, 0.14, 0.16$.

Let $g(t) = t^2 + 2t$. Determine $E[g(X)]$.

c = [-3 -1 2 -3 4 -1 1 2 3 2];            % Original coefficients
pc = 0.01*[8 11 6 13 5 8 12 7 14 16];     % Probabilities for C_j
G = c.^2 + 2*c                            % g(c_j)
G =  3  -1   8   3  24  -1   3   8  15   8
EG = G*pc'                            % Direct computation
EG =  6.4200
[Z,PZ] = csort(G,pc);                 % Distribution for Z = g(X)
disp([Z;PZ]')                         % Optional display
   -1.0000    0.1900
    3.0000    0.3300
    8.0000    0.2900
   15.0000    0.1400
   24.0000    0.0500
EZ = Z*PZ'                            % E[Z] from distribution for Z
EZ =  6.4200

A similar approach can be made to a function of a pair of simple random variables, provided the joint distribution is available. Suppose $X = \sum_{i=1}^{n} t_i I_{A_i}$ and $Y = \sum_{j=1}^{m} u_j I_{B_j}$ (both in canonical form). Then

Z = g(X, Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} g(t_i, u_j) I_{A_i B_j}
(41)

The $A_i B_j$ form a partition, so Z is in a primitive form. We have the same two alternative possibilities: (1) direct calculation from values of $g(t_i, u_j)$ and corresponding probabilities $P(A_i B_j) = P(X = t_i, Y = u_j)$, or (2) use of csort to obtain the distribution for Z.

Example 10: Expectation for Z = g(X, Y)

We use the joint distribution in file jdemo1.m and let $g(t, u) = t^2 + 2tu - 3u$. To set up for calculations, we use jcalc.

% file jdemo1.m
X = [-2.37 -1.93 -0.47 -0.11  0  0.57 1.22 2.15 2.97 3.74];
Y = [-3.06 -1.44 -1.21  0.07 0.88 1.77 2.01 2.84];
P = 0.0001*[ 53   8 167 170 184  18  67 122  18  12;
             11  13 143 221 241 153  87 125 122 185;
            165 129 226 185  89 215  40  77  93 187;
            165 163 205  64  60  66 118 239  67 201;
            227   2 128  12 238 106 218 120 222  30;
             93  93  22 179 175 186 221  65 129   4;
            126  16 159  80 183 116  15  22 113 167;
            198 101 101 154 158  58 220 230 228 211];
jdemo1                   % Call for data
jcalc                    % Set up
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
 Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t.^2 + 2*t.*u - 3*u; % Calculation of matrix of [g(t_i, u_j)]
EG = total(G.*P)         % Direct calculation of expectation
EG =  3.2529
[Z,PZ] = csort(G,P);     % Determination of distribution for Z
EZ = Z*PZ'               % E[Z] from distribution
EZ =  3.2529
