Independent Classes of Random Variables

Module by: Paul E. Pfeiffer

Summary: The concept of independence for classes of events is developed in terms of a product rule. Recall that for a real random variable X, the inverse image of each reasonable subset M of the real line (i.e., the set of all outcomes which are mapped into M by X) is an event. Similarly, the inverse image of N by random variable Y is an event. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine in this fashion. This condition may be stated in terms of the product rule $P(X \in M, Y \in N) = P(X \in M)P(Y \in N)$ for all Borel sets M, N. Equivalently, the product rule holds for the distribution functions: $F_{XY}(t,u) = F_X(t)F_Y(u)$ for all t, u, and similarly for density functions when they exist. This condition puts restrictions on the nature of the probability mass distribution on the plane. For a rectangle with sides M, N the probability mass in $M \times N$ is $P(X \in M)P(Y \in N)$. Extension to general classes is simple and immediate.

Introduction

The concept of independence for classes of events is developed in terms of a product rule. In this unit, we extend the concept to classes of random variables.

Independent pairs

Recall that for a random variable X, the inverse image $X^{-1}(M)$ (i.e., the set of all outcomes $\omega \in \Omega$ which are mapped into M by X) is an event for each reasonable subset M on the real line. Similarly, the inverse image $Y^{-1}(N)$ is an event determined by random variable Y for each reasonable set N. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine. More precisely,

Definition

A pair $\{X, Y\}$ of random variables is (stochastically) independent iff each pair of events $\{X^{-1}(M), Y^{-1}(N)\}$ is independent.

This condition may be stated in terms of the product rule

$$P(X \in M,\, Y \in N) = P(X \in M)\,P(Y \in N) \quad \text{for all (Borel) sets } M, N \tag{1}$$

Independence implies

$$F_{XY}(t,u) = P(X \in (-\infty, t],\, Y \in (-\infty, u]) = P(X \in (-\infty, t])\,P(Y \in (-\infty, u]) \tag{2}$$
$$= F_X(t)\,F_Y(u) \quad \forall\, t, u \tag{3}$$

Note that the product rule on the distribution function is equivalent to the condition that the product rule holds for the inverse images of a special class of sets $\{M, N\}$ of the form $M = (-\infty, t]$ and $N = (-\infty, u]$. An important theorem from measure theory ensures that if the product rule holds for this special class it holds for the general class of $\{M, N\}$. Thus we may assert

The pair $\{X, Y\}$ is independent iff the following product rule holds

$$F_{XY}(t,u) = F_X(t)\,F_Y(u) \quad \forall\, t, u \tag{4}$$

Example 1: An independent pair

Suppose $F_{XY}(t,u) = (1 - e^{-\alpha t})(1 - e^{-\beta u})$, $0 \le t$, $0 \le u$. Taking limits shows

$$F_X(t) = \lim_{u \to \infty} F_{XY}(t,u) = 1 - e^{-\alpha t} \quad \text{and} \quad F_Y(u) = \lim_{t \to \infty} F_{XY}(t,u) = 1 - e^{-\beta u} \tag{5}$$

so that the product rule $F_{XY}(t,u) = F_X(t)\,F_Y(u)$ holds. The pair $\{X, Y\}$ is therefore independent.

If there is a joint density function, then the relationship to the joint distribution function makes it clear that the pair is independent iff the product rule holds for the density. That is, the pair is independent iff

$$f_{XY}(t,u) = f_X(t)\,f_Y(u) \quad \forall\, t, u \tag{6}$$

Example 2: Joint uniform distribution on a rectangle

Suppose the joint probability mass distribution induced by the pair $\{X, Y\}$ is uniform on a rectangle with sides $I_1 = [a, b]$ and $I_2 = [c, d]$. Since the area is $(b-a)(d-c)$, the constant value of $f_{XY}$ is $1/(b-a)(d-c)$. Simple integration gives

$$f_X(t) = \frac{1}{(b-a)(d-c)} \int_c^d du = \frac{1}{b-a}, \quad a \le t \le b \quad \text{and} \tag{7}$$
$$f_Y(u) = \frac{1}{(b-a)(d-c)} \int_a^b dt = \frac{1}{d-c}, \quad c \le u \le d \tag{8}$$

Thus it follows that X is uniform on $[a, b]$, Y is uniform on $[c, d]$, and $f_{XY}(t,u) = f_X(t)\,f_Y(u)$ for all t, u, so that the pair $\{X, Y\}$ is independent. The converse is also true: if the pair is independent with X uniform on $[a, b]$ and Y uniform on $[c, d]$, then the pair has uniform joint distribution on $I_1 \times I_2$.
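The product rule is easy to verify numerically. A minimal MATLAB sketch, with assumed endpoints a = 0, b = 2, c = 1, d = 4 and assumed test intervals M = [0.5, 1.5], N = [2, 3]:

a = 0; b = 2; c = 1; d = 4;                 % assumed endpoints
f = @(t,u) ones(size(t))/((b-a)*(d-c));     % constant joint density on the rectangle
PM  = integral2(f, 0.5, 1.5, c, d)          % P(X in M) = 0.5
PN  = integral2(f, a, b, 2, 3)              % P(Y in N) = 1/3
PMN = integral2(f, 0.5, 1.5, 2, 3)          % P((X,Y) in MxN) = 1/6 = PM*PN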

The joint mass distribution

It should be apparent that the independence condition puts restrictions on the character of the joint mass distribution on the plane. In order to describe this more succinctly, we employ the following terminology.

Definition

If M is a subset of the horizontal axis and N is a subset of the vertical axis, then the cartesian product $M \times N$ is the (generalized) rectangle consisting of those points $(t, u)$ on the plane such that $t \in M$ and $u \in N$.

Example 3: Rectangle with interval sides

The rectangle in Example 2 is the Cartesian product $I_1 \times I_2$, consisting of all those points $(t, u)$ such that $a \le t \le b$ and $c \le u \le d$ (i.e., $t \in I_1$ and $u \in I_2$).

Figure 1: Joint distribution for an independent pair of random variables. (The figure shows the rectangle $M \times N$ formed by the intersection of a vertical strip meeting the horizontal axis in M and a horizontal strip meeting the vertical axis in N. The mass in the vertical strip is $P(X \in M)$; the mass in the horizontal strip is $P(Y \in N)$; the mass in the rectangle $M \times N$ is $P(X \in M)P(Y \in N)$.)

We restate the product rule for independence in terms of cartesian product sets.

$$P(X \in M,\, Y \in N) = P\big[(X, Y) \in M \times N\big] = P(X \in M)\,P(Y \in N) \tag{9}$$

Reference to Figure 1 illustrates the basic pattern. If M, N are intervals on the horizontal and vertical axes, respectively, then the rectangle $M \times N$ is the intersection of the vertical strip meeting the horizontal axis in M with the horizontal strip meeting the vertical axis in N. The probability $P(X \in M)$ is the portion of the joint probability mass in the vertical strip; the probability $P(Y \in N)$ is the part of the joint probability in the horizontal strip. The probability in the rectangle is the product of these marginal probabilities.

This suggests a useful test for nonindependence which we call the rectangle test. We illustrate with a simple example.

Figure 2: Rectangle test for nonindependence of a pair of random variables. (The figure shows probability mass uniformly distributed over a square rotated 45°, together with a small rectangle $M \times N$ that misses the square: $P(X \in M) > 0$ and $P(Y \in N) > 0$, yet $P(X \in M, Y \in N) = 0$.)

Example 4: The rectangle test for nonindependence

Suppose probability mass is uniformly distributed over the square with vertices at (1,0), (2,1), (1,2), (0,1). It is evident from Figure 2 that a value of X determines the possible values of Y and vice versa, so that we would not expect independence of the pair. To establish this, consider the small rectangle $M \times N$ shown on the figure. There is no probability mass in the region. Yet $P(X \in M) > 0$ and $P(Y \in N) > 0$, so that

$P(X \in M)\,P(Y \in N) > 0$, but $P\big[(X, Y) \in M \times N\big] = 0$. The product rule fails; hence the pair cannot be stochastically independent.
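The test is easy to carry out on a discrete grid approximation. A minimal MATLAB sketch; the grid spacing and the test rectangle $M = [1.6, 1.9]$, $N = [1.6, 1.9]$ are assumed values for illustration:

d = 0.002;                                  % assumed grid spacing
[t,u] = meshgrid(0:d:2, 0:d:2);
P = 0.5*((abs(t-1) + abs(u-1)) <= 1)*d^2;   % uniform density 1/2 over the square
M = (t >= 1.6)&(t <= 1.9);                  % assumed rectangle sides
N = (u >= 1.6)&(u <= 1.9);
PM  = sum(sum(M.*P))                        % approx 0.075 > 0
PN  = sum(sum(N.*P))                        % approx 0.075 > 0
PMN = sum(sum((M&N).*P))                    % = 0: no mass in the rectangle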

Remark. There are nonindependent cases for which this test does not work. And it does not provide a test for independence. In spite of these limitations, it is frequently useful. Because of the information contained in the independence condition, in many cases the complete joint and marginal distributions may be obtained from appropriate partial information. The following is a simple example.

Example 5: Joint and marginal probabilities from partial information

Suppose the pair $\{X, Y\}$ is independent and each has three possible values. The following four items of information are available.

$$P(X = t_1) = 0.2, \quad P(Y = u_1) = 0.3, \quad P(X = t_1, Y = u_2) = 0.08 \tag{10}$$
$$P(X = t_2, Y = u_1) = 0.15 \tag{11}$$

These values are shown in bold type on Figure 3. A combination of the product rule and the fact that the total probability mass is one is used to calculate each of the marginal and joint probabilities. For example, $P(X = t_1) = 0.2$ and $P(X = t_1, Y = u_2) = P(X = t_1)\,P(Y = u_2) = 0.08$ implies $P(Y = u_2) = 0.4$. Then $P(Y = u_3) = 1 - P(Y = u_1) - P(Y = u_2) = 0.3$. Others are calculated similarly. There is no unique procedure for solution. And it has not seemed useful to develop MATLAB procedures to accomplish this.

Figure 3: Joint and marginal probabilities from partial information. (The figure shows the marginal probabilities as points along the two axes and the joint probabilities at the grid intersections; given values are in bold, calculated values in italics. X marginals: 0.2 (bold), 0.5, 0.3. Y marginals: 0.3 (bold), 0.4, 0.3.)
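Once both marginal distributions have been recovered, the product rule yields every joint probability at once. A minimal MATLAB sketch, using the marginal values deduced above (rows of P correspond to $u_1, u_2, u_3$, columns to $t_1, t_2, t_3$):

PX = [0.2 0.5 0.3];    % P(X = t_i); 0.5 and 0.3 deduced above
PY = [0.3 0.4 0.3];    % P(Y = u_j); 0.4 and 0.3 deduced above
P  = PY'*PX            % product rule: P(j,i) = P(Y = u_j)P(X = t_i)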

Example 6: The joint normal distribution

A pair $\{X, Y\}$ has the joint normal distribution iff the joint density is

$$f_{XY}(t,u) = \frac{1}{2\pi \sigma_X \sigma_Y (1 - \rho^2)^{1/2}}\, e^{-Q(t,u)/2} \tag{12}$$

where

$$Q(t,u) = \frac{1}{1 - \rho^2}\left[\left(\frac{t - \mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{t - \mu_X}{\sigma_X}\right)\left(\frac{u - \mu_Y}{\sigma_Y}\right) + \left(\frac{u - \mu_Y}{\sigma_Y}\right)^2\right] \tag{13}$$

The marginal densities are obtained with the aid of some algebraic tricks to integrate the joint density. The result is that $X \sim N(\mu_X, \sigma_X^2)$ and $Y \sim N(\mu_Y, \sigma_Y^2)$. If the parameter ρ is set to zero, the result is

$$f_{XY}(t,u) = f_X(t)\,f_Y(u) \tag{14}$$

so that the pair is independent iff $\rho = 0$. The details are left as an exercise for the interested reader.
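The factorization at $\rho = 0$ can be seen in one step (a sketch of the exercise): the cross term drops out of Q, leaving a sum of squares, so

$$Q(t,u) = \left(\frac{t - \mu_X}{\sigma_X}\right)^2 + \left(\frac{u - \mu_Y}{\sigma_Y}\right)^2 \quad \text{and} \quad f_{XY}(t,u) = \frac{1}{\sigma_X \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{t - \mu_X}{\sigma_X}\right)^2} \cdot \frac{1}{\sigma_Y \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{u - \mu_Y}{\sigma_Y}\right)^2} = f_X(t)\,f_Y(u)$$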

Remark. While it is true that every independent pair of normally distributed random variables is joint normal, not every pair of normally distributed random variables has the joint normal distribution.

Example 7: A normal pair not joint normally distributed

We start with the distribution for a joint normal pair and derive a joint distribution for a normal pair which is not joint normal. The function

$$\varphi(t,u) = \frac{1}{2\pi} \exp\left(-\frac{t^2}{2} - \frac{u^2}{2}\right) \tag{15}$$

is the joint normal density for an independent pair ($\rho = 0$) of standardized normal random variables. Now define the joint density for a pair $\{X, Y\}$ by

$$f_{XY}(t,u) = 2\varphi(t,u) \text{ in the first and third quadrants, and zero elsewhere} \tag{16}$$

Both $X \sim N(0,1)$ and $Y \sim N(0,1)$. However, they cannot be joint normal, since the joint normal density is positive for all $(t, u)$.
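To check the marginals (a short sketch; write $\phi$ for the standard normal density, so that $\varphi(t,u) = \phi(t)\phi(u)$): for $t > 0$ the density is nonzero only in the first quadrant, so

$$f_X(t) = \int_0^\infty 2\,\phi(t)\,\phi(u)\,du = 2\,\phi(t) \cdot \tfrac{1}{2} = \phi(t)$$

and the same computation over the third quadrant gives $f_X(t) = \phi(t)$ for $t < 0$. The argument for Y is identical.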

Independent classes

Since independence of random variables is independence of the events determined by the random variables, extension to general classes is simple and immediate.

Definition

A class $\{X_i : i \in J\}$ of random variables is (stochastically) independent iff the product rule holds for every finite subclass of two or more.

Remark. The index set J in the definition may be finite or infinite.

For a finite class $\{X_i : 1 \le i \le n\}$, independence is equivalent to the product rule

$$F_{X_1 X_2 \cdots X_n}(t_1, t_2, \ldots, t_n) = \prod_{i=1}^n F_{X_i}(t_i) \quad \text{for all } (t_1, t_2, \ldots, t_n) \tag{17}$$

Since we may obtain the joint distribution function for any finite subclass by letting the arguments for the others be $\infty$ (i.e., by taking the limits as the appropriate $t_i$ increase without bound), the single product rule suffices to account for all finite subclasses.
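For instance, with n = 3, letting $t_3 \to \infty$ recovers the product rule for the subclass $\{X_1, X_2\}$ (a one-line sketch):

$$F_{X_1 X_2}(t_1, t_2) = \lim_{t_3 \to \infty} F_{X_1 X_2 X_3}(t_1, t_2, t_3) = F_{X_1}(t_1)\,F_{X_2}(t_2) \lim_{t_3 \to \infty} F_{X_3}(t_3) = F_{X_1}(t_1)\,F_{X_2}(t_2)$$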

Absolutely continuous random variables

If a class $\{X_i : i \in J\}$ is independent and the individual variables are absolutely continuous (i.e., have densities), then any finite subclass is jointly absolutely continuous and the product rule holds for the densities of such subclasses

$$f_{X_{i_1} X_{i_2} \cdots X_{i_m}}(t_{i_1}, t_{i_2}, \ldots, t_{i_m}) = \prod_{k=1}^m f_{X_{i_k}}(t_{i_k}) \quad \text{for all } (t_{i_1}, t_{i_2}, \ldots, t_{i_m}) \tag{18}$$

Similarly, if each finite subclass is jointly absolutely continuous, then each individual variable is absolutely continuous and the product rule holds for the densities. Frequently we deal with independent classes in which each random variable has the same marginal distribution. Such classes are referred to as iid classes (an acronym for independent, identically distributed). Examples are simple random samples from a given population, or the results of repetitive trials with the same distribution on the outcome of each component trial. A Bernoulli sequence is a simple example.
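As a quick illustration, an iid Bernoulli class is easy to simulate. A minimal MATLAB sketch (the success probability 0.7 and the count of ten trials are assumed values for illustration):

p = 0.7;                 % assumed common success probability
X = rand(1,10) <= p      % ten iid Bernoulli(p) random variables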

Simple random variables

Consider a pair $\{X, Y\}$ of simple random variables in canonical form

$$X = \sum_{i=1}^n t_i I_{A_i} \qquad Y = \sum_{j=1}^m u_j I_{B_j} \tag{19}$$

Since $A_i = \{X = t_i\}$ and $B_j = \{Y = u_j\}$, the pair $\{X, Y\}$ is independent iff each of the pairs $\{A_i, B_j\}$ is independent. The joint distribution has probability mass at each point $(t_i, u_j)$ in the range of $W = (X, Y)$. Thus at every point on the grid,

$$P(X = t_i, Y = u_j) = P(X = t_i)\,P(Y = u_j) \tag{20}$$

According to the rectangle test, no gridpoint having one of the $t_i$ or $u_j$ as a coordinate has zero probability mass. The marginal distributions determine the joint distribution. If X has n distinct values and Y has m distinct values, then the $n + m$ marginal probabilities suffice to determine the $m \cdot n$ joint probabilities. Since the marginal probabilities for each variable must add to one, only $(n - 1) + (m - 1) = m + n - 2$ values are needed.

Suppose X and Y are in affine form. That is,

$$X = a_0 + \sum_{i=1}^n a_i I_{E_i} \qquad Y = b_0 + \sum_{j=1}^m b_j I_{F_j} \tag{21}$$

Since $A_r = \{X = t_r\}$ is the union of minterms generated by the $E_i$ and $B_s = \{Y = u_s\}$ is the union of minterms generated by the $F_j$, the pair $\{X, Y\}$ is independent iff each pair of minterms $\{M_a, N_b\}$ generated by the two classes, respectively, is independent. Independence of the minterm pairs is implied by independence of the combined class

$$\{E_i, F_j : 1 \le i \le n,\ 1 \le j \le m\} \tag{22}$$

Calculations in the joint simple case are readily handled by appropriate m-functions and m-procedures.

MATLAB and independent simple random variables

In the general case of pairs of joint simple random variables we have the m-procedure jcalc, which uses information in matrices X, Y, and P to determine the marginal probabilities and the calculation matrices t and u. In the independent case, we need only the marginal distributions in matrices X, PX, Y, and PY to determine the joint probability matrix (hence the joint distribution) and the calculation matrices t and u. If the random variables are given in canonical form, we have the marginal distributions. If they are in affine form, we may use canonic (or the function form canonicf) to obtain the marginal distributions.

Once we have both marginal distributions, we use an m-procedure we call icalc. Formation of the joint probability matrix is simply a matter of determining all the joint probabilities

$$p(i,j) = P(X = t_i, Y = u_j) = P(X = t_i)\,P(Y = u_j) \tag{23}$$

Once these are calculated, formation of the calculation matrices t and u is achieved exactly as in jcalc.
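In MATLAB the joint probability matrix is just an outer product of the marginal probability vectors. A minimal sketch; the flipud reflects the convention, visible in the displays below, that rows of P correspond to Y-values in decreasing order (inferred from the examples, not from the toolbox source):

P = flipud(PY')*PX;    % P(i,j) = P(Y = u_i)P(X = t_j), rows by decreasing Y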

Example 8: Use of icalc to set up for joint calculations

X = [-4 -2 0 1 3];
Y = [0 1 2 4];
PX = 0.01*[12 18 27 19 24];
PY = 0.01*[15 43 31 11];
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
 Use array operations on matrices X, Y, PX, PY, t, u, and P
disp(P)                        % Optional display of the joint matrix
    0.0132    0.0198    0.0297    0.0209    0.0264
    0.0372    0.0558    0.0837    0.0589    0.0744
    0.0516    0.0774    0.1161    0.0817    0.1032
    0.0180    0.0270    0.0405    0.0285    0.0360
disp(t)                        % Calculation matrix t
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
disp(u)                        % Calculation matrix u
     4     4     4     4     4
     2     2     2     2     2
     1     1     1     1     1
     0     0     0     0     0
M = (t>=-3)&(t<=2);            % M = [-3, 2]
PM = total(M.*P)               % P(X in M)
PM =   0.6400
N = (u>0)&(u.^2<=15);          % N = {u: u > 0, u^2 <= 15}
PN = total(N.*P)               % P(Y in N)
PN =   0.7400
Q = M&N;                       % Rectangle MxN
PQ = total(Q.*P)               % P((X,Y) in MxN)
PQ =   0.4736
p = PM*PN
p  =   0.4736                  % P((X,Y) in MxN) = P(X in M)P(Y in N)

As an example, consider again the problem of joint Bernoulli trials described in the treatment of Composite trials.

Example 9: The joint Bernoulli trial of Example 4.9.

Bill and Mary take ten basketball free throws each. We assume the two sequences of trials are independent of each other, and each is a Bernoulli sequence.

      Mary: Has probability 0.80 of success on each trial.

      Bill: Has probability 0.85 of success on each trial.

What is the probability Mary makes more free throws than Bill?

SOLUTION

Let X be the number of goals that Mary makes and Y be the number that Bill makes. Then $X \sim$ binomial (10, 0.8) and $Y \sim$ binomial (10, 0.85).

X = 0:10;
Y = 0:10;
PX = ibinom(10,0.8,X);
PY = ibinom(10,0.85,Y);
icalc
Enter row matrix of X-values  X  % Could enter 0:10
Enter row matrix of Y-values  Y  % Could enter 0:10
Enter X probabilities  PX        % Could enter ibinom(10,0.8,X)
Enter Y probabilities  PY        % Could enter ibinom(10,0.85,Y)
 Use array operations on matrices X, Y, PX, PY, t, u, and P
PM = total((t>u).*P)
PM =  0.2738                     % Agrees with solution in Example 9 from "Composite Trials".
Pe = total((u==t).*P)            % Additional information is more easily
Pe =  0.2276                     % obtained than in the event formulation
Pm = total((t>=u).*P)            % of Example 9 from "Composite Trials".
Pm =  0.5014

Example 10: Sprinters' time trials

Twelve world class sprinters in a meet are running in two heats of six persons each. Each runner has a reasonable chance of breaking the track record. We suppose results for individuals are independent.

          First heat probabilities: 0.61 0.73 0.55 0.81 0.66 0.43         

          Second heat probabilities: 0.75 0.48 0.62 0.58 0.77 0.51         

Compare the two heats for numbers who break the track record.

SOLUTION

Let X be the number of successes in the first heat and Y be the number who are successful in the second heat. Then the pair $\{X, Y\}$ is independent. We use the m-function canonicf to determine the distributions for X and for Y, then icalc to get the joint distribution.

c1 = [ones(1,6) 0];
c2 = [ones(1,6) 0];
P1 = [0.61 0.73 0.55 0.81 0.66 0.43];
P2 = [0.75 0.48 0.62 0.58 0.77 0.51];
[X,PX] = canonicf(c1,minprob(P1));
[Y,PY] = canonicf(c2,minprob(P2));
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
 Use array operations on matrices X, Y, PX, PY, t, u, and P
Pm1 = total((t>u).*P)   % Prob first heat has most
Pm1 =  0.3986
Pm2 = total((u>t).*P)   % Prob second heat has most
Pm2 =  0.3606
Peq = total((t==u).*P)  % Prob both have the same
Peq =  0.2408
Px3 = (X>=3)*PX'        % Prob first has 3 or more
Px3 =  0.8708
Py3 = (Y>=3)*PY'        % Prob second has 3 or more
Py3 =  0.8525

As in the case of jcalc, we have an m-function version icalcf

[x,y,t,u,px,py,p] = icalcf(X,Y,PX,PY)    (24)

We have a related m-function idbn for obtaining the joint probability matrix from the marginal probabilities. Its formation of the joint matrix utilizes the same operations as icalc.

Example 11: A numerical example

PX = 0.1*[3 5 2];
PY = 0.01*[20 15 40 25];
P  = idbn(PX,PY)
P =
    0.0750    0.1250    0.0500
    0.1200    0.2000    0.0800
    0.0450    0.0750    0.0300
    0.0600    0.1000    0.0400

An m-procedure itest checks a joint distribution for independence. It does this by calculating the marginals, then forming an independent joint test matrix, which is compared with the original. We do not ordinarily exhibit the matrix P to be tested. However, this is a case in which the product rule holds for most of the entries, and it would be very difficult to pick out those for which it fails. The m-procedure simply checks all of them.
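The comparison itest performs can be sketched in a few lines of MATLAB. This is a minimal sketch under the same row-ordering assumption as above (Y-values decreasing down the rows); the actual m-procedure may differ in detail, and the tolerance 1e-7 is an assumed value:

PX = sum(P);                % X marginal: column sums of the joint matrix
PY = fliplr(sum(P,2)');     % Y marginal: row sums, reordered to increasing Y
Ptest = flipud(PY')*PX;     % joint matrix an independent pair would produce
D = abs(P - Ptest) > 1e-7   % ones mark entries where the product rule fails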

Example 12

idemo1                           % Joint matrix in datafile idemo1
P =  0.0091  0.0147  0.0035  0.0049  0.0105  0.0161  0.0112
     0.0117  0.0189  0.0045  0.0063  0.0135  0.0207  0.0144
     0.0104  0.0168  0.0040  0.0056  0.0120  0.0184  0.0128
     0.0169  0.0273  0.0065  0.0091  0.0095  0.0299  0.0208
     0.0052  0.0084  0.0020  0.0028  0.0060  0.0092  0.0064
     0.0169  0.0273  0.0065  0.0091  0.0195  0.0299  0.0208
     0.0104  0.0168  0.0040  0.0056  0.0120  0.0184  0.0128
     0.0078  0.0126  0.0030  0.0042  0.0190  0.0138  0.0096
     0.0117  0.0189  0.0045  0.0063  0.0135  0.0207  0.0144
     0.0091  0.0147  0.0035  0.0049  0.0105  0.0161  0.0112
     0.0065  0.0105  0.0025  0.0035  0.0075  0.0115  0.0080
     0.0143  0.0231  0.0055  0.0077  0.0165  0.0253  0.0176
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent   % Result of test
To see where the product rule fails, call for D
disp(D)                          % Optional call for D
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0

Next, we consider an example in which the pair is known to be independent.

Example 13

jdemo3      % call for data in m-file
disp(P)     % call to display P
     0.0132    0.0198    0.0297    0.0209    0.0264
     0.0372    0.0558    0.0837    0.0589    0.0744
     0.0516    0.0774    0.1161    0.0817    0.1032
     0.0180    0.0270    0.0405    0.0285    0.0360
 
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent       % Result of test

The procedure icalc can be extended to deal with an independent class of three random variables. We call the m-procedure icalc3. The following is a simple example of its use.

Example 14: Calculations for three independent random variables

X = 0:4;
Y = 1:2:7;
Z = 0:3:12;
PX = 0.1*[1 3 2 3 1];
PY = 0.1*[2 2 3 3];
PZ = 0.1*[2 2 1 3 2];
icalc3
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter row matrix of Z-values  Z
Enter X probabilities  PX
Enter Y probabilities  PY
Enter Z probabilities  PZ
Use array operations on matrices X, Y, Z,
PX, PY, PZ, t, u, v, and P
G = 3*t + 2*u - 4*v;        % W = 3X + 2Y -4Z
[W,PW] = csort(G,P);        % Distribution for W
PG = total((G>0).*P)        % P(g(X,Y,Z) > 0)
PG =  0.3370
Pg = (W>0)*PW'              % P(W > 0), from the distribution for W
Pg =  0.3370

An m-procedure icalc4 to handle an independent class of four variables is also available. Also, several variations of the m-function mgsum and the m-function diidsum are used for obtaining distributions for sums of independent random variables. We consider them in various contexts in other units.

Approximation for the absolutely continuous case

In the study of functions of random variables, we show that an approximating simple random variable $X_s$ of the type we use is a function of the random variable X which is approximated. Also, we show that if $\{X, Y\}$ is an independent pair, so is $\{g(X), h(Y)\}$ for any reasonable functions g and h. Thus if $\{X, Y\}$ is an independent pair, so is any pair of approximating simple functions $\{X_s, Y_s\}$ of the type considered. Now it is theoretically possible for the approximating pair $\{X_s, Y_s\}$ to be independent, yet have the approximated pair $\{X, Y\}$ not independent. But this is highly unlikely. For all practical purposes, we may consider $\{X, Y\}$ to be independent iff $\{X_s, Y_s\}$ is independent. When in doubt, consider a second pair of approximating simple functions with more subdivision points. This decreases even further the likelihood of a false indication of independence by the approximating random variables.
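That independence is preserved under functions follows directly from the inverse-image characterization (a one-line sketch): for any Borel sets M, N,

$$P(g(X) \in M,\, h(Y) \in N) = P\big(X \in g^{-1}(M),\, Y \in h^{-1}(N)\big) = P\big(X \in g^{-1}(M)\big)\,P\big(Y \in h^{-1}(N)\big) = P(g(X) \in M)\,P(h(Y) \in N)$$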

Example 15: An independent pair

Suppose $X \sim$ exponential (3) and $Y \sim$ exponential (2) with

$$f_{XY}(t,u) = 6 e^{-3t} e^{-2u} = 6 e^{-(3t + 2u)}, \quad t \ge 0,\ u \ge 0 \tag{25}$$

Since $e^{-12} \approx 6 \times 10^{-6}$, we approximate X for values up to 4 and Y for values up to 6.

tuappr
Enter matrix [a b] of X-range endpoints  [0 4]
Enter matrix [c d] of Y-range endpoints  [0 6]
Enter number of X approximation points  200
Enter number of Y approximation points  300
Enter expression for joint density  6*exp(-(3*t + 2*u))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent

Example 16: Test for independence

The pair $\{X, Y\}$ has joint density $f_{XY}(t,u) = 4tu$, $0 \le t \le 1$, $0 \le u \le 1$. It is easy enough to determine the marginals in this case. By symmetry, they are the same.

$$f_X(t) = 4t \int_0^1 u\,du = 2t, \quad 0 \le t \le 1 \tag{26}$$

so that $f_{XY} = f_X f_Y$, which ensures the pair is independent. Consider the solution using tuappr and itest.

tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  4*t.*u
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent
