
# Random Vectors and Joint Distributions

Module by: Paul E. Pfeiffer

Summary: Often we have more than one random variable. Each can be considered separately, but usually they have some probabilistic ties which must be taken into account when they are considered jointly. We treat the joint case by considering the individual random variables as coordinates of a random vector. We extend the techniques for a single random variable to the multidimensional case. To simplify exposition and to keep calculations manageable, we consider a pair of random variables as coordinates of a two-dimensional random vector. The concepts and results extend directly to any finite number of random variables considered jointly. If the joint distribution for a random vector is known, then the distribution for each of the component random variables may be determined. These are known as marginal distributions. In general, the converse is not true. However, if the component random variables form an independent pair, the treatment in that case shows that the marginals determine the joint distribution.

## Introduction

A single, real-valued random variable is a function (mapping) from the basic space $\Omega$ to the real line. That is, to each possible outcome $\omega$ of an experiment there corresponds a real value $t = X(\omega)$. The mapping induces a probability mass distribution on the real line, which provides a means of making probability calculations. The distribution is described by a distribution function $F_X$. In the absolutely continuous case, with no point mass concentrations, the distribution may also be described by a probability density function $f_X$. The probability density is the linear density of the probability mass along the real line (i.e., mass per unit length). The density is thus the derivative of the distribution function. For a simple random variable, the probability distribution consists of a point mass $p_i$ at each possible value $t_i$ of the random variable. Various m-procedures and m-functions aid calculations for simple distributions. In the absolutely continuous case, a simple approximation may be set up, so that calculations for the random variable are approximated by calculations on this simple distribution.

Often we have more than one random variable. Each can be considered separately, but usually they have some probabilistic ties which must be taken into account when they are considered jointly. We treat the joint case by considering the individual random variables as coordinates of a random vector. We extend the techniques for a single random variable to the multidimensional case. To simplify exposition and to keep calculations manageable, we consider a pair of random variables as coordinates of a two-dimensional random vector. The concepts and results extend directly to any finite number of random variables considered jointly.

## Random variables considered jointly; random vectors

As a starting point, consider a simple example in which the probabilistic interaction between two random quantities is evident.

### Example 1: A selection problem

Two campus jobs are open. Two juniors and three seniors apply. They seem equally qualified, so it is decided to select them by chance. Each combination of two is equally likely. Let $X$ be the number of juniors selected (possible values 0, 1, 2) and $Y$ be the number of seniors selected (possible values 0, 1, 2). However, there are only three possible pairs of values for $(X, Y)$: $(0, 2)$, $(1, 1)$, or $(2, 0)$. Others have zero probability, since they are impossible. Determine the probability for each of the possible pairs.

#### SOLUTION

There are $C(5, 2) = 10$ equally likely pairs. Only one pair can be both juniors. Six pairs can be one of each. There are $C(3, 2) = 3$ ways to select pairs of seniors. Thus

$$P(X = 0, Y = 2) = 3/10, \quad P(X = 1, Y = 1) = 6/10, \quad P(X = 2, Y = 0) = 1/10 \tag{1}$$

These probabilities add to one, as they must, since this exhausts the mutually exclusive possibilities. The probability of any other combination must be zero. We also have the distributions for the random variables considered individually.

$$X = \begin{bmatrix} 0 & 1 & 2 \end{bmatrix} \quad P_X = \begin{bmatrix} 3/10 & 6/10 & 1/10 \end{bmatrix} \qquad Y = \begin{bmatrix} 0 & 1 & 2 \end{bmatrix} \quad P_Y = \begin{bmatrix} 1/10 & 6/10 & 3/10 \end{bmatrix} \tag{2}$$

We thus have a joint distribution and two individual or marginal distributions.
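The counting in the solution can be verified directly by enumeration. Below is a minimal Python sketch (not one of the text's MATLAB m-procedures) that lists all $C(5, 2) = 10$ equally likely selections and tallies the value pair $(X, Y)$ for each; the applicant labels are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Hypothetical labels: juniors J1, J2; seniors S1, S2, S3.
applicants = ["J1", "J2", "S1", "S2", "S3"]

# All C(5, 2) = 10 equally likely selections of two applicants.
selections = list(combinations(applicants, 2))

# For each selection, X = number of juniors, Y = number of seniors.
counts = Counter(
    (sum(a[0] == "J" for a in sel), sum(a[0] == "S" for a in sel))
    for sel in selections
)

# Joint probabilities P(X = t, Y = u).
joint = {tu: n / len(selections) for tu, n in counts.items()}
```

The tally reproduces the three probabilities above: 3/10, 6/10, and 1/10.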

We formalize as follows:

A pair $\{X, Y\}$ of random variables considered jointly is treated as the pair of coordinate functions for a two-dimensional random vector $W = (X, Y)$. To each $\omega \in \Omega$, $W$ assigns the pair of real numbers $(t, u)$, where $X(\omega) = t$ and $Y(\omega) = u$. If we represent the pair of values $\{t, u\}$ as the point $(t, u)$ on the plane, then $W(\omega) = (t, u)$, so that

$$W = (X, Y) \colon \Omega \to \mathbf{R}^2 \tag{3}$$

is a mapping from the basic space $\Omega$ to the plane $\mathbf{R}^2$. Since $W$ is a function, all mapping ideas extend. The inverse mapping $W^{-1}$ plays a role analogous to that of the inverse mapping $X^{-1}$ for a real random variable. A two-dimensional vector $W$ is a random vector iff $W^{-1}(Q)$ is an event for each reasonable set (technically, each Borel set) $Q$ on the plane.

A fundamental result from measure theory ensures

$W = (X, Y)$ is a random vector iff each of the coordinate functions $X$ and $Y$ is a random variable.

In the selection example above, we model $X$ (the number of juniors selected) and $Y$ (the number of seniors selected) as random variables. Hence the vector-valued function $W = (X, Y)$ is a random vector.

## Induced distribution and the joint distribution function

In a manner parallel to that for the single-variable case, we obtain a mapping of probability mass from the basic space to the plane. Since $W^{-1}(Q)$ is an event for each reasonable set $Q$ on the plane, we may assign to $Q$ the probability mass

$$P_{XY}(Q) = P[W^{-1}(Q)] = P[(X, Y)^{-1}(Q)] \tag{4}$$

Because of the preservation of set operations by inverse mappings, as in the single-variable case, the mass assignment determines $P_{XY}$ as a probability measure on the subsets of the plane $\mathbf{R}^2$. The argument parallels that for the single-variable case. The result is the probability distribution induced by $W = (X, Y)$. To determine the probability that the vector-valued function $W = (X, Y)$ takes on a (vector) value in region $Q$, we simply determine how much induced probability mass is in that region.

### Example 2: Induced distribution and probability calculations

To determine $P(1 \le X \le 3, Y > 0)$, we determine the region for which the first coordinate value (which we call $t$) is between one and three and the second coordinate value (which we call $u$) is greater than zero. This corresponds to the set $Q$ of points on the plane with $1 \le t \le 3$ and $u > 0$. Geometrically, this is the strip on the plane bounded by (but not including) the horizontal axis and by the vertical lines $t = 1$ and $t = 3$ (included). The problem is to determine how much probability mass lies in that strip. How this is achieved depends upon the nature of the distribution and how it is described.
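When the distribution is discrete, the calculation reduces to summing the point masses whose locations fall in the region $Q$. A short Python sketch, using the point masses from the selection example in Example 1:

```python
# Point masses from the selection example: (t, u) -> probability.
pm = {(0, 2): 0.3, (1, 1): 0.6, (2, 0): 0.1}

def mass_in_region(pm, in_region):
    """Sum the point masses whose locations satisfy the region predicate."""
    return sum(p for (t, u), p in pm.items() if in_region(t, u))

# Strip with 1 <= t <= 3 (vertical boundary lines included) and u > 0
# (horizontal axis excluded).
p_strip = mass_in_region(pm, lambda t, u: 1 <= t <= 3 and u > 0)
```

Only the mass 6/10 at (1, 1) lies in the strip, so the probability is 0.6. Note how the predicate makes the boundary conventions (included or excluded) explicit.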

As in the single-variable case, we have a distribution function.

**Definition**

The joint distribution function $F_{XY}$ for $W = (X, Y)$ is given by

$$F_{XY}(t, u) = P(X \le t, Y \le u) \qquad (t, u) \in \mathbf{R}^2 \tag{5}$$

This means that $F_{XY}(t, u)$ is equal to the probability mass in the region $Q_{tu}$ on the plane such that the first coordinate is less than or equal to $t$ and the second coordinate is less than or equal to $u$. Formally, we may write

$$F_{XY}(t, u) = P[(X, Y) \in Q_{tu}], \quad \text{where } Q_{tu} = \{(r, s) : r \le t, \ s \le u\} \tag{6}$$

Now for a given point $(a, b)$, the region $Q_{ab}$ is the set of points $(t, u)$ on the plane which are on or to the left of the vertical line through $(a, 0)$ and on or below the horizontal line through $(0, b)$ (see Figure 1 for the specific point $t = a$, $u = b$). We refer to such regions as semiinfinite intervals on the plane.

The theoretical result quoted in the real variable case extends to ensure that a distribution on the plane is determined uniquely by consistent assignments to the semiinfinite intervals $Q_{tu}$. Thus, the induced distribution is determined completely by the joint distribution function.

**Distribution function for a discrete random vector**

The induced distribution consists of point masses. At point $(t_i, u_j)$ in the range of $W = (X, Y)$ there is probability mass $p_{ij} = P[W = (t_i, u_j)] = P(X = t_i, Y = u_j)$. As in the general case, to determine $P[(X, Y) \in Q]$ we determine how much probability mass is in the region. In the discrete case (or in any case where there are point mass concentrations) one must be careful to note whether or not the boundaries are included in the region, should there be mass concentrations on the boundary.

### Example 3: Distribution function for the selection problem in Example 1

The probability distribution is quite simple. Mass 3/10 at (0, 2), 6/10 at (1, 1), and 1/10 at (2, 0). This distribution is plotted in Figure 2. To determine (and visualize) the joint distribution function, think of moving the point $(t, u)$ on the plane. The region $Q_{tu}$ is a giant “sheet” with corner at $(t, u)$. The value of $F_{XY}(t, u)$ is the amount of probability covered by the sheet. This value is constant over any grid cell, including the left-hand and lower boundaries, and is the value taken on at the lower left-hand corner of the cell. Thus, if $(t, u)$ is in any of the three squares on the lower left-hand part of the diagram, no probability mass is covered by the sheet with corner in the cell. If $(t, u)$ is on or in the square having probability 6/10 at the lower left-hand corner, then the sheet covers that probability, and the value of $F_{XY}(t, u) = 6/10$. The situation in the other cells may be checked out by this procedure.
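The sheet picture translates directly into a computation: F_XY(t, u) sums the masses at points with first coordinate at most t and second coordinate at most u. A Python sketch for the selection distribution:

```python
# Point masses for the selection problem: (t_i, u_j) -> p_ij.
pm = {(0, 2): 0.3, (1, 1): 0.6, (2, 0): 0.1}

def F_XY(t, u):
    """Joint distribution function: total mass covered by the "sheet"
    on or below-left of the corner point (t, u)."""
    return sum(p for (ti, uj), p in pm.items() if ti <= t and uj <= u)
```

For example, a corner anywhere in the three lower-left cells covers no mass, while the corner (1, 1) covers exactly the mass 6/10, as described above.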

**Distribution function for a mixed distribution**

### Example 4: A mixed distribution

The pair $\{X, Y\}$ produces a mixed distribution as follows (see Figure 3):

Point masses 1/10 at points (0,0), (1,0), (1,1), (0,1)

Mass 6/10 spread uniformly over the unit square with these vertices

The joint distribution function is zero in the second, third, and fourth quadrants.

• If the point $(t, u)$ is in the square or on the left and lower boundaries, the sheet covers the point mass at (0, 0) plus 0.6 times the area covered within the square. Thus in this region
$$F_{XY}(t, u) = \frac{1}{10}(1 + 6tu) \tag{7}$$
• If the point $(t, u)$ is above the square (including its upper boundary) but to the left of the line $t = 1$, the sheet covers two point masses plus the portion of the mass in the square to the left of the vertical line through $(t, u)$. In this case
$$F_{XY}(t, u) = \frac{1}{10}(2 + 6t) \tag{8}$$
• If the point $(t, u)$ is to the right of the square (including its boundary) with $0 \le u < 1$, the sheet covers two point masses and the portion of the mass in the square below the horizontal line through $(t, u)$, to give
$$F_{XY}(t, u) = \frac{1}{10}(2 + 6u) \tag{9}$$
• If $(t, u)$ is above and to the right of the square (i.e., both $t \ge 1$ and $u \ge 1$), then all probability mass is covered and $F_{XY}(t, u) = 1$ in this region.
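The four cases collapse into a single piecewise evaluation: count the point masses covered by the sheet and add 0.6 times the covered area of the unit square. A Python sketch assembled from the formulas above:

```python
def F_XY(t, u):
    """Joint CDF for the mixed distribution of Example 4: four 1/10 point
    masses at the corners of the unit square plus mass 6/10 spread
    uniformly over the square."""
    if t < 0 or u < 0:
        return 0.0  # zero in the second, third, and fourth quadrants
    # Point masses covered by the sheet with corner at (t, u).
    points = sum(
        0.1 for (a, b) in [(0, 0), (1, 0), (0, 1), (1, 1)] if a <= t and b <= u
    )
    # Uniform mass covered: 0.6 times the overlapped area of the unit square.
    return points + 0.6 * min(t, 1.0) * min(u, 1.0)
```

Checking it against the regions above: inside the square it gives $(1 + 6tu)/10$; above the square (with $t < 1$) it gives $(2 + 6t)/10$; to the right (with $u < 1$) it gives $(2 + 6u)/10$; and above and to the right it gives 1.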

## Marginal distributions

If the joint distribution for a random vector is known, then the distribution for each of the component random variables may be determined. These are known as marginal distributions. In general, the converse is not true. However, if the component random variables form an independent pair, the treatment in that case shows that the marginals determine the joint distribution.

To begin the investigation, note that

$$F_X(t) = P(X \le t) = P(X \le t, Y < \infty) \quad \text{i.e., } Y \text{ can take any of its possible values} \tag{10}$$

Thus

$$F_X(t) = F_{XY}(t, \infty) = \lim_{u \to \infty} F_{XY}(t, u) \tag{11}$$

This may be interpreted with the aid of Figure 4. Consider the sheet for point $(t, u)$.

If we push the point up vertically, the upper boundary of $Q_{tu}$ is pushed up until eventually all probability mass on or to the left of the vertical line through $(t, u)$ is included. This is the total probability that $X \le t$. Now $F_X(t)$ describes probability mass on the line. The probability mass described by $F_X(t)$ is the same as the total joint probability mass on or to the left of the vertical line through $(t, u)$. We may think of the mass in the half plane being projected onto the horizontal line to give the marginal distribution for $X$. A parallel argument holds for the marginal for $Y$.

$$F_Y(u) = P(Y \le u) = F_{XY}(\infty, u) = \text{mass on or below the horizontal line through } (t, u) \tag{12}$$

This mass is projected onto the vertical axis to give the marginal distribution for Y.

**Marginals for a joint discrete distribution**

Consider a joint simple distribution.

$$P(X = t_i) = \sum_{j=1}^{m} P(X = t_i, Y = u_j) \quad \text{and} \quad P(Y = u_j) = \sum_{i=1}^{n} P(X = t_i, Y = u_j) \tag{13}$$

Thus, all the probability mass on the vertical line through $(t_i, 0)$ is projected onto the point $t_i$ on a horizontal line to give $P(X = t_i)$. Similarly, all the probability mass on a horizontal line through $(0, u_j)$ is projected onto the point $u_j$ on a vertical line to give $P(Y = u_j)$.

### Example 5: Marginals for a discrete distribution

The pair $\{X, Y\}$ produces a joint distribution that places mass 2/10 at each of the five points

$$(0, 0), \ (1, 1), \ (2, 0), \ (2, 2), \ (3, 1) \quad \text{(see Figure 5)}$$

The marginal distribution for $X$ has masses 2/10, 2/10, 4/10, 2/10 at points $t = 0, 1, 2, 3$, respectively. Similarly, the marginal distribution for $Y$ has masses 4/10, 4/10, 2/10 at points $u = 0, 1, 2$, respectively.
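These marginals are obtained mechanically by summing the joint masses along each vertical and horizontal line, i.e., projecting the mass onto the two axes. A Python sketch for this example:

```python
from collections import defaultdict

# Joint point masses from Example 5: mass 2/10 at each of five points.
pm = {(0, 0): 0.2, (1, 1): 0.2, (2, 0): 0.2, (2, 2): 0.2, (3, 1): 0.2}

PX, PY = defaultdict(float), defaultdict(float)
for (t, u), p in pm.items():
    PX[t] += p  # project the mass onto the horizontal axis
    PY[u] += p  # project the mass onto the vertical axis
```

The two masses on the vertical line $t = 2$ combine to give $P(X = 2) = 4/10$, and likewise on the horizontal lines $u = 0$ and $u = 1$ for $Y$.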

### Example 6

Consider again the joint distribution in Example 4. The pair $\{X, Y\}$ produces a mixed distribution as follows:

Point masses 1/10 at points (0,0), (1,0), (1,1), (0,1)

Mass 6/10 spread uniformly over the unit square with these vertices

The construction in Figure 6 shows the graph of the marginal distribution function $F_X$. There is a jump in the amount of 0.2 at $t = 0$, corresponding to the two point masses on the vertical line. Then the mass increases linearly with $t$, slope 0.6, until a final jump at $t = 1$ in the amount of 0.2 produced by the two point masses on that vertical line. At $t = 1$, the total mass is “covered” and $F_X(t)$ is constant at one for $t \ge 1$. By symmetry, the marginal distribution for $Y$ is the same.
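The description of the marginal reads off as a simple piecewise function: a jump at each end and a linear rise between. A Python sketch of this construction:

```python
def F_X(t):
    """Marginal CDF of X for Example 6: jump of 0.2 at t = 0 (two point
    masses), linear rise with slope 0.6 on (0, 1), jump of 0.2 at t = 1."""
    if t < 0:
        return 0.0
    if t < 1:
        return 0.2 + 0.6 * t
    return 1.0
```

By the symmetry noted above, the same function serves as the marginal CDF for $Y$.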
