THE INVERSE PROBABILITY METHOD FOR GENERATING RANDOM VARIABLES

Module by: Ewa Paszek

Summary: This course is a short series of lectures on Introductory Statistics. Topics covered are listed in the Table of Contents. The notes were prepared by Ewa Paszek and Marek Kimmel. The development of this course has been supported by NSF 0203396 grant.

THE INVERSE PROBABILITY METHOD FOR GENERATING RANDOM VARIABLES

Once the generation of the uniform random variable is established, it can be used to generate other types of random variables.

The Continuous Case

THEOREM I

Let X have a continuous distribution F_X(x), so that F_X^(-1)(α) exists for 0 < α < 1. Then the random variable F_X^(-1)(U) has distribution F_X(x), where U is uniformly distributed on (0,1).

PROOF

P( F_X^(-1)(U) ≤ x ) = P( F_X( F_X^(-1)(U) ) ≤ F_X(x) ),
(1)

because F_X(x) is monotone. Thus,

P( F_X^(-1)(U) ≤ x ) = P( U ≤ F_X(x) ) = F_X(x).
(2)

The last step follows because U is uniformly distributed on (0,1). Diagrammatically, we have that ( X ≤ x ) if and only if ( U ≤ F_X(x) ), an event of probability F_X(x).

As long as we can invert the distribution function F_X(x) to get the inverse distribution function F_X^(-1)(α), the theorem assures us we can start with a pseudo-random uniform variable U and turn it into a random variable F_X^(-1)(U), which has the required distribution F_X(x).

Example 1

The Exponential Distribution

Consider the exponential distribution defined as

α = F_X(x) = { 1 − e^(−λx),  λ > 0, x ≥ 0,
             { 0,            x < 0.
(3)

Then for the inverse distribution function we have

x = −(1/λ) ln( 1 − α ) = F^(-1)(α).
(4)

Thus if U is uniformly distributed on (0,1), then X = −(1/λ) ln( 1 − U ) has the distribution of an exponential random variable with parameter λ. We say, for convenience, that X is exponential(λ).

Note that:
If U is uniform (0,1), then so is 1 − U, and the pair U and 1 − U are interchangeable in terms of distribution. Hence X′ = −(1/λ) ln U is also exponential. However, the two variables X and X′ are correlated, and are known as an antithetic pair.
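As a minimal sketch (Python is my choice here, and the function name `exponential_inverse` is illustrative, not from the module), the transform X = −(1/λ) ln(1 − U) becomes:

```python
import math
import random

def exponential_inverse(lam, rng=random):
    """Draw one exponential(lam) variate by the inverse transform."""
    u = rng.random()                  # U uniform on (0, 1)
    return -math.log(1.0 - u) / lam  # X = -(1/lam) * ln(1 - U)

def exponential_antithetic(lam, rng=random):
    """Return the antithetic pair X, X' built from the same U."""
    u = rng.random()
    x = -math.log(1.0 - u) / lam     # uses 1 - U
    x_prime = -math.log(u) / lam     # uses U; same distribution, correlated
    return x, x_prime
```

Averaging an antithetic pair typically reduces the variance of a Monte Carlo estimate, since the two draws are negatively correlated.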

Example 2

Normal and Gamma Distributions

For both of these cases there is no simple functional form for the inverse distribution function F_X^(-1)(α), but because of the importance of the Normal and Gamma distribution models, a great deal of effort has been expended in deriving good approximations.

The Normal distribution is defined through its density,

f_X(x) = ( 1 / (σ√(2π)) ) exp[ −(x − μ)² / (2σ²) ].
(5)

So that,

F_X(x) = ∫_{−∞}^{x} ( 1 / (σ√(2π)) ) exp[ −(v − μ)² / (2σ²) ] dv.
(6)

The normal distribution function F_X(x) is also often denoted Φ(x) when the parameters μ and σ are set to 0 and 1, respectively. The distribution has no closed-form inverse, F_X^(-1)(α), but the inverse is needed so often that Φ^(-1)(α), like logarithms or exponentials, is a system function.
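Because Φ^(-1) is available as a system function, generating normal variates by inversion reduces to a single library call. A sketch in Python (my choice; `NormalDist.inv_cdf` from the standard library plays the role of Φ^(-1) here):

```python
import random
from statistics import NormalDist

def normal_inverse(mu=0.0, sigma=1.0, rng=random):
    """Draw a Normal(mu, sigma) variate as F_X^{-1}(U) using the
    library's numerical inverse CDF in place of a closed form."""
    u = rng.random()                              # U uniform on (0, 1)
    return NormalDist(mu=mu, sigma=sigma).inv_cdf(u)
```

The same pattern applies to the Gamma distribution, except that the numerical inverse there must adapt to the shape parameter k.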

The inverse of the Gamma distribution function, which is given by

F_X(x) = ( 1 / Γ(k) ) ∫_0^{kx/μ} v^(k−1) e^(−v) dv,  x ≥ 0, k > 0, μ > 0,
(7)

is more difficult to compute because its shape changes radically with the value of k. It is, however, available on most computers as a numerically reliable function.

Example 3

The Standardized Logistic Distribution

A commonly used symmetric distribution, which has a shape very much like that of the Normal distribution, is the standardized logistic distribution.

F_X(x) = e^x / ( 1 + e^x ) = 1 / ( 1 + e^(−x) ),  −∞ < x < ∞,
(8)

with probability density function

f_X(x) = e^x / ( 1 + e^x )²,  −∞ < x < ∞.
(9)
Note that:
F_X(−∞) = e^(−∞) / ( 1 + e^(−∞) ) = 0, and F_X(∞) = 1 by using the second form for F_X(x).

The inverse is obtained by setting α = e^x / ( 1 + e^x ). Then α + α e^x = e^x, or α = e^x ( 1 − α ).

Therefore, x = F_X^(-1)(α) = ln α − ln( 1 − α ).

The random variable is then generated, using the inverse probability integral method, as X = ln U − ln( 1 − U ).
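The logistic inversion can be sketched in the same way (Python, illustrative; `logistic_inverse` is a hypothetical name):

```python
import math
import random

def logistic_inverse(rng=random):
    """Draw a standardized logistic variate: X = ln U - ln(1 - U)."""
    u = rng.random()                        # U uniform on (0, 1)
    return math.log(u) - math.log(1.0 - u)
```

A quick check of the algebra: plugging x = ln α − ln(1 − α) back into F_X gives e^x / (1 + e^x) = α, so the inversion is exact.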

The Discrete Case

Let X have a discrete distribution F_X(x); that is, F_X(x) jumps at the points x_k, k = 0, 1, 2, .... Usually we have the case that x_k = k, so that X is integer-valued.

Let the probability function be denoted by

p_k = P( X = x_k ),  k = 0, 1, ....
(10)

The probability distribution function is then,

F_X(x_k) = P( X ≤ x_k ) = Σ_{j ≤ k} p_j,  k = 0, 1, ...,
(11)

and the reliability or survivor function is

R_X(x_k) = 1 − F_X(x_k) = P( X > x_k ),  k = 0, 1, ....
(12)

The survivor function is sometimes easier to work with than the distribution function, and in fields such as reliability, it is habitually used. The inverse probability integral transform method of generating discrete random variables is based on the following theorem.

THEOREM

Let U be uniformly distributed in the interval (0,1). Set X = x_k whenever F_X(x_{k−1}) < U ≤ F_X(x_k), for k = 0, 1, 2, ..., with F_X(x_{−1}) = 0. Then X has probability function p_k.

PROOF

By definition of the procedure,

X = x_k if and only if F_X(x_{k−1}) < U ≤ F_X(x_k).

Therefore,

P( X = x_k ) = P( F_X(x_{k−1}) < U ≤ F_X(x_k) ) = F_X(x_k) − F_X(x_{k−1}) = p_k,
(13)

by the definition of the distribution function of a uniform (0,1) random variable.

Thus the inverse probability integral transform algorithm for generating X is to find x_k such that U ≤ F_X(x_k) and U > F_X(x_{k−1}), and then set X = x_k.

In the discrete case, there is never any problem of numerically computing the inverse distribution function, but the search to find the values F_X(x_k) and F_X(x_{k−1}) between which U lies can be time-consuming; generally, sophisticated search procedures are required. In implementing this procedure, we try to minimize the number of times U is compared to F_X(x_k). If we want to generate many values of X and F_X(x_k) is not easily computable, we may also want to store F_X(x_k) for all k rather than recompute it. Then we have to worry about minimizing the total memory needed to store the values of F_X(x_k).
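The store-then-search idea can be sketched as follows (Python; the name `make_discrete_sampler` is my choice, and binary search via `bisect` is one of many possible search procedures):

```python
import bisect
import random

def make_discrete_sampler(probs, rng=random):
    """Precompute the CDF table F_X(x_k) once (assumes probs sum to 1),
    then binary-search it on each draw."""
    cdf = []
    total = 0.0
    for p in probs:
        total += p
        cdf.append(total)        # cdf[k] = F_X(x_k)
    cdf[-1] = 1.0                # guard against floating-point rounding

    def sample():
        u = rng.random()
        # smallest k with U <= F_X(x_k), i.e. F_X(x_{k-1}) < U <= F_X(x_k)
        return bisect.bisect_left(cdf, u)

    return sample
```

Binary search costs O(log n) comparisons per draw instead of O(n) for a linear scan, at the price of storing the whole CDF table.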

Example 4

The Binary Random Variable

To generate a binary-valued random variable X that is 1 with probability p and 0 with probability 1-p, the algorithm is:

  • If U ≤ p, set X = 1.
  • Else set X = 0.
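In code, the two-step algorithm is a one-liner (Python sketch; `bernoulli` is an illustrative name):

```python
import random

def bernoulli(p, rng=random):
    """Binary variable: X = 1 if U <= p, else X = 0."""
    return 1 if rng.random() <= p else 0
```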

Example 5

The Discrete Uniform Random Variable

Let X take on integer values between and including the integers a and b, where a ≤ b, with equal probabilities. Since there are ( b − a + 1 ) distinct values for X, the probability of getting any one of these values is, by definition, 1/( b − a + 1 ). If we start with a continuous uniform (0,1) random number U, then the discrete inverse probability integral transform shows that

X = integer part of [ ( b − a + 1 )U + a ].

Note that:
The continuous random variable ( b − a + 1 )U + a is uniformly distributed in the open interval ( a, b + 1 ).
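A direct sketch of the formula (Python; `discrete_uniform` is an illustrative name):

```python
import math
import random

def discrete_uniform(a, b, rng=random):
    """Uniform integer in {a, ..., b}: integer part of (b - a + 1) U + a."""
    u = rng.random()                          # U in [0, 1)
    return int(math.floor((b - a + 1) * u + a))
```

`math.floor` rather than `int(...)` truncation matters when a is negative, since the "integer part" in the formula means rounding toward minus infinity.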

Example 6

The Geometric Distribution

Let X take values on zero and the positive integers with a geometric distribution. Thus,

P( X = k ) = p_k = ( 1 − ρ ) ρ^k,  k = 0, 1, 2, ...,  0 < ρ < 1,
(14)

and

P( X ≤ k ) = F_X(k) = 1 − ρ^(k+1),  k = 0, 1, 2, ...,  0 < ρ < 1.
(15)

To generate geometrically distributed random variables then, you can proceed successively according to the following algorithm:

  • Compute F_X(0) = 1 − ρ. Generate U.
  • If U ≤ F_X(0), set X = 0 and exit.
  • Otherwise compute F_X(1) = 1 − ρ².
  • If U ≤ F_X(1), set X = 1 and exit.
  • Otherwise compute F_X(2), and so on.
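The sequential search above translates directly into a loop (Python sketch; `geometric` is an illustrative name):

```python
import random

def geometric(rho, rng=random):
    """Sequential inverse-transform search: return the smallest k
    with U <= F_X(k) = 1 - rho**(k + 1)."""
    u = rng.random()
    k = 0
    while u > 1.0 - rho ** (k + 1):   # U > F_X(k): keep searching
        k += 1
    return k
```

The expected number of loop iterations is the mean of X, ρ/(1 − ρ), so for ρ close to 1 a closed-form inversion of (15) would be faster than this sequential search.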
