Transform Methods

Module by: Paul E Pfeiffer

Summary: The mathematical expectation E[X] of a random variable locates the center of mass for the induced distribution, and the expectation of the square of the distance between X and E[X] measures the spread of the distribution about its center of mass. These quantities are also known, respectively, as the mean (moment) of X and the variance or second moment of X about the mean. Other moments give added information. We examine the expectation of certain functions of X. Each of these functions involves a parameter, in a manner that completely determines the distribution. We refer to these as transforms. In particular, we consider three of the most useful of these: the moment generating function, the characteristic function, and the generating function for nonnegative, integer-valued random variables.

As pointed out in the units on Expectation and Variance, the mathematical expectation $E[X] = \mu_X$ of a random variable X locates the center of mass for the induced distribution, and the expectation

$$E[g(X)] = E\left[(X - E[X])^2\right] = \mathrm{Var}[X] = \sigma_X^2$$
(1)

measures the spread of the distribution about its center of mass. These quantities are also known, respectively, as the mean (moment) of X and the variance or second moment of X about the mean. Other moments give added information. For example, the third moment about the mean $E[(X - \mu_X)^3]$ gives information about the skew, or asymmetry, of the distribution about the mean. We investigate further along these lines by examining the expectation of certain functions of X. Each of these functions involves a parameter, in a manner that completely determines the distribution. For reasons noted below, we refer to these as transforms. We consider three of the most useful of these.

Three basic transforms

We define each of three transforms, determine some key properties, and use them to study various probability distributions associated with random variables. In the section on integral transforms, we show their relationship to well known integral transforms. These have been studied extensively and used in many other applications, which makes it possible to utilize the considerable literature on these transforms.

Definition. The moment generating function $M_X$ for random variable X (i.e., for its distribution) is the function

$$M_X(s) = E[e^{sX}] \quad (s \text{ is a real or complex parameter})$$
(2)

The characteristic function $\varphi_X$ for random variable X is

$$\varphi_X(u) = E[e^{iuX}] \quad (i^2 = -1,\ u \text{ is a real parameter})$$
(3)

The generating function $g_X(s)$ for a nonnegative, integer-valued random variable X is

$$g_X(s) = E[s^X] = \sum_k s^k P(X = k)$$
(4)

The generating function $E[s^X]$ has meaning for more general random variables, but its usefulness is greatest for nonnegative, integer-valued variables, and we limit our consideration to that case.

The defining expressions display similarities which show useful relationships. We note two which are particularly useful.

$$M_X(s) = E[e^{sX}] = E[(e^s)^X] = g_X(e^s) \qquad \text{and} \qquad \varphi_X(u) = E[e^{iuX}] = M_X(iu)$$
(5)

Because of the latter relationship, we ordinarily use the moment generating function instead of the characteristic function to avoid writing the complex unit i. When desirable, we convert easily by the change of variable.

The integral transform character of these entities implies that there is essentially a one-to-one relationship between the transform and the distribution.

Moments

The name and some of the importance of the moment generating function arise from the fact that the derivatives of $M_X$ evaluated at $s = 0$ are the moments about the origin. Specifically

$$M_X^{(k)}(0) = E[X^k], \quad \text{provided the } k\text{th moment exists}$$
(6)

Since expectation is an integral and because of the regularity of the integrand, we may differentiate inside the integral with respect to the parameter.

$$M_X'(s) = \frac{d}{ds} E[e^{sX}] = E\left[\frac{d}{ds} e^{sX}\right] = E[Xe^{sX}]$$
(7)

Upon setting $s = 0$, we have $M_X'(0) = E[X]$. Repeated differentiation gives the general result. The corresponding result for the characteristic function is $\varphi^{(k)}(0) = i^k E[X^k]$.

Example 1: The exponential distribution

The density function is $f_X(t) = \lambda e^{-\lambda t}$ for $t \ge 0$.

$$M_X(s) = E[e^{sX}] = \int_0^{\infty} \lambda e^{-(\lambda - s)t}\, dt = \frac{\lambda}{\lambda - s}$$
(8)
$$M_X'(s) = \frac{\lambda}{(\lambda - s)^2} \qquad M_X''(s) = \frac{2\lambda}{(\lambda - s)^3}$$
(9)
$$E[X] = M_X'(0) = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda} \qquad E[X^2] = M_X''(0) = \frac{2\lambda}{\lambda^3} = \frac{2}{\lambda^2}$$
(10)

From this we obtain $\mathrm{Var}[X] = 2/\lambda^2 - 1/\lambda^2 = 1/\lambda^2$.
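
As a quick numerical check (a sketch added here for illustration, not part of the original development), the defining integral for $M_X$ may be evaluated with the MATLAB integral function and the derivatives at $s = 0$ approximated by central differences; the value $\lambda = 2$ is an arbitrary choice.

lambda = 2;                                   % arbitrary rate for the check
M  = @(s) integral(@(t) exp(s*t).*lambda.*exp(-lambda*t), 0, Inf);
h  = 1e-4;                                    % step for central differences
EX  = (M(h) - M(-h))/(2*h);                   % approximates M'(0)  = 1/lambda
EX2 = (M(h) - 2*M(0) + M(-h))/h^2;            % approximates M''(0) = 2/lambda^2
disp([EX 1/lambda; EX2 2/lambda^2; EX2 - EX^2 1/lambda^2])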

The generating function does not lend itself readily to computing moments, except that

$$g_X'(s) = \sum_{k=1}^{\infty} k s^{k-1} P(X = k) \quad \text{so that} \quad g_X'(1) = \sum_{k=1}^{\infty} k P(X = k) = E[X]$$
(11)

For higher order moments, we may convert the generating function to the moment generating function by replacing $s$ with $e^s$, then work with $M_X$ and its derivatives.

Example 2: The Poisson $(\mu)$ distribution

$P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $k \ge 0$, so that

$$g_X(s) = e^{-\mu} \sum_{k=0}^{\infty} \frac{s^k \mu^k}{k!} = e^{-\mu} \sum_{k=0}^{\infty} \frac{(s\mu)^k}{k!} = e^{-\mu} e^{\mu s} = e^{\mu(s-1)}$$
(12)

We convert to $M_X$ by replacing $s$ with $e^s$ to get $M_X(s) = e^{\mu(e^s - 1)}$. Then

$$M_X'(s) = e^{\mu(e^s - 1)} \mu e^s \qquad M_X''(s) = e^{\mu(e^s - 1)}\left[\mu^2 e^{2s} + \mu e^s\right]$$
(13)

so that

$$E[X] = M_X'(0) = \mu, \qquad E[X^2] = M_X''(0) = \mu^2 + \mu, \qquad \text{and} \qquad \mathrm{Var}[X] = \mu^2 + \mu - \mu^2 = \mu$$
(14)

These results agree, of course, with those found by direct computation with the distribution.
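
A brief numerical sketch (added for illustration): the truncated series for $g_X(s)$ agrees with the closed form $e^{\mu(s-1)}$; the values $\mu = 3$ and $s = 0.7$ are arbitrary.

mu = 3;  s = 0.7;                    % arbitrary values for the check
k  = 0:50;                           % truncation is ample for mu = 3
gs = sum(s.^k .* exp(-mu) .* mu.^k ./ factorial(k));
disp([gs  exp(mu*(s-1))])            % the two values agree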

Operational properties

We refer to the following as operational properties.

  • (T1): If $Z = aX + b$, then
    $M_Z(s) = e^{bs} M_X(as), \qquad \varphi_Z(u) = e^{iub} \varphi_X(au), \qquad g_Z(s) = s^b g_X(s^a)$
    (15)
    For the moment generating function, this pattern follows from
    $E[e^{(aX+b)s}] = e^{bs} E[e^{(as)X}]$
    (16)
    Similar arguments hold for the other two.
  • (T2): If the pair $\{X, Y\}$ is independent, then
    $M_{X+Y}(s) = M_X(s) M_Y(s), \qquad \varphi_{X+Y}(u) = \varphi_X(u) \varphi_Y(u), \qquad g_{X+Y}(s) = g_X(s) g_Y(s)$
    (17)
    For the moment generating function, $e^{sX}$ and $e^{sY}$ form an independent pair for each value of the parameter s. By the product rule for expectation
    $E[e^{s(X+Y)}] = E[e^{sX} e^{sY}] = E[e^{sX}] E[e^{sY}]$
    (18)
    Similar arguments are used for the other two transforms; a numerical illustration of the product rule is sketched after this list.
    A partial converse for (T2) is as follows:
  • (T3): If $M_{X+Y}(s) = M_X(s) M_Y(s)$, then the pair $\{X, Y\}$ is uncorrelated. To show this, we obtain two expressions for $E[(X+Y)^2]$, one by direct expansion and use of linearity, and the other by taking the second derivative of the moment generating function.
    $E[(X+Y)^2] = E[X^2] + E[Y^2] + 2E[XY]$
    (19)
    $M_{X+Y}''(s) = [M_X(s) M_Y(s)]'' = M_X''(s) M_Y(s) + M_X(s) M_Y''(s) + 2 M_X'(s) M_Y'(s)$
    (20)
    On setting $s = 0$ and using the fact that $M_X(0) = M_Y(0) = 1$, we have
    $E[(X+Y)^2] = E[X^2] + E[Y^2] + 2E[X]E[Y]$
    (21)
    which implies the equality $E[XY] = E[X]E[Y]$.

Note that we have not shown that being uncorrelated implies the product rule.
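
The product rule (T2) is easy to check numerically for simple random variables. The following sketch (added here; the distributions and the parameter value are arbitrary) compares $M_X(s) M_Y(s)$ with $M_{X+Y}(s)$, using the MATLAB conv function to obtain the distribution of the sum of integer-valued variables.

X = 0:2;   PX = [0.5 0.3 0.2];       % arbitrary simple distributions
Y = 0:1;   PY = [0.4 0.6];
s  = 0.3;                            % arbitrary value of the parameter
MX = sum(PX .* exp(s*X));
MY = sum(PY .* exp(s*Y));
PZ = conv(PX, PY);                   % distribution of Z = X + Y on values 0:3
MZ = sum(PZ .* exp(s*(0:3)));
disp([MX*MY  MZ])                    % the two values agree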

We utilize these properties in determining the moment generating and generating functions for several of our common distributions.

Some discrete distributions

  1. Indicator function $X = I_E$, $P(E) = p$
    $g_X(s) = s^0 q + s^1 p = q + ps \qquad M_X(s) = g_X(e^s) = q + p e^s$
    (22)
  2. Simple random variable $X = \sum_{i=1}^{n} t_i I_{A_i}$ (primitive form), $P(A_i) = p_i$
    $M_X(s) = \sum_{i=1}^{n} e^{s t_i} p_i$
    (23)
  3. Binomial $(n, p)$. $X = \sum_{i=1}^{n} I_{E_i}$ with $\{I_{E_i} : 1 \le i \le n\}$ iid, $P(E_i) = p$
    We use the product rule for sums of independent random variables and the generating function for the indicator function.
    $g_X(s) = \prod_{i=1}^{n} (q + ps) = (q + ps)^n \qquad M_X(s) = (q + p e^s)^n$
    (24)
  4. Geometric $(p)$. $P(X = k) = p q^k$, $k \ge 0$, $E[X] = q/p$. We use the formula for the geometric series to get
    $g_X(s) = \sum_{k=0}^{\infty} p q^k s^k = p \sum_{k=0}^{\infty} (qs)^k = \dfrac{p}{1 - qs} \qquad M_X(s) = \dfrac{p}{1 - q e^s}$
    (25)
  5. Negative binomial $(m, p)$. If $Y_m$ is the number of the trial in a Bernoulli sequence on which the mth success occurs, and $X_m = Y_m - m$ is the number of failures before the mth success, then
    $P(X_m = k) = P(Y_m - m = k) = C(-m, k)(-q)^k p^m$
    (26)
    where $C(-m, k) = \dfrac{-m(-m-1)(-m-2) \cdots (-m-k+1)}{k!}$
    (27)
    The power series expansion about $t = 0$ shows that
    $(1 + t)^{-m} = 1 + C(-m, 1)t + C(-m, 2)t^2 + \cdots \quad \text{for } -1 < t < 1$
    (28)
    Hence
    $M_{X_m}(s) = p^m \sum_{k=0}^{\infty} C(-m, k)(-q)^k e^{sk} = \left(\dfrac{p}{1 - q e^s}\right)^{m}$
    (29)
    Comparison with the moment generating function for the geometric distribution shows that $X_m = Y_m - m$ has the same distribution as the sum of m iid random variables, each geometric $(p)$. This suggests that the sequence is characterized by independent, successive waiting times to success. This also shows that the expectation and variance of $X_m$ are m times the expectation and variance for the geometric. Thus
    $E[X_m] = mq/p \quad \text{and} \quad \mathrm{Var}[X_m] = mq/p^2$
    (30)
  6. Poisson $(\mu)$. $P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $k \ge 0$. In Example 2, above, we establish $g_X(s) = e^{\mu(s-1)}$ and $M_X(s) = e^{\mu(e^s - 1)}$. If $\{X, Y\}$ is an independent pair, with $X \sim$ Poisson $(\lambda)$ and $Y \sim$ Poisson $(\mu)$, then $Z = X + Y \sim$ Poisson $(\lambda + \mu)$. This follows from (T2) and the product of exponentials; a numerical check is sketched below.
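
The following sketch (added for illustration; $\lambda = 2$, $\mu = 3$ are arbitrary) convolves truncated Poisson pmfs and compares the result with the Poisson $(\lambda + \mu)$ pmf.

lambda = 2;  mu = 3;                                % arbitrary parameters
k  = 0:40;                                          % truncation ample for these values
px = exp(-lambda) * lambda.^k ./ factorial(k);
py = exp(-mu) * mu.^k ./ factorial(k);
pz = conv(px, py);                                  % pmf of Z = X + Y (truncated)
pt = exp(-(lambda + mu)) * (lambda + mu).^k ./ factorial(k);
disp(max(abs(pz(1:numel(k)) - pt)))                 % essentially zero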

Some absolutely continuous distributions

  1. Uniform on $(a, b)$. $f_X(t) = \dfrac{1}{b - a}$, $a < t < b$
    $M_X(s) = \displaystyle\int e^{st} f_X(t)\, dt = \frac{1}{b - a} \int_a^b e^{st}\, dt = \frac{e^{sb} - e^{sa}}{s(b - a)}$
    (31)
  2. Symmetric triangular $(-c, c)$.
    $f_X(t) = I_{[-c,0)}(t)\,\dfrac{c + t}{c^2} + I_{[0,c]}(t)\,\dfrac{c - t}{c^2}$
    (32)
    $M_X(s) = \dfrac{1}{c^2}\displaystyle\int_{-c}^{0} (c + t) e^{st}\, dt + \dfrac{1}{c^2}\int_{0}^{c} (c - t) e^{st}\, dt = \dfrac{e^{cs} + e^{-cs} - 2}{c^2 s^2}$
    (33)
    $= \dfrac{e^{cs} - 1}{cs} \cdot \dfrac{1 - e^{-cs}}{cs} = M_Y(s) M_Z(-s) = M_Y(s) M_{-Z}(s)$
    (34)
    where $M_Y$ is the moment generating function for $Y \sim$ uniform $(0, c)$ and similarly for $M_Z$. Thus, X has the same distribution as the difference of two independent random variables, each uniform on $(0, c)$.
  3. Exponential $(\lambda)$. $f_X(t) = \lambda e^{-\lambda t}$, $t \ge 0$
    In Example 1, above, we show that $M_X(s) = \dfrac{\lambda}{\lambda - s}$.
  4. Gamma $(\alpha, \lambda)$. $f_X(t) = \dfrac{1}{\Gamma(\alpha)} \lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}$, $t \ge 0$
    $M_X(s) = \dfrac{\lambda^{\alpha}}{\Gamma(\alpha)} \displaystyle\int_0^{\infty} t^{\alpha - 1} e^{-(\lambda - s)t}\, dt = \left(\dfrac{\lambda}{\lambda - s}\right)^{\alpha}$
    (35)
    For $\alpha = n$, a positive integer,
    $M_X(s) = \left(\dfrac{\lambda}{\lambda - s}\right)^{n}$
    (36)
    which shows that in this case X has the distribution of the sum of n independent random variables each exponential $(\lambda)$.
  5. Normal $(\mu, \sigma^2)$.
    • The standardized normal, $Z \sim N(0, 1)$
      $M_Z(s) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{st} e^{-t^2/2}\, dt$
      (37)
      Now $st - \dfrac{t^2}{2} = \dfrac{s^2}{2} - \dfrac{1}{2}(t - s)^2$, so that
      $M_Z(s) = e^{s^2/2} \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{-(t - s)^2/2}\, dt = e^{s^2/2}$
      (38)
      since the integrand (including the constant $1/\sqrt{2\pi}$) is the density for $N(s, 1)$.
    • $X = \sigma Z + \mu$ implies, by property (T1),
      $M_X(s) = e^{s\mu} e^{\sigma^2 s^2/2} = \exp\left(\dfrac{\sigma^2 s^2}{2} + s\mu\right)$
      (39)
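
As a numerical sketch of this last result (added here for illustration; $\mu = 1$, $\sigma = 2$, and $s = 0.5$ are arbitrary), the defining integral can be evaluated directly and compared with the closed form.

mu = 1;  sigma = 2;  s = 0.5;                                  % arbitrary values
f  = @(t) exp(-(t - mu).^2/(2*sigma^2))/(sigma*sqrt(2*pi));    % normal density
MX = integral(@(t) exp(s*t).*f(t), -Inf, Inf);
disp([MX  exp(sigma^2*s^2/2 + s*mu)])                          % the two values agree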

Example 3: Affine combination of independent normal random variables

Suppose $\{X, Y\}$ is an independent pair with $X \sim N(\mu_X, \sigma_X^2)$ and $Y \sim N(\mu_Y, \sigma_Y^2)$. Let $Z = aX + bY + c$. Then Z is normal, for by properties of expectation and variance

$$\mu_Z = a\mu_X + b\mu_Y + c \qquad \text{and} \qquad \sigma_Z^2 = a^2 \sigma_X^2 + b^2 \sigma_Y^2$$
(40)

and by the operational properties for the moment generating function

$$M_Z(s) = e^{sc} M_X(as) M_Y(bs) = \exp\left(\frac{(a^2 \sigma_X^2 + b^2 \sigma_Y^2) s^2}{2} + s(a\mu_X + b\mu_Y + c)\right)$$
(41)
$$= \exp\left(\frac{\sigma_Z^2 s^2}{2} + s\mu_Z\right)$$
(42)

The form of $M_Z$ shows that Z is normally distributed.
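
A Monte Carlo sketch (added for illustration only; the coefficients and distribution parameters are arbitrary) shows the sample mean and variance of Z close to the values just derived.

rng(0);  n = 1e6;                         % reproducible sample
a = 2;  b = -1;  c = 3;                   % arbitrary coefficients
muX = 1;  sigX = 2;  muY = -2;  sigY = 0.5;
X = muX + sigX*randn(n,1);
Y = muY + sigY*randn(n,1);
Z = a*X + b*Y + c;
disp([mean(Z)  a*muX + b*muY + c])        % sample vs theoretical mean
disp([var(Z)   a^2*sigX^2 + b^2*sigY^2])  % sample vs theoretical variance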

Moment generating function and simple random variables

Suppose $X = \sum_{i=1}^{n} t_i I_{A_i}$ in canonical form. That is, $A_i$ is the event $\{X = t_i\}$ for each of the distinct values in the range of X, with $p_i = P(A_i) = P(X = t_i)$. Then the moment generating function for X is

$$M_X(s) = \sum_{i=1}^{n} p_i e^{s t_i}$$
(43)

The moment generating function MX is thus related directly and simply to the distribution for random variable X.

Consider the problem of determining the sum of an independent pair $\{X, Y\}$ of simple random variables. The moment generating function for the sum is the product of the moment generating functions. Now if $Y = \sum_{j=1}^{m} u_j I_{B_j}$, with $P(Y = u_j) = \pi_j$, we have

$$M_X(s) M_Y(s) = \sum_{i=1}^{n} p_i e^{s t_i} \sum_{j=1}^{m} \pi_j e^{s u_j} = \sum_{i,j} p_i \pi_j e^{s(t_i + u_j)}$$
(44)

The various values are sums $t_i + u_j$ of pairs $(t_i, u_j)$ of values. Each of these sums has probability $p_i \pi_j$ for the values corresponding to $t_i, u_j$. Since more than one pair sum may have the same value, we need to sort the values, consolidate like values, and add the probabilities for like values to achieve the distribution for the sum. We have an m-function mgsum for achieving this directly. It produces the pair-products for the probabilities and the pair-sums for the values, then performs a csort operation. Although not directly dependent upon the moment generating function analysis, it produces the same result as that produced by multiplying moment generating functions.
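
The toolbox code itself is not reproduced here, but the idea can be sketched with standard MATLAB operations. The function below is a hypothetical stand-in for mgsum (including the csort consolidation step), not the author's m-function.

function [z, pz] = mgsum_sketch(x, y, px, py)
% MGSUM_SKETCH  Hypothetical sketch of the mgsum idea described above.
% Forms all pair sums and pair products, then consolidates like values.
[tx, uy] = ndgrid(x, y);            % all pairs of values
[pp, qq] = ndgrid(px, py);          % corresponding probability products
v = tx(:) + uy(:);                  % pair sums
p = pp(:).*qq(:);                   % pair products
[z, ~, idx] = unique(v);            % distinct sums, sorted (the csort step)
pz = accumarray(idx, p);            % add probabilities for like values
z = z(:)';  pz = pz(:)';            % return row vectors
end

Applied to the distributions in Example 4 below, such a sketch reproduces the tabulated result.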

Example 4: Distribution for a sum of independent simple random variables

Suppose the pair $\{X, Y\}$ is independent with distributions

X = [1 3 5 7]    PX = [0.2 0.4 0.3 0.1]
Y = [2 3 4]      PY = [0.3 0.5 0.2]
(45)

Determine the distribution for $Z = X + Y$.

X = [1 3 5 7];
Y = 2:4;
PX = 0.1*[2 4 3 1];
PY = 0.1*[3 5 2];
[Z,PZ] = mgsum(X,Y,PX,PY);
disp([Z;PZ]')
    3.0000    0.0600
    4.0000    0.1000
    5.0000    0.1600
    6.0000    0.2000
    7.0000    0.1700
    8.0000    0.1500
    9.0000    0.0900
   10.0000    0.0500
   11.0000    0.0200

This could, of course, have been achieved by using icalc and csort, which has the advantage that other functions of X and Y may be handled. Also, since the random variables are nonnegative, integer-valued, the MATLAB convolution function may be used (see Example 7). By repeated use of the function mgsum, we may obtain the distribution for the sum of more than two simple random variables. The m-functions mgsum3 and mgsum4 utilize this strategy.

The techniques for simple random variables may be used with the simple approximations to absolutely continuous random variables.

Example 5: Difference of uniform distribution

The moment generating functions for the uniform and the symmetric triangular show that the latter appears naturally as the difference of two uniformly distributed random variables. We consider X and Y iid, uniform on [0,1].

tappr
Enter matrix [a b] of x-range endpoints  [0 1]
Enter number of x approximation points  200
Enter density as a function of t  t<=1
Use row matrices X and PX as in the simple case
[Z,PZ] = mgsum(X,-X,PX,PX);
plot(Z,PZ/d)               % Divide by d to recover f(t)
%  plotting details   ---  see Figure 1
Figure 1: Density for the difference of an independent pair, uniform (0,1). The density is triangular, rising linearly from (-1, 0) to a peak of 1 at t = 0 and falling back to (1, 0).
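
The same computation can be sketched without the toolbox functions (a hypothetical alternative, not the author's code): discretize the uniform density, convolve to obtain the distribution of the difference, and scale to approximate the density.

d  = 0.005;                              % grid spacing; 200 cells on (0,1)
p  = d*ones(1, 200);                     % approximate uniform (0,1) probabilities
pz = conv(p, p);                         % pair-sum probabilities; by symmetry of the
                                         % uniform, this is also the distribution of X - Y
z  = (d - 1) : d : (1 - d);              % corresponding values of X - Y
plot(z, pz/d)                            % approximates the triangular density on (-1,1)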

The generating function

The form of the generating function for a nonnegative, integer-valued random variable exhibits a number of important properties.

$$X = \sum_{k=0}^{\infty} k I_{A_k} \ \text{(canonical form)} \qquad p_k = P(A_k) = P(X = k) \qquad g_X(s) = \sum_{k=0}^{\infty} s^k p_k$$
(46)
  1. As a power series in s with nonnegative coefficients whose partial sums converge to one, the series converges at least for $|s| \le 1$.
  2. The coefficients of the power series display the distribution: for value k the probability $p_k = P(X = k)$ is the coefficient of $s^k$.
  3. The power series expansion about the origin of an analytic function is unique. If the generating function is known in closed form, the unique power series expansion about the origin determines the distribution. If the power series converges to a known closed form, that form characterizes the distribution.
  4. For a simple random variable (i.e., $p_k = 0$ for $k > n$), $g_X$ is a polynomial.

Example 6: The Poisson distribution

In Example 2, above, we establish the generating function for $X \sim$ Poisson $(\mu)$ from the distribution. Suppose, however, we simply encounter the generating function

$$g_X(s) = e^{m(s-1)} = e^{-m} e^{ms}$$
(47)

From the known power series for the exponential, we get

$$g_X(s) = e^{-m} \sum_{k=0}^{\infty} \frac{(ms)^k}{k!} = e^{-m} \sum_{k=0}^{\infty} s^k \frac{m^k}{k!}$$
(48)

We conclude that

$$P(X = k) = e^{-m} \frac{m^k}{k!}, \quad 0 \le k$$
(49)

which is the Poisson distribution with parameter $\mu = m$.

For simple, nonnegative, integer-valued random variables, the generating functions are polynomials. Because of the product rule (T2), the problem of determining the distribution for the sum of independent random variables may be handled by the process of multiplying polynomials. This may be done quickly and easily with the MATLAB convolution function.

Example 7: Sum of independent simple random variables

Suppose the pair $\{X, Y\}$ is independent, with

$$g_X(s) = \frac{1}{10}(2 + 3s + 3s^2 + 2s^5) \qquad g_Y(s) = \frac{1}{10}(2s + 4s^2 + 4s^3)$$
(50)

When using the MATLAB convolution function conv, all powers of s must be accounted for by including zeros for the missing powers.

gx = 0.1*[2 3 3 0 0 2];      % Zeros for missing powers 3, 4
gy = 0.1*[0 2 4 4];          % Zero  for missing power 0
gz = conv(gx,gy);
a = ['       Z         PZ'];
b = [0:8;gz]';
disp(a)
       Z         PZ          % Distribution for Z = X + Y
disp(b)
         0         0
    1.0000    0.0400
    2.0000    0.1400
    3.0000    0.2600
    4.0000    0.2400
    5.0000    0.1200
    6.0000    0.0400
    7.0000    0.0800
    8.0000    0.0800

If mgsum were used, it would not be necessary to be concerned about missing powers and the corresponding zero coefficients.

Integral transforms

We consider briefly the relationship of the moment generating function and the characteristic function with well known integral transforms (hence the name of this chapter).

Moment generating function and the Laplace transform

When we examine the integral forms of the moment generating function, we see that they represent forms of the Laplace transform, widely used in engineering and applied mathematics. Suppose $F_X$ is a probability distribution function with $F_X(-\infty) = 0$. The bilateral Laplace transform for $F_X$ is given by

$$\int_{-\infty}^{\infty} e^{-st} F_X(t)\, dt$$
(51)

The Laplace-Stieltjes transform for FX is

$$\int_{-\infty}^{\infty} e^{-st} F_X(dt)$$
(52)

Thus, if $M_X$ is the moment generating function for X, then $M_X(-s)$ is the Laplace-Stieltjes transform for X (or, equivalently, for $F_X$).

The theory of Laplace-Stieltjes transforms shows that under conditions sufficiently general to include all practical distribution functions

$$M_X(-s) = \int_{-\infty}^{\infty} e^{-st} F_X(dt) = s \int_{-\infty}^{\infty} e^{-st} F_X(t)\, dt$$
(53)

Hence

$$\frac{1}{s} M_X(-s) = \int_{-\infty}^{\infty} e^{-st} F_X(t)\, dt$$
(54)

The right hand expression is the bilateral Laplace transform of $F_X$. We may use tables of Laplace transforms to recover $F_X$ when $M_X$ is known. This is particularly useful when the random variable X is nonnegative, so that $F_X(t) = 0$ for $t < 0$.

If X is absolutely continuous, then

$$M_X(-s) = \int_{-\infty}^{\infty} e^{-st} f_X(t)\, dt$$
(55)

In this case, $M_X(-s)$ is the bilateral Laplace transform of $f_X$. For nonnegative random variable X, we may use ordinary tables of the Laplace transform to recover $f_X$.

Example 8: Use of Laplace transform

Suppose nonnegative X has moment generating function

$$M_X(s) = \frac{1}{1 - s}$$
(56)

We know that this is the moment generating function for the exponential (1) distribution. Now,

$$\frac{1}{s} M_X(-s) = \frac{1}{s(1 + s)} = \frac{1}{s} - \frac{1}{1 + s}$$
(57)

From a table of Laplace transforms, we find $1/s$ is the transform for the constant 1 (for $t \ge 0$) and $1/(1 + s)$ is the transform for $e^{-t}$, $t \ge 0$, so that $F_X(t) = 1 - e^{-t}$, $t \ge 0$, as expected.
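
If the Symbolic Math Toolbox is available (an assumption; the module itself relies on transform tables), the inversion can be checked directly with ilaplace.

syms s t                                  % requires the Symbolic Math Toolbox
Fx = ilaplace(1/(s*(1 + s)), s, t);       % invert (1/s)*M_X(-s)
disp(Fx)                                  % displays 1 - exp(-t), i.e., F_X(t) for t >= 0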

Example 9: Laplace transform and the density

Suppose the moment generating function for a nonnegative random variable is

$$M_X(s) = \left(\frac{\lambda}{\lambda - s}\right)^{\alpha}$$
(58)

From a table of Laplace transforms, we find that for $\alpha > 0$,

$$\frac{\Gamma(\alpha)}{(s - a)^{\alpha}} \ \text{ is the Laplace transform of } \ t^{\alpha - 1} e^{at}, \quad t \ge 0$$
(59)

If we put $a = -\lambda$, we find after some algebraic manipulations

$$f_X(t) = \frac{\lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}}{\Gamma(\alpha)}, \quad t \ge 0$$
(60)

Thus, $X \sim$ gamma $(\alpha, \lambda)$, in keeping with the determination, above, of the moment generating function for that distribution.

The characteristic function

Since this function differs from the moment generating function by the interchange of parameter s and $iu$, where i is the imaginary unit, $i^2 = -1$, the integral expressions make that change of parameter. The result is that Laplace transforms become Fourier transforms. The theoretical and applied literature is even more extensive for the characteristic function.

Not only do we have the operational properties (T1) and (T2) and the result on moments as derivatives at the origin, but there is an important expansion for the characteristic function.

An expansion theorem

If $E[|X|^n] < \infty$, then

$$\varphi^{(k)}(0) = i^k E[X^k], \ \text{ for } 0 \le k \le n \qquad \text{and} \qquad \varphi(u) = \sum_{k=0}^{n} \frac{(iu)^k}{k!} E[X^k] + o(u^n) \ \text{ as } u \to 0$$
(61)

We note one limit theorem which has very important consequences.

A fundamental limit theorem

Suppose $\{F_n : 1 \le n\}$ is a sequence of probability distribution functions and $\{\varphi_n : 1 \le n\}$ is the corresponding sequence of characteristic functions.

  1. If F is a distribution function such that $F_n(t) \to F(t)$ at every point of continuity for F, and $\varphi$ is the characteristic function for F, then
    $\varphi_n(u) \to \varphi(u) \ \text{ for all } u$
    (62)
  2. If $\varphi_n(u) \to \varphi(u)$ for all u and $\varphi$ is continuous at 0, then $\varphi$ is the characteristic function for distribution function F such that
    $F_n(t) \to F(t)$ at each point of continuity of F
    (63)
