Random Selection

Module by: Paul E Pfeiffer

Summary: The usual treatments deal with a single random variable or a fixed, finite number of random variables, considered jointly. However, there are many common applications in which we select at random a member of a class of random variables and observe its value, or select a random number of random variables and obtain some function of those selected. This is formulated with the aid of a counting or selecting random variable N, which is nonnegative and integer valued. It may be independent of the class selected, or may be related in some sequential way to members of the class. We consider only the independent case. Many important problems require optional random variables, sometimes called Markov times. These involve more theory than we develop in this treatment. As a basic model, we consider the sum of a random number of members of an iid class. In order to have a concrete interpretation to help visualize the formal patterns, we think of the demand of a random number of customers. We suppose the number of customers N is independent of the individual demands. We formulate a model to be used for a variety of applications. Under standard independence conditions, we obtain expressions for the compound demand D, the conditional expectation of g(D) given N = n, and the moment generating function for D. These are applied in various situations.

Introduction

The usual treatments deal with a single random variable or a fixed, finite number of random variables, considered jointly. However, there are many common applications in which we select at random a member of a class of random variables and observe its value, or select a random number of random variables and obtain some function of those selected. This is formulated with the aid of a counting or selecting random variable N, which is nonnegative and integer valued. It may be independent of the class selected, or may be related in some sequential way to members of the class. We consider only the independent case. Many important problems require optional random variables, sometimes called Markov times. These involve more theory than we develop in this treatment.

Some common examples:

  1. Total demand of N customers, with N independent of the individual demands.
  2. Total service time for N units, with N independent of the individual service times.
  3. Net gain in N plays of a game, with N independent of the individual gains.
  4. Extreme values of N random variables, with N independent of the individual values.
  5. Random sample of size N, where N is usually determined by properties of the sample observed.
  6. Deciding when to play on the basis of past results, with N dependent on the past.

A useful model—random sums

As a basic model, we consider the sum of a random number of members of an iid class. In order to have a concrete interpretation to help visualize the formal patterns, we think of the demand of a random number of customers. We suppose the number of customers N is independent of the individual demands. We formulate a model to be used for a variety of applications.

  • A basic sequence $\{X_n : 0 \le n\}$ [Demand of $n$ customers]
  • An incremental sequence $\{Y_n : 0 \le n\}$ [Individual demands]
    These are related as follows:
    $$X_n = \sum_{k=0}^{n} Y_k \ \text{ for } n \ge 0, \quad X_n = 0 \ \text{ for } n < 0, \qquad Y_n = X_n - X_{n-1} \ \text{ for all } n$$
    (1)
  • A counting random variable $N$. If $N = n$, then $n$ of the $Y_k$ are added to give the compound demand $D$ (the random sum)
    $$D = \sum_{k=0}^{N} Y_k = \sum_{k=0}^{\infty} I_{\{N=k\}} X_k = \sum_{k=0}^{\infty} I_{\{k\}}(N)\, X_k$$
    (2)

Note. In some applications the counting random variable may take on the idealized value $\infty$. For example, in a game that is played until some specified result occurs, this may never happen, so that no finite value can be assigned to $N$. In such a case, it is necessary to decide what value $X_\infty$ is to be assigned. For $N$ independent of the $Y_n$ (hence of the $X_n$), we rarely need to consider this possibility.
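Before developing the analytical results, a short Monte Carlo sketch may help fix the model. The fragment below is illustrative only and is not one of the m-procedures used later; it assumes the data of Example 1 below (N Poisson (8), demands 0, 1, 2 with probabilities 1/4, 1/2, 1/4) and assumes poissrnd from the Statistics Toolbox is available.

% Monte Carlo sketch of the compound demand D = Y_1 + ... + Y_N (illustrative only)
M  = 100000;                        % number of replications
Yv = [0 1 2];  PY = [0.25 0.50 0.25];
cP = cumsum(PY);
D  = zeros(1,M);
for i = 1:M
    N = poissrnd(8);                % assumes Statistics Toolbox; any Poisson sampler works
    for k = 1:N
        D(i) = D(i) + Yv(find(rand <= cP, 1));   % inverse-CDF draw of one demand
    end
end
disp([mean(D) var(D) mean(D <= 4)]) % compare with 8, 12, and 0.1545 (Examples 1 and 4)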

Independent selection from an iid incremental sequence

We assume throughout, unless specifically stated otherwise, that:

  1. $X_0 = Y_0 = 0$
  2. $\{Y_k : 1 \le k\}$ is iid
  3. $\{N, Y_k : 0 \le k\}$ is an independent class

We utilize repeatedly two important propositions:

  1. $E[h(D) \mid N = n] = E[h(X_n)]$, $n \ge 0$.
  2. $M_D(s) = g_N[M_Y(s)]$. If the $Y_n$ are nonnegative integer valued, then so is $D$ and $g_D(s) = g_N[g_Y(s)]$.

DERIVATION

We utilize properties of generating functions, moment generating functions, and conditional expectation.

  1. $E[I_{\{n\}}(N)\, h(D)] = E[h(D) \mid N = n]\, P(N = n)$ by definition of conditional expectation, given an event. Now, $I_{\{n\}}(N)\, h(D) = I_{\{n\}}(N)\, h(X_n)$ and $E[I_{\{n\}}(N)\, h(X_n)] = P(N = n)\, E[h(X_n)]$. Hence $E[h(D) \mid N = n]\, P(N = n) = P(N = n)\, E[h(X_n)]$. Division by $P(N = n)$ gives the desired result.
  2. By the law of total probability (CE1b), $M_D(s) = E[e^{sD}] = E\{E[e^{sD} \mid N]\}$. By proposition 1 and the product rule for moment generating functions,
    $$E[e^{sD} \mid N = n] = E[e^{sX_n}] = \prod_{k=1}^{n} E[e^{sY_k}] = M_Y^n(s)$$
    (3)
    Hence
    $$M_D(s) = \sum_{n=0}^{\infty} M_Y^n(s)\, P(N = n) = g_N[M_Y(s)]$$
    (4)
    A parallel argument holds for $g_D$ in the integer-valued case.

Remark. The result on MD and gD may be developed without use of conditional expectation.

$$M_D(s) = E[e^{sD}] = \sum_{n=0}^{\infty} E\left[I_{\{N=n\}}\, e^{sX_n}\right] = \sum_{n=0}^{\infty} P(N = n)\, E[e^{sX_n}]$$
(5)
$$= \sum_{n=0}^{\infty} P(N = n)\, M_Y^n(s) = g_N[M_Y(s)]$$
(6)

Example 1: A service shop

Suppose the number N of jobs brought to a service shop in a day is Poisson (8). One fourth of these are items under warranty for which no charge is made. Others fall in one of two categories. One half of the arriving jobs are charged for one hour of shop time; the remaining one fourth are charged for two hours of shop time. Thus, the individual shop hour charges Yk have the common distribution

$$Y = \begin{bmatrix} 0 & 1 & 2 \end{bmatrix} \quad \text{with probabilities} \quad P_Y = \begin{bmatrix} 1/4 & 1/2 & 1/4 \end{bmatrix}$$
(7)

Make the basic assumptions of our model. Determine $P(D \le 4)$.

SOLUTION

$$g_N(s) = e^{8(s-1)} \qquad g_Y(s) = \tfrac{1}{4}\left(1 + 2s + s^2\right)$$
(8)

According to the formula developed above,

$$g_D(s) = g_N[g_Y(s)] = \exp\left(\tfrac{8}{4}(1 + 2s + s^2) - 8\right) = e^{4s}\, e^{2s^2}\, e^{-6}$$
(9)

Expand the exponentials in power series about the origin, multiply out to get enough terms. The result of straightforward but somewhat tedious calculations is

$$g_D(s) = e^{-6}\left(1 + 4s + 10 s^2 + \tfrac{56}{3} s^3 + \tfrac{86}{3} s^4 + \cdots\right)$$
(10)

Taking the coefficients of the generating function, we get

$$P(D \le 4) = e^{-6}\left(1 + 4 + 10 + \tfrac{56}{3} + \tfrac{86}{3}\right) = e^{-6}\,\tfrac{187}{3} \approx 0.1545$$
(11)
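As a check on the hand expansion, the coefficients may also be obtained numerically by truncating the two exponential series and convolving them. The following is a sketch; the truncation order K = 20 is an assumption, chosen large enough for four-place accuracy.

K = 20;                               % truncation order (assumed sufficient)
k = 0:K;
a = (4.^k)./factorial(k);             % coefficients of e^(4s)
b = zeros(1,2*K+1);
b(1:2:end) = (2.^k)./factorial(k);    % coefficients of e^(2s^2) (even powers only)
c = exp(-6)*conv(a,b);                % coefficients of gD: P(D = 0), P(D = 1), ...
disp(c(1:5))                          % e^(-6)*[1 4 10 56/3 86/3]
disp(sum(c(1:5)))                     % P(D <= 4) = 0.1545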

Example 2: A result on Bernoulli trials

Suppose the counting random variable $N \sim$ binomial $(n, p)$ and $Y_i = I_{E_i}$, with $P(E_i) = p_0$. Then

$$g_N(s) = (q + ps)^n \qquad \text{and} \qquad g_Y(s) = q_0 + p_0 s$$
(12)

By the basic result on random selection, we have

$$g_D(s) = g_N[g_Y(s)] = \left[q + p(q_0 + p_0 s)\right]^n = \left[(1 - p p_0) + p p_0 s\right]^n$$
(13)

so that $D \sim$ binomial $(n, p p_0)$.
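A quick numerical check of this composition, with arbitrarily chosen parameters (a sketch, not from the text):

n = 5;  p = 0.3;  p0 = 0.6;  q = 1 - p;  q0 = 1 - p0;
s  = linspace(0,1,11);
gD = (q + p*(q0 + p0*s)).^n;          % gN(gY(s))
gB = (1 - p*p0 + p*p0*s).^n;          % generating function of binomial (n, p*p0)
disp(max(abs(gD - gB)))               % zero to machine precision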

In the next section we establish useful m-procedures for determining the generating function $g_D$ and the moment generating function $M_D$ for the compound demand for simple random variables, hence for determining the complete distribution. Obviously, these will not work for all problems. In such cases it may be helpful, if not entirely sufficient, to be able to determine the mean value $E[D]$ and variance $\text{Var}[D]$. To this end, we establish the following expressions for the mean and variance.

Example 3: Mean and variance of the compound demand

$$E[D] = E[N]\, E[Y] \qquad \text{and} \qquad \text{Var}[D] = E[N]\, \text{Var}[Y] + \text{Var}[N]\, E^2[Y]$$
(14)

DERIVATION

$$E[D] = E\left[\sum_{n=0}^{\infty} I_{\{N=n\}} X_n\right] = \sum_{n=0}^{\infty} P(N = n)\, E[X_n]$$
(15)
$$= E[Y] \sum_{n=0}^{\infty} n\, P(N = n) = E[Y]\, E[N]$$
(16)
$$E[D^2] = \sum_{n=0}^{\infty} P(N = n)\, E[X_n^2] = \sum_{n=0}^{\infty} P(N = n)\left\{\text{Var}[X_n] + E^2[X_n]\right\}$$
(17)
$$= \sum_{n=0}^{\infty} P(N = n)\left\{n\, \text{Var}[Y] + n^2 E^2[Y]\right\} = E[N]\, \text{Var}[Y] + E[N^2]\, E^2[Y]$$
(18)

Hence

$$\text{Var}[D] = E[N]\, \text{Var}[Y] + E[N^2]\, E^2[Y] - E^2[N]\, E^2[Y] = E[N]\, \text{Var}[Y] + \text{Var}[N]\, E^2[Y]$$
(19)

Example 4: Mean and variance for Example 1

$E[N] = \text{Var}[N] = 8$. By symmetry, $E[Y] = 1$. $\text{Var}[Y] = 0.25(0 + 2 + 4) - 1 = 0.5$. Hence,

$$E[D] = 8 \cdot 1 = 8, \qquad \text{Var}[D] = 8 \cdot 0.5 + 8 \cdot 1 = 12$$
(20)

Calculations for the compound demand

We have m-procedures for performing the calculations necessary to determine the distribution for a compound demand D when the counting random variable N and the individual demands Yk are simple random variables with not too many values. In some cases, such as for a Poisson counting random variable, we are able to approximate by a simple random variable.

The procedure gend

If the Yi are nonnegative, integer valued, then so is D, and there is a generating function. We examine a strategy for computation which is implemented in the m-procedure gend. Suppose

$$g_N(s) = p_0 + p_1 s + p_2 s^2 + \cdots + p_n s^n$$
(21)
$$g_Y(s) = \pi_0 + \pi_1 s + \pi_2 s^2 + \cdots + \pi_m s^m$$
(22)

The coefficients of gN and gY are the probabilities of the values of N and Y, respectively. We enter these and calculate the coefficients for powers of gY:

    gN = [p0 p1 ... pn]       1 x (n+1)     Coefficients of gN
    y  = [pi0 pi1 ... pim]    1 x (m+1)     Coefficients of gY
    y2 = conv(y,y)            1 x (2m+1)    Coefficients of gY^2
    y3 = conv(y,y2)           1 x (3m+1)    Coefficients of gY^3
    ...
    yn = conv(y,y(n-1))       1 x (nm+1)    Coefficients of gY^n
(23)

We wish to generate a matrix P whose rows contain the joint probabilities. The probabilities in the ith row consist of the coefficients for the appropriate power of gY multiplied by the probability N has that value. To achieve this, we need a matrix, each of whose n+1 rows has nm+1 elements, the length of yn. We begin by “preallocating” zeros to the rows. That is, we set P = zeros(n+1, n*m+1). We then replace the appropriate elements of the successive rows. The replacement probabilities for the ith row are obtained by the convolution of gY and the power of gY for the previous row. When the matrix P is completed, we remove zero rows and columns, corresponding to missing values of N and D (i.e., values with zero probability). To orient the joint probabilities as on the plane, we rotate P ninety degrees counterclockwise. With the joint distribution, we may then calculate any desired quantities.
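The core of this strategy can be sketched in a few lines of plain MATLAB. The fragment below uses the data of Example 5, which follows; gend itself adds the input prompts, the removal of zero rows and columns, and the rotation of P.

gN = (1/3)*[0 1 1 1];                 % coefficients of gN (values N = 0,1,2,3)
gY = 0.1*[5 4 1];                     % coefficients of gY (values Y = 0,1,2)
n  = length(gN) - 1;  m = length(gY) - 1;
P  = zeros(n+1, n*m+1);               % preallocate the joint probability matrix
P(1,1) = gN(1);                       % row for N = 0 (gY^0 = 1)
yk = 1;
for k = 1:n
    yk = conv(yk, gY);                % coefficients of gY^k
    P(k+1, 1:length(yk)) = gN(k+1)*yk;    % P(N = k, D = j), j = 0,...,k*m
end
PD = sum(P, 1);                       % marginal distribution for D
disp([(0:n*m); PD]')                  % matches the gD table of Example 5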

Example 5: A compound demand

The number of customers in a major appliance store is equally likely to be 1, 2, or 3. Each customer buys 0, 1, or 2 items with respective probabilities 0.5, 0.4, 0.1. Customers buy independently, regardless of the number of customers. First we determine the matrices representing gN and gY. The coefficients are the probabilities that each integer value is observed. Note that the zero coefficients for any missing powers must be included.

gN = (1/3)*[0 1 1 1];    % Note zero coefficient for missing zero power
gY = 0.1*[5 4 1];        % All powers 0 thru 2 have positive coefficients
gend
 Do not forget zero coefficients for missing powers
Enter the gen fn COEFFICIENTS for gN gN    % Coefficient matrix named gN
Enter the gen fn COEFFICIENTS for gY gY    % Coefficient matrix named gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD)                  % Optional display of complete distribution
         0    0.2917
    1.0000    0.3667
    2.0000    0.2250
    3.0000    0.0880
    4.0000    0.0243
    5.0000    0.0040
    6.0000    0.0003
EN = N*PN'
EN =   2
EY = Y*PY'
EY =  0.6000
ED = D*PD'
ED =  1.2000                % Agrees with theoretical EN*EY
P3 = (D>=3)*PD'
P3  = 0.1167                
[N,D,t,u,PN,PD,PL] = jcalcf(N,D,P);
EDn = sum(u.*P)./sum(P);
disp([N;EDn]')
    1.0000    0.6000        % Agrees with theoretical E[D|N=n] = n*EY
    2.0000    1.2000
    3.0000    1.8000
VD = (D.^2)*PD' - ED^2
VD =  1.1200                % Agrees with theoretical EN*VY + VN*EY^2

Example 6: A numerical example

$$g_N(s) = \tfrac{1}{5}\left(1 + s + s^2 + s^3 + s^4\right) \qquad g_Y(s) = 0.1\left(5s + 3s^2 + 2s^3\right)$$
(24)

Note that the zero power is missing from gY, corresponding to the fact that P(Y = 0) = 0.

gN = 0.2*[1 1 1 1 1];
gY = 0.1*[0 5 3 2];      % Note the zero coefficient in the zero position
gend
Do not forget zero coefficients for missing powers
Enter the gen fn COEFFICIENTS for gN  gN
Enter the gen fn COEFFICIENTS for gY  gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD)                 % Optional display of complete distribution
         0    0.2000
    1.0000    0.1000
    2.0000    0.1100
    3.0000    0.1250
    4.0000    0.1155
    5.0000    0.1110
    6.0000    0.0964
    7.0000    0.0696
    8.0000    0.0424
    9.0000    0.0203
   10.0000    0.0075
   11.0000    0.0019
   12.0000    0.0003
P3 = (D == 3)*PD'        % P(D = 3)
P3 =  0.1250
P4_12 = ((D >= 4)&(D <= 12))*PD'
P4_12 = 0.4650           % P(4 <= D <= 12)
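
A quick check (sketch) of the mean and variance formulas of Example 3 for these data; ED agrees with the value obtained in Example 10 below.

N = 0:4;  PN = 0.2*ones(1,5);          % distribution for N
Y = 1:3;  PY = 0.1*[5 3 2];            % distribution for Y
EN = N*PN';  VN = (N.^2)*PN' - EN^2;   % E[N] = 2,   Var[N] = 2
EY = Y*PY';  VY = (Y.^2)*PY' - EY^2;   % E[Y] = 1.7, Var[Y] = 0.61
ED = EN*EY                             % 3.4000
VD = EN*VY + VN*EY^2                   % 7.0000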

Example 7: Number of successes for random number N of trials.

We are interested in the number of successes in N trials for a general counting random variable. This is a generalization of the Bernoulli case in Example 2. Suppose, as in Example 5, the number of customers in a major appliance store is equally likely to be 1, 2, or 3, and each buys at least one item with probability p = 0.6. Determine the distribution for the number D of buying customers.

SOLUTION

We use gN, gY, and gend.

gN = (1/3)*[0 1 1 1]; % Note zero coefficient for missing zero power
gY = [0.4 0.6];       % Generating function for the indicator function
gend
Do not forget zero coefficients for missing powers
Enter gen fn COEFFICIENTS for gN  gN
Enter gen fn COEFFICIENTS for gY  gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD)
         0    0.2080
    1.0000    0.4560
    2.0000    0.2640
    3.0000    0.0720
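
Since the individual demands are indicator functions, D here is a mixture of binomial distributions, and the result can be checked directly (a sketch; binopdf is from the Statistics Toolbox):

j = 0:3;
PDchk = (binopdf(j,1,0.6) + binopdf(j,2,0.6) + binopdf(j,3,0.6))/3;
disp([j; PDchk]')                 % reproduces the gD table above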

The procedure gend is limited to simple N and Yk, with nonnegative integer values. Sometimes, a random variable with unbounded range may be approximated by a simple random variable. The solution in the following example utilizes such an approximation procedure for the counting random variable N.

Example 8: Solution of the shop time Example 1

The number N of jobs brought to a service shop in a day is Poisson (8). The individual shop hour charges $Y_k$ have the common distribution $Y = [0\ 1\ 2]$ with probabilities $P_Y = [1/4\ 1/2\ 1/4]$.

Under the basic assumptions of our model, determine $P(D \le 4)$.

SOLUTION

Since Poisson N is unbounded, we need to check for a sufficient number of terms in a simple approximation. Then we proceed as in the simple case.

pa = cpoisson(8,10:5:30)     % Check for sufficient number of terms
pa =   0.2834    0.0173    0.0003    0.0000    0.0000
p25 = cpoisson(8,25)         % Check on choice of n = 25
p25 =  1.1722e-06
gN = ipoisson(8,0:25);       % Approximate gN
gY = 0.25*[1 2 1];
gend
Do not forget zero coefficients for missing powers
Enter gen fn COEFFICIENTS for gN  gN
Enter gen fn COEFFICIENTS for gY  gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD(D<=20,:))            % Calculated values to D = 50
         0    0.0025         % Display for D <= 20
    1.0000    0.0099
    2.0000    0.0248
    3.0000    0.0463
    4.0000    0.0711
    5.0000    0.0939
    6.0000    0.1099
    7.0000    0.1165
    8.0000    0.1132
    9.0000    0.1021
   10.0000    0.0861
   11.0000    0.0684
   12.0000    0.0515
   13.0000    0.0369
   14.0000    0.0253
   15.0000    0.0166
   16.0000    0.0105
   17.0000    0.0064
   18.0000    0.0037
   19.0000    0.0021
   20.0000    0.0012
sum(PD)                       % Check on sufficiency of approximation
ans =  1.0000
P4 = (D<=4)*PD'
P4 =   0.1545                 % Theoretical value (4  places) = 0.1545
ED = D*PD'
ED =   8.0000                 % Theoretical = 8  (Example 4)
VD = (D.^2)*PD' - ED^2
VD =  11.9999                 % Theoretical = 12 (Example 4)
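
The truncated Poisson coefficients supplied here by ipoisson may also be formed directly from the Poisson probability mass function (a sketch):

mu = 8;  k = 0:25;
gN = exp(-mu)*mu.^k./factorial(k);    % P(N = k) for k = 0,...,25
disp(1 - sum(gN))                     % neglected tail mass (compare with p25 above)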

The m-procedures mgd and jmgd

The next example shows a fundamental limitation of the gend procedure. The values for the individual demands are not limited to integers, and there are considerable gaps between the values. In this case, we need to implement the moment generating function MD rather than the generating function gD.

In the generating function case, it is as easy to develop the joint distribution for $\{N, D\}$ as to develop the marginal distribution for D. For the moment generating function, the joint distribution requires considerably more computation. As a consequence, we find it convenient to have two m-procedures: mgd for the marginal distribution and jmgd for the joint distribution.

Instead of the convolution procedure used in gend to determine the distribution for the sums of the individual demands, the m-procedure mgd utilizes the m-function mgsum to obtain these distributions. The distributions for the various sums are concatenated into two row vectors, to which csort is applied to obtain the distribution for the compound demand. The procedure requires as input the generating function for N and the actual distribution, Y and PY, for the individual demands. For gN, it is necessary to treat the coefficients as in gend. However, the actual values and probabilities in the distribution for Y are put into a pair of row matrices. If Y is integer valued, there are no zeros in the probability matrix for missing values.

Example 9: Noninteger values

A service shop has three standard charges for a certain class of warranty services it performs: $10, $12.50, and $15. The number of jobs received in a normal work day can be considered a random variable N which takes on values 0, 1, 2, 3, 4 with equal probabilities 0.2. The job types for arrivals may be represented by an iid class $\{Y_i : 1 \le i \le 4\}$, independent of the arrival process. The Yi take on values 10, 12.5, 15 with respective probabilities 0.5, 0.3, 0.2. Let C be the total amount of services rendered in a day. Determine the distribution for C.

SOLUTION

gN = 0.2*[1 1 1 1 1];         % Enter data
Y = [10 12.5 15];
PY = 0.1*[5 3 2];
mgd                           % Call for procedure
Enter gen fn COEFFICIENTS for gN  gN
Enter VALUES for Y  Y
Enter PROBABILITIES for Y  PY
Values are in row matrix D; probabilities are in PD.
To view the distribution, call for mD.
disp(mD)                      % Optional display of distribution
         0    0.2000
   10.0000    0.1000
   12.5000    0.0600
   15.0000    0.0400
   20.0000    0.0500
   22.5000    0.0600
   25.0000    0.0580
   27.5000    0.0240
   30.0000    0.0330
   32.5000    0.0450
   35.0000    0.0570
   37.5000    0.0414
   40.0000    0.0353
   42.5000    0.0372
   45.0000    0.0486
   47.5000    0.0468
   50.0000    0.0352
   52.5000    0.0187
   55.0000    0.0075
   57.5000    0.0019
   60.0000    0.0003
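
For comparison, the same marginal distribution can be assembled with standard MATLAB built-ins in place of mgsum and csort. The following sketch forms the distribution of each k-fold sum of demands, weights it by P(N = k), and consolidates repeated values.

gN = 0.2*ones(1,5);                        % P(N = 0),...,P(N = 4)
Y  = [10 12.5 15];  PY = 0.1*[5 3 2];
V  = 0;  PV = 1;                           % distribution of the empty sum
D  = 0;  PD = gN(1);                       % contribution from N = 0
for k = 1:length(gN)-1
    [a, b]   = meshgrid(V, Y);             % all pairs (previous sum, next demand)
    [pa, pb] = meshgrid(PV, PY);
    V  = a(:)' + b(:)';  PV = pa(:)'.*pb(:)';   % distribution of the k-fold sum
    D  = [D V];  PD = [PD gN(k+1)*PV];          % weight by P(N = k) and concatenate
end
[D, ~, j] = unique(D);                     % consolidate duplicate values (the role of csort)
PD = accumarray(j, PD(:))';
disp([D; PD]')                             % matches the mD table above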

We next recalculate Example 6, above, using mgd rather than gend.

Example 10: Recalculation of Example 6

In Example 6, we have

$$g_N(s) = \tfrac{1}{5}\left(1 + s + s^2 + s^3 + s^4\right) \qquad g_Y(s) = 0.1\left(5s + 3s^2 + 2s^3\right)$$
(25)

This means that the distribution for Y is Y = [1 2 3] and PY = 0.1*[5 3 2].

We use the same expression for gN as in Example 6.

gN = 0.2*ones(1,5);
Y = 1:3;
PY = 0.1*[5 3 2];
mgd
Enter gen fn COEFFICIENTS for gN  gN
Enter VALUES for Y  Y
Enter PROBABILITIES for Y  PY
Values are in row matrix D; probabilities are in PD.
To view the distribution, call for mD.
disp(mD)
         0    0.2000
    1.0000    0.1000
    2.0000    0.1100
    3.0000    0.1250
    4.0000    0.1155
    5.0000    0.1110
    6.0000    0.0964
    7.0000    0.0696
    8.0000    0.0424
    9.0000    0.0203
   10.0000    0.0075
   11.0000    0.0019
   12.0000    0.0003
P3 = (D==3)*PD'
P3 =   0.1250
ED = D*PD'
ED =   3.4000
P_4_12 = ((D>=4)&(D<=12))*PD'
P_4_12 =  0.4650
P7 = (D>=7)*PD'
P7 =   0.1421

As expected, the results are the same as those obtained with gend.

If it is desired to obtain the joint distribution for $\{N, D\}$, we use a modification of mgd called jmgd. The complications come in placing the probabilities in the P matrix in the desired positions. This requires some calculations to determine the appropriate size of the matrices used as well as a procedure to put each probability in the position corresponding to its D value. Actual operation is quite similar to the operation of mgd, and requires the same data format.

A principal use of the joint distribution is to demonstrate features of the model, such as $E[D \mid N = n] = n E[Y]$, etc. This, of course, is utilized in obtaining the expressions for $M_D(s)$ in terms of $g_N(s)$ and $M_Y(s)$. This result guides the development of the computational procedures, but these do not depend upon this result. However, it is usually helpful to demonstrate the validity of the assumptions in typical examples.

Remark. In general, if the use of gend is appropriate, it is faster and more efficient than mgd (or jmgd), and it will handle somewhat larger problems. But both m-procedures work quite well for problems of moderate size, and are convenient tools for solving various “compound demand” type problems.
