We have m-procedures for performing the calculations necessary to determine the distribution
for a composite demand D when the counting random variable N and the individual demands
Y_{k} are simple random variables with not too many values. In some cases, such as for
a Poisson counting random variable, we are able to approximate by a simple random variable.
The procedure gend
If the Y_{i} are nonnegative and integer valued, then so is D, and there is a
generating function. We examine a strategy for computation which is implemented in
the m-procedure gend. Suppose
g_{N}(s) = p_0 + p_1 s + p_2 s^2 + ⋯ + p_n s^n   (21)
g_{Y}(s) = π_0 + π_1 s + π_2 s^2 + ⋯ + π_m s^m   (22)

The coefficients of g_{N} and g_{Y} are the probabilities of the values of N and
Y, respectively. We enter these and calculate the coefficients
for powers of g_{Y}:
gN = [p_0 p_1 ⋯ p_n]    1 × (n+1)     Coefficients of g_N
y  = [π_0 π_1 ⋯ π_m]    1 × (m+1)     Coefficients of g_Y
y2 = conv(y,y)          1 × (2m+1)    Coefficients of g_Y^2
y3 = conv(y,y2)         1 × (3m+1)    Coefficients of g_Y^3
⋯
yn = conv(y,y(n-1))     1 × (nm+1)    Coefficients of g_Y^n   (23)

We wish to generate a matrix P whose rows contain the joint probabilities. The
probabilities in the ith row consist of the coefficients for the appropriate power
of g_{Y} multiplied by the probability N has that value. To achieve this, we need a matrix,
each of whose n+1 rows has nm+1 elements, the length of yn. We begin by
“preallocating” zeros to the rows. That is, we set P = zeros(n+1,n*m+1).
We then replace the appropriate elements of the successive rows. The replacement probabilities
for the ith row are obtained by the convolution of g_{Y} and the power of g_{Y} for the
previous row. When the matrix P is completed, we remove zero rows and columns, corresponding
to missing values of N and D (i.e., values with zero probability).
To orient the joint probabilities as on the plane, we rotate P ninety degrees counterclockwise.
With the joint distribution, we may then calculate any desired quantities.
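The strategy above can be sketched compactly. The following is a hypothetical Python analogue of gend (not the MATLAB m-procedure itself), assuming gN and gY are the coefficient vectors described above: it builds successive powers of g_Y by repeated convolution, fills row k of P with those coefficients weighted by P(N = k), and sums columns to obtain the marginal distribution for D.

```python
def conv(a, b):
    """Polynomial-coefficient convolution, like MATLAB's conv."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def gend(gN, gY):
    """Sketch of the gend strategy for nonnegative integer-valued Y."""
    n = len(gN) - 1                      # highest power of s in gN
    m = len(gY) - 1                      # highest power of s in gY
    # "Preallocate" zeros: n+1 rows, each of length n*m + 1
    P = [[0.0] * (n * m + 1) for _ in range(n + 1)]
    yk = [1.0]                           # coefficients of gY^0
    for k in range(n + 1):
        for j, c in enumerate(yk):
            P[k][j] = gN[k] * c          # joint probability P(N = k, D = j)
        yk = conv(gY, yk)                # coefficients of the next power of gY
    # Column sums give the marginal distribution for D
    PD = [sum(P[k][j] for k in range(n + 1)) for j in range(n * m + 1)]
    return P, PD
```

With the appliance-store data used below (gN = [0, 1/3, 1/3, 1/3], gY = [0.5, 0.4, 0.1]), this sketch reproduces the gend output for PD.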
The number of customers in a major appliance store is equally likely to be 1, 2, or 3.
Each customer buys 0, 1, or 2 items with respective probabilities 0.5, 0.4, 0.1.
Customers buy independently, regardless of the number of customers.
First we determine the matrices representing g_{N} and g_{Y}. The coefficients
are the probabilities that each
integer value is observed. Note that the zero coefficients for any
missing powers must be included.
gN = (1/3)*[0 1 1 1]; % Note zero coefficient for missing zero power
gY = 0.1*[5 4 1]; % All powers 0 thru 2 have positive coefficients
gend
Do not forget zero coefficients for missing powers
Enter the gen fn COEFFICIENTS for gN gN % Coefficient matrix named gN
Enter the gen fn COEFFICIENTS for gY gY % Coefficient matrix named gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD) % Optional display of complete distribution
0 0.2917
1.0000 0.3667
2.0000 0.2250
3.0000 0.0880
4.0000 0.0243
5.0000 0.0040
6.0000 0.0003
EN = N*PN'
EN = 2
EY = Y*PY'
EY = 0.6000
ED = D*PD'
ED = 1.2000 % Agrees with theoretical EN*EY
P3 = (D>=3)*PD'
P3 = 0.1167
[N,D,t,u,PN,PD,PL] = jcalcf(N,D,P);
EDn = sum(u.*P)./sum(P);
disp([N;EDn]')
1.0000 0.6000 % Agrees with theoretical E[D|N=n] = n*EY
2.0000 1.2000
3.0000 1.8000
VD = (D.^2)*PD' - ED^2
VD = 1.1200 % Agrees with theoretical EN*VY + VN*EY^2
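The two theoretical values cited in the comments can be checked directly. The following Python fragment (an illustration only, not part of the MATLAB toolbox) recomputes E[D] = E[N]E[Y] and Var[D] = E[N]Var[Y] + Var[N]E[Y]^2 from the input distributions of the example.

```python
# Appliance-store example: N uniform on {1,2,3}; Y takes 0,1,2 with the stated probabilities
N, PN = [1, 2, 3], [1/3, 1/3, 1/3]
Y, PY = [0, 1, 2], [0.5, 0.4, 0.1]

EN = sum(n * p for n, p in zip(N, PN))              # E[N] = 2
EY = sum(y * p for y, p in zip(Y, PY))              # E[Y] = 0.6
VN = sum(n**2 * p for n, p in zip(N, PN)) - EN**2   # Var[N] = 2/3
VY = sum(y**2 * p for y, p in zip(Y, PY)) - EY**2   # Var[Y] = 0.44

ED = EN * EY                  # composite-demand mean, agrees with D*PD'
VD = EN * VY + VN * EY**2     # composite-demand variance
```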
g_{N}(s) = (1/5)(1 + s + s^2 + s^3 + s^4)   g_{Y}(s) = 0.1(5s + 3s^2 + 2s^3)   (24)

Note that the zero power is missing from g_{Y}, corresponding to the fact
that P(Y = 0) = 0.
gN = 0.2*[1 1 1 1 1];
gY = 0.1*[0 5 3 2]; % Note the zero coefficient in the zero position
gend
Do not forget zero coefficients for missing powers
Enter the gen fn COEFFICIENTS for gN gN
Enter the gen fn COEFFICIENTS for gY gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD) % Optional display of complete distribution
0 0.2000
1.0000 0.1000
2.0000 0.1100
3.0000 0.1250
4.0000 0.1155
5.0000 0.1110
6.0000 0.0964
7.0000 0.0696
8.0000 0.0424
9.0000 0.0203
10.0000 0.0075
11.0000 0.0019
12.0000 0.0003
P3 = (D == 3)*PD' % P(D=3)
P3 = 0.1250
P4_12 = ((D >= 4)&(D <= 12))*PD'
P4_12 = 0.4650 % P(4 <= D <= 12)
We are interested in the number of successes in N trials for a general counting random variable.
This is a generalization of the Bernoulli case in Example 2. Suppose, as in Example 5,
the number of customers in a major appliance store is equally likely to be 1, 2, or 3, and each
buys at least one item with probability p = 0.6. Determine the distribution for the number
D of buying customers.
SOLUTION
We use gN, gY, and gend.
gN = (1/3)*[0 1 1 1]; % Note zero coefficient for missing zero power
gY = [0.4 0.6]; % Generating function for the indicator function
gend
Do not forget zero coefficients for missing powers
Enter gen fn COEFFICIENTS for gN gN
Enter gen fn COEFFICIENTS for gY gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD)
0 0.2080
1.0000 0.4560
2.0000 0.2640
3.0000 0.0720
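Because Y is here an indicator with p = 0.6, D conditioned on N = n is binomial (n, 0.6), which gives an independent check on the gend output. A short Python computation (hypothetical, for verification only) builds the marginal directly from binomial probabilities:

```python
from math import comb

p = 0.6
N_vals = [1, 2, 3]                       # N equally likely 1, 2, or 3
# P(D = d) = sum over n of P(N = n) * C(n,d) p^d (1-p)^(n-d)
PD = [sum((1/3) * comb(n, d) * p**d * (1 - p)**(n - d)
          for n in N_vals if d <= n)
      for d in range(4)]
# PD matches the gend display: about 0.2080, 0.4560, 0.2640, 0.0720
```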
The procedure gend is limited to simple N and Y_{k}, with nonnegative
integer values. Sometimes, a random variable with unbounded range may be approximated by a simple
random variable. The solution in the following example utilizes such an approximation
procedure for the counting random variable N.
The number N of jobs brought to a service shop in a day is Poisson (8). The
individual shop hour charges Y_{k} have the common distribution Y = [0 1 2] with probabilities PY = [1/4 1/2 1/4].
Under the basic assumptions of our model, determine P(D ≤ 4).
SOLUTION
Since Poisson N is unbounded, we need to check for a sufficient number of terms
in a simple approximation. Then we proceed as in the simple case.
pa = cpoisson(8,10:5:30) % Check for sufficient number of terms
pa = 0.2834 0.0173 0.0003 0.0000 0.0000
p25 = cpoisson(8,25) % Check on choice of n = 25
p25 = 1.1722e-06
gN = ipoisson(8,0:25); % Approximate gN
gY = 0.25*[1 2 1];
gend
Do not forget zero coefficients for missing powers
Enter gen fn COEFFICIENTS for gN gN
Enter gen fn COEFFICIENTS for gY gY
Results are in N, PN, Y, PY, D, PD, P
May use jcalc or jcalcf on N, D, P
To view distribution for D, call for gD
disp(gD(D<=20,:)) % Calculated values to D = 50
0 0.0025 % Display for D <= 20
1.0000 0.0099
2.0000 0.0248
3.0000 0.0463
4.0000 0.0711
5.0000 0.0939
6.0000 0.1099
7.0000 0.1165
8.0000 0.1132
9.0000 0.1021
10.0000 0.0861
11.0000 0.0684
12.0000 0.0515
13.0000 0.0369
14.0000 0.0253
15.0000 0.0166
16.0000 0.0105
17.0000 0.0064
18.0000 0.0037
19.0000 0.0021
20.0000 0.0012
sum(PD) % Check on sufficiency of approximation
ans = 1.0000
P4 = (D<=4)*PD'
P4 = 0.1545 % Theoretical value (4 places) = 0.1545
ED = D*PD'
ED = 8.0000 % Theoretical = 8 (Example 4)
VD = (D.^2)*PD' - ED^2
VD = 11.9999 % Theoretical = 12 (Example 4)
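The truncation idea may be sketched in Python (a hypothetical analogue of ipoisson and the gend computation, not the toolbox code): keep the Poisson (8) probabilities for k = 0 through 25, confirm the neglected tail is negligible, and then convolve exactly as in the simple case.

```python
from math import exp, factorial

mu, n = 8, 25
# ipoisson analogue: Poisson(8) probabilities for k = 0..25
gN = [exp(-mu) * mu**k / factorial(k) for k in range(n + 1)]
assert 1 - sum(gN) < 2e-6        # neglected tail, cf. cpoisson(8,25)

gY = [0.25, 0.5, 0.25]           # Y = [0 1 2], PY = [1/4 1/2 1/4]
PD = [0.0] * (2 * n + 1)         # D ranges up to 2*25 = 50
yk = [1.0]                       # coefficients of gY^0
for k in range(n + 1):
    for j, c in enumerate(yk):
        PD[j] += gN[k] * c       # weight coefficients of gY^k by P(N = k)
    new = [0.0] * (len(yk) + 2)  # next power of gY by convolution
    for i, a in enumerate(gY):
        for j, b in enumerate(yk):
            new[i + j] += a * b
    yk = new

P4 = sum(PD[:5])                 # P(D <= 4), close to 0.1545
```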
The m-procedures mgd and jmgd
The next example shows a fundamental limitation of the gend procedure. The
values for the individual demands are not limited to integers, and there are considerable
gaps between the values. In this case, we need to implement the
moment generating function M_{D} rather than the generating function g_{D}.
In the generating function case, it is as easy to develop the joint distribution
for {N, D} as to develop the marginal distribution for D. For the moment
generating function, the joint distribution requires considerably more computation.
As a consequence, we find it convenient to have two m-procedures: mgd for the
marginal distribution and jmgd for the joint distribution.
Instead of the convolution procedure used in gend to determine the distribution for
the sums of the individual demands, the m-procedure mgd utilizes the
m-function mgsum to obtain these distributions. The distributions for the various
sums are concatenated into two row vectors, to which csort is applied to obtain the distribution
for the compound demand. The procedure requires as input the generating function for
N and the actual distribution, Y and PY, for the individual demands. For gN, it is
necessary to treat the coefficients as in gend. However, the actual values and probabilities
in the distribution for Y are put into a pair of row matrices. If Y is
integer valued, there are no zeros in the probability matrix for missing values.
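The mgsum/csort strategy can be sketched in Python (hypothetical analogues; the actual m-functions differ in detail): mgsum forms the distribution of a sum of two independent simple random variables by pairing all value combinations, and csort sorts the resulting values and consolidates duplicates, so nothing requires the values of Y to be integers.

```python
def csort(vals, probs):
    """Sort values and add probabilities of duplicate values (csort analogue)."""
    d = {}
    for v, p in zip(vals, probs):
        d[v] = d.get(v, 0.0) + p
    items = sorted(d.items())
    return [v for v, _ in items], [p for _, p in items]

def mgsum(V1, P1, V2, P2):
    """Distribution of the sum of two independent simple random variables."""
    vals = [a + b for a in V1 for b in V2]
    probs = [p * q for p in P1 for q in P2]
    return csort(vals, probs)

def mgd(gN, Y, PY):
    """Sketch of the mgd strategy: weight each n-fold sum by P(N = n)."""
    V, P = [0.0], [1.0]              # distribution of the sum of zero demands
    allV, allP = [], []
    for pn in gN:                    # pn = P(N = n), n = 0, 1, ...
        allV += V
        allP += [pn * p for p in P]
        V, P = mgsum(V, P, Y, PY)    # distribution of one more demand added
    return csort(allV, allP)         # concatenate, then sort and consolidate
```

Applied to the service-shop data below (gN = 0.2*ones(1,5), Y = [10 12.5 15], PY = [0.5 0.3 0.2]), this sketch reproduces the mD table, including the non-integer values 12.5, 22.5, and so on.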
A service shop has three standard charges for a certain class of warranty services
it performs: $10, $12.50, and $15. The number of jobs received in a normal work
day can be considered a random variable N which takes on values 0, 1, 2, 3, 4 with
equal probabilities 0.2. The job types for arrivals may be represented by an iid
class {Y_i : 1 ≤ i ≤ 4}, independent of the arrival process. The Y_{i}
take on values 10, 12.5, 15 with respective probabilities 0.5, 0.3, 0.2. Let C be
the total amount of services rendered in a day. Determine the distribution for C.
SOLUTION
gN = 0.2*[1 1 1 1 1]; % Enter data
Y = [10 12.5 15];
PY = 0.1*[5 3 2];
mgd % Call for procedure
Enter gen fn COEFFICIENTS for gN gN
Enter VALUES for Y Y
Enter PROBABILITIES for Y PY
Values are in row matrix D; probabilities are in PD.
To view the distribution, call for mD.
disp(mD) % Optional display of distribution
0 0.2000
10.0000 0.1000
12.5000 0.0600
15.0000 0.0400
20.0000 0.0500
22.5000 0.0600
25.0000 0.0580
27.5000 0.0240
30.0000 0.0330
32.5000 0.0450
35.0000 0.0570
37.5000 0.0414
40.0000 0.0353
42.5000 0.0372
45.0000 0.0486
47.5000 0.0468
50.0000 0.0352
52.5000 0.0187
55.0000 0.0075
57.5000 0.0019
60.0000 0.0003
We next recalculate Example 6, above, using mgd rather than gend.
In Example 6, we have
g_{N}(s) = (1/5)(1 + s + s^2 + s^3 + s^4)   g_{Y}(s) = 0.1(5s + 3s^2 + 2s^3)   (25)

This means that the distribution for Y is Y = [1 2 3] and PY = 0.1*[5 3 2].
We use the same expression for gN as in Example 6.
gN = 0.2*ones(1,5);
Y = 1:3;
PY = 0.1*[5 3 2];
mgd
Enter gen fn COEFFICIENTS for gN gN
Enter VALUES for Y Y
Enter PROBABILITIES for Y PY
Values are in row matrix D; probabilities are in PD.
To view the distribution, call for mD.
disp(mD)
0 0.2000
1.0000 0.1000
2.0000 0.1100
3.0000 0.1250
4.0000 0.1155
5.0000 0.1110
6.0000 0.0964
7.0000 0.0696
8.0000 0.0424
9.0000 0.0203
10.0000 0.0075
11.0000 0.0019
12.0000 0.0003
P3 = (D==3)*PD'
P3 = 0.1250
ED = D*PD'
ED = 3.4000
P_4_12 = ((D>=4)&(D<=12))*PD'
P_4_12 = 0.4650
P7 = (D>=7)*PD'
P7 = 0.1421
As expected, the results are the same as those obtained with gend.
If it is desired to obtain the joint distribution for {N, D}, we use a modification
of mgd called jmgd. The complications come in placing the probabilities in the
P matrix in the desired positions. This requires some calculations to determine
the appropriate size of the matrices used as well as a procedure to put each probability in the
position corresponding to its D value. Actual operation is quite similar to the
operation of mgd, and requires the same data format.
A principal use of the joint distribution is to demonstrate
features of the model, such as E[D|N = n] = nE[Y], etc. This, of course, is
utilized in obtaining the expressions for M_D(s) in terms of g_N(s) and M_Y(s).
This result guides the development of the computational procedures, but these do not
depend upon this result. However, it is usually helpful to demonstrate the validity of
the assumptions in typical examples.
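Such a demonstration can be sketched in Python (a hypothetical check, not the jmgd code, assuming the joint probabilities are laid out as in gend, with P(N = k, D = j) in row k, column j): conditioning on N = k amounts to normalizing row k, and the conditional mean should equal k·E[Y].

```python
def conv(a, b):
    """Polynomial-coefficient convolution, like MATLAB's conv."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Appliance-store example data
gN = [0, 1/3, 1/3, 1/3]
gY = [0.5, 0.4, 0.1]
EY = sum(j * p for j, p in enumerate(gY))        # E[Y] = 0.6

n, m = len(gN) - 1, len(gY) - 1
P = [[0.0] * (n * m + 1) for _ in range(n + 1)]  # joint probabilities
yk = [1.0]
for k in range(n + 1):
    for j, c in enumerate(yk):
        P[k][j] = gN[k] * c
    yk = conv(gY, yk)

# Conditional means E[D | N = k]: each should equal k * EY
EDn = [sum(j * p for j, p in enumerate(P[k])) / sum(P[k])
       for k in range(1, n + 1)]
```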
Remark. In general, if the use of gend is appropriate, it is faster and more efficient
than mgd (or jmgd), and it will handle somewhat larger problems. But both m-procedures work
quite well for problems of moderate size, and are convenient tools for solving various “compound
demand” type problems.