Algorithms for analyzing product-form queueing
networks are based on one of two basic approaches:
convolution and mean value analysis.
As its name implies, mean value analysis produces means of
distributions rather than the distributions themselves.
Convolution, on the other hand, allows us to compute the
steady-state occupancy distribution function in a relatively
efficient way. The alternative, solving for the distribution
function by treating a queueing network as a continuous-time
Markov chain, is impractical for all but the simplest cases.
For any product-form queueing network with $M$ queues, the
steady-state probability of being in a state with $n_1$ jobs in
queue 1, $n_2$ jobs in queue 2, …, $n_M$ jobs in queue $M$ is
$$\pi(n_1, n_2, \ldots, n_M) = \frac{\pi_1(n_1)\,\pi_2(n_2)\cdots\pi_M(n_M)}{K}$$
where $\pi_m(n_m)$ is the probability of $n_m$ jobs in queue $m$
if the queue were in isolation with a Poisson input process having
the same rate as the throughput for queue $m$, and $K$ is a
normalization constant. In the case of an open network, $K$ is
always 1. For a closed network, $K$ is determined by the
constraint that the state probabilities sum to 1; i.e.,
$$K = \sum_{\substack{n'_m \ge 0,\ 1 \le m \le M \\ \sum_{m=1}^{M} n'_m = N}} \pi_1(n'_1)\,\pi_2(n'_2)\cdots\pi_M(n'_M)$$
with $N$ being the total number of jobs in the system.
When all of the queues are single-server queues, we can rewrite
the joint state probability as
$$\pi(n_1, n_2, \ldots, n_M) = \frac{\rho_1^{n_1}\,\rho_2^{n_2}\cdots\rho_M^{n_M}}{C} \tag{1}$$
where the normalization constant $C$ is
$$C = \sum_{\substack{n'_m \ge 0,\ 1 \le m \le M \\ \sum_{m=1}^{M} n'_m = N}} \rho_1^{n'_1}\,\rho_2^{n'_2}\cdots\rho_M^{n'_M}$$
The queue utilization $\rho_m = \lambda_m/\mu_m$ in a closed
network cannot be determined directly, since we cannot calculate
the queue throughput $\lambda_m$ directly.
Let $P$ be the routing matrix for the Markov chain; i.e.,
$p_{i,j}$ is the probability that a job leaving queue $i$ goes
directly to queue $j$. Then $\lambda P = \lambda$, where
$\lambda = (\lambda_1\ \lambda_2\ \ldots\ \lambda_M)$ is the
vector of queue throughputs. We can't get the queue throughputs
directly from this matrix equation because, as with the
single-step transition probability matrix for a discrete-time
Markov chain, each row of $P$ sums to one; hence $P - I$ is
singular and $\lambda$ is determined only up to a multiplicative
constant. In fact, $P$ is the single-step transition probability
matrix for the embedded Markov chain describing the queue occupied
by a single job, and for that reason we often refer to $P$ as the
routing chain.
However, we don't actually need to know $\lambda_m$ to determine
the normalization constant. We can determine the relative
throughputs $r_m = \lambda_m/\alpha$, $1 \le m \le M$, for some
unknown nonzero constant $\alpha$, since $\lambda P = \lambda$
implies that $rP = r$. Define the relative utilization
$$u_m = \frac{r_m}{\mu_m} = \frac{\lambda_m}{\alpha\,\mu_m} = \frac{\rho_m}{\alpha}.$$
Then
$$\rho_1^{n_1}\rho_2^{n_2}\cdots\rho_M^{n_M} = (\alpha u_1)^{n_1}(\alpha u_2)^{n_2}\cdots(\alpha u_M)^{n_M} = \alpha^{\,n_1 + n_2 + \cdots + n_M}\,u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M} = \alpha^N\,u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M} \tag{2}$$
$$C = \alpha^N \sum_{\text{all states}} u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M} = \alpha^N\,G(N) \tag{3}$$
where $G(N)$ is the normalization constant $C$ for $N$ jobs in the
system, scaled by $\alpha^N$; i.e., $G(N) = C/\alpha^N$. Since
$\alpha$ is an unknown constant, so is $\alpha^N$. However, when
we substitute these expressions into the equation for the state
probabilities, we get
$$\pi(n_1, n_2, \ldots, n_M) = \frac{\alpha^N\,u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M}}{\alpha^N \sum_{\text{all states}} u_1^{n'_1}u_2^{n'_2}\cdots u_M^{n'_M}} = \frac{u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M}}{\sum_{\text{all states}} u_1^{n'_1}u_2^{n'_2}\cdots u_M^{n'_M}} = \frac{u_1^{n_1}u_2^{n_2}\cdots u_M^{n_M}}{G(N)}$$
This says that, in theory, $G(N)$ and hence the state
probabilities can be computed from the relative utilizations,
without having to know the scaling factor $\alpha$.
In practice, however, this may not be such a good idea because
of the number of states in the summation for computing the
scaled normalization constant $G(N)$. How many states are there?
Imagine $N$ jobs lined up in a row. Now insert $M - 1$ markers
into the row of jobs. The markers partition the $N$ jobs into
$M$ groups, some of which may be empty. Each of the $M$ groups
corresponds to the jobs that are in one of the $M$ queues. For
example, suppose we have seven jobs, each represented by an "x",
and three markers, each represented by "|". The following four
strings represent four ways of partitioning seven jobs among four
queues. The number of jobs in each queue is listed to the right
of each string.
Table 1
ways of partitioning jobs    number of jobs in each queue
x | x | | x x x x x          $n_1 = 1$, $n_2 = 1$, $n_3 = 0$, $n_4 = 5$
| | | x x x x x x x          $n_1 = 0$, $n_2 = 0$, $n_3 = 0$, $n_4 = 7$
x x | | | x x x x x          $n_1 = 2$, $n_2 = 0$, $n_3 = 0$, $n_4 = 5$
x x | x x | x x | x          $n_1 = 2$, $n_2 = 2$, $n_3 = 2$, $n_4 = 1$
The problem of counting the number of possible states in this
example is equivalent to the problem of taking a string of ten
symbols and deciding which seven of them are jobs (or,
equivalently, which three of them are markers). In general, the
number of states in a queueing network with $N$ jobs and $M$
queues is the number of ways of taking a string of $N + M - 1$
symbols and deciding which $N$ of them represent jobs or,
equivalently, which $M - 1$ of them represent markers. Hence, the
number of states is $\binom{N+M-1}{N}$, the number of combinations
of $N + M - 1$ objects taken $N$ at a time. For example, the
number of states for seven jobs and four queues is
$\binom{10}{7} = 120$. A queueing network with 20 jobs and 5
queues has 10,626 states.
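The state counts above can be checked with a short computation; `math.comb` gives the binomial coefficient directly (the function name below is my own):

```python
from math import comb

def num_states(n_jobs: int, n_queues: int) -> int:
    """Number of states of a closed network: C(N + M - 1, N)."""
    return comb(n_jobs + n_queues - 1, n_jobs)

print(num_states(7, 4))   # seven jobs, four queues
print(num_states(20, 5))  # 20 jobs, 5 queues
```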
The Convolution Algorithm allows us to compute $G(N)$ more
efficiently, without enumerating all possible states of the
queueing network. Define
$$g(n, m) = \sum_{\substack{n_i \ge 0,\ 1 \le i \le m \\ \sum_{i=1}^{m} n_i = n}} u_1^{n_1}u_2^{n_2}\cdots u_m^{n_m}$$
$g(n, m)$ is the scaled normalization constant for $n$ jobs and
$m$ queues. Each term in the summation belongs to one of two
groups: one for terms in which $n_m = 0$ and one for terms with
$n_m > 0$. For example,
$$\begin{aligned}
g(3,3) &= u_1^3 + u_2^3 + u_3^3 + u_1^2 u_2 + u_1^2 u_3 + u_1 u_2^2 + u_2^2 u_3 + u_1 u_3^2 + u_2 u_3^2 + u_1 u_2 u_3 \\
&= u_1^3 + u_2^3 + u_1^2 u_2 + u_1 u_2^2 + u_3^3 + u_1^2 u_3 + u_2^2 u_3 + u_1 u_3^2 + u_2 u_3^2 + u_1 u_2 u_3 \\
&= u_1^3 + u_2^3 + u_1^2 u_2 + u_1 u_2^2 + u_3\left(u_3^2 + u_1^2 + u_2^2 + u_1 u_3 + u_2 u_3 + u_1 u_2\right)
\end{aligned} \tag{4}$$
The first group contains all those product terms that have no
$u_m$ factor; it corresponds to dividing all $n$ jobs among the
first $m - 1$ queues. In the second group, the exponents of each
product term sum to $n - 1$, since one factor of $u_m$ has been
factored out of each; this summation corresponds to dividing the
remaining $n - 1$ jobs among all $m$ queues. We can summarize this
as
$$g(3,3) = g(3,2) + u_3\,g(2,3)$$
We can perform a similar grouping of terms for any values of $n$
and $m$ and obtain a similar result; that is,
$$g(n, m) = g(n, m-1) + u_m\,g(n-1, m)$$
We can picture this recurrence as filling in a table in which the
entry for $(n, m)$ depends on the entry to its left, $g(n, m-1)$,
and the entry above it, $g(n-1, m)$.
To solve the recurrence relation, we need $n + m - 1$ initial
conditions. If $n = 0$, the queueing network has only one state
for any value of $m$:
$$\left(\pi(0, 0, \ldots, 0) = 1 = \frac{u_1^0\,u_2^0\cdots u_m^0}{g(0, m)} = \frac{1}{g(0, m)}\right) \;\Rightarrow\; g(0, m) = 1$$
For $m = 1$, again the queueing network has only one state,
regardless of the value of $n$:
$$\left(\pi(n) = \frac{u_1^n}{g(n, 1)} = 1\right) \;\Rightarrow\; g(n, 1) = u_1^n$$
With these initial conditions, we can apply the recurrence
relation to obtain $G(N) = g(N, M)$. This is conveniently
represented as a table. For example, the following table
illustrates the computation of $g(3,3)$.
Table 2
n\m   1                   2                                 3
0     $g(0,1) = 1$        $g(0,2) = 1$                      $g(0,3) = 1$
1     $g(1,1) = u_1$      $g(1,2) = g(1,1) + u_2\,g(0,2)$   $g(1,3) = g(1,2) + u_3\,g(0,3)$
2     $g(2,1) = u_1^2$    $g(2,2) = g(2,1) + u_2\,g(1,2)$   $g(2,3) = g(2,2) + u_3\,g(1,3)$
3     $g(3,1) = u_1^3$    $g(3,2) = g(3,1) + u_2\,g(2,2)$   $g(3,3) = g(3,2) + u_3\,g(2,3)$
We can simplify the computation somewhat by choosing $u_1 = 1$
and scaling the other relative utilizations appropriately.
This algorithm for computing $G(N)$ has a time complexity of
$O(NM)$, a vast improvement over the brute-force method of summing
product terms. The space complexity as shown is also $O(NM)$, but
this can be reduced to $O(N)$ or $O(M)$: we can fill in the table
one column (or row) at a time, and once we have used a column
(row) to compute the next one, we can throw it away.
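The column-at-a-time refinement can be sketched in Python as follows (a minimal sketch; the function and variable names are my own). Only one column of the $g(n, m)$ table is kept, and the in-place update implements the recurrence $g(n, m) = g(n, m-1) + u_m\,g(n-1, m)$:

```python
def convolution_g(us, n_jobs):
    """Scaled normalization constants G(0), ..., G(N) for a closed
    network of single-server queues with relative utilizations `us`.

    After processing queue m, g[n] holds g(n, m); the update
    g[n] += u * g[n-1] is the recurrence g(n,m) = g(n,m-1) + u_m g(n-1,m).
    """
    # Column for m = 0: g(0,0) = 1 and g(n,0) = 0 for n > 0,
    # so the first pass yields g(n,1) = u_1**n as required.
    g = [1.0] + [0.0] * n_jobs
    for u in us:
        for n in range(1, n_jobs + 1):
            g[n] += u * g[n - 1]
    return g  # g[n] == g(n, M) == G(n)

# The central-server example from the text: u = (1, 0.6, 1.5), N = 5.
G = convolution_g([1.0, 0.6, 1.5], 5)
print(G[5])  # ≈ 33.04651
```

Note that the returned list contains all of $G(0), \ldots, G(N)$, which is exactly what the expected-queue-length formulas later in the text require.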
Let's apply the Convolution Algorithm to a three-queue,
central-server model with five jobs. The routing matrix is
$$P = \begin{pmatrix} 0.1 & 0.3 & 0.6 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$
The per-visit service demands at the queues are
$$\frac{1}{\mu_1} = 50, \qquad \frac{1}{\mu_2} = 100, \qquad \frac{1}{\mu_3} = 125$$
From the equation $rP = r$ for computing the relative throughputs,
we get
$$r_2 = 0.3\,r_1, \qquad r_3 = 0.6\,r_1$$
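As a quick sanity check (variable names are my own), we can verify that the relative throughput vector $r = (1,\ 0.3,\ 0.6)$, taking $r_1 = 1$, satisfies $rP = r$ for this routing matrix:

```python
P = [[0.1, 0.3, 0.6],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]
r = [1.0, 0.3, 0.6]  # relative throughputs, with r_1 = 1

# Row vector times matrix: (rP)_j = sum_i r_i * p_ij
rP = [sum(r[i] * P[i][j] for i in range(3)) for j in range(3)]
print(rP)  # matches r up to floating-point rounding
```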
Using the relative throughputs to compute relative utilizations,
$$u_1 = r_1\cdot\frac{1}{\mu_1} = 50\,r_1, \qquad u_2 = r_2\cdot\frac{1}{\mu_2} = 0.3\,r_1\cdot 100 = 30\,r_1, \qquad u_3 = r_3\cdot\frac{1}{\mu_3} = 0.6\,r_1\cdot 125 = 75\,r_1$$
Setting $u_1 = 1$ means that $r_1 = \frac{1}{50}$, and hence
$u_2 = 0.6$ and $u_3 = 1.5$. Now apply the Convolution Algorithm
using these relative utilizations.
Table 3 ($u_1 = 1$, $u_2 = 0.6$, $u_3 = 1.5$)
n\m   1     2         3
0     1     1         1
1     1     1.6       3.1
2     1     1.96      6.61
3     1     2.176     12.091
4     1     2.3056    20.4421
5     1     2.38336   33.04651
$g(5,3) = G(5) = 33.04651$. Armed with this, we can compute the
state probabilities:
$$\pi(n_1, n_2, n_3) = \frac{u_1^{n_1}u_2^{n_2}u_3^{n_3}}{G(5)} = \frac{1^{n_1}\,0.6^{n_2}\,1.5^{\,5 - n_1 - n_2}}{33.04651} = \frac{0.6^{n_2}\,1.5^{\,5 - n_1 - n_2}}{33.04651}$$
For example,
$$\pi(2, 2, 1) = \frac{0.6^2 \cdot 1.5}{33.04651} \approx 0.01634$$
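This probability, and the fact that the state probabilities sum to 1, can be checked numerically with a small sketch (function name is my own) using the $G(5)$ value from the table:

```python
from itertools import product

G5 = 33.04651
u1, u2, u3 = 1.0, 0.6, 1.5

def pi(n1, n2, n3):
    """Steady-state probability of state (n1, n2, n3) for the 5-job example."""
    return (u1**n1) * (u2**n2) * (u3**n3) / G5

print(round(pi(2, 2, 1), 5))  # ≈ 0.01634

# The probabilities over all states with n1 + n2 + n3 = 5 sum to 1.
states = [s for s in product(range(6), repeat=3) if sum(s) == 5]
print(sum(pi(*s) for s in states))  # ≈ 1.0
```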
Usually, what we want to know is not the steady-state probability
distribution itself, but properties of the distribution (or
quantities that can be derived from it), such as average response
time and average queue length. If we want to know the average
length of queue $m$, we could calculate it from the steady-state
probability distribution:
$$E[\text{length of queue } m] = \sum_{\substack{n_i \ge 0,\ 1 \le i \le M \\ \sum_{i=1}^{M} n_i = N}} n_m\,\pi(n_1, n_2, \ldots, n_m, \ldots, n_M)$$
This would require us to compute the probability of every state,
which gets us back to an $O\!\left(\binom{N+M-1}{N}\right)$
algorithm, and this is what we were trying to avoid with the
Convolution Algorithm.
Fortunately, there is a better way to compute expected queue
lengths that makes use of the Convolution Algorithm but avoids
the combinatorial explosion in the number of states. First,
consider the probability that queue $m$ has at least $n$ jobs in
it. Let $N_m$ be the random variable giving the number of jobs in
queue $m$ in steady state.
$$\begin{aligned}
\Pr[N_m \ge n] &= \sum_{\substack{n_m \ge n \\ \text{all states}}} \pi(n_1, n_2, \ldots, n_m, \ldots, n_M) \\
&= \sum_{(n_m \ge n) \wedge (\sum_i n_i = N)} \frac{u_1^{n_1}u_2^{n_2}\cdots u_m^{n_m}\cdots u_M^{n_M}}{G(N)} \\
&= u_m^n \sum_{(n_m \ge n) \wedge (\sum_i n_i = N)} \frac{u_1^{n_1}u_2^{n_2}\cdots u_m^{n_m - n}\cdots u_M^{n_M}}{G(N)} \\
&= u_m^n \sum_{(n_m \ge 0) \wedge (\sum_i n_i = N - n)} \frac{u_1^{n_1}u_2^{n_2}\cdots u_m^{n_m}\cdots u_M^{n_M}}{G(N)} \\
&= \frac{u_m^n\,G(N - n)}{G(N)}
\end{aligned} \tag{5}$$
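Equation (5) can be verified by brute force on the running example (a sketch with names of my own; the enumeration is exactly what the identity lets us avoid in practice):

```python
from itertools import product

us = [1.0, 0.6, 1.5]  # relative utilizations from the example
N = 5

def weight(state):
    return us[0]**state[0] * us[1]**state[1] * us[2]**state[2]

def G(n):
    """Scaled normalization constant by direct enumeration of states."""
    return sum(weight(s) for s in product(range(n + 1), repeat=3)
               if sum(s) == n)

m, n = 2, 2  # queue 3 (index 2), at least 2 jobs
lhs = sum(weight(s) for s in product(range(N + 1), repeat=3)
          if sum(s) == N and s[m] >= n) / G(N)
rhs = us[m]**n * G(N - n) / G(N)
print(lhs, rhs)  # the two sides agree
```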
For a discrete random variable $N_m$,
$$E[N_m] = \sum_{n=1}^{\infty} n\,\Pr[N_m = n] = \sum_{n=1}^{\infty}\sum_{k=n}^{\infty}\Pr[N_m = k] = \sum_{n=1}^{\infty}\Pr[N_m \ge n]$$
where
$$\sum_{n=1}^{\infty} n\,\Pr[N_m = n] = 1\cdot\Pr[N_m = 1] + 2\cdot\Pr[N_m = 2] + 3\cdot\Pr[N_m = 3] + \ldots$$
and
$$\sum_{n=1}^{\infty}\sum_{k=n}^{\infty}\Pr[N_m = k] = \Pr[N_m = 1] + \Pr[N_m = 2] + \Pr[N_m = 2] + \Pr[N_m = 3] + \Pr[N_m = 3] + \Pr[N_m = 3] + \ldots$$
Using the earlier result for $\Pr[N_m \ge n]$ gives us
$$E[N_m] = \sum_{n=1}^{N} \frac{u_m^n\,G(N - n)}{G(N)}$$
This is a particularly efficient method for calculating $E[N_m]$
because the Convolution Algorithm requires us to compute
$G(N - n)$, $1 \le n \le N$, as part of the process of computing
$G(N)$.
Before returning to the example, consider the queue throughputs.
We started down this somewhat tortuous road because in a closed
system it is not possible to compute the throughputs for the
queues directly from the routing matrix. This in turn prevented
us from computing the queue utilizations as we did for open
systems. For a single-server queue $m$, the utilization of the
queue is the probability that the queue is nonempty:
$$\rho_m = \Pr[N_m \ge 1] = \frac{u_m\,G(N-1)}{G(N)} = \alpha\,u_m$$
Hence,
$$\alpha = \frac{G(N-1)}{G(N)}$$
Since $\rho_m = \lambda_m/\mu_m = \alpha\,u_m$,
$$\lambda_m = \mu_m\,u_m\,\frac{G(N-1)}{G(N)}$$
Now let's go back to the example. The scaled normalization
constants $G(N - n)$, $1 \le n \le N$, are just the values in the
last column of the Convolution Algorithm table, excluding the
last value $G(N)$. Hence,
$$E[N_1] = \sum_{n=1}^{5} 1^n\,\frac{G(5-n)}{G(5)} = \frac{20.4421 + 12.091 + 6.61 + 3.1 + 1}{33.04651} \approx 1.3086 \tag{6}$$
$$E[N_2] = \sum_{n=1}^{5} 0.6^n\,\frac{G(5-n)}{G(5)} = \frac{0.6 \cdot 20.4421 + 0.6^2 \cdot 12.091 + 0.6^3 \cdot 6.61 + 0.6^4 \cdot 3.1 + 0.6^5 \cdot 1}{33.04651} \approx 0.5606 \tag{7}$$
$$E[N_3] = \sum_{n=1}^{5} 1.5^n\,\frac{G(5-n)}{G(5)} = \frac{1.5 \cdot 20.4421 + 1.5^2 \cdot 12.091 + 1.5^3 \cdot 6.61 + 1.5^4 \cdot 3.1 + 1.5^5 \cdot 1}{33.04651} \approx 3.1309 \tag{8}$$
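These three sums use only the last column of Table 3. A quick numerical check (function name is my own); note that the three expected lengths must sum to the five jobs in the system:

```python
# Last column of the convolution table: G(0), ..., G(5).
G = [1.0, 3.1, 6.61, 12.091, 20.4421, 33.04651]
N = 5

def expected_len(u):
    """E[N_m] = sum_{n=1..N} u**n * G(N-n) / G(N)."""
    return sum(u**n * G[N - n] for n in range(1, N + 1)) / G[N]

for u in (1.0, 0.6, 1.5):
    print(round(expected_len(u), 4))
```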
$$\lambda_1 = \frac{1}{50}\cdot 1\cdot\frac{20.4421}{33.04651} \approx 0.01237$$
$$\lambda_2 = \frac{1}{100}\cdot 0.6\cdot\frac{20.4421}{33.04651} \approx 0.00371$$
$$\lambda_3 = \frac{1}{125}\cdot 1.5\cdot\frac{20.4421}{33.04651} \approx 0.00742$$
$$\rho_1 = \frac{20.4421}{33.04651}\cdot 1 \approx 0.6186$$
$$\rho_2 = \frac{20.4421}{33.04651}\cdot 0.6 \approx 0.3712$$
$$\rho_3 = \frac{20.4421}{33.04651}\cdot 1.5 \approx 0.9279$$
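The final step can be summarized in a few lines of code (a sketch; variable names are mine): with $\alpha = G(4)/G(5)$, the utilizations are $\rho_m = \alpha\,u_m$ and the throughputs are $\lambda_m = \mu_m\,u_m\,\alpha$:

```python
G4, G5 = 20.4421, 33.04651
alpha = G4 / G5  # = G(N-1)/G(N)

us = [1.0, 0.6, 1.5]            # relative utilizations
demands = [50.0, 100.0, 125.0]  # per-visit service demands 1/mu_m

rhos = [alpha * u for u in us]                          # rho_m = alpha * u_m
lambdas = [u * alpha / d for u, d in zip(us, demands)]  # lambda_m = mu_m * u_m * alpha

print([round(x, 4) for x in rhos])     # utilizations
print([round(x, 5) for x in lambdas])  # throughputs
```

Note how heavily loaded queue 3 is ($\rho_3 \approx 0.93$); it is the bottleneck of this central-server model.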