
Inside collection: Applied Probability, by Paul E Pfeiffer

# Elements of Markov Sequences

Module by: Paul E Pfeiffer

Summary: We show that the fundamental Markov property for a sequence is an expression of conditional independence of “past” and “future,” given the “present.” The essential Chapman-Kolmogorov equation is seen as a consequence. In the usual time-homogeneous case with finite state space, the Chapman-Kolmogorov equation leads to the algebraic formulation that is widely studied at a variety of levels of mathematical sophistication. We sketch some of the more common results. This should provide a probabilistic perspective for a more complete study of the algebraic analysis. We model a system characterized by a sequence of states taken on at discrete instants, which we call transition times. At each transition time, there is either a change to a new state or a renewal of the state immediately before the transition. Each state is maintained unchanged during the period or stage between transitions. At any transition time, the move to the next state is characterized by a conditional transition probability distribution. We suppose the system is memoryless in the sense that the transition probabilities are dependent upon the current state (and perhaps the period number), but not upon the manner in which that state was reached. The past influences the future only through the present. This is the Markov property.

## Elements of Markov Sequences

Markov sequences (Markov chains) are often studied at a very elementary level, utilizing algebraic tools such as matrix analysis. In this section, we show that the fundamental Markov property is an expression of conditional independence of “past” and “future,” given the “present.” The essential Chapman-Kolmogorov equation is seen as a consequence of this conditional independence. In the usual time-homogeneous case with finite state space, the Chapman-Kolmogorov equation leads to the algebraic formulation that is widely studied at a variety of levels of mathematical sophistication. With the background laid, we only sketch some of the more common results. This should provide a probabilistic perspective for a more complete study of the algebraic analysis.

Markov sequences

We wish to model a system characterized by a sequence of states taken on at discrete instants which we call transition times. At each transition time, there is either a change to a new state or a renewal of the state immediately before the transition. Each state is maintained unchanged during the period or stage between transitions. At any transition time, the move to the next state is characterized by a conditional transition probability distribution. We suppose that the system is memoryless, in the sense that the transition probabilities are dependent upon the current state (and perhaps the period number), but not upon the manner in which that state was reached. The past influences the future only through the present. This is the Markov property, which we model in terms of conditional independence.

For period i, the state is represented by a value of a random variable $X_i$, whose value is one of the members of a set E, known as the state space. We consider only a finite state space and identify the states by integers from 1 to M. We thus have a sequence

$$X_N = \{X_n : n \in N\}, \quad \text{where } N = \{0, 1, 2, \ldots\} \tag{1}$$

We view an observation of the system as a composite trial. Each ω yields a sequence of states $\{X_0(\omega), X_1(\omega), \ldots\}$ which is referred to as a realization of the sequence, or a trajectory. We suppose the system is evolving in time. At discrete instants of time $t_1, t_2, \ldots$ the system makes a transition from one state to the succeeding one (which may be the same).

• Initial period: $n = 0$, $t \in [0, t_1)$; the state is $X_0(\omega)$; at $t_1$ the transition is to $X_1(\omega)$
• Period one: $n = 1$, $t \in [t_1, t_2)$; the state is $X_1(\omega)$; at $t_2$ the transition is to $X_2(\omega)$
• .....
• Period k: $n = k$, $t \in [t_k, t_{k+1})$; the state is $X_k(\omega)$; at $t_{k+1}$ the move is to $X_{k+1}(\omega)$
• .....

The parameter n indicates the period $t \in [t_n, t_{n+1})$. If the periods are of unit length, then $t_n = n$. At $t_{n+1}$, there is a transition from the state $X_n(\omega)$ to the state $X_{n+1}(\omega)$ for the next period. To simplify writing, we adopt the following convention:

$$U_n = (X_0, X_1, \ldots, X_n), \qquad U_{m,n} = (X_m, \ldots, X_n), \quad \text{and} \quad U^n = (X_n, X_{n+1}, \ldots) \tag{2}$$

The random vector $U_n$ is called the past at n of the sequence $X_N$ and $U^n$ is the future at n. In order to capture the notion that the system is without memory, so that the future is affected by the present, but not by how the present is reached, we utilize the notion of conditional independence, given a random vector, in the following

Definition. The sequence $X_N$ is Markov iff

$$\text{(M)} \quad \{X_{n+1}, U_n\} \text{ ci } | X_n \quad \text{for all } n \ge 0 \tag{3}$$

Several conditions equivalent to the Markov condition (M) may be obtained with the aid of properties of conditional independence. We note first that (M) is equivalent to

$$P(X_{n+1} = k \mid X_n = j,\, U_{n-1} \in Q) = P(X_{n+1} = k \mid X_n = j) \quad \text{for each } n \ge 0,\ j, k \in E, \text{ and } Q \subset E^{n-1} \tag{4}$$

The state in the next period is conditioned by the past only through the present state, and not by the manner in which the present state is reached. The statistics of the process are determined by the initial state probabilities and the transition probabilities

$$P(X_{n+1} = k \mid X_n = j) \qquad j, k \in E,\ n \ge 0 \tag{5}$$

The following examples exhibit a pattern which implies the Markov condition and which can be exploited to obtain the transition probabilities.

### Example 1: One-dimensional random walk

An object starts at a given initial position. At discrete instants $t_1, t_2, \ldots$ the object moves a random distance along a line. The various moves are independent of each other. Let

• $Y_0$ be the initial position
• $Y_k$ be the amount the object moves at time $t = t_k$, with $\{Y_k : 1 \le k\}$ iid
• $X_n = \sum_{k=0}^{n} Y_k$ be the position after n moves.

We note that $X_{n+1} = g(X_n, Y_{n+1})$. Since the position after the transition at $t_{n+1}$ is affected by the past only through the value of the position $X_n$, and not by the sequence of positions which led to this position, it is reasonable to suppose that the process $X_N$ is Markov. We verify this below.
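The random-walk update can be sketched in a few lines of Python (the text's software is MATLAB; this sketch and its names, such as `walk`, are our own illustration, not part of the text's m-procedures):

```python
import random

def walk(y0, step_sampler, n):
    """Simulate the random walk X_n = Y_0 + Y_1 + ... + Y_n.

    y0           -- initial position Y_0
    step_sampler -- function returning one iid increment Y_k (hypothetical)
    n            -- number of moves
    """
    x = y0
    trajectory = [x]
    for _ in range(n):
        # Markov update: the next position depends on the past
        # only through the current position X_n.
        x = x + step_sampler()
        trajectory.append(x)
    return trajectory

random.seed(1)
# Illustration: unit steps up or down with equal probability
traj = walk(0, lambda: random.choice([-1, 1]), 10)
```

The trajectory depends on the past only through the running sum, which is exactly the update $X_{n+1} = g(X_n, Y_{n+1})$.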

### Example 2: A class of branching processes

Each member of a population is able to reproduce. For simplicity, we suppose that at certain discrete instants the entire next generation is produced. Some mechanism limits each generation to a maximum population of M members. Let

• $Z_{in}$ be the number propagated by the ith member of the nth generation.
• $Z_{in} = 0$ indicates death and no offspring; $Z_{in} = k$ indicates a net of k propagated by the ith member (either death and k offspring, or survival and $k - 1$ offspring).

The population in generation n+1n+1 is given by

$$X_{n+1} = \min\Bigl\{M,\ \sum_{i=1}^{X_n} Z_{in}\Bigr\} \tag{6}$$

We suppose the class $\{Z_{in} : 1 \le i \le M,\ 0 \le n\}$ is iid. Let $Y_{n+1} = (Z_{1n}, Z_{2n}, \ldots, Z_{Mn})$. Then $\{Y_{n+1}, U_n\}$ is independent. It seems reasonable to suppose the sequence $X_N$ is Markov.

### Example 3: An inventory problem

A certain item is stocked according to an $(m, M)$ inventory policy, as follows:

• If stock at the end of a period is less than m, order up to M.
• If stock at the end of a period is m or greater, do not order.

Let $X_0$ be the initial stock, $X_n$ be the stock at the end of the nth period (before restocking), and $D_n$ be the demand during the nth period. Then for $n \ge 0$,

$$X_{n+1} = \begin{cases} \max\{M - D_{n+1},\ 0\} & \text{if } 0 \le X_n < m \\ \max\{X_n - D_{n+1},\ 0\} & \text{if } m \le X_n \end{cases} = g(X_n, D_{n+1}) \tag{7}$$

If we suppose $\{D_n : 1 \le n\}$ is independent, then $\{D_{n+1}, U_n\}$ is independent for each $n \ge 0$, and the Markov condition seems to be indicated.
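The two-branch rule for g translates directly into code. The following Python sketch is our own illustration (the function name `g` and the defaults m = 1, M = 3 are chosen to match the numerical example later in the module):

```python
def g(j, d, m=1, M=3):
    """One-step inventory update for an (m, M) policy (Eq. (7)).

    j -- stock at the end of the current period (before restocking)
    d -- demand during the next period
    If j < m, we restock to M before meeting demand; otherwise we do not.
    Unmet demand is lost, so stock never goes negative.
    """
    if 0 <= j < m:
        return max(M - d, 0)
    return max(j - d, 0)

# With m = 1, M = 3: from stock 0 we restock to 3, so demand 2 leaves 1 unit.
```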

Remark. In this case, the actual transition takes place throughout the period. However, for purposes of analysis, we examine the state only at the end of the period (before restocking). Thus, the transitions are dispersed in time, but the observations are at discrete instants.

### Example 4: Remaining lifetime

A piece of equipment has a lifetime which is an integral number of units of time. When a unit fails, it is replaced immediately with another unit of the same type. Suppose

• $X_n$ is the remaining lifetime of the unit in service at time n
• $Y_{n+1}$ is the lifetime of the unit installed at time n, with $\{Y_n : 1 \le n\}$ iid

Then
$$X_{n+1} = \begin{cases} X_n - 1 & \text{if } X_n \ge 1 \\ Y_{n+1} - 1 & \text{if } X_n = 0 \end{cases} = g(X_n, Y_{n+1}) \tag{8}$$
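The remaining-lifetime update of Eq. (8) is a one-line case split; here is a hedged Python sketch (the name `next_lifetime` is ours, not from the text):

```python
def next_lifetime(x, y_next):
    """Remaining-lifetime update of Eq. (8).

    x      -- remaining lifetime of the unit now in service
    y_next -- lifetime of the replacement unit (used only when x == 0)
    A working unit ages by one time unit; a failed unit (x == 0) is
    replaced, and the new unit then ages by one unit as well.
    """
    if x >= 1:
        return x - 1
    return y_next - 1
```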

Remark. Each of these four examples exhibits the pattern

1. $\{X_0, Y_n : 1 \le n\}$ is independent
2. $X_{n+1} = g_{n+1}(X_n, Y_{n+1})$, $n \ge 0$

We now verify the Markov condition and obtain a method for determining the transition probabilities.

A pattern yielding Markov sequences

Suppose $\{Y_n : 0 \le n\}$ is independent (call these the driving random variables). Set

$$X_0 = g_0(Y_0) \quad \text{and} \quad X_{n+1} = g_{n+1}(X_n, Y_{n+1}) \quad n \ge 0 \tag{9}$$

Then

1. $X_N$ is Markov
2. $P(X_{n+1} \in Q \mid X_n = u) = P[g_{n+1}(u, Y_{n+1}) \in Q]$ for all n, u, and any Borel set Q.

VERIFICATION

1. It is apparent that if $Y_0, Y_1, \ldots, Y_n$ are known, then $U_n$ is known. Thus $U_n = h_n(Y_0, Y_1, \ldots, Y_n)$, which ensures each pair $\{Y_{n+1}, U_n\}$ is independent. By property (CI13), with $X = Y_{n+1}$, $Y = X_n$, and $Z = U_{n-1}$, we have
$$\{Y_{n+1}, U_{n-1}\} \text{ ci } | X_n \tag{10}$$
Since $X_{n+1} = g_{n+1}(Y_{n+1}, X_n)$ and $U_n = h_n(X_n, U_{n-1})$, property (CI9) ensures
$$\{X_{n+1}, U_n\} \text{ ci } | X_n \quad n \ge 0 \tag{11}$$
which is the Markov property.
2. $P(X_{n+1} \in Q \mid X_n = u) = E\{I_Q[g_{n+1}(X_n, Y_{n+1})] \mid X_n = u\}$ a.s. $= E\{I_Q[g_{n+1}(u, Y_{n+1})]\}$ a.s. $[P_X]$ by (CE10b) $= P[g_{n+1}(u, Y_{n+1}) \in Q]$ by (E1a)

The application of this proposition, below, to the previous examples shows that the transition probabilities are invariant with n. This case is important enough to warrant separate classification.

Definition. If $P(X_{n+1} \in Q \mid X_n = u)$ is invariant with n, for all Borel sets Q and all $u \in E$, the Markov process $X_N$ is said to be homogeneous.

As a matter of fact, this is the only case usually treated in elementary texts. In this regard, we note the following special case of the proposition above.

Homogeneous Markov sequences

If $\{Y_n : 1 \le n\}$ is iid and $g_{n+1} = g$ for all n, then the process is a homogeneous Markov process, and

$$P(X_{n+1} \in Q \mid X_n = u) = P[g(u, Y_{n+1}) \in Q], \quad \text{invariant with } n \tag{12}$$

Remark.

In the homogeneous case, the transition probabilities are invariant with n. In this case, we write

$$P(X_{n+1} = j \mid X_n = i) = p(i, j) \text{ or } p_{ij} \quad \text{(invariant with } n\text{)} \tag{13}$$

These are called the (one-step) transition probabilities.

The transition probabilities may be arranged in a matrix P called the transition probability matrix, usually referred to as the transition matrix,

$$P = [p(i, j)] \tag{14}$$

The element $p(i, j)$ on row i and column j is the probability $P(X_{n+1} = j \mid X_n = i)$. Thus, the elements on the ith row constitute the conditional distribution for $X_{n+1}$, given $X_n = i$. The transition matrix thus has the property that each row sums to one. Such a matrix is called a stochastic matrix. We return to the examples. From the propositions on transition probabilities, it is apparent that each is Markov. Since the function g is the same for all n and the driving random variables corresponding to the $Y_i$ form an iid class, the sequences must be homogeneous. We may utilize part (b) of the propositions to obtain the one-step transition probabilities.
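As a quick illustration of these definitions, here is a hedged Python sketch (the names `is_stochastic` and `step` are ours) that checks the row-sum property of a stochastic matrix and samples one transition from a row distribution:

```python
import random

def is_stochastic(P, tol=1e-9):
    """Check that each row of P is a probability distribution."""
    return all(
        abs(sum(row) - 1.0) < tol and all(p >= 0 for p in row)
        for row in P
    )

def step(P, i, rng=random):
    """Sample X_{n+1} given X_n = i by inverting the row distribution."""
    u = rng.random()
    cum = 0.0
    for j, p in enumerate(P[i]):
        cum += p
        if u < cum:
            return j
    return len(P[i]) - 1  # guard against round-off

# A small illustrative chain; state 2 is absorbing
P = [[0.5, 0.5, 0.0],
     [0.1, 0.6, 0.3],
     [0.0, 0.0, 1.0]]
```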

### Example 5: Random walk continued

$g_n(u, Y_{n+1}) = u + Y_{n+1}$, so that $g_n$ is invariant with n. Since $\{Y_n : 1 \le n\}$ is iid,

$$P(X_{n+1} = k \mid X_n = j) = P(j + Y = k) = P(Y = k - j) = p_{k-j}, \quad \text{where } p_k = P(Y = k) \tag{15}$$

### Example 6: Branching process continued

$g(j, Y_{n+1}) = \min\{M, \sum_{i=1}^{j} Z_{in}\}$ and $E = \{0, 1, \ldots, M\}$. If $\{Z_{in} : 1 \le i \le M,\ 1 \le n\}$ is iid, then

$$W_{jn} = \sum_{i=1}^{j} Z_{in} \quad \text{ensures} \quad \{W_{jn} : 1 \le n\} \text{ is iid for each } j \in E \tag{16}$$

We thus have

$$P(X_{n+1} = k \mid X_n = j) = \begin{cases} P(W_{jn} = k) & \text{for } 0 \le k < M \\ P(W_{jn} \ge M) & \text{for } k = M \end{cases} \qquad 0 \le j \le M \tag{17}$$

With the aid of moment generating functions, one may determine distributions for

$$W_1 = Z_1,\quad W_2 = Z_1 + Z_2,\quad \ldots,\quad W_M = Z_1 + \cdots + Z_M \tag{18}$$

These calculations are implemented in an m-procedure called branchp. We simply need the distribution for the iid $Z_{in}$.

```matlab
% file branchp.m
% Calculates transition matrix for a simple branching
% process with specified maximum population.
disp('Do not forget zero probabilities for missing values of Z')
PZ = input('Enter PROBABILITIES for individuals  ');
M  = input('Enter maximum allowable population  ');
mz = length(PZ) - 1;
EZ = dot(0:mz,PZ);
disp(['The average individual propagation is ',num2str(EZ)])
P  = zeros(M+1,M+1);
Z  = zeros(M,M*mz+1);
k  = 0:M*mz;
a  = min(M,k);
z  = 1;
P(1,1) = 1;
for i = 1:M                 % Operation similar to genD
  z = conv(PZ,z);           % z is the distribution of W_i = Z_1 + ... + Z_i
  Z(i,1:i*mz+1) = z;
  [t,p] = csort(a,Z(i,:));  % lump mass at M and above into state M
  P(i+1,:) = p;
end
disp('The transition matrix is P')
disp('To study the evolution of the process, call for branchdbn')
```

```matlab
PZ = 0.01*[15 45 25 10 5];    % Probability distribution for individuals
branchp                       % Call for procedure
Do not forget zero probabilities for missing values of Z
Enter PROBABILITIES for individuals  PZ
Enter maximum allowable population  10
The average individual propagation is 1.45
The transition matrix is P
To study the evolution of the process, call for branchdbn
disp(P)                       % Optional display of generated P
Columns 1 through 7
1.0000         0         0         0         0         0         0
0.1500    0.4500    0.2500    0.1000    0.0500         0         0
0.0225    0.1350    0.2775    0.2550    0.1675    0.0950    0.0350
0.0034    0.0304    0.1080    0.1991    0.2239    0.1879    0.1293
0.0005    0.0061    0.0307    0.0864    0.1534    0.1910    0.1852
0.0001    0.0011    0.0075    0.0284    0.0702    0.1227    0.1623
0.0000    0.0002    0.0017    0.0079    0.0253    0.0579    0.1003
0.0000    0.0000    0.0003    0.0020    0.0078    0.0222    0.0483
0.0000    0.0000    0.0001    0.0005    0.0021    0.0074    0.0194
0.0000    0.0000    0.0000    0.0001    0.0005    0.0022    0.0068
0.0000    0.0000    0.0000    0.0000    0.0001    0.0006    0.0022
Columns 8 through 11
0         0         0         0
0         0         0         0
0.0100    0.0025         0         0
0.0705    0.0315    0.0119    0.0043
0.1481    0.0987    0.0559    0.0440
0.1730    0.1545    0.1179    0.1625
0.1381    0.1574    0.1528    0.3585
0.0832    0.1179    0.1412    0.5771
0.0406    0.0698    0.1010    0.7591
0.0169    0.0345    0.0590    0.8799
0.0062    0.0147    0.0294    0.9468
```


Note that $p(0, 0) = 1$. If the population ever reaches zero, it is extinct and no more births can occur. Also, if the maximum population (10 in this case) is reached, there is a high probability of returning to that value and a very small probability of becoming extinct (reaching the zero state).
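The same construction can be cross-checked outside MATLAB. The following Python sketch is our own (the name `branch_matrix` is hypothetical): it builds the branching transition matrix by repeated convolution, lumping all mass at or above M into state M, just as the csort step does above.

```python
def branch_matrix(PZ, M):
    """Transition matrix for the branching process of Eq. (6).

    PZ -- offspring distribution, PZ[k] = P(Z = k)
    M  -- maximum allowable population
    Row j is the distribution of min(M, Z_1 + ... + Z_j), obtained by
    convolving PZ with itself j times and lumping mass at M and above.
    """
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    P[0][0] = 1.0                      # extinction is absorbing
    w = [1.0]                          # distribution of the empty sum
    for j in range(1, M + 1):
        # convolve: w becomes the distribution of W_j = W_{j-1} + Z
        new = [0.0] * (len(w) + len(PZ) - 1)
        for a, pa in enumerate(w):
            for b, pb in enumerate(PZ):
                new[a + b] += pa * pb
        w = new
        for k, p in enumerate(w):      # lump k >= M into state M
            P[j][min(k, M)] += p
    return P

PZ = [0.15, 0.45, 0.25, 0.10, 0.05]
P = branch_matrix(PZ, 10)
```

Row 1 is just PZ itself (since $W_1 = Z_1$), matching the second row of the display above.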

### Example 7: Inventory problem (continued)

In this case,

$$g(j, D_{n+1}) = \begin{cases} \max\{M - D_{n+1},\ 0\} & \text{for } 0 \le j < m \\ \max\{j - D_{n+1},\ 0\} & \text{for } m \le j \le M \end{cases} \tag{19}$$

Numerical example

$$m = 1, \quad M = 3, \quad D_n \text{ is Poisson (1)} \tag{20}$$

To simplify writing, use D for $D_n$. Because of the invariance with n, set

$$P(X_{n+1} = k \mid X_n = j) = p(j, k) = P[g(j, D_{n+1}) = k] \tag{21}$$

The various cases yield

$g(0, D) = \max\{3 - D,\ 0\}$

• $g(0, D) = 0$ iff $D \ge 3$, which implies $p(0, 0) = P(D \ge 3)$
• $g(0, D) = 1$ iff $D = 2$, which implies $p(0, 1) = P(D = 2)$
• $g(0, D) = 2$ iff $D = 1$, which implies $p(0, 2) = P(D = 1)$
• $g(0, D) = 3$ iff $D = 0$, which implies $p(0, 3) = P(D = 0)$

$g(1, D) = \max\{1 - D,\ 0\}$

• $g(1, D) = 0$ iff $D \ge 1$, which implies $p(1, 0) = P(D \ge 1)$
• $g(1, D) = 1$ iff $D = 0$, which implies $p(1, 1) = P(D = 0)$
• $g(1, D) = 2, 3$ is impossible

$g(2, D) = \max\{2 - D,\ 0\}$

• $g(2, D) = 0$ iff $D \ge 2$, which implies $p(2, 0) = P(D \ge 2)$
• $g(2, D) = 1$ iff $D = 1$, which implies $p(2, 1) = P(D = 1)$
• $g(2, D) = 2$ iff $D = 0$, which implies $p(2, 2) = P(D = 0)$
• $g(2, D) = 3$ is impossible

$g(3, D) = \max\{3 - D,\ 0\} = g(0, D)$, so that $p(3, k) = p(0, k)$

The various probabilities for D may be obtained from a table (or may be calculated easily with cpoisson) to give the transition probability matrix

$$P = \begin{bmatrix} 0.0803 & 0.1839 & 0.3679 & 0.3679 \\ 0.6321 & 0.3679 & 0 & 0 \\ 0.2642 & 0.3679 & 0.3679 & 0 \\ 0.0803 & 0.1839 & 0.3679 & 0.3679 \end{bmatrix} \tag{22}$$

The calculations are carried out “by hand” in this case, to exhibit the nature of the calculations. This is a standard problem in inventory theory, involving costs and rewards. An m-procedure inventory1 has been written to implement the function g.

```matlab
% file inventory1.m
% Version of 1/27/97
% Data for transition probability calculations
% for (m,M) inventory policy
M  = input('Enter value M of maximum stock  ');
m  = input('Enter value m of reorder point  ');
Y  = input('Enter row vector of demand values  ');
PY = input('Enter demand probabilities  ');
states = 0:M;
ms = length(states);
my = length(Y);
% Calculations for determining P
[y,s] = meshgrid(Y,states);
T  =  max(0,M-y).*(s < m) + max(0,s-y).*(s >= m);
P  = zeros(ms,ms);
for i = 1:ms
  [a,b] = meshgrid(T(i,:),states);
  P(i,:) = PY*(a==b)';
end
P
```


We consider the case $M = 5$, the reorder point $m = 3$, and demand Poisson (3). We approximate the Poisson distribution with values up to 20.

```matlab
inventory1
Enter value M of maximum stock  5             % Maximum stock
Enter value m of reorder point  3             % Reorder point
Enter row vector of demand values  0:20       % Truncated set of demand values
Enter demand probabilities  ipoisson(3,0:20)  % Demand probabilities
P =
0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
0.5768    0.2240    0.1494    0.0498         0         0
0.3528    0.2240    0.2240    0.1494    0.0498         0
0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
```
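For readers without the m-procedures, the same matrix logic is easy to sketch in Python (the function names are ours; as in the MATLAB run, the demand distribution is truncated to a finite set of values):

```python
from math import exp, factorial

def inventory_matrix(m, M, pd):
    """Transition matrix for the (m, M) inventory policy of Eq. (19).

    pd -- demand distribution as a list, pd[d] = P(D = d), assumed to
          carry essentially all of the demand probability.
    """
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    for j in range(M + 1):
        start = M if j < m else j      # restock to M below the reorder point
        for d, p in enumerate(pd):
            P[j][max(start - d, 0)] += p
    return P

def poisson(mu, n):
    """Truncated Poisson(mu) distribution on 0..n."""
    return [exp(-mu) * mu**k / factorial(k) for k in range(n + 1)]

# The earlier numerical example: m = 1, M = 3, demand Poisson (1);
# this reproduces the matrix of Eq. (22) to display accuracy.
P = inventory_matrix(1, 3, poisson(1, 20))
```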


### Example 8: Remaining lifetime (continued)

$g(0, Y) = Y - 1$, so that $p(0, k) = P(Y - 1 = k) = P(Y = k + 1)$

$g(j, Y) = j - 1$ for $j \ge 1$, so that $p(j, k) = \delta_{j-1,k}$ for $j \ge 1$

The resulting transition probability matrix is

$$P = \begin{bmatrix} p_1 & p_2 & p_3 & \cdots \\ 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \tag{23}$$

The matrix is an infinite matrix, unless Y is simple. If the range of Y is $\{1, 2, \ldots, M\}$, then the state space E is $\{0, 1, \ldots, M - 1\}$.
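For a simple Y the matrix can be assembled mechanically; a Python sketch (the name `lifetime_matrix` and the lifetime distribution below are illustrative only):

```python
def lifetime_matrix(py):
    """Transition matrix of Eq. (23) for a simple lifetime Y.

    py -- py[k] = P(Y = k + 1) for k = 0..M-1, so Y ranges over {1,...,M}
          and the state space is {0, 1, ..., M-1}.
    Row 0 is the lifetime distribution shifted down by one; every other
    row is deterministic aging: state j moves to state j - 1.
    """
    n_states = len(py)
    P = [[0.0] * n_states for _ in range(n_states)]
    P[0] = list(py)                    # p(0, k) = P(Y = k + 1)
    for j in range(1, n_states):
        P[j][j - 1] = 1.0              # p(j, j-1) = 1 for j >= 1
    return P

P = lifetime_matrix([0.5, 0.3, 0.2])   # hypothetical P(Y = 1), P(Y = 2), P(Y = 3)
```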

Various properties of conditional independence, particularly (CI9), (CI10), and (CI12), may be used to establish the following. The immediate future $X_{n+1}$ may be replaced by any finite future $U_{n,n+p}$, and the present $X_n$ may be replaced by any extended present $U_{m,n}$. Some results of abstract measure theory show that the finite future $U_{n,n+p}$ may be replaced by the entire future $U^n$. Thus, we may assert

Extended Markov property

$X_N$ is Markov iff

$$\text{(M*)} \quad \{U^n, U_m\} \text{ ci } | U_{m,n} \qquad 0 \le m \le n \tag{24}$$

The Chapman-Kolmogorov equation and the transition matrix

As a special case of the extended Markov property, we have

$$\{U^{n+k}, U_n\} \text{ ci } | X_{n+k} \quad \text{for all } n \ge 0,\ k \ge 1 \tag{25}$$

Setting $g(U^{n+k}, X_{n+k}) = X_{n+k+m}$ and $h(U_n, X_{n+k}) = X_n$ in (CI9), we get

$$\{X_{n+k+m}, X_n\} \text{ ci } | X_{n+k} \quad \text{for all } n \ge 0,\ k, m \ge 1 \tag{26}$$

By the iterated conditioning rule (CI9) for conditional independence, it follows that

$$\text{(CK)} \quad E[g(X_{n+k+m}) \mid X_n] = E\{E[g(X_{n+k+m}) \mid X_{n+k}] \mid X_n\} \qquad n \ge 0,\ k, m \ge 1 \tag{27}$$

This is the Chapman-Kolmogorov equation, which plays a central role in the study of Markov sequences. For a discrete state space E, with

$$P(X_n = j \mid X_m = i) = p_{m,n}(i, j) \tag{28}$$

this equation takes the form

$$\text{(CK')} \quad p_{m,q}(i, k) = \sum_{j \in E} p_{m,n}(i, j)\, p_{n,q}(j, k) \qquad 0 \le m < n < q \tag{29}$$

To see that this is so, consider

$$P(X_q = k \mid X_m = i) = E[I_{\{k\}}(X_q) \mid X_m = i] = E\{E[I_{\{k\}}(X_q) \mid X_n] \mid X_m = i\} \tag{30}$$
$$= \sum_{j} E[I_{\{k\}}(X_q) \mid X_n = j]\, p_{m,n}(i, j) = \sum_{j} p_{n,q}(j, k)\, p_{m,n}(i, j) \tag{31}$$

Homogeneous case

For this case, we may put (CK') in a useful matrix form. The conditional probabilities $p^m$ of the form

$$p^m(i, k) = P(X_{n+m} = k \mid X_n = i) \quad \text{invariant in } n \tag{32}$$

are known as the m-step transition probabilities. The Chapman-Kolmogorov equation in this case becomes

$$\text{(CK'')} \quad p^{m+n}(i, k) = \sum_{j \in E} p^m(i, j)\, p^n(j, k) \qquad \text{for all } i, k \in E \tag{33}$$

In terms of the m-step transition matrix $P(m) = [p^m(i, k)]$, this set of sums is equivalent to the matrix product

$$\text{(CK'')} \quad P(m + n) = P(m)\, P(n) \tag{34}$$

Now

$$P(2) = P(1)\, P(1) = P \cdot P = P^2, \quad P(3) = P(2)\, P(1) = P^3, \quad \text{etc.} \tag{35}$$

A simple inductive argument based on (CK'') establishes

The product rule for transition matrices

The m-step probability matrix $P(m) = P^m$, the mth power of the transition matrix P.
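The product rule is easy to check numerically. The following Python sketch (plain nested lists, no libraries; the two-state matrix is an arbitrary illustration of ours) forms $P(m)$ by repeated multiplication:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

def mat_power(P, m):
    """m-step transition matrix P(m) = P^m by repeated multiplication."""
    n = len(P)
    R = [[float(i == k) for k in range(n)] for i in range(n)]  # identity
    for _ in range(m):
        R = matmul(R, P)
    return R

P = [[0.9, 0.1],
     [0.4, 0.6]]
P2 = mat_power(P, 2)
```

The Chapman-Kolmogorov sum can be checked by hand for one entry: $p^2(0, 0) = 0.9 \cdot 0.9 + 0.1 \cdot 0.4 = 0.85$.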

### Example 9: The inventory problem (continued)

For the inventory problem in Example 7, the three-step transition probability matrix $P(3)$ is obtained by raising P to the third power to get

$$P(3) = P^3 = \begin{bmatrix} 0.2930 & 0.2917 & 0.2629 & 0.1524 \\ 0.2619 & 0.2730 & 0.2753 & 0.1898 \\ 0.2993 & 0.2854 & 0.2504 & 0.1649 \\ 0.2930 & 0.2917 & 0.2629 & 0.1524 \end{bmatrix} \tag{36}$$

We consider next the state probabilities for the various stages. That is, we examine the distributions for the various $X_n$, letting $p_k(n) = P(X_n = k)$ for each $k \in E$. To simplify writing, we consider a finite state space $E = \{1, \ldots, M\}$. We use $\pi(n)$ for the row matrix

$$\pi(n) = [p_1(n)\ \ p_2(n)\ \ \cdots\ \ p_M(n)] \tag{37}$$

As a consequence of the product rule, we have

Probability distributions for any period

For a homogeneous Markov sequence, the distribution for any Xn is determined by the initial distribution (i.e., for X0) and the transition probability matrix P.

VERIFICATION

Suppose the homogeneous sequence $X_N$ has finite state space $E = \{1, 2, \ldots, M\}$. For any $n \ge 0$, let $p_j(n) = P(X_n = j)$ for each $j \in E$. Put

$$\pi(n) = [p_1(n)\ \ p_2(n)\ \ \cdots\ \ p_M(n)] \tag{38}$$

Then

• $\pi(0) =$ the initial probability distribution
• $\pi(1) = \pi(0) P$
• .....
• $\pi(n) = \pi(n-1) P = \pi(0) P(n) = \pi(0) P^n =$ the nth-period distribution

The last expression is an immediate consequence of the product rule.

### Example 10: Inventory problem (continued)

In the inventory system of Examples 3, 7, and 9, suppose the initial stock is $M = 3$. This means that

$$\pi(0) = [0\ \ 0\ \ 0\ \ 1] \tag{39}$$

The product of $\pi(0)$ and $P^3$ is the fourth row of $P^3$, so that the distribution for $X_3$ is

$$\pi(3) = [p_0(3)\ \ p_1(3)\ \ p_2(3)\ \ p_3(3)] = [0.2930\ \ 0.2917\ \ 0.2629\ \ 0.1524] \tag{40}$$

Thus, given a stock of $M = 3$ at startup, the probability is 0.2917 that $X_3 = 1$. This is the probability of one unit in stock at the end of period number three.
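This computation can be replayed directly. A Python sketch of ours (the function name `evolve` is hypothetical) builds the exact one-step matrix from the Poisson (1) probabilities and propagates $\pi(0)$ three steps:

```python
from math import exp

# Exact one-step matrix of Eq. (22): m = 1, M = 3, demand D Poisson (1)
e = exp(-1)
P = [[1 - 2.5 * e, 0.5 * e, e, e],   # [P(D>=3), P(D=2), P(D=1), P(D=0)]
     [1 - e,       e,       0, 0],
     [1 - 2 * e,   e,       e, 0],
     [1 - 2.5 * e, 0.5 * e, e, e]]

def evolve(pi, P, n):
    """Compute pi(n) = pi(0) P^n by repeated row-by-matrix products."""
    for _ in range(n):
        pi = [sum(pi[i] * P[i][k] for i in range(len(P)))
              for k in range(len(P))]
    return pi

pi0 = [0, 0, 0, 1]         # start with full stock, X_0 = 3
pi3 = evolve(pi0, P, 3)    # matches the fourth row of P^3 in Eq. (36)
```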

Remarks

• A similar treatment shows that for the nonhomogeneous case the distribution at any stage is determined by the initial distribution and the class of one-step transition matrices. In the nonhomogeneous case, the transition probabilities $p_{n,n+1}(i, j)$ depend on the stage n.
• A discrete-parameter Markov process, or Markov sequence, is characterized by the fact that each member $X_{n+1}$ of the sequence is conditioned by the value of the previous member of the sequence. This one-step stochastic linkage has made it customary to refer to a Markov sequence as a Markov chain. In the discrete-parameter Markov case, we use the terms process, sequence, or chain interchangeably.

The transition diagram and the transition matrix

The previous examples suggest that a Markov chain is a dynamic system, evolving in time. On the other hand, the stochastic behavior of a homogeneous chain is determined completely by the probability distribution for the initial state and the one-step transition probabilities $p(i, j)$ as presented in the transition matrix P. The time-invariant transition matrix may convey a static impression of the system. However, a simple geometric representation, known as the transition diagram, makes it possible to link the unchanging structure, represented by the transition matrix, with the dynamics of the evolving system behavior.

Definition. A transition diagram for a homogeneous Markov chain is a linear graph with one node for each state and one directed edge for each possible one-step transition between states (nodes).

We ignore, as essentially impossible, any transition which has zero transition probability. Thus, the edges on the diagram correspond to positive one-step transition probabilities between the nodes connected. Since for some pair $(i, j)$ of states we may have $p(i, j) > 0$ but $p(j, i) = 0$, we may have a connecting edge between two nodes in one direction, but none in the other. The system can be viewed as an object jumping from state to state (node to node) at the successive transition times. As we follow the trajectory of this object, we achieve a sense of the evolution of the system.

### Example 11: Transition diagram for inventory example

Consider again the transition matrix P for the inventory problem (rounded to three decimals).

$$P = \begin{bmatrix} 0.080 & 0.184 & 0.368 & 0.368 \\ 0.632 & 0.368 & 0 & 0 \\ 0.264 & 0.368 & 0.368 & 0 \\ 0.080 & 0.184 & 0.368 & 0.368 \end{bmatrix} \tag{41}$$

Figure 1 shows the transition diagram for this system. At each node corresponding to one of the possible states, the state value is shown. In this example, the state value is one less than the state number. For convenience, we refer to the node for state $k + 1$, which has state value k, as node k. If the state value is zero, there are four possibilities: remain in that condition with probability 0.080; move to node 1 with probability 0.184; move to node 2 with probability 0.368; or move to node 3 with probability 0.368. These are represented by the “self loop” and a directed edge to each of the nodes representing states. Each of these directed edges is marked with the (conditional) transition probability. On the other hand, the probabilities of reaching state value 0 from each of the others are represented by directed edges into the node for state value 0. A similar situation holds for each other node. Note that the probabilities on edges leaving a node (including a self loop) must total one, since these correspond to the transition probability distribution from that node. There is no directed edge from node 2 to node 3, since the probability of a transition from value 2 to value 3 is zero. Similarly, there is no directed edge from node 1 to either node 2 or node 3.

There is a one-one relation between the transition diagram and the transition matrix P. The transition diagram not only aids in visualizing the dynamic evolution of a chain, but also displays certain structural properties. Often a chain may be decomposed usefully into subchains. Questions of communication and recurrence may be answered in terms of the transition diagram. Some subsets of states are essentially closed, in the sense that if the system arrives at any one state in the subset it can never reach a state outside the subset. Periodicities can sometimes be seen, although it is usually easier to use the diagram to show that periodicities cannot occur.

Classification of states

Many important characteristics of a Markov chain can be studied by considering the number of visits to an arbitrarily chosen, but fixed, state.

Definition. For a fixed state j, let

• $T =$ the time (stage number) of the first visit to state j (after the initial period).
• $F_k(i, j) = P(T = k \mid X_0 = i)$, the probability of reaching state j for the first time from state i in k steps.
• $F(i, j) = P(T < \infty \mid X_0 = i) = \sum_{k=1}^{\infty} F_k(i, j)$, the probability of ever reaching state j from state i.

A number of important theorems may be developed for Fk and F, although we do not develop them in this treatment. We simply quote them as needed. An important classification of states is made in terms of F.
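As an aside (not part of the original development), the quantities Fk satisfy the recursion F1(i,j) = p(i,j) and Fk(i,j) = Σ over l ≠ j of p(i,l) F(k-1)(l,j), which makes F(i,j) easy to approximate numerically by a partial sum. A minimal Python sketch, using the inventory transition matrix quoted later:

```python
# Sketch, not from the original text: approximate F(i, j) by summing the
# first-passage probabilities F_k(i, j) computed from the recursion
#   F_1(i, j) = p(i, j)
#   F_k(i, j) = sum over l != j of p(i, l) * F_{k-1}(l, j).
def first_passage(P, i, j, kmax):
    m = len(P)
    Fk = [P[l][j] for l in range(m)]     # F_1(l, j) for every start l
    total = Fk[i]                        # running partial sum of F_k(i, j)
    for _ in range(2, kmax + 1):
        Fk = [sum(P[l][r] * Fk[r] for r in range(m) if r != j)
              for l in range(m)]
        total += Fk[i]
    return total

# Inventory-example matrix (quoted later in this module).  The chain is
# irreducible with a finite state space, so every F(i, j) should be 1.
P = [[0.0803, 0.1839, 0.3679, 0.3679],
     [0.6321, 0.3679, 0.0,    0.0   ],
     [0.2642, 0.3679, 0.3679, 0.0   ],
     [0.0803, 0.1839, 0.3679, 0.3679]]
```

With kmax large, the partial sums approach one for every pair of states, consistent with each state being recurrent.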

Definition. State j is said to be transient iff $F(j,j) < 1$, and is said to be recurrent iff $F(j,j) = 1$.

Remark. If the state space E is infinite, recurrent states fall into one of two subclasses: positive or null. Only the positive case is common in the infinite case, and that is the only possible case for systems with finite state space.

Sometimes there is a regularity in the structure of a Markov sequence that results in periodicities.

Definition. For state j, let

$$\delta = \text{greatest common divisor of } \{\, n : p^n(j,j) > 0 \,\}$$
(42)

If $\delta > 1$, then state j is periodic with period δ; otherwise, state j is aperiodic.

Usually if there are any self loops in the transition diagram (positive probabilities on the diagonal of the transition matrix P) the system is aperiodic. Unless stated otherwise, we limit consideration to the aperiodic case.
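For illustration (this helper is hypothetical, not from the text), the period of a state can be estimated directly from the definition by scanning the diagonal entries of successive powers of P. Any finite scan only samples the set {n : pⁿ(j,j) > 0}, but for a finite chain a modest number of powers suffices in practice.

```python
# Sketch, not from the original text: estimate the period of state j as
# gcd{ n : p^n(j, j) > 0 }, scanning matrix powers P^1 ... P^nmax.
from math import gcd

def period(P, j, nmax=24):
    m = len(P)
    Pn = [row[:] for row in P]           # current power, starting at P^1
    d = 0                                # gcd(0, n) == n starts the scan
    for n in range(1, nmax + 1):
        if Pn[j][j] > 1e-12:
            d = gcd(d, n)
        Pn = [[sum(Pn[a][b] * P[b][c] for b in range(m))
               for c in range(m)] for a in range(m)]
    return d                             # d == 1 means aperiodic

two_cycle = [[0.0, 1.0], [1.0, 0.0]]     # deterministic alternation: period 2
loop = [[0.5, 0.5], [1.0, 0.0]]          # self loop at state 0
```

The self loop in `loop` makes both of its states aperiodic, illustrating the remark above about self loops.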

Definition. A state j is called ergodic iff it is positive, recurrent, and aperiodic.

It is called absorbing iff $p(j,j) = 1$.

A recurrent state is one to which the system eventually returns, hence is visited an infinity of times. If it is absorbing, then once it is reached it returns each step (i.e., never leaves).

An arrow notation is used to indicate important relations between states.

Definition. We say

• State i reaches j, denoted $i \to j$, iff $p^n(i,j) > 0$ for some $n > 0$.
• States i and j communicate, denoted $i \leftrightarrow j$, iff both i reaches j and j reaches i.

By including j reaches j in all cases, the relation $\leftrightarrow$ is an equivalence relation (i.e., it is reflexive, symmetric, and transitive). With this relationship, we can define important classes.

Definition. A class of states is communicating iff every state in the class may be reached from every other state in the class (i.e. every pair communicates). A class is closed if no state outside the class can be reached from within the class.

The following important conditions are intuitive and may be established rigorously:

• $i \leftrightarrow j$ implies i is recurrent iff j is recurrent.
• $i \to j$ and i recurrent implies $i \leftrightarrow j$.
• $i \to j$ and i recurrent implies j recurrent.
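These relations are easy to test mechanically, since whether i reaches j depends only on which entries of P are positive. A Python sketch (not part of the original text) using breadth-first search on the transition diagram:

```python
# Sketch, not from the original text: test "i reaches j" by breadth-first
# search on the directed edges of the transition diagram, then test
# communication (reachability in both directions).
from collections import deque

def reaches(P, i, j):
    # i -> j iff p^n(i, j) > 0 for some n > 0; by convention j -> j always.
    if i == j:
        return True
    seen, queue = {i}, deque([i])
    while queue:
        a = queue.popleft()
        for b in range(len(P)):
            if P[a][b] > 0 and b not in seen:
                if b == j:
                    return True
                seen.add(b)
                queue.append(b)
    return False

def communicate(P, i, j):
    return reaches(P, i, j) and reaches(P, j, i)

# State 0 is absorbing here, so 1 -> 0 but not 0 -> 1.
Pr = [[1.0, 0.0], [0.5, 0.5]]
```

In `Pr`, the class {0} is closed while state 1 is transient, so 1 reaches 0 but the two states do not communicate.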

## Limit theorems for finite state space sequences

The following propositions may be established for Markov sequences with finite state space:

• There are no null states, and not all states are transient.
• If a class of states is irreducible (i.e., has no proper closed subsets), then
  • All states are recurrent.
  • All states are aperiodic, or all are periodic with the same period.
• If a class C is closed and irreducible, and i is a transient state (necessarily not in C), then $F(i,j) = F(i,k)$ for all $j, k \in C$.

### A limit theorem

If the states in a Markov chain are ergodic (i.e., positive, recurrent, aperiodic), then

$$\lim_{n} p^n(i,j) = \pi_j > 0 \qquad \sum_{j=1}^{M} \pi_j = 1 \qquad \pi_j = \sum_{i=1}^{M} \pi_i \, p(i,j)$$
(43)

If, as above, we let

$$\pi(n) = [\, p_1(n) \;\; p_2(n) \;\; \cdots \;\; p_M(n) \,] \quad \text{so that} \quad \pi(n) = \pi(0) P^n$$
(44)

the result above may be written

$$\pi(n) = \pi(0) P^n \to \pi(0) P_0$$
(45)

where

$$P_0 = \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_M \\ \pi_1 & \pi_2 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_M \end{bmatrix}$$
(46)

Each row of $P_0 = \lim_n P^n$ is the long-run distribution $\pi = \lim_n \pi(n)$.

Definition. A distribution is stationary iff

$$\pi = \pi P$$
(47)

The result above may be stated by saying that the long-run distribution is the stationary distribution. A generating function analysis shows that the convergence is exponential in the following sense:

$$|P^n - P_0| \le a |\lambda|^n$$
(48)

where $|\lambda|$ is the largest absolute value of the eigenvalues of P other than $\lambda = 1$.
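As an independent check on the limit theorem (this fragment is an illustrative sketch, not the author's MATLAB code in the example that follows), one can simply iterate π(n+1) = π(n)P from an arbitrary starting distribution. Using a four-decimal rounding of the inventory transition matrix, the iterates settle near the long-run distribution displayed below:

```python
# Sketch, not from the original text: approximate the long-run
# distribution by iterating pi <- pi P, which converges for an ergodic
# chain by the limit theorem above.
P = [[0.0803, 0.1839, 0.3679, 0.3679],
     [0.6321, 0.3679, 0.0,    0.0   ],
     [0.2642, 0.3679, 0.3679, 0.0   ],
     [0.0803, 0.1839, 0.3679, 0.3679]]

pi = [0.25, 0.25, 0.25, 0.25]            # any starting distribution works
for _ in range(50):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
# pi is now close to [0.2858, 0.2847, 0.2632, 0.1663]; the rounding of
# the entries of P limits agreement to roughly three decimal places.
```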

### Example 12: The long run distribution for the inventory example

We use MATLAB to check the eigenvalues of the transition matrix P and to obtain increasing powers of P. The convergence process is readily evident.

P =
0.0803    0.1839    0.3679    0.3679
0.6321    0.3679         0         0
0.2642    0.3679    0.3679         0
0.0803    0.1839    0.3679    0.3679
E = abs(eig(P))
E =
1.0000
0.2602
0.2602
0.0000
format long
N = E(2).^[4 8 12]
N = 0.00458242348096   0.00002099860496   0.00000009622450
>> P4 = P^4
P4 =
0.28958568915950   0.28593792666752   0.26059678211310   0.16387960205989
0.28156644866011   0.28479107531968   0.26746979455342   0.16617268146679
0.28385952806702   0.28250048636032   0.26288737107246   0.17075261450021
0.28958568915950   0.28593792666752   0.26059678211310   0.16387960205989

>> P8 = P^8
P8 =
0.28580046500309   0.28471421248816   0.26315895715219   0.16632636535655
0.28577030590344   0.28469190218618   0.26316681807503   0.16637097383535
0.28581491438224   0.28471028095839   0.26314057837998   0.16633422627939
0.28580046500309   0.28471421248816   0.26315895715219   0.16632636535655

>> P12 = P^12
P12 =
0.28579560683438   0.28470680858266   0.26315641543927   0.16634116914369
0.28579574073314   0.28470680714781   0.26315628010643   0.16634117201261
0.28579574360207   0.28470687626748   0.26315634631961   0.16634103381085
0.28579560683438   0.28470680858266   0.26315641543927   0.16634116914369
>> error4 = max(max(abs(P^16 - P4)))    % Use P^16 for P_0
error4 =  0.00441148012334              % Compare with 0.0045824...
>> error8 = max(max(abs(P^16 - P8)))
error8 = 2.984007206519035e-05          % Compare with  0.00002099
>> error12 = max(max(abs(P^16 - P12)))
error12 = 1.005660185959822e-07         % Compare with 0.00000009622450


The convergence process is clear, and the observed errors agree closely with the predicted values. We have not determined the factor a, and we have approximated the long-run matrix P0 with P16. This exhibits a practical criterion for sufficient convergence: if the rows of Pn agree within acceptable precision, then n is sufficiently large. For example, if we consider agreement to four decimal places sufficient, then

P10 = P^10
P10 =
0.2858    0.2847    0.2632    0.1663
0.2858    0.2847    0.2632    0.1663
0.2858    0.2847    0.2632    0.1663
0.2858    0.2847    0.2632    0.1663


shows that n = 10 is quite sufficient.

## Simulation of finite homogeneous Markov sequences

In the section "The Quantile Function," the quantile function is used with a random number generator to obtain a simple random sample from a given population distribution. In this section, we adapt that procedure to the problem of simulating a trajectory for a homogeneous Markov sequence with finite state space.

### Elements and terminology

1. States and state numbers. We suppose there are m states, usually carrying a numerical value. For purposes of analysis and simulation, we number the states 1 through m. Computation is carried out with state numbers; if desired, these can be translated into the actual state values after computation is completed.
2. Stages, transitions, period numbers, trajectories and time. We use the term stage and period interchangeably. It is customary to number the periods or stages beginning with zero for the initial stage. The period number is the number of transitions to reach that stage from the initial one. Zero transitions are required to reach the original stage (period zero), one transition to reach the next (period one), two transitions to reach period two, etc. We call the sequence of states encountered as the system evolves a trajectory or a chain. The terms “sample path” or “realization of the process” are also used in the literature. Now if the periods are of equal time length, the number of transitions is a measure of the elapsed time since the chain originated. We find it convenient to refer to time in this fashion. At time k the chain has reached the period numbered k. The trajectory is k+1 stages long, so time or period number is one less than the number of stages.
3. The transition matrix and the transition distributions. For each state, there is a conditional transition probability distribution for the next state. These are arranged in a transition matrix. The ith row consists of the transition distribution for selecting the next-period state when the current state number is i. The transition matrix P thus has nonnegative elements, with each row summing to one. Such a matrix is known as a stochastic matrix.

### The fundamental simulation strategy

1. A fundamental strategy for sampling from a given population distribution is developed in the unit on the Quantile Function. If Q is the quantile function for the population distribution and U is a random variable distributed uniformly on the interval [0,1], then X = Q(U) has the desired distribution. To obtain a sample from the uniform distribution, use a random number generator. This sample is “transformed” by the quantile function into a sample from the desired distribution.
2. For a homogeneous chain, if we are in state k, we have a distribution for selecting the next state. If we use the quantile function for that distribution and a number produced by a random number generator, we make a selection of the next state based on that distribution. A succession of these choices, with the selection of the next state made in each case from the distribution for the current state, constitutes a valid simulation of a trajectory.
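The next-state selection just described can be sketched compactly (this Python fragment is illustrative only; the author's MATLAB procedures below implement the same idea). The quantile function of row k of P is evaluated by scanning the cumulative sums of that row:

```python
# Sketch, not from the original text: select the next state from the
# distribution in row k of P by the quantile-function method.
import random

def next_state(P, k, u=None):
    if u is None:                        # draw a uniform [0, 1] sample
        u = random.random()
    cum = 0.0
    for j, p in enumerate(P[k]):
        cum += p
        if u <= cum:                     # quantile function of row k
            return j
    return len(P[k]) - 1                 # guard against rounding of sums

def trajectory(P, start, n):
    # n transitions produce a trajectory of n + 1 stages.
    states = [start]
    for _ in range(n):
        states.append(next_state(P, states[-1]))
    return states

# Inventory-example matrix, for illustration.
P = [[0.0803, 0.1839, 0.3679, 0.3679],
     [0.6321, 0.3679, 0.0,    0.0   ],
     [0.2642, 0.3679, 0.3679, 0.0   ],
     [0.0803, 0.1839, 0.3679, 0.3679]]
```

A succession of such selections, each made from the distribution for the current state, is exactly the simulation strategy described above.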

### Arrival times and recurrence times

The basic simulation produces one or more trajectories of a specified length. Sometimes we are interested in continuing until first arrival at (or visit to) a specific target state or any one of a set of target states. The time (in transitions) to reach a target state is one less than the number of stages in the trajectory which begins with the initial state and ends with the target state reached.

• If the initial state is not in the target set, we speak of the arrival time.
• If the initial state is in the target set, the arrival time would be zero. In this case, we do not stop at zero but continue until the next visit to a target state (possibly the same as the initial state). We call the number of transitions in this case the recurrence time.
• In some instances, it may be desirable to know the time to complete visits to a prescribed number of the target states. Again there is a choice of treatment in the case the initial state is in the target set.
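A minimal Python sketch of the arrival/recurrence convention just described (illustrative only; the author's m-procedures arrival and recurrence, discussed below, are the working implementations). At least one transition is always taken, so a start inside the target set yields a recurrence time:

```python
# Sketch, not from the original text: simulate the time (in transitions)
# to reach a set of target states, taking at least one transition so that
# a start inside the target set yields a recurrence time.
import random

def arrival_time(P, start, targets, rng=random.random):
    state, steps = start, 0
    while True:
        u = rng()
        cum, nxt = 0.0, len(P) - 1       # default guards against rounding
        for j, p in enumerate(P[state]):
            cum += p
            if u <= cum:
                nxt = j
                break
        state, steps = nxt, steps + 1
        if state in targets:
            return steps, state

# A deterministic two-state alternation makes the times easy to check.
two_cycle = [[0.0, 1.0], [1.0, 0.0]]
```

For `two_cycle`, starting at state 0 the first visit to state 1 takes one transition, and the recurrence time of state 0 is two.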

### Data files

For use of MATLAB in simulation, we find it convenient to organize the appropriate data in an m-file.

• In every case, we need the transition matrix P. Its size indicates the number of states (say by the length of any row or column).
• If the states are to have values other than the state numbers, these may be included in the data file, although they may be added later, in response to a prompt.
• If long trajectories are to be produced, it may be desirable to determine the fraction of times each state is realized. A comparison with the long-run probabilities for the chain may be of interest. In this case, the data file may contain the long-run probability distribution. Usually, this is obtained by taking one row of a sufficiently large power of the transition matrix. This operation may be performed after the data file is called for but before the simulation procedure begins.

An example data file used to illustrate the various procedures is shown below. These data were generated artificially and have no obvious interpretation in terms of a specific system to be modeled. However, they are sufficiently complex to provide nontrivial illustrations of the simulation procedures.

% file markovp1.m
% Artificial data for a Markov chain, used to
% illustrate the operation of the simulation procedures.
P = [0.050 0.011 0.155 0.155 0.213 0.087 0.119 0.190 0.008 0.012
0.103 0.131 0.002 0.075 0.013 0.081 0.134 0.115 0.181 0.165
0.103 0.018 0.128 0.081 0.137 0.180 0.149 0.051 0.009 0.144
0.051 0.098 0.118 0.154 0.057 0.039 0.153 0.112 0.117 0.101
0.016 0.143 0.200 0.062 0.099 0.175 0.108 0.054 0.062 0.081
0.029 0.085 0.156 0.158 0.011 0.156 0.088 0.090 0.055 0.172
0.110 0.059 0.020 0.212 0.016 0.113 0.086 0.062 0.204 0.118
0.084 0.171 0.009 0.138 0.140 0.150 0.023 0.003 0.125 0.157
0.105 0.123 0.121 0.167 0.149 0.040 0.051 0.059 0.086 0.099
0.192 0.093 0.191 0.061 0.094 0.123 0.106 0.065 0.040 0.035];
states = 10:3:37;
PI = [0.0849 0.0905 0.1125 0.1268 0.0883 0.1141 ...
0.1049 0.0806 0.0881 0.1093];         % Long-run distribution


The largest absolute value of the eigenvalues (other than one) is 0.1716. Since $0.1716^{16} \approx 5.6 \cdot 10^{-13}$, we take any row of $P^{16}$ as the long-run probabilities. These are included in the matrix PI in the m-file above. The examples for the various procedures below use this set of artificial data, since the purpose is to illustrate the operation of the procedures.

### The setup and the generating m-procedures

The m-procedure chainset sets up for simulation of Markov chains. It prompts for input of the transition matrix P, the states (if different from the state numbers), the long-run distribution (if available), and the set of target states if it is desired to obtain arrival or recurrence times. The procedure determines the number of states from the size of P and calculates the information needed for the quantile function. It then prompts for a call for one of the generating procedures.

The m-procedure mchain, as do the other generating procedures below, assumes chainset has been run, so that commonly used data are available in appropriate form. The procedure prompts for the number of stages (length of the trajectory to be formed) and for the initial state. When the trajectory is produced, the various states in the trajectory and the fraction or relative frequency of each is displayed. If the long-run distribution has been supplied by chainset, this distribution is included for comparison. In the examples below, we reset the random number generator (set the “seed” to zero) for purposes of comparison. However, in practice, it may be desirable to make several runs without resetting the seed, to allow greater effective “randomness.”

### Example 13

markovp1                              % Call for data
chainset                              % Call for setup procedure
Enter the transition matrix  P
Enter the states if not 1:ms  states  % Enter the states
States are
1    10
2    13
3    16
4    19
5    22
6    25
7    28
8    31
9    34
10    37
Enter the long-run probabilities  PI  % Enter the long-run distribution
Enter the set of target states [16 22 25]   % Not used with mchain
Call for appropriate chain generating procedure
rand('seed',0)
mchain                                % Call for generating procedure
Enter the number n of stages   10000  % Note the trajectory length
Enter the initial state  16
State     Frac       P0          % Statistics on the trajectory
10.0000    0.0812    0.0849
13.0000    0.0952    0.0905
16.0000    0.1106    0.1125
19.0000    0.1226    0.1268
22.0000    0.0880    0.0883
25.0000    0.1180    0.1141
28.0000    0.1034    0.1049
31.0000    0.0814    0.0806
34.0000    0.0849    0.0881
37.0000    0.1147    0.1093
To view the first part of the trajectory of states, call for TR
disp(TR')
0     1     2     3     4     5     6     7     8     9    10
16    16    10    28    34    37    16    25    37    10    13


The fact that the fractions or relative frequencies approximate the long-run probabilities is an expression of a fundamental limit property of probability theory. This limit property, which requires somewhat sophisticated technique to establish, justifies a relative frequency interpretation of probability.

The procedure arrival assumes the setup provided by chainset, including a set E of target states. The procedure prompts for the number r of repetitions and the initial state. Then it produces r successive trajectories, each starting with the prescribed initial state and ending on one of the target states. The arrival times vary from one run to the next. Various statistics are computed and displayed or made available. In the single-run case (r = 1), the trajectory may be displayed. An auxiliary procedure plotdbn may be used in the multirun case to plot the distribution of arrival times.

### Example 14: Arrival time to a target set of states

rand('seed',0)
arrival                                  % Assumes chainset has been run, as above
Enter the number of repetitions  1       % Single run case
The target state set is:
16    22    25
Enter the initial state  34              % Specified initial state
The arrival time is 6                   % Data on trajectory
The state reached is 16
To view the trajectory of states, call for TR
disp(TR')                                % Optional call to view trajectory
0     1     2     3     4     5     6
34    13    10    28    34    37    16
rand('seed',0)
arrival
Enter the number of repetitions  1000    % Call for 1000 repetitions
The target state set is:
16    22    25
Enter the initial state  34              % Specified initial state
The result of 1000 repetitions is:      % Run data (see optional calls below)
Term state  Rel Freq   Av time
16.0000    0.3310    3.3021
22.0000    0.3840    3.2448
25.0000    0.2850    4.3895
The average arrival time is 3.59
The standard deviation is 3.207
The minimum arrival time is 1
The maximum arrival time is 23
To view the distribution of arrival times, call for dbn
To plot the arrival time distribution, call for plotdbn
plotdbn                                 % See Figure 2


It would be difficult to obtain estimates of arrival times analytically. The simulation procedure gives a reasonable “feel” for these times and how they vary.

The procedure recurrence is similar to the procedure arrival. If the initial state is not in the target set, it behaves as does the procedure arrival and stops on the first visit to the target set. However, if the initial state is in the target set, the procedures are different. The procedure arrival stops with zero transitions, since it senses that it has “arrived.” We are usually interested in having at least one transition, back to the same state or to another state in the target set. We call these times recurrence times.

### Example 15

rand('seed',0)
recurrence
Enter the number of repetitions  1
The target state set is:
16     22     25
Enter the initial state  22

The recurrence time is 1
The state reached is 16
To view the trajectory of state numbers, call for TR
disp(TR')    0     1
22    16
recurrence
Enter the number of repetitions  1000
The target state set is:
16     22     25
Enter the initial state  25
The result of 1000 repetitions is:
Term state  Rel Freq   Av time
16.0000    0.3680    2.8723
22.0000    0.2120    4.6745
25.0000    0.4200    3.1690
The average recurrence time is 3.379
The standard deviation is 3.0902
The minimum recurrence time is 1
The maximum recurrence time is 20
To view the distribution of recurrence times, call for dbn
To plot the recurrence time distribution, call for plotdbn
% See Figure 3


The procedure kvis stops when a designated number k of states are visited. If k is greater than the number of target states, or if no k is designated, the procedure stops when all have been visited. For k = 1, the behavior is the same as arrival. However, that case is better handled by the procedure arrival, which provides more statistics on the results.

### Example 16

rand('seed',0)
kvis                % Assumes chainset has been run
Enter the number of repetitions  1
The target state set is:
16     22     25
Enter the number of target states to visit  2
Enter the initial state  34
The time for completion is 7
To view the trajectory of states, call for TR
disp(TR')
0     1     2     3     4     5     6     7
34    13    10    28    34    37    16    25
rand('seed',0)
kvis
Enter the number of repetitions  100
The target state set is:
16     22     25
Enter the number of target states to visit    % Default-- visit all three
Enter the initial state  31
The average completion time is 17.57
The standard deviation is 8.783
The minimum completion time is 5
The maximum completion time is 42
To view a detailed count, call for D.
The first column shows the various completion times;
the second column shows the numbers of trials yielding those times


The first goal of this somewhat sketchy introduction to Markov processes is to provide a general setting which gives insight into the essential character and structure of such systems. The important case of homogeneous chains is introduced in such a way that their algebraic structure appears as a logical consequence of the Markov property. The general theory is used to obtain some tools for formulating homogeneous chains in practical cases. Some MATLAB tools for studying their behavior are applied to an artificial example, which demonstrates their general usefulness in studying many practical, applied problems.
