Markov Chains

Module by: Rupinder Sekhon

Summary: This chapter covers principles of Markov Chains. After completing this chapter students should be able to: write transition matrices for Markov Chain problems; find the long term trend for a Regular Markov Chain; Solve and interpret Absorbing Markov Chains.

Chapter Overview

In this chapter, you will learn to:

  1. Write transition matrices for Markov Chain problems.
  2. Find the long term trend for a Regular Markov Chain.
  3. Solve and interpret Absorbing Markov Chains.

Markov Chains

We will now study stochastic processes, experiments in which the outcome of each event depends on the previous outcome. Such a process or experiment is called a Markov Chain or Markov process. The process was first studied by a Russian mathematician named Andrei A. Markov in the early 1900s.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell. The above information can be expressed in a matrix which lists the probabilities of going from one state into another state. This matrix is called a transition matrix.

Figure 1
This matrix depicts the flow of customers from Mama Bell to Papa Bell and vice versa.

The reader should observe that a transition matrix is always a square matrix, because every possible state must appear both as a row and as a column. All entries in a transition matrix are non-negative because they represent probabilities. Furthermore, since all possible outcomes are considered in the Markov process, the entries in each row add up to 1.
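These properties are easy to check numerically. The short sketch below is an addition to this text (it assumes Python with the NumPy library, neither of which is part of the original module); it builds the Mama Bell and Papa Bell transition matrix from the percentages above and verifies that it is square, non-negative, and that each row sums to 1.

    import numpy as np

    # Transition matrix for the Mama Bell / Papa Bell example:
    # rows are the current company, columns are next month's company.
    #              to Mama  to Papa
    T = np.array([[0.60,    0.40],    # from Mama Bell
                  [0.30,    0.70]])   # from Papa Bell

    assert T.shape[0] == T.shape[1]          # a transition matrix is square
    assert (T >= 0).all()                    # all entries are probabilities
    assert np.allclose(T.sum(axis=1), 1.0)   # each row sums to 1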

Example 1

Problem 1

Professor Symons either walks to school, or he rides his bicycle. If he walks to school one day, then the next day he will walk or cycle with equal probability. But if he bicycles one day, then the probability that he will walk the next day is 1/4. Express this information in a transition matrix.

Solution

We obtain the following transition matrix by properly placing the row and column entries. Note that if, for example, Professor Symons bicycles one day, then the probability that he will walk the next day is 1/4, and therefore, the probability that he will bicycle the next day is 3/4.

Figure 2
This matrix shows the probability that Professor Symons will bicycle or walk to work.

Example 2

Problem 1

In Example 1, if it is assumed that the first day is Monday, write a matrix that gives probabilities of a transition from Monday to Wednesday.

Solution

Let W denote that Professor Symons walks and B denote that he rides his bicycle.

We use the following tree diagram to compute the probabilities.

Figure 3
The Tree diagram depicts the probability that Professor Symons will either walk or bicycle to work on Monday, Tuesday, or Wednesday.

The probability that Professor Symons walked on Wednesday given that he walked on Monday can be found from the tree diagram, as listed below.

P(Walked Wednesday | Walked Monday) = P(WWW) + P(WBW) = 1/4 + 1/8 = 3/8.

P(Bicycled Wednesday | Walked Monday) = P(WWB) + P(WBB) = 1/4 + 3/8 = 5/8.

P(Walked Wednesday | Bicycled Monday) = P(BWW) + P(BBW) = 1/8 + 3/16 = 5/16.

P(Bicycled Wednesday | Bicycled Monday) = P(BWB) + P(BBB) = 1/8 + 9/16 = 11/16.

We represent the results in the following matrix.

Figure 4
This matrix depicts the probability that Professor Symons will walk or bicycle to work on Monday or Wednesday.

Alternatively, this result can be obtained by squaring the original transition matrix.

We list both the original transition matrix T and T^2 as follows:

T = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}
(1)
T^2 = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/4 + 1/8 & 1/4 + 3/8 \\ 1/8 + 3/16 & 1/8 + 9/16 \end{bmatrix} = \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix}
(2)

The reader should compare this result with the probabilities obtained from the tree diagram.

Consider the following case, for example,

P(Walked Wednesday | Bicycled Monday) = P(BWW) + P(BBW) = 1/8 + 3/16 = 5/16.
(3)

It makes sense because to find the probability that Professor Symons will walk on Wednesday given that he bicycled on Monday, we sum the probabilities of all paths that begin with B and end in W. There are two such paths, and they are BWW and BBW.
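The same two-step calculation can be reproduced numerically. The sketch below is an addition to this text (it assumes Python and NumPy); it squares the one-day transition matrix and reads off P(Walked Wednesday | Bicycled Monday) as the (Bicycle, Walk) entry of T^2.

    import numpy as np

    # One-day transition matrix; rows and columns are ordered (Walk, Bicycle).
    T = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    T2 = T @ T          # two-step (Monday-to-Wednesday) probabilities
    print(T2)           # [[0.375  0.625 ]
                        #  [0.3125 0.6875]]  i.e. [[3/8 5/8], [5/16 11/16]]
    print(T2[1, 0])     # 0.3125 = 5/16, P(Walked Wednesday | Bicycled Monday)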

Example 3

Problem 1

The transition matrix for Example 1 is given below.

Figure 5
This matrix depicts the probability that Professor Symons will walk or bicycle to work on Monday or Tuesday.

Write the transition matrix from a) Monday to Thursday, b) Monday to Friday.

Solution

In writing a transition matrix from Monday to Thursday, we are moving from one state to another in three steps. That is, we need to compute T^3.

T^3 = \begin{bmatrix} 11/32 & 21/32 \\ 21/64 & 43/64 \end{bmatrix}
(4)

b) To find the transition matrix from Monday to Friday, we are moving from one state to another in 4 steps. Therefore, we compute T^4.

T^4 = \begin{bmatrix} 43/128 & 85/128 \\ 85/256 & 171/256 \end{bmatrix}
(5)

It is important that the student is able to interpret the above matrix correctly. For example, the entry 85/128 states that if Professor Symons walked to school on Monday, then there is an 85/128 probability that he will bicycle to school on Friday.
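For readers who want to check these powers without multiplying by hand, the sketch below (an addition to this text, assuming Python and NumPy) computes T^3 and T^4 directly; the decimal outputs agree with the fractions above.

    import numpy as np

    T = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    T3 = np.linalg.matrix_power(T, 3)   # Monday to Thursday: three steps
    T4 = np.linalg.matrix_power(T, 4)   # Monday to Friday: four steps
    print(T3)   # [[0.34375    0.65625   ]     = [[11/32 21/32],
                #  [0.328125   0.671875  ]]       [21/64 43/64]]
    print(T4)   # [[0.3359375  0.6640625 ]     = [[43/128 85/128],
                #  [0.33203125 0.66796875]]       [85/256 171/256]]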

There are certain Markov chains that tend to stabilize in the long run, and they are the subject of Section 3. It so happens that the transition matrix we have used in all the above examples is just such a Markov chain. The next example deals with the long term trend or steady-state situation for that matrix.

Example 4

Problem 1

Suppose Professor Symons continues to walk and bicycle according to the transition matrix given in Example 1. In the long run, how often will he walk to school, and how often will he bicycle?

Solution

As mentioned earlier, as we take higher and higher powers of our matrix T, it should stabilize.

T^5 = \begin{bmatrix} .333984 & .666015 \\ .333007 & .666992 \end{bmatrix}
(6)
T^{10} = \begin{bmatrix} .33333397 & .66666603 \\ .33333301 & .66666698 \end{bmatrix}
(7)
T^{20} = \begin{bmatrix} 1/3 & 2/3 \\ 1/3 & 2/3 \end{bmatrix}
(8)

Therefore, in the long run, Professor Symons will walk to school 1/3 of the time and bicycle 2/3 of the time.

When this happens, we say that the system is in steady-state or state of equilibrium. In this situation, all row vectors are equal. If the original matrix is an n by n matrix, we get n vectors that are all the same. We call this vector a fixed probability vector or the equilibrium vector E. In the above problem, the fixed probability vector E is \begin{bmatrix} 1/3 & 2/3 \end{bmatrix}. Furthermore, if the equilibrium vector E is multiplied by the original matrix T, the result is the equilibrium vector E. That is,

ET = E
(9)

or,

\begin{bmatrix} 1/3 & 2/3 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/3 & 2/3 \end{bmatrix}
(10)
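Both the stabilizing powers and the relation ET = E can be checked numerically. The following sketch (an addition to this text, assuming Python and NumPy) raises T to the 20th power and verifies that the vector [1/3, 2/3] is unchanged by T.

    import numpy as np

    T = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    print(np.linalg.matrix_power(T, 20))   # both rows are approximately [1/3, 2/3]

    E = np.array([1/3, 2/3])               # the fixed probability (equilibrium) vector
    print(np.allclose(E @ T, E))           # True: E T = E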

Regular Markov Chains

At the end of Section 2, we took the transition matrix T and started taking higher and higher powers of it. The matrix started to stabilize, and finally it reached its steady-state or state of equilibrium. When that happened, all the row vectors became the same, and we called one such row vector a fixed probability vector or an equilibrium vector E. Furthermore, we discovered that ET = E.

Section Overview

In this section, we wish to answer the following four questions.

  1. Does every Markov chain reach a state of equilibrium?
  2. Does the product of an equilibrium vector and its transition matrix always equal the equilibrium vector? That is, does ET = E?
  3. Can the equilibrium vector E be found without raising the matrix to higher powers?
  4. Does the long term market share distribution for a Markov chain depend on the initial market share?

Example 5

Problem 1

Does every Markov chain reach the state of equilibrium?

Solution

Answer: A Markov chain reaches a state of equilibrium if it is a regular Markov chain. A Markov chain is said to be a regular Markov chain if some power of its transition matrix has only positive entries.

Example 6

Problem 1

Determine whether the following Markov chains are regular.

  1. A = \begin{bmatrix} 1 & 0 \\ .3 & .7 \end{bmatrix}

  2. B = \begin{bmatrix} 0 & 1 \\ .4 & .6 \end{bmatrix}

Solution
  1. The matrix A is not a regular Markov chain because every power of it has an entry 0 in the first row, second column position. In fact, we will show that all 2 by 2 matrices that have a zero in the first row, second column position are not regular. Consider the following matrix M.

    M = \begin{bmatrix} a & 0 \\ b & c \end{bmatrix}
    (11)
    M^2 = \begin{bmatrix} a & 0 \\ b & c \end{bmatrix} \begin{bmatrix} a & 0 \\ b & c \end{bmatrix} = \begin{bmatrix} a \cdot a + 0 \cdot b & a \cdot 0 + 0 \cdot c \\ b \cdot a + c \cdot b & b \cdot 0 + c \cdot c \end{bmatrix}
    (12)

    Observe that the first row, second column entry, a \cdot 0 + 0 \cdot c, will always be zero, regardless of what power we raise the matrix to.

  2. The transition matrix B is a regular Markov chain because

    B^2 = \begin{bmatrix} .40 & .60 \\ .24 & .76 \end{bmatrix}
    (13)

    has only positive entries.
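The regularity test can also be sketched in code. The helper below, is_regular, is an addition to this text (assuming Python and NumPy): it raises the matrix to successive powers and reports whether any power up to a chosen cutoff has only positive entries; the cutoff is just a practical convenience for small examples like these, not part of the definition.

    import numpy as np

    def is_regular(T, max_power=20):
        """Return True if some power of T up to max_power has only positive entries."""
        P = np.eye(T.shape[0])
        for _ in range(max_power):
            P = P @ T
            if (P > 0).all():
                return True
        return False

    A = np.array([[1.0, 0.0], [0.3, 0.7]])
    B = np.array([[0.0, 1.0], [0.4, 0.6]])
    print(is_regular(A))   # False: the first row, second column entry stays 0 in every power
    print(is_regular(B))   # True: B squared already has only positive entries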

Example 7

Problem 1

Does the product of an equilibrium vector and its transition matrix always equal the equilibrium vector? That is, does ET = E?

Solution

At this point, the reader may have already guessed that the answer is yes if the transition matrix is a regular Markov chain. We illustrate with the telephone company example introduced at the beginning of this chapter.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell. The transition matrix is given below.

Figure 6
This matrix depicts the flow of customers from Mama Bell to Papa Bell and vice versa.

If the initial market share for Mama Bell is 20% and for Papa Bell 80%, we'd like to know the long term market share for each company.

Let matrix T denote the transition matrix for this Markov chain, and M denote the matrix that represents the initial market share. Then T and M are as follows:

T = \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} and M = \begin{bmatrix} .20 & .80 \end{bmatrix}

Since each month the townspeople switch according to the transition matrix T, after one month the distribution for each company is as follows:

\begin{bmatrix} .20 & .80 \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} .36 & .64 \end{bmatrix}
(14)

After two months, the market share for each company is

\begin{bmatrix} .36 & .64 \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} .408 & .592 \end{bmatrix}
(15)

After three months the distribution is

\begin{bmatrix} .408 & .592 \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} .4224 & .5776 \end{bmatrix}
(16)

After four months the market share is

\begin{bmatrix} .4224 & .5776 \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} .42672 & .57328 \end{bmatrix}
(17)

After 30 months, the market share has stabilized to \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}.

This means that

\begin{bmatrix} 3/7 & 4/7 \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}
(18)

Once the market share reaches an equilibrium state, it stays the same, that is, ET = E.
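The month-by-month drift toward [3/7, 4/7] is easy to reproduce. The loop below is an addition to this text (assuming Python and NumPy); it applies the transition matrix 30 times to the initial market share, matching the sequence .36, .408, .4224, ... computed above.

    import numpy as np

    T = np.array([[0.60, 0.40],
                  [0.30, 0.70]])
    share = np.array([0.20, 0.80])      # initial market share: Mama Bell, Papa Bell

    for month in range(30):
        share = share @ T               # one month of customer switching
    print(share)                        # approximately [0.4286 0.5714], i.e. [3/7, 4/7]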

This helps us answer the next question.

Example 8

Problem 1

Can the equilibrium vector E be found without raising the transition matrix to large powers?

Solution

The answer to the second question provides us with a way to find the equilibrium vector E.

The answer lies in the fact that ET = E.

Since we have the matrix T, we can determine E from the statement ET = E.

Suppose E = \begin{bmatrix} e & 1-e \end{bmatrix}, then ET = E gives us

\begin{bmatrix} e & 1-e \end{bmatrix} \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} = \begin{bmatrix} e & 1-e \end{bmatrix}
(19)
\begin{bmatrix} (.60)e + .30(1-e) & (.40)e + .70(1-e) \end{bmatrix} = \begin{bmatrix} e & 1-e \end{bmatrix}
(20)
\begin{bmatrix} .30e + .30 & -.30e + .70 \end{bmatrix} = \begin{bmatrix} e & 1-e \end{bmatrix}
(21)
.30e + .30 = e
(22)
e = 3/7
(23)

Therefore, E = \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}.
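The same equilibrium vector can be found by solving the linear system ET = E together with the requirement that the entries of E add to 1, instead of working through the algebra by hand. The sketch below is an addition to this text (assuming Python and NumPy); it sets up that system for the Mama Bell and Papa Bell matrix and recovers [3/7, 4/7].

    import numpy as np

    T = np.array([[0.60, 0.40],
                  [0.30, 0.70]])
    n = T.shape[0]

    # E T = E is equivalent to (T - I)^T E^T = 0; append the condition sum(E) = 1.
    A = np.vstack([(T - np.eye(n)).T, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    E, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares handles the extra equation
    print(E)                                    # approximately [0.4286 0.5714] = [3/7, 4/7]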

Example 9

Problem 1

Does the long term market share for a Markov chain depend on the initial market share?

Solution

We will show that the final market share distribution for a Markov chain does not depend upon the initial market share. In fact, one does not even need to know the initial market share distribution to find the long term distribution. Furthermore, the final market share distribution can be found by simply raising the transition matrix to higher powers.

Consider the initial market share \begin{bmatrix} .20 & .80 \end{bmatrix} and the transition matrix T = \begin{bmatrix} .60 & .40 \\ .30 & .70 \end{bmatrix} for Mama Bell and Papa Bell in the above example. Recall we found T^n, for very large n, to be \begin{bmatrix} 3/7 & 4/7 \\ 3/7 & 4/7 \end{bmatrix}.

Clearly, \begin{bmatrix} .20 & .80 \end{bmatrix} \begin{bmatrix} 3/7 & 4/7 \\ 3/7 & 4/7 \end{bmatrix} = \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}.

No matter what the initial market share, the product is \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}.

If the initial share is \begin{bmatrix} .10 & .90 \end{bmatrix}, then

\begin{bmatrix} .10 & .90 \end{bmatrix} \begin{bmatrix} 3/7 & 4/7 \\ 3/7 & 4/7 \end{bmatrix} = \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}
(24)

For any distribution A = \begin{bmatrix} a & 1-a \end{bmatrix}, for example,

\begin{bmatrix} a & 1-a \end{bmatrix} \begin{bmatrix} 3/7 & 4/7 \\ 3/7 & 4/7 \end{bmatrix} = \begin{bmatrix} 3/7(a) + 3/7(1-a) & 4/7(a) + 4/7(1-a) \end{bmatrix} = \begin{bmatrix} 3/7 & 4/7 \end{bmatrix}
(25)

It makes sense, because the entry 3/7(a) + 3/7(1-a) = 3/7(a + 1 - a) = 3/7; since a and 1 - a add up to 1, this entry always equals 3/7, regardless of the initial distribution, and the same argument gives 4/7 for the second entry.

Example 10

Problem 1

Three companies, A, B, and C, compete against each other. The transition matrix T for people switching each month among them is given below.

Figure 7
This matrix depicts the flow of people among companies A, B, and C.

If the initial market share for the companies A, B, and C is \begin{bmatrix} .25 & .35 & .40 \end{bmatrix}, what is the long term distribution?

Solution

Since the long term market share does not depend on the initial market share, we can simply raise the transition matrix to a large power and get the distribution.

T^{20} = \begin{bmatrix} 13/55 & 3/11 & 27/55 \\ 13/55 & 3/11 & 27/55 \\ 13/55 & 3/11 & 27/55 \end{bmatrix}
(26)

We summarize as follows:

Regular Markov Chains

A Markov chain is said to be a regular Markov chain if some power of its transition matrix has only positive entries.
Let T be a transition matrix for a regular Markov chain.

  1. As we take higher and higher powers of T, the matrix T^n approaches a state of equilibrium as n becomes large.
  2. If M is any distribution vector, and E an equilibrium vector, then MT^n = E.
  3. Each row of the equilibrium matrix T^n is the unique equilibrium vector E such that ET = E.
  4. The equilibrium distribution vector E can be found by letting ET = E.

Absorbing Markov Chains

In this section, we will study a type of Markov chain in which, once a certain state is reached, it is impossible to leave that state. Such states are called absorbing states, and a Markov chain that has at least one such state is called an absorbing Markov chain. Suppose you have the following transition matrix.

Figure 8
This matrix depicts the probability of moving from one state to the other.

The state S_2 is an absorbing state, because the probability of moving from state S_2 to state S_2 is 1, which is another way of saying that if you are in state S_2, you will remain in state S_2.

In fact, this is the way to identify an absorbing state. If the probability in row i and column i, p_ii, is 1, then state S_i is an absorbing state.

We begin with an application of absorbing Markov chains to the gambler's ruin problem.

Example 11: Gambler's Ruin Problem

Problem 1

A gambler has $3,000, and she decides to gamble $1,000 at a time at a Black Jack table in a casino in Las Vegas. She has told herself that she will continue playing until she goes broke or has $5,000. Her probability of winning at Black Jack is .40. Write the transition matrix, identify the absorbing states, find the solution matrix, and determine the probability that the gambler will be financially ruined at a stage when she has $2,000.

Solution

The transition matrix is written below. Clearly the state 0 and the state 5K are the absorbing states. This makes sense because as soon as the gambler reaches 0, she is financially ruined and the game is over. Similarly, if the gambler reaches $5,000, she has promised herself to quit and, again, the game is over. The reader should note that p_00 = 1, and p_55 = 1.

Further observe that since the gambler bets only $1,000 at a time, she can raise or lower her money only by $1,000 at a time. In other words, if she has $2,000 now, after the next bet she can have $3,000 with a probability of .40 and $1,000 with a probability of .60.

Figure 9
The matrix depicts the probability of winning a thousand dollars gambling per every thousand dollars won.
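The transition matrix described above can be written down in code, and the absorbing states can be picked out with the rule p_ii = 1. The sketch below is an addition to this text (assuming Python and NumPy); the entries follow directly from the betting rules: from each amount between $1K and $4K she moves down $1K with probability .60 and up $1K with probability .40.

    import numpy as np

    # States in order: 0, 1K, 2K, 3K, 4K, 5K (thousands of dollars).
    states = ["0", "1K", "2K", "3K", "4K", "5K"]
    T = np.zeros((6, 6))
    T[0, 0] = 1.0                      # broke: the game is over
    T[5, 5] = 1.0                      # reached $5K: the game is over
    for k in range(1, 5):
        T[k, k - 1] = 0.60             # lose $1K
        T[k, k + 1] = 0.40             # win $1K

    absorbing = [s for i, s in enumerate(states) if T[i, i] == 1.0]
    print(absorbing)                   # ['0', '5K']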

To determine the long term trend, we raise the matrix to higher powers until all the non-absorbing states are absorbed. This is called the solution matrix.

Figure 10
The matrix depicts the probability of winning a thousand dollars gambling per every thousand dollars won.

The solution matrix is often written in the following form, where the non-absorbing states are written as rows on the side, and the absorbing states as columns on the top.

Figure 11
The matrix depicts the probability of winning either zero dollars or five thousand dollars.

The table lists the probabilities of getting absorbed in state 0 or state 5K starting from any of the four non-absorbing states. For example, if at any point the gambler has $3,000, then her probability of financial ruin is 135/211.
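Raising this matrix to a high power reproduces the solution matrix. The sketch below is an addition to this text (assuming Python and NumPy, and rebuilding the same transition matrix); the four non-absorbing rows, restricted to the columns for $0 and $5K, round to the values in the table above, including the .64 probability of ruin from $3,000.

    import numpy as np

    T = np.zeros((6, 6))                         # states 0, 1K, ..., 5K
    T[0, 0] = T[5, 5] = 1.0                      # absorbing states
    for k in range(1, 5):
        T[k, k - 1], T[k, k + 1] = 0.60, 0.40    # lose or win $1K

    P = np.linalg.matrix_power(T, 200)           # a high power: transient states get absorbed
    print(np.round(P[1:5][:, [0, 5]], 2))        # rows 1K..4K, columns $0 and $5K
    # approximately:
    # [[0.92 0.08]
    #  [0.81 0.19]
    #  [0.64 0.36]
    #  [0.38 0.62]]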

Example 12

Problem 1

Solve the Gambler's Ruin Problem of Example 11 without raising the matrix to higher powers, and determine the number of bets the gambler makes before the game is over.

Solution

In solving absorbing Markov chains, it is often convenient to rearrange the matrix so that the rows and columns corresponding to the absorbing states are listed first. This is called the canonical form. The transition matrix of Example 11 in the canonical form is listed below.

Figure 12
The matrix depicts the probability of winning a thousand dollars gambling per every thousand dollars won.

The canonical form divides the transition matrix into four sub-matrices as listed below.

Figure 13
The matrix shows the absorbing and non-absorbing states of the previous matrix.

The matrix F = (I_n - B)^{-1} is called the fundamental matrix for the absorbing Markov chain, where I_n is an identity matrix of the same size as B. The (i, j)th entry of this matrix tells us the average number of times the process is in the non-absorbing state j before absorption if it started in the non-absorbing state i. The matrix F for our problem is listed below.

Figure 14
This fundamental matrix shows the average number of games played before absorption.

The fundamental matrix F helps us determine the average number of games played before absorption.

According to the matrix, the entry 1.78 in the row 3, column 2 position says that if the gambler starts with $3K, she will have $2K on the average 1.78 times before the game is over. The entry 2.25 in row 3, column 3 says that if the gambler now has $3K, she will have $3K on the average 2.25 times before the game is over.

We now address the question: if the gambler begins with $3K, how many bets can she expect to make before she is absorbed?

If we add the number of games the gambler plays in each non-absorbing state, we get the average number of games before absorption from that state. Therefore, if the gambler starts with $3K, the average number of Black Jack games she will play before absorption is

1.07 + 1.78 + 2.25 + .90 = 6.00
(27)

That is, we expect the gambler to make about 6 bets, on average, before she either has $5,000 or nothing.
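The fundamental matrix and the expected number of bets can be computed directly from the sub-matrix B of transitions among the non-absorbing states. The sketch below is an addition to this text (assuming Python and NumPy); its rounded output matches the fundamental matrix shown above, and the row sums give the expected number of games from each starting amount, including the 6.0 just computed for $3K.

    import numpy as np

    # B: transitions among the non-absorbing states, ordered 1K, 2K, 3K, 4K.
    B = np.array([[0.0, 0.4, 0.0, 0.0],
                  [0.6, 0.0, 0.4, 0.0],
                  [0.0, 0.6, 0.0, 0.4],
                  [0.0, 0.0, 0.6, 0.0]])

    F = np.linalg.inv(np.eye(4) - B)       # fundamental matrix F = (I_n - B)^(-1)
    print(np.round(F, 2))
    # approximately:
    # [[1.54 0.9  0.47 0.19]
    #  [1.35 2.25 1.18 0.47]
    #  [1.07 1.78 2.25 0.9 ]
    #  [0.64 1.07 1.35 1.54]]

    print(np.round(F.sum(axis=1), 2))      # expected bets before absorption from 1K..4K
    # approximately [3.1, 5.26, 6.0, 4.6]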

Lastly, we find the solution matrix without raising the transition matrix to higher powers.

The matrix FA gives us the solution matrix.

FA = \begin{bmatrix} 1.54 & .90 & .47 & .19 \\ 1.35 & 2.25 & 1.18 & .47 \\ 1.07 & 1.78 & 2.25 & .90 \\ .64 & 1.07 & 1.35 & 1.54 \end{bmatrix} \begin{bmatrix} .6 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & .4 \end{bmatrix} = \begin{bmatrix} .92 & .08 \\ .81 & .19 \\ .64 & .36 \\ .38 & .62 \end{bmatrix}
(28)

This is the same as the following matrix, which we obtained by raising the transition matrix to higher powers.

Figure 15
The solution matrix obtained by raising the transition matrix to higher powers.
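The product FA can be checked the same way. The sketch below is an addition to this text (assuming Python and NumPy); it rebuilds B and A from the canonical form, computes FA, and rounds to the solution matrix shown above.

    import numpy as np

    # Canonical-form blocks for the gambler's ruin chain (non-absorbing order: 1K, 2K, 3K, 4K).
    B = np.array([[0.0, 0.4, 0.0, 0.0],
                  [0.6, 0.0, 0.4, 0.0],
                  [0.0, 0.6, 0.0, 0.4],
                  [0.0, 0.0, 0.6, 0.0]])
    A = np.array([[0.6, 0.0],              # from 1K she can drop to $0
                  [0.0, 0.0],
                  [0.0, 0.0],
                  [0.0, 0.4]])             # from 4K she can reach $5K

    F = np.linalg.inv(np.eye(4) - B)
    print(np.round(F @ A, 2))              # the solution matrix
    # approximately:
    # [[0.92 0.08]
    #  [0.81 0.19]
    #  [0.64 0.36]
    #  [0.38 0.62]]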

We summarize as follows:

Absorbing Markov Chains

  1. A Markov chain is an absorbing Markov chain if it has at least one absorbing state. A state i is an absorbing state if once the system reaches state i, it stays in that state; that is, p_ii = 1.
  2. If a transition matrix T for an absorbing Markov chain is raised to higher powers, it approaches a limiting matrix called the solution matrix and stays there. The (i, j)th entry of this matrix gives the probability of absorption in state j while starting in state i.
  3. Alternatively, the solution matrix can be found in the following manner.
    1. Express the transition matrix in the canonical form as below.
      T = \begin{bmatrix} I_n & 0 \\ A & B \end{bmatrix}
      (29)

      where I_n is an identity matrix, and 0 is a matrix of all zeros.
    2. The fundamental matrix F = (I_n - B)^{-1}. The fundamental matrix helps us find the expected number of steps before absorption.
    3. FA is the solution matrix, whose (i, j)th entry gives the probability of absorption in state j while starting in state i.
  4. The sum of the entries of a row of the fundamental matrix gives us the expected number of steps before absorption for the non-absorbing state associated with that row.
