## Lab 5: Markov Chains

#### Preliminaries

This is based on Section 11.6 in the text. You should read both this lab and the material in 11.6.

#### Basics

The purpose is to study the random transitions of a system between various states and to try to predict future states. We start with k states, numbered 1 to k. The main assumption is that if at any time we are in state i, then the probability that we are in state j at the next time step is given by a constant probability p_ji. For example, suppose we are trying to model a disease which has 3 states: 1 = well, 2 = sick, 3 = very sick. Then we might have the following transition probabilities:
| To \ From | Well | Sick | Very Sick |
|-----------|------|------|-----------|
| Well      | 0.6  | 0.2  | 0         |
| Sick      | 0.4  | 0.7  | 0.1       |
| Very Sick | 0    | 0.1  | 0.9       |
Note that each column sums to 1, since from each state there is always a transition to some state (possibly back to itself). Now let x_i(n) be the probability that at time n you are in state i. Then using conditional probabilities we have:

x_i(n+1) = p_i1 x_1(n) + p_i2 x_2(n) + ... + p_ik x_k(n).

In matrix form, if we let x(n) be the k by 1 column vector with entries x_1(n), x_2(n), ..., x_k(n) (called the nth state vector), we have simply

x(n+1) = P x(n),

where P is the matrix of all the transition probabilities. From the example above, suppose that initially you are well, so your initial state vector is x(0) = (1, 0, 0)T. Then using the equation above, we have

x(1) = P x(0) = (0.6, 0.4, 0)T
x(2) = P x(1) = (0.44, 0.52, 0.04)T
x(3) = P x(2) = (0.368, 0.544, 0.088)T

Notice that the entries of each state vector sum to 1.
Thus, if each transition takes 1 week, the results say that after 3 weeks you have probability 0.368 that you are still well, probability 0.544 that you are sick, and probability 0.088 that you are very sick.
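The iteration above can be reproduced in MATLAB; here is a short sketch (the variable names are our own choice):

```matlab
% Transition matrix from the example (each column sums to 1)
P = [0.6 0.2 0; 0.4 0.7 0.1; 0 0.1 0.9];

% Initial state vector: start in state 1 (well)
x = [1; 0; 0];

% Apply three transitions; each displayed x is the state
% vector one week later
for n = 1:3
    x = P*x
end
```

The three vectors displayed should match x(1), x(2) and x(3) above.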

A natural question is: what is the state vector in the long run? That is, if we let n go to infinity, what do we get? One way to find out is to keep multiplying by P until we get a stationary solution, i.e. a solution that doesn't change. The other way is to be smart and realize that if we find such a solution z, then since it doesn't change, we must have z = Pz. Thus we can find this steady-state distribution by solving (I-P)z = 0 for a vector z whose entries sum to 1.

For our example that means we solve:
0.4 z1 - 0.2 z2 = 0
-0.4 z1 + 0.3 z2 - 0.1 z3 = 0
-0.1 z2 + 0.1 z3 = 0
This has the general solution z1 = 0.5t, z2 = t and z3 = t. Since we want the sum, 2.5t, to be 1, we take t = 2/5, so that z = (1/5, 2/5, 2/5)T, and this is the steady-state vector for this problem.

#### MATLAB

To do all this in MATLAB, you first enter the matrix P. If you want to see what happens over time from a starting distribution, enter the distribution as a column vector x. Then for each transition you just type

```
x = P*x
```

If you want to find the steady-state vector, use rref to solve (I-P)z = 0 via

```
rref(eye(k)-P)
```

where k is the number of states. Then from the rref solution, you'll get a general solution. You'll then have to find the particular solution so that the sum of the terms is 1. For example, from the P above we have

```
P = [0.6 0.2 0; 0.4 0.7 0.1; 0 0.1 0.9];
rref(eye(3)-P)
```

which returns

```
1  0  -0.5
0  1  -1.0
0  0   0.0
```
Thus the general solution is z1 = 0.5t, z2 = 1.0t and z3 = t. The sum of these terms is 2.5t, so to get a sum of 1 we choose t = 0.4; thus z = (0.2, 0.4, 0.4)T.
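As a quick check (a sketch, not required for the lab), you can verify z = Pz directly, or compute the steady-state vector in one step using MATLAB's null command on I - P:

```matlab
P = [0.6 0.2 0; 0.4 0.7 0.1; 0 0.1 0.9];

% Verify the steady-state vector: P*z should reproduce z
z = [0.2; 0.4; 0.4];
P*z

% Alternative: a basis vector for the null space of I - P,
% rescaled so its entries sum to 1
w = null(eye(3)-P);
w = w/sum(w)
```

Both displayed results should agree with the steady-state vector (0.2, 0.4, 0.4)T found above.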

#### Exercises

1. From 11.6 #2ab
2. From 11.6 #8
3. From 11.6 #T2ab (from the Technology Exercises)

Mail: ccollins@math.utk.edu