
Page 1

11. Markov Chains (MCs) 2

Courtesy of J. Bard, L. Page, and J. Heyl

Page 2

11.2.1 n-step transition probabilities (review)


Page 3: 11. Markov Chains (MCs) 2 Courtesy of J. Bard, L. Page, and J. Heyl

Transition prob. matrix

The n-step transition prob. from state i to j is

p_ij(n) = P[X_{k+n} = j | X_k = i],  n ≥ 0,  i, j ∈ E.

The n-step transition matrix (for all states) is then

P(n) = ( p_ij(n) ).

For instance, the two-step transition matrix is

P(2) = P(1) P(1) = P^2.
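As a quick sanity check, here is a minimal numpy sketch of these definitions. The matrix values are illustrative (they are the speech/silence example used later in this deck):

```python
import numpy as np

# Illustrative one-step transition matrix P; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# n-step transition matrix P(n) = P^n.
def n_step(P, n):
    return np.linalg.matrix_power(P, n)

print(n_step(P, 2))        # two-step transition matrix P(2) = P @ P
print(n_step(P, 2)[0, 1])  # p_01(2): prob. of going from 0 to 1 in two steps
```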

Page 4

Chapman-Kolmogorov equations

The prob. of going from state i at t = 0, passing through state k at t = m, and ending at state j at t = m + n is p_ik(m) p_kj(n). Summing over all intermediate states k:

p_ij(m+n) = Σ_{k∈E} p_ik(m) p_kj(n)   for all n, m and all i, j ∈ E.

In matrix notation,

P(n+m) = P(n) P(m) = P^{n+m}, and in particular P(n) = P^n.
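A numerical spot-check of the Chapman-Kolmogorov identity, reusing the same illustrative two-state P (any stochastic matrix would do):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
m, n = 3, 5

lhs = np.linalg.matrix_power(P, m + n)   # P(m+n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)  # P(m) P(n)
assert np.allclose(lhs, rhs)             # Chapman-Kolmogorov holds
```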

Page 5

11.2.2 state probabilities


Page 6

State probability (pmf of an RV!)

Let p(n) = (p_j(n)) be the row vector of state probabilities at time n (i.e., the state prob. vector).

Thus, p(n) is given by

p_j(n) = Σ_i P[X_n = j | X_{n-1} = i] P[X_{n-1} = i] = Σ_i p_ij p_i(n-1),

or, in matrix notation,

p(n) = p(n-1) P.

From the initial state,

p_j(n) = Σ_i P[X_n = j | X_0 = i] P[X_0 = i] = Σ_i p_ij(n) p_i(0),

i.e.,

p(n) = p(0) P(n) = p(0) P^n.

Page 7

How an MC changes (Ex 11.10, 11.11)

A two-state system: Silence (state 0) and Speech (state 1), with p_00 = 0.9, p_01 = 0.1, p_10 = 0.2, p_11 = 0.8:

P = | 0.9  0.1 |
    | 0.2  0.8 |

Suppose p(0) = (0, 1). Then
p(1) = p(0)P = (0,1)P = (0.2, 0.8)
p(2) = (0.2, 0.8)P = (0,1)P^2 = (0.34, 0.66)
p(4) = (0,1)P^4 = (0.507, 0.493)
p(8) = (0,1)P^8 = (0.629, 0.371)
p(16) = (0,1)P^16 = (0.665, 0.335)
p(32) = (0,1)P^32 = (0.667, 0.333)
p(64) = (0,1)P^64 = (0.667, 0.333)

Suppose p(0) = (1, 0). Then
p(1) = p(0)P = (0.9, 0.1)
p(2) = (1,0)P^2 = (0.83, 0.17)
p(4) = (1,0)P^4 = (0.747, 0.253)
p(8) = (1,0)P^8 = (0.686, 0.314)
p(16) = (1,0)P^16 = (0.668, 0.332)
p(32) = (1,0)P^32 = (0.667, 0.333)
p(64) = (1,0)P^64 = (0.667, 0.333)
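Both columns of the table above can be reproduced with a few lines of numpy (a sketch, using the same P):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

for p0 in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
    for n in (1, 2, 4, 8, 16, 32, 64):
        pn = p0 @ np.linalg.matrix_power(P, n)   # p(n) = p(0) P^n
        print(f"p(0) = {p0}, p({n}) = {np.round(pn, 3)}")
```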

Page 8

Independence of initial condition


Page 9

The lesson to take away

No matter what assumptions you make about the initial probability distribution, after a large number of steps the state probability distribution is approximately (2/3, 1/3).


See p.666, 667

Page 10

11.2.3 steady state probabilities


Page 11

State probabilities (pmf) converge

As n → ∞, the transition prob. matrix P^n approaches a matrix whose rows are all equal to the same pmf π:

p_ij(n) → π_j   for all i.

In matrix notation,

P^n → 1π,

where 1 is a column vector of all 1's and π = (π_1, π_2, …).

The convergence of P^n implies the convergence of the state pmfs:

p_j(n) = Σ_i p_ij(n) p_i(0) → Σ_i π_j p_i(0) = π_j.

Page 12

Steady state probability

The system reaches "equilibrium" or "steady state": as n → ∞, p_j(n) → π_j and p_i(n-1) → π_i, so

π_j = Σ_i π_i p_ij.

In matrix notation,

π = πP,

where π is the stationary state pmf of the Markov chain. To solve this, combine π = πP with the normalization condition

Σ_i π_i = 1.

Page 13

Speech activity system

Solving for the steady state probabilities:

π = πP

(π_1, π_2) = (π_1, π_2) | 0.9  0.1 |
                        | 0.2  0.8 |

π_1 = 0.9π_1 + 0.2π_2
π_2 = 0.1π_1 + 0.8π_2
π_1 + π_2 = 1

π_1 = 2/3 ≈ 0.667,  π_2 = 1/3 ≈ 0.333
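The same system can be solved numerically. A common trick, sketched below, is to replace one (linearly dependent) balance equation with the normalization condition Σ_i π_i = 1:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
k = P.shape[0]

# pi = pi P  <=>  (P^T - I) pi^T = 0.  These balance equations are
# linearly dependent, so overwrite the last one with sum(pi) = 1.
A = P.T - np.eye(k)
A[-1, :] = 1.0
b = np.zeros(k)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)   # [0.667 0.333]
```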

Page 14


Question 11-1: Alice, Bob, and Carol are playing Frisbee. Alice always throws to Carol. Bob always throws to Alice. Carol throws to Bob 2/3 of the time and to Alice 1/3 of the time.

In the long run, what percentage of the time does each player have the Frisbee?

Page 15

11.3.1 classes of states
11.3.2 recurrence properties


Page 16

Why classification?

The methods for steady state probabilities that we learned in Section 11.2.3 work only for regular Markov chains (MCs).

A regular MC has a P^n whose entries are all non-zero for some integer n (and then P^n → 1π as n → ∞).

There are non-regular MCs; how can we check?


Page 17

Classification of States

Accessible: it is possible to go from state i to state j (a path exists from i to j).

Let a_i and d_i be the events of a customer arriving at and departing from a system in state i.

[State diagram: states 0, 1, 2, 3, 4, … with arrival transitions a_0, a_1, a_2, a_3, … moving right and departure transitions d_1, d_2, d_3, d_4, … moving left.]

Two states communicate if both are accessible from each other. A system is irreducible if all states communicate.

State i is recurrent if the system, after leaving it, will return to it at some time in the future.

If a state is not recurrent, it is transient.

Page 18

Classification of States (cont’)

A state is periodic if it can return to itself only after a fixed number of transitions greater than 1 (or a multiple of that fixed number).

A state that is not periodic is aperiodic.

[State diagram: a 3-cycle 0 → 1 → 2 → 0, each transition with probability (1).]

Each state is visited every 3 iterations.

[State diagram: the same birth-death chain as before, with states 0, 1, 2, 3, 4, …, arrivals a_i, and departures d_i.]

How about this?

Page 19

Classification of States (cont’)

[State diagram: a chain (transition probabilities (1) and (0.5)) in which each state is visited in multiples of 3 iterations.]

The period of a state i is the gcd k of the lengths of all paths leading back to i; if k > 1 (i.e., p_ii(n) = 0 unless n = k, 2k, 3k, …), state i is periodic.

If the gcd of all return path lengths is 1, then state i is aperiodic. Periodicity is a class property; all the states in a class have the same period.
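A sketch of how one might compute the period numerically: take the gcd of all return lengths n with p_ii(n) > 0, up to some cutoff max_len (the cutoff is an assumption of this sketch; for a finite chain the gcd stabilizes at the true period once the cutoff is large enough):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_len=100):
    """gcd of all return lengths n <= max_len with p_ii(n) > 0
    (returns 0 if no return to i is seen within max_len steps)."""
    A = P > 0          # which one-step transitions are possible
    R = A.copy()       # R holds the reachability pattern of A^n
    returns = []
    for n in range(1, max_len + 1):
        if R[i, i]:
            returns.append(n)
        R = (R.astype(int) @ A.astype(int)) > 0
    return reduce(gcd, returns) if returns else 0

# The 3-cycle from the slide: 0 -> 1 -> 2 -> 0, each with prob. 1.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(period(P, 0))   # 3
```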

Page 20

Classification of States (cont’)

An absorbing/trapping state is one that locks the system in once it enters.

[State diagram: states 0, 1, 2, 3, 4; win transitions a_1, a_2, a_3 move right, loss transitions d_1, d_2, d_3 move left, and states 0 and 4 have no outgoing arcs.]

This diagram might represent the wealth of a gambler who begins with $2, makes a series of wagers for $1 each, and stops when his money becomes $4 or $0.

Let a_i be the event of winning in state i and d_i the event of losing in state i.

There are two absorbing states: 0 and 4.

An absorbing state is a state j with p_jj = 1.
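A minimal simulation sketch of this gambler. The slide does not state the odds of each wager, so a fair game (p_win = 0.5) is assumed here; under that assumption the probability of reaching $4 from $2 is 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamble(start=2, target=4, p_win=0.5):
    """One gambler's-ruin run; returns the absorbing state (0 or target)."""
    x = start
    while 0 < x < target:
        x += 1 if rng.random() < p_win else -1
    return x

runs = [gamble() for _ in range(100_000)]
print(np.mean([r == 4 for r in runs]))   # ~0.5: fair game, starting halfway
```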

Page 21

Classification of States (cont’)

Class: a set of states that communicate with each other.

A chain is irreducible if there is only one class.

A class is either all recurrent or all transient, and may be all periodic or all aperiodic.

States in a transient class communicate only with each other. Arcs may leave the class, passing from a node in the class to one outside; once the process leaves a transient class along such an arc, it never returns to that class.

[State diagram: a 7-state example (states 0–6) showing transient and recurrent classes.]

Page 22

Illustration of Concepts

Example 1

Transition structure (X denotes a prob. with 0 < X ≤ 1):

State |  0   1   2   3
  0   |  0   X   0   X
  1   |  X   0   0   0
  2   |  X   0   0   0
  3   |  0   0   X   X

Every pair of states communicates, forming a single recurrent class; moreover, the states are not periodic (state 3 can return to itself in one step).

Thus the stochastic process is aperiodic and irreducible.

Page 23

Illustration of Concepts

Example 2

Transition structure (X denotes a prob. with 0 < X ≤ 1; in particular, p_22 = 1):

State |  0   1   2   3   4
  0   |  X   X   0   0   0
  1   |  X   X   0   0   0
  2   |  0   0   X   0   0
  3   |  0   0   X   X   0
  4   |  0   0   0   X   X

States 0 and 1 communicate and form a recurrent class.

States 3 and 4 form separate transient classes.

State 2 is an absorbing state and forms a recurrent class.

Page 24

Illustration of Concepts

Example 3

Transition structure (X denotes a prob. with 0 < X ≤ 1):

State |  0   1   2   3
  0   |  0   0   0   X
  1   |  X   0   0   0
  2   |  X   0   0   0
  3   |  0   X   X   0

Every state communicates with every other state, so we have an irreducible stochastic process.

Periodic? Yes: every return to a state takes a multiple of 3 transitions. So this MC is irreducible and periodic.

Page 25

Classification of States

Example 4 (* sometimes states start with 1, not 0)

P (states 1–5):

State |  1    2    3    4    5
  1   | 0.4  0.6   0    0    0
  2   | 0.5  0.5   0    0    0
  3   |  0    0   0.3  0.7   0
  4   |  0    0   0.5  0.4  0.1
  5   |  0    0    0   0.8  0.2

Page 26

Example 4 Review

A state j is accessible from state i if p_ij(n) > 0 for some n > 0.

In Example 4, state 2 is accessible from state 1; states 3 and 4 are accessible from state 5; but state 3 is not accessible from state 2.

States i and j communicate if i is accessible from j and j is accessible from i.

States 1 & 2 communicate; states 3, 4 & 5 communicate;states 1,2 and states 3,4,5 do not communicate.

States 1 & 2 form one communicating class.States 3, 4 & 5 form another communicating class.


Page 27

If all states in an MC communicate, (i.e., all states are in the same class) then the chain is irreducible.

Example 4 is not an irreducible MC. The Gambler's example has 3 classes: {0}, {1, 2, 3}, and {4}. What about Example 3?

Recurrence properties (solve Ex 11.19)

Let f_i = the prob. that the process will (eventually) return to state i, given that it starts in state i.

If f_i = 1, then state i is called recurrent.

If f_i < 1, then state i is called transient.

Equivalently, state i is recurrent if and only if

Σ_{n=1}^∞ p_ii(n) = ∞,

and transient if and only if

Σ_{n=1}^∞ p_ii(n) < ∞.
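For a finite chain this criterion can be checked numerically: p_ii(n) → π_i > 0 for a recurrent state, so the partial sums of p_ii(n) grow without bound. A sketch using the two-state speech/silence chain:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Partial sums of p_00(n). Since p_00(n) -> pi_0 = 2/3 > 0, the sum
# grows roughly like (2/3) n: state 0 is recurrent.
total, Pn = 0.0, np.eye(2)
for n in range(1, 1001):
    Pn = Pn @ P
    total += Pn[0, 0]
print(total)   # ~667, and still growing as the cutoff increases
```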

Page 28

11.3.3 limiting probabilities


Page 29

Transient vs. recurrent

If a Markov chain has both transient and recurrent classes, the system will eventually enter and remain in one of the recurrent classes (Fig. 11.6(a)). Thus, from here on we can restrict attention to irreducible Markov chains.

Suppose a system starts with a recurrent state i at time 0.

Let T_i(k) be the time that elapses between the (k-1)-th and the k-th return.


Page 30

Fig 11.8: recurrence times T_i(k)

The process will return to state i at times T_i(1), T_i(1) + T_i(2), T_i(1) + T_i(2) + T_i(3), …


Page 31

Steady state prob. vs. recurrence time

The proportion of time spent in state i after k returns to i is

k / (T_i(1) + T_i(2) + … + T_i(k)).

The T_i's form an iid sequence, since each recurrence time is independent of the previous ones.

As the state is recurrent, the process returns to state i an infinite number of times. The law of large numbers then implies that (T_i(1) + … + T_i(k)) / k → E[T_i], so the long-run proportion of time in state i is

π_i = 1 / E[T_i].
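A simulation sketch of this result: collect the recurrence times of state 0 in the speech/silence chain and compare 1/E[T_0] with π_0 = 2/3 (the chain and the state choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Simulate the chain and record the gaps between successive visits to state 0.
state, gap, times = 0, 0, []
for _ in range(200_000):
    state = rng.choice(2, p=P[state])
    gap += 1
    if state == 0:
        times.append(gap)
        gap = 0

print(np.mean(times))      # ~1.5 = E[T_0]
print(1 / np.mean(times))  # ~0.667 = pi_0
```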

Page 32

Recurrence times

If E[T_i] < ∞, then state i is positive recurrent, which implies that π_i > 0.

If E[T_i] = ∞, then state i is null recurrent, which implies that π_i = 0.

But how can we obtain E[T_i]? Let f_i(n) be the prob. that the first return to state i occurs n steps after leaving it:

f_i(n) = P[X_n = i, X_{n-1} ≠ i, …, X_1 ≠ i | X_0 = i],   f_i = Σ_n f_i(n).

(If f_i = 1, state i is recurrent; if f_i < 1, transient.) Then

E[T_i] = Σ_{n=1}^∞ n f_i(n).

Solve Ex 11.26.

Page 33

Question 11-2: Consider an MC with an infinite number of states, where p_{0,1} = 1, and for all states i ≥ 1, we have

Check whether state 0 is recurrent or transient. If recurrent, check whether it is positive or null recurrent.

Page 34

Existence of Steady-State Probabilities

A state is ergodic if it is aperiodic and positive recurrent.

Once an MC enters an ergodic state, the process will remain in that state's class forever.

Also, the process will visit all states in the class sufficiently frequently, so the long-term proportion of time in each state is non-zero.

We mostly deal with MCs, each of which has a single class consisting of ergodic states only.

Thus we can apply π = πP.

Page 35

Regular vs. ergodic MC

Regular MCs have a P^n whose entries are all non-zero for some integer n.

Ergodic MCs have aperiodic, positive recurrent states.

Practically speaking, the two notions are almost the same.


Page 36

Not regular, but ergodic MC

P = | 0  1 |
    | 1  0 |

This chain alternates deterministically between its two states: P^n never has all entries non-zero (it alternates between P and I), so the chain is not regular; yet it spends half of its time in each state, with stationary pmf π = (1/2, 1/2).
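A short numerical check of both claims (a sketch): no power of this P has all entries non-zero, yet the time-average of P^n still converges to rows equal to (1/2, 1/2):

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.matrix_power(P, 5))   # = P  (odd powers)
print(np.linalg.matrix_power(P, 6))   # = I  (even powers): never all non-zero

# The time-average of P^n still converges, with rows (1/2, 1/2):
avg = sum(np.linalg.matrix_power(P, n) for n in range(1, 1001)) / 1000
print(avg)   # ~[[0.5, 0.5], [0.5, 0.5]]
```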