Continuous Time Markov Chains (CTMCs)

Stella Kapodistria and Jacques Resing

September 10th, 2012

ISP



In analogy with the definition of a discrete-time Markov chain, given in Chapter 4, we say that the process {X(t) : t ≥ 0}, with state space S, is a continuous-time Markov chain if for all s, t ≥ 0, nonnegative integers i, j, and histories x(u), 0 ≤ u < s,

P[X(t + s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s] = P[X(t + s) = j | X(s) = i].

In other words, a continuous-time Markov chain is a stochastic process having the Markovian property that the conditional distribution of the future X(t + s), given the present X(s) and the past X(u), 0 ≤ u < s, depends only on the present and is independent of the past.

If, in addition,

P[X(t + s) = j | X(s) = i]

is independent of s, then the continuous-time Markov chain is said to have stationary or homogeneous transition probabilities.


In what follows, we restrict our attention to time-homogeneous Markov processes, i.e., continuous-time Markov chains with the property that, for all s, t ≥ 0,

P[X(s + t) = j | X(s) = i] = P[X(t) = j | X(0) = i] = Pij(t).

The probabilities Pij(t) are called transition probabilities, and the |S| × |S| matrix

P(t) = | P00(t)  P01(t)  ... |
       | P10(t)  P11(t)  ... |
       |   ...     ...   ... |

is called the transition probability matrix.

For every t, the matrix P(t) is a stochastic matrix: its entries are nonnegative and each row sums to 1.

Memoryless property

Suppose that a continuous-time Markov chain enters state i at some time, say, time 0, and suppose that the process does not leave state i (that is, a transition does not occur) during the next 10 minutes.

What is the probability that the process will not leave state i during the following 5 minutes?

Since the process is in state i at time 10, it follows by the Markovian property that the probability that it remains in that state during the interval [10, 15] is just the (unconditional) probability that it stays in state i for at least 5 minutes. That is, if we let

Ti : the amount of time that the process stays in state i before making a transition into a different state,

then
P[Ti > 15 | Ti > 10] = P[Ti > 5].


More generally, suppose that the chain enters state i at time 0 and does not leave state i (that is, a transition does not occur) during the next s minutes. What is the probability that the process will not leave state i during the following t minutes?

By the same reasoning as before, if we let

Ti : the amount of time that the process stays in state i before making a transition into a different state,

then
P[Ti > s + t | Ti > s] = P[Ti > t]

for all s, t ≥ 0. Hence, the random variable Ti is memoryless and must thus (see Section 5.2.2) be exponentially distributed!


In fact, the preceding gives us another way of defining a continuous-time Markov chain. Namely, it is a stochastic process having the properties that, each time it enters state i,

(i) the amount of time it spends in that state before making a transition into a different state is exponentially distributed with mean, say, E[Ti] = 1/vi, and

(ii) when the process leaves state i, it next enters state j with some probability, say, Pij. Of course, the Pij must satisfy

Pii = 0 for all i,   Σj Pij = 1 for all i.
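This two-ingredient description (exponential holding times with rates vi, plus a jump chain Pij) translates directly into a simulator. A minimal Python sketch, where the two-state chain, its rates, and the function name are illustrative assumptions rather than anything from the slides:

```python
import random

def simulate_ctmc(v, P, state, t_end, rng):
    """Simulate a CTMC given holding rates v[i] and jump probabilities P[i][j].

    Each visit to state i lasts an Exp(v[i]) amount of time, after which the
    chain jumps to state j with probability P[i][j] (with P[i][i] = 0).
    Returns the list of (jump time, new state) pairs up to time t_end.
    """
    t = 0.0
    path = [(t, state)]
    while True:
        t += rng.expovariate(v[state])   # exponential holding time in `state`
        if t >= t_end:
            return path
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]): # sample next state from row P[state]
            cum += p
            if u < cum:
                state = j
                break
        path.append((t, state))

rng = random.Random(42)
v = [1.0, 2.0]                   # assumed holding rates v_0, v_1
P = [[0.0, 1.0], [1.0, 0.0]]     # assumed jump probabilities, P_ii = 0
path = simulate_ctmc(v, P, 0, 100.0, rng)
```

Because P forces the two states to alternate, every recorded jump changes the state, and the jump times are strictly increasing.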

Poisson process (transitions i → i + 1)

X(t) = number of arrivals in (0, t]; state space {0, 1, 2, . . .}

0 --λ--> 1 --λ--> 2 --λ--> · · ·

Figure: Transition rate diagram of the Poisson process

Ti ∼ Exp(λ), so vi = λ, i ≥ 0

Pij = 1 if j = i + 1, and Pij = 0 if j ≠ i + 1.
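As a quick sanity check of this view of the Poisson process as a CTMC (a sketch with assumed toy values λ = 2 and horizon t = 10), we can count Exp(λ) holding times falling in (0, t] and compare the sample mean of N(t) with E[N(t)] = λt:

```python
import random

rng = random.Random(0)
lam, t_end, runs = 2.0, 10.0, 4000   # assumed toy rate, horizon, replications

def poisson_count(lam, t_end, rng):
    # The Poisson process as a CTMC: every state i has v_i = lam and always
    # jumps to i + 1, so N(t) counts how many Exp(lam) holding times fit in (0, t].
    n, t = 0, rng.expovariate(lam)
    while t <= t_end:
        n += 1
        t += rng.expovariate(lam)
    return n

mean = sum(poisson_count(lam, t_end, rng) for _ in range(runs)) / runs
# The sample mean should be close to E[N(t)] = lam * t = 20.
```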

Birth-Death Process (transitions i → i + 1 and i → i − 1)

X(t) = population size at time t; state space {0, 1, 2, . . .}

0 --λ0--> 1 --λ1--> 2 --λ2--> · · ·    (births i → i + 1 at rate λi)
0 <--µ1-- 1 <--µ2-- 2 <--µ3-- · · ·    (deaths i → i − 1 at rate µi)

Figure: Transition rate diagram of the Birth-Death Process

Time till next ‘birth’ : Bi ∼ Exp(λi), i ≥ 0
Time till next ‘death’ : Di ∼ Exp(µi), i ≥ 1
Time till next transition : Ti = min{Bi, Di} ∼ Exp(λi + µi), so vi = λi + µi, i ≥ 1, and T0 ∼ Exp(λ0)

Pi,i+1 = λi/(λi + µi),   Pi,i−1 = µi/(λi + µi),   Pij = 0 for j ≠ i ± 1, i ≥ 1

P01 = 1
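The competing-exponentials facts used here (min{Bi, Di} ∼ Exp(λi + µi) and P[birth before death] = λi/(λi + µi)) can be checked empirically. A sketch with assumed toy rates λi = 1.5, µi = 2.5:

```python
import random

rng = random.Random(1)
lam_i, mu_i, runs = 1.5, 2.5, 20000   # assumed toy rates and sample size

births = 0
total_hold = 0.0
for _ in range(runs):
    b = rng.expovariate(lam_i)   # time to next birth, B_i ~ Exp(lam_i)
    d = rng.expovariate(mu_i)    # time to next death, D_i ~ Exp(mu_i)
    total_hold += min(b, d)      # T_i = min{B_i, D_i} ~ Exp(lam_i + mu_i)
    births += b < d              # birth wins with prob lam_i / (lam_i + mu_i)

p_up = births / runs             # should approach lam_i/(lam_i + mu_i) = 0.375
mean_hold = total_hold / runs    # should approach 1/(lam_i + mu_i) = 0.25
```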

Birth-Death (B-D) Process: Determine M(t) = E[X(t)]

X(t) = population size at time t; state space {0, 1, 2, . . .}

Birth rates λi = α + iλ (linear growth with immigration), death rates µi = iµ:

0 --α--> 1 --λ+α--> 2 --2λ+α--> · · ·
0 <--µ-- 1 <--2µ--  2 <-- · · ·

Figure: Transition rate diagram of the Birth-Death Process

Suppose that X(0) = i and let M(t) = E[X(t)].

M(t + h) = E[X(t + h)] = E[E[X(t + h) | X(t)]]

X(t + h) =
  X(t) + 1,  with probability [α + X(t)λ]h + o(h)
  X(t) − 1,  with probability X(t)µh + o(h)
  X(t),      otherwise


E[X(t + h) | X(t)] = (X(t) + 1)[αh + X(t)λh] + (X(t) − 1)[X(t)µh]
                     + X(t)[1 − αh − X(t)λh − X(t)µh] + o(h)
                   = X(t) + αh + X(t)λh − X(t)µh + o(h)

Taking expectations on both sides gives

M(t + h) = E[E[X(t + h) | X(t)]] = M(t) + αh + M(t)λh − M(t)µh + o(h)


Taking the limit as h → 0 yields a differential equation:

M′(t) = (λ − µ)M(t) + α  =⇒  M(t) = (α/(λ − µ) + i) e^((λ−µ)t) − α/(λ − µ)
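The closed form can be cross-checked against a direct numerical integration of M′(t) = (λ − µ)M(t) + α. A sketch with assumed toy parameters (λ = 0.8, µ = 1.2, α = 0.5, X(0) = 3):

```python
import math

lam, mu, alpha, i0 = 0.8, 1.2, 0.5, 3.0   # assumed toy parameters

def m_closed(t):
    # M(t) = (alpha/(lam - mu) + i) e^{(lam - mu) t} - alpha/(lam - mu)
    c = alpha / (lam - mu)
    return (c + i0) * math.exp((lam - mu) * t) - c

# Integrate M'(t) = (lam - mu) M(t) + alpha with a small Euler step from M(0) = i.
m, h, t_end = i0, 1e-4, 2.0
for _ in range(int(t_end / h)):
    m += h * ((lam - mu) * m + alpha)
```

The Euler solution at t = 2 should agree with m_closed(2.0) up to the O(h) discretization error.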

Birth-Death (B-D) Process: First step analysis

Let Tij be the time to reach j for the first time starting from i. Then for the B-D process

E[Ti,j] = 1/(λi + µi) + Pi,i+1 E[Ti+1,j] + Pi,i−1 E[Ti−1,j],   i, j ≥ 1.

This can be solved, starting from E[T0,1] = 1/λ0.

Example

For the B-D process with λi = λ and µi = µ we obtain recursively

E[T0,1] = 1/λ

E[T1,2] = 1/(λ + µ) + P1,2 × 0 + P1,0 × E[T0,2]
        = 1/(λ + µ) + (µ/(λ + µ))(E[T0,1] + E[T1,2]) = (1/λ)[1 + µ/λ]

E[Ti,i+1] = (1/λ)[1 + µ/λ + µ²/λ² + · · · + µ^i/λ^i]
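The first-step recursion and the closed-form geometric sum should agree. The recursion above rearranges to E[Ti,i+1] = 1/λ + (µ/λ) E[Ti−1,i], which a few lines of Python can compare against the sum (with assumed toy rates λ = 1, µ = 0.6):

```python
lam, mu = 1.0, 0.6   # assumed toy rates, lam_i = lam and mu_i = mu

def t_up_recursive(i):
    # E[T_{0,1}] = 1/lam; then E[T_{i,i+1}] = 1/lam + (mu/lam) E[T_{i-1,i}],
    # which follows from the first-step equation after solving for E[T_{i,i+1}].
    t = 1.0 / lam
    for _ in range(i):
        t = 1.0 / lam + (mu / lam) * t
    return t

def t_up_formula(i):
    # E[T_{i,i+1}] = (1/lam) * (1 + mu/lam + (mu/lam)^2 + ... + (mu/lam)^i)
    r = mu / lam
    return (1.0 / lam) * sum(r ** k for k in range(i + 1))
```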

Birth-Death Process: First step analysis (Laplace transforms)

Let Tij be the time to reach j for the first time starting from i. Then for the B-D process

E[e^(−sTi,j)] = (λi + µi)/(λi + µi + s) × [Pi,i+1 E[e^(−sTi+1,j)] + Pi,i−1 E[e^(−sTi−1,j)]],   i, j ≥ 1.

This can be solved, starting from E[e^(−sT0,1)] = λ0/(λ0 + s).

Example

For the B-D process with λi = λ and µi = µ we are interested in T1,0 (time to extinction):

E[e^(−sTi,0)] = (1/(λ + µ + s)) [λ E[e^(−sTi+1,0)] + µ E[e^(−sTi−1,0)]],   i ≥ 1,

and E[e^(−sT0,0)] = 1. For a fixed s this equation can be seen as a second order difference equation, yielding

E[e^(−sTi,0)] = (1/(2λ)^i) [λ + µ + s − √((λ + µ + s)² − 4λµ)]^i,   i ≥ 1.
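One can verify numerically that this closed form solves the first-step equation, since the base x = (λ + µ + s − √((λ + µ + s)² − 4λµ))/(2λ) is a root of λx² − (λ + µ + s)x + µ = 0. A sketch with assumed toy values λ = 1, µ = 1.5, s = 0.7:

```python
import math

lam, mu, s = 1.0, 1.5, 0.7   # assumed toy rates and transform variable

def lt_extinction(i):
    # E[e^{-s T_{i,0}}] = ((lam+mu+s - sqrt((lam+mu+s)^2 - 4 lam mu)) / (2 lam))^i
    a = lam + mu + s
    x = (a - math.sqrt(a * a - 4.0 * lam * mu)) / (2.0 * lam)
    return x ** i

# The closed form should satisfy the first-step recursion
# (lam + mu + s) f(i) = lam f(i+1) + mu f(i-1), i >= 1, with f(0) = 1.
residuals = [(lam + mu + s) * lt_extinction(i)
             - lam * lt_extinction(i + 1) - mu * lt_extinction(i - 1)
             for i in range(1, 6)]
```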

The Transition Probability Function Pij(t)

Let
Pij(t) = P[X(t + s) = j | X(s) = i]
denote the transition probabilities of the continuous-time Markov chain. How do we calculate them?

Example

For the Poisson process with rate λ,

Pij(t) = P[j − i jumps in (0, t] | X(0) = i]
       = P[j − i jumps in (0, t]]
       = e^(−λt)(λt)^(j−i)/(j − i)!.


Example

For the Birth process with rates λi, i ≥ 0,

Pij(t) = P[X(t) = j | X(0) = i]
       = P[X(t) ≤ j | X(0) = i] − P[X(t) ≤ j − 1 | X(0) = i]
       = P[Xi + · · · + Xj > t] − P[Xi + · · · + Xj−1 > t],

with Xk ∼ Exp(λk), k = i, i + 1, . . . . Hence, for λk all different,

P[Xi + · · · + Xj > t] = Σ_{k=i}^{j} e^(−λk t) Π_{i≤r≤j, r≠k} λr/(λr − λk).
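This tail formula for a sum of independent exponentials with distinct rates can be sanity-checked against special cases: a single exponential, and the explicit two-rate convolution P[X1 + X2 > t] = (λ2 e^(−λ1 t) − λ1 e^(−λ2 t))/(λ2 − λ1). A sketch with assumed toy rates:

```python
import math

def hypoexp_tail(rates, t):
    # P[X_i + ... + X_j > t] = sum_k e^{-lam_k t} prod_{r != k} lam_r / (lam_r - lam_k),
    # valid when all the rates are different.
    total = 0.0
    for k, lk in enumerate(rates):
        prod = 1.0
        for r, lr in enumerate(rates):
            if r != k:
                prod *= lr / (lr - lk)
        total += math.exp(-lk * t) * prod
    return total

t = 0.7
one = hypoexp_tail([2.0], t)            # single term reduces to e^{-2 t}
two = hypoexp_tail([1.0, 3.0], t)
# Direct convolution of Exp(1) and Exp(3) for comparison:
two_direct = (3.0 * math.exp(-1.0 * t) - 1.0 * math.exp(-3.0 * t)) / 2.0
```

At t = 0 the tail must equal 1, which also checks that the coefficients sum to 1.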

Instantaneous Transition Rates

We shall derive a set of differential equations that the transition probabilities Pij(t) satisfy in a general continuous-time Markov chain. First we need a definition and a pair of lemmas.

Definition
For any pair of states i and j, let

qij = vi Pij.

Since vi is the rate at which the process makes a transition when in state i, and Pij is the probability that this transition is into state j, it follows that qij is the rate, when in state i, at which the process makes a transition into state j. The quantities qij are called the instantaneous transition rates. Note that

vi = Σj vi Pij = Σj qij   and   Pij = qij/vi = qij/Σj qij.


Lemma (6.2)

(a) lim_{h→0} (1 − Pii(h))/h = vi
(b) lim_{h→0} Pij(h)/h = qij, when i ≠ j.

Lemma (6.3 – Chapman-Kolmogorov equations)

For all s, t ≥ 0,

Pij(t + s) = Σ_{k∈S} Pik(t) Pkj(s),   i.e.,   P(t + s) = P(t)P(s).


Theorem (6.1 – Kolmogorov’s Backward equations)

For all states i, j and times t ≥ 0,

P′ij(t) = Σ_{k≠i} qik Pkj(t) − vi Pij(t),

with initial conditions Pii(0) = 1 and Pij(0) = 0 for all j ≠ i.

Theorem (6.2 – Kolmogorov’s Forward equations)

Under suitable regularity conditions,

P′ij(t) = Σ_{k≠j} Pik(t) qkj − vj Pij(t).
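For a two-state chain the forward equations can be integrated directly and compared with the known closed form. A sketch with an assumed two-state chain (rate λ for 0 → 1, rate µ for 1 → 0; these toy rates are not from the slides):

```python
import math

lam, mu = 1.0, 2.0   # assumed two-state chain: 0 -> 1 at rate lam, 1 -> 0 at rate mu

# Forward equation for P_00: P'_00(t) = q_10 P_01(t) - v_0 P_00(t)
#                                     = mu (1 - P_00(t)) - lam P_00(t),
# integrated with a small Euler step from the initial condition P_00(0) = 1.
p00, h, t_end = 1.0, 1e-4, 1.5
for _ in range(int(t_end / h)):
    p00 += h * (mu * (1.0 - p00) - lam * p00)

# Closed form for this chain: P_00(t) = mu/(lam+mu) + (lam/(lam+mu)) e^{-(lam+mu)t}
p00_exact = mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t_end)
```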

CTMC and Limiting probabilities

Assuming that

(a) all states communicate, and
(b) the MC is positive recurrent (finite mean return times),

the limit lim_{t→∞} Pij(t) = Pj exists and is independent of the initial state i. Properly taking the limit as t → ∞ in the Kolmogorov forward equations then yields

vj Pj = Σ_{k≠j} qkj Pk.

This set of equations, together with the normalization equation Σj Pj = 1, can be solved to obtain the limiting probabilities.

Interpretations of Pj :

Limiting distribution

Long-run fraction of time spent in j

Stationary distribution


When the limiting probabilities exist we say that the MC is ergodic.

The Pj are sometimes called stationary probabilities, since it can be shown that if the initial state is chosen according to the distribution {Pj}, then the probability of being in state j at time t is Pj , for all t.

In matrix form: under the same assumptions we can define the vector P = (lim_{t→∞} P·j(t)), and properly taking the limit as t → ∞ in the Kolmogorov forward equations yields

PQ = 0.

This set of equations, together with the normalization equation Σj Pj = 1, can be solved to obtain the limiting probabilities. The matrix Q is called the generator matrix and contains all the transition rates:

Q = | −v0  q01  ... |
    | q10  −v1  ... |
    |  ...  ...  ... |

The sum of all elements in each row is 0.
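As an illustration of PQ = 0 on a small example (an assumed three-state birth-death chain, i.e., an M/M/1 queue with capacity 2; the rates are toy values), the product-form stationary vector should annihilate the generator:

```python
lam, mu = 1.0, 2.0       # assumed toy birth and death rates
rho = lam / mu

# Generator of the assumed three-state birth-death chain {0, 1, 2}:
Q = [[-lam, lam, 0.0],
     [mu, -(lam + mu), lam],
     [0.0, mu, -mu]]

# Candidate stationary vector from the birth-death product form: P_i ∝ rho^i.
w = [rho ** i for i in range(3)]
P = [x / sum(w) for x in w]

# P Q should be the zero vector (these are the balance equations in matrix form).
PQ = [sum(P[i] * Q[i][j] for i in range(3)) for j in range(3)]
```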

Birth-Death (B-D) Process and Limiting probabilities

X(t) = population size at time t; state space {0, 1, 2, . . .} (transition rate diagram as before)

From the balance argument “Outflow of state j = Inflow of state j”, we obtain the following system of equations for the limiting probabilities:

P0λ0 = P1µ1,
P1(λ1 + µ1) = P0λ0 + P2µ2,
...
Pi(λi + µi) = Pi−1λi−1 + Pi+1µi+1,   i = 2, 3, . . .


By adding to each equation the equation preceding it, we obtain

P0λ0 = P1µ1,
P1λ1 = P2µ2,
...
Piλi = Pi+1µi+1,   i = 2, 3, . . .

Solving yields

Pi = (λi−1λi−2 · · · λ0)/(µiµi−1 · · · µ1) P0,   i = 1, 2, . . .


Using the fact that Σ_{j=0}^{∞} Pj = 1, we can solve for P0:

P0 = (1 + Σ_{i=1}^{∞} (λi−1λi−2 · · · λ0)/(µiµi−1 · · · µ1))^(−1).


A necessary and sufficient condition for the limiting probabilities to exist is

Σ_{i=1}^{∞} (λi−1λi−2 · · · λ0)/(µiµi−1 · · · µ1) < ∞.

Otherwise, the MC is transient or null-recurrent and no limiting distribution exists.
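For constant rates λi = λ and µi = µ with λ < µ (the M/M/1 queue), each term of the series is ρ^i with ρ = λ/µ, the series converges, and the product formula gives the geometric distribution Pi = (1 − ρ)ρ^i. A sketch with assumed toy rates:

```python
lam, mu = 1.0, 2.0   # assumed toy rates; stable since lam < mu
rho = lam / mu

# P_0 = (1 + sum_{i>=1} (lam_{i-1}...lam_0)/(mu_i...mu_1))^{-1}; each term is rho^i.
p0 = 1.0 / (1.0 + sum(rho ** i for i in range(1, 200)))   # truncated convergent series
p = [p0 * rho ** i for i in range(10)]                    # P_i = rho^i P_0
```

The result should satisfy both P0 = 1 − ρ and the detailed balance relations Piλ = Pi+1µ.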

CTMC and Limiting probabilities: Balance equations

From the balance argument “Outflow of state j = Inflow of state j”, we obtain the following system of equations for the limiting probabilities, called balance equations:

vj Pj = Σ_{k≠j} qkj Pk.

In many cases we can reduce the original system of balance equations by appropriately selecting a set of states A and summing over all states in A:

Σ_{j∈A} Pj Σ_{i∉A} qji = Σ_{i∉A} Pi Σ_{j∈A} qij.

Embedded Markov chain

Let {X(t), t ≥ 0} be an irreducible and positive recurrent CTMC with parameters vi and Pij = qij/vi. Let Yn be the state of X(t) after n transitions. Then the stochastic process {Yn, n = 0, 1, . . .} is a DTMC with transition probabilities Pij (Pij = qij/vi for i ≠ j, and Pjj = 0). This process is called the embedded chain.

The equilibrium probabilities πi of this embedded Markov chain satisfy

πi = Σ_{j∈S} πj Pji.

The equilibrium probabilities of the Markov process can then be computed by multiplying the equilibrium probabilities of the embedded chain by the mean times spent in the various states. This leads to

Pi = C πi/vi,

where the constant C is determined by the normalization condition.
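The relation Pi = C πi/vi can be illustrated on a small example. A sketch with an assumed three-state birth-death chain (M/M/1 with capacity 2, toy rates λ = 1, µ = 2), whose embedded-chain equilibrium π can be written down by hand and then converted back to the CTMC limiting probabilities:

```python
lam, mu = 1.0, 2.0   # assumed toy rates

# Assumed three-state chain: v_0 = lam, v_1 = lam + mu, v_2 = mu.
v = [lam, lam + mu, mu]
# Embedded (jump) chain: P_01 = 1, P_10 = mu/(lam+mu), P_12 = lam/(lam+mu), P_21 = 1.
P = [[0.0, 1.0, 0.0],
     [mu / (lam + mu), 0.0, lam / (lam + mu)],
     [0.0, 1.0, 0.0]]

# Equilibrium of the embedded chain, solved by hand for this small example:
pi = [0.5 * mu / (lam + mu), 0.5, 0.5 * lam / (lam + mu)]
# Residual of pi = pi P, which should vanish:
residual = [pi[j] - sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# CTMC limiting probabilities: P_i = C pi_i / v_i, with C fixed by normalization.
w = [pi[i] / v[i] for i in range(3)]
limits = [x / sum(w) for x in w]   # should match the product form (4/7, 2/7, 1/7)
```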

Uniformization

Consider first a continuous-time Markov chain in which the mean time spent in a state is the same for all states. That is, suppose that vi = v for all states i. In this case, since the amount of time spent in each state during a visit is exponentially distributed with rate v, it follows that if we let N(t) denote the number of state transitions by time t, then {N(t), t ≥ 0} is a Poisson process with rate v.

To compute the transition probabilities Pij(t), we can condition on N(t):

Pij(t) = P[X(t) = j | X(0) = i]
       = Σ_{n=0}^{∞} P[X(t) = j | X(0) = i, N(t) = n] P[N(t) = n | X(0) = i]
       = Σ_{n=0}^{∞} P^n_ij e^(−vt)(vt)^n/n!,

where P^n_ij = P[X(t) = j | X(0) = i, N(t) = n] is the n-step transition probability of the embedded chain.

Now suppose the vi are not all equal but are bounded, and set v ≥ max{vi}. When in state i, the process actually leaves at rate vi; but this is equivalent to supposing that transitions occur at rate v, but only the fraction vi/v of transitions are real ones, while the remaining fraction 1 − vi/v are fictitious transitions which leave the process in state i.

Any Markov chain with bounded vi can thus be thought of as a process that spends an exponential amount of time with rate v in state i and then makes a transition to j with probability

P*ij = 1 − vi/v,     if j = i,
P*ij = (vi/v) Pij,   if j ≠ i.

Hence, the transition probabilities can be computed by

Pij(t) = Σ_{n=0}^{∞} (P*)^n_ij e^(−vt)(vt)^n/n!.

This technique of uniformizing the rate at which a transition occurs from each state, by introducing transitions from a state to itself, is known as uniformization.
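The uniformization series is easy to evaluate numerically. A sketch for an assumed two-state chain (rate λ for 0 → 1, rate µ for 1 → 0; toy values), comparing the truncated series against the known closed form for P00(t):

```python
import math

lam, mu = 1.0, 2.0           # assumed two-state chain: 0 -> 1 at rate lam, 1 -> 0 at rate mu
v = max(lam, mu)             # uniformization rate, v >= v_i for all i

# Uniformized jump matrix: P*_ii = 1 - v_i / v, P*_ij = (v_i / v) P_ij for j != i.
Pstar = [[1.0 - lam / v, lam / v],
         [mu / v, 1.0 - mu / v]]

def transition_matrix(t, terms=60):
    # P(t) = sum_n (P*)^n e^{-v t} (v t)^n / n!, truncated after `terms` terms.
    out = [[0.0, 0.0], [0.0, 0.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]     # (P*)^0 = identity
    weight = math.exp(-v * t)            # Poisson weight e^{-v t} (v t)^n / n!
    for n in range(terms):
        for i in range(2):
            for j in range(2):
                out[i][j] += weight * power[i][j]
        power = [[sum(power[i][k] * Pstar[k][j] for k in range(2)) for j in range(2)]
                 for i in range(2)]
        weight *= v * t / (n + 1)
    return out

Pt = transition_matrix(1.5)
# Closed form for this chain: P_00(t) = mu/(lam+mu) + (lam/(lam+mu)) e^{-(lam+mu)t}
p00_exact = mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * 1.5)
```

The rows of the computed P(t) should also sum to 1, since each truncated tail is negligible at 60 terms.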

Summary CTMC

1 Memoryless property
2 Poisson process
3 Birth-Death Process: Determine M(t) = E[X(t)]; First step analysis
4 The Transition Probability Function Pij(t): Instantaneous Transition Rates
5 Limiting probabilities: Balance equations
6 Embedded Markov chain
7 Uniformization

Exercises

Sheldon M. Ross, Introduction to Probability Models, 9th ed., Harcourt/Academic Press, San Diego, 2007.

Chapter 6, Sections 6.1, 6.2, 6.3, 6.4, 6.5 and 6.7.

Exercises: 1, 3, 5, 6(a)-(b), 8, 10 (but you do not have to verify that the transition probabilities satisfy the forward and backward equations), 12, 13, 15, 17, 18, 22, 23+

1 Consider a Markov process with states 0, 1 and 2 and with the following transition rate matrix Q:

Q = | −λ        λ       0  |
    |  µ   −(λ + µ)     λ  |
    |  µ        0      −µ  |

where λ, µ > 0.
a. Derive the parameters vi and Pij for this Markov process.
b. Determine the expected time to go from state 1 to state 0.

2 Consider the following queueing model: customers arrive at a service station according to a Poisson process with rate λ. There are c servers; the service times are exponential with rate µ. If an arriving customer finds all c servers busy, then he leaves the system immediately.
a. Model this system as a birth and death process.
b. Suppose now that there are infinitely many servers (c = ∞). Again model this system as a birth and death process.