Introduction to Discrete-Time Semi-Markov Processes
Nur Aini Masruroh
Recall: Discrete Time Markov Process
• In the DTMC:
  ▫ Whenever a process enters a state i, we imagine that it determines the next state j to which it will move instantaneously, according to the transition probability pij
Discrete time semi Markov Process
• In a semi-Markov process, after state j has been selected, but before making the transition from state i to state j, the process “holds” for a time tij in state i
• The holding times tij are positive, integer-valued random variables, each governed by a probability mass function hij(·) called the holding time mass function for a transition from state i to state j
• After holding in state i for the holding time tij, the process makes the transition to state j and then immediately selects a new destination state k using the transition probabilities pjk
• It then chooses a holding time tjk in state j according to the mass function hjk(·) and makes its next transition at time tjk after entering state j
• The process continues in the same way
Discrete time semi Markov Process (cont’d)
• To describe a semi-Markov process completely, we need to define N² holding time mass functions in addition to the transition probabilities
• The cumulative probability distribution of tij, ≤hij(·), and its complement, >hij(·), are defined as

  ≤hij(n) = Σ_{m=1}^{n} hij(m) = P(tij ≤ n),  n = 0, 1, 2, …

  >hij(n) = 1 − ≤hij(n) = P(tij > n)

• Suppose we know that the process enters state i, but we do not know which successor state it chose. The pmf assigned to the time ti spent in i is

  wi(m) = Σ_{j=1}^{N} pij hij(m) = P(ti = m)

  where wi(m) is the probability that the system will spend m time units in state i
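As a concrete check, the waiting-time pmf is just a pij-weighted mixture of the holding-time pmfs. A minimal Python sketch, using the transition probabilities and geometric holding times from the car rental example that appears later in these notes (the function names are illustrative):

```python
# Waiting-time pmf w_i(m) = sum_j p_ij * h_ij(m).
# Numbers come from the car rental example later in these notes.

def geometric_pmf(a):
    """h(m) = (1 - a) * a**(m - 1) for m = 1, 2, ..."""
    return lambda m: (1 - a) * a ** (m - 1)

P = [[0.8, 0.2],
     [0.3, 0.7]]                                    # p_ij

h = [[geometric_pmf(2/3), geometric_pmf(5/6)],
     [geometric_pmf(3/4), geometric_pmf(11/12)]]    # h_ij(.)

def w(i, m):
    """Probability that the process spends m time units in state i."""
    return sum(P[i][j] * h[i][j](m) for j in range(len(P)))

# w_i(.) is a proper pmf: its values sum to 1 over m = 1, 2, ...
print(round(sum(w(0, m) for m in range(1, 500)), 6))   # 1.0
```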
Discrete time semi Markov Process (cont’d)
• So, ti is the waiting time in state i, and wi(·) is the waiting time pmf
  ▫ The waiting time is a holding time that is unconditional on the destination state
  ▫ The mean waiting time is related to the mean holding times by

    t̄i = Σ_{j=1}^{N} pij t̄ij

  ▫ We compute the second moment of the waiting time from the second moments of the holding times using

    t̄i² = Σ_{j=1}^{N} pij t̄ij²

  ▫ Variance of the waiting time: vi = t̄i² − (t̄i)², where t̄i² denotes the second moment of ti
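These moment relations can be checked numerically. A small sketch using state 1 of the car rental example below, whose holding times have means 3, 6 and second moments 15, 66:

```python
# Mean, second moment, and variance of the waiting time in state 1, via
#   mean(t_i) = sum_j p_ij * mean(t_ij)
#   E[t_i^2]  = sum_j p_ij * E[t_ij^2]
#   var(t_i)  = E[t_i^2] - mean(t_i)**2

p = [0.8, 0.2]       # p_1j
mean_h = [3, 6]      # mean holding times t_1j
m2_h = [15, 66]      # second moments of the holding times

mean_w = sum(pj * m for pj, m in zip(p, mean_h))
m2_w = sum(pj * m2 for pj, m2 in zip(p, m2_h))
var_w = m2_w - mean_w ** 2

print(round(mean_w, 2), round(m2_w, 2), round(var_w, 2))  # 3.6 25.2 12.24
```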
Car rental example
• A car rental company rents cars at two locations, town 1 and town 2. The company’s experience shows that when a car is rented in town 1, there is a 0.8 probability that it will be returned to town 1 and a 0.2 probability that it will be returned to town 2. When a car is rented in town 2, there is a 0.7 probability that it will be returned to town 2 and a 0.3 probability that it will be returned to town 1. We assume that there are always many customers available in both towns and that cars are always rented from the town to which they were last returned.
• Because of the nature of the trips involved, the length of time a car will be rented depends on both where it is rented and where it is returned. The holding time tij is thus the length of time a car will be rented if it was rented in town i and returned to town j. From the company records, the holding time pmfs follow geometric distributions with the following expressions:
  ▫ h11(m) = (1/3)(2/3)^(m−1)
  ▫ h12(m) = (1/6)(5/6)^(m−1)
  ▫ h21(m) = (1/4)(3/4)^(m−1)
  ▫ h22(m) = (1/12)(11/12)^(m−1)
Car rental example: solution
Transition probability matrix
P = | 0.8  0.2 |
    | 0.3  0.7 |

The holding time distributions are all geometric. The general term of a geometric distribution is (1−a)a^(n−1), with mean 1/(1−a), second moment (1+a)/(1−a)², and variance a/(1−a)². Therefore the moments of the four holding times are:

  t̄11 = 3,   t̄11² = 15,   v11 = 6
  t̄12 = 6,   t̄12² = 66,   v12 = 30
  t̄21 = 4,   t̄21² = 28,   v21 = 12
  t̄22 = 12,  t̄22² = 276,  v22 = 132
These numbers indicate that people renting cars at town 2 and returning them to town 2 often have long rental periods
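The table above can be reproduced directly from the closed-form geometric moments; a short sketch:

```python
# Moments of a geometric holding time h(m) = (1 - a) * a**(m - 1):
# mean 1/(1 - a), second moment (1 + a)/(1 - a)**2, variance a/(1 - a)**2.

def geometric_moments(a):
    mean = 1 / (1 - a)
    second = (1 + a) / (1 - a) ** 2
    var = a / (1 - a) ** 2
    return mean, second, var

for name, a in [("t11", 2/3), ("t12", 5/6), ("t21", 3/4), ("t22", 11/12)]:
    mean, second, var = geometric_moments(a)
    print(name, round(mean), round(second), round(var))
```

Each printed row matches a line of the moments table (e.g. t11 has mean 3, second moment 15, variance 6).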
Car rental example: solution
A complete description of the semi-Markov process:

  State 1: p11 = 0.8, h11(m) = (1/3)(2/3)^(m−1), t̄11 = 3
           p12 = 0.2, h12(m) = (1/6)(5/6)^(m−1), t̄12 = 6
  State 2: p21 = 0.3, h21(m) = (1/4)(3/4)^(m−1), t̄21 = 4
           p22 = 0.7, h22(m) = (1/12)(11/12)^(m−1), t̄22 = 12
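This description can be simulated directly: choose the successor state with pij, then hold in the current state for a geometric time drawn from hij. A sketch, where the long-run fractions of time spent in each town approximate the limiting interval probabilities computed at the end of these notes:

```python
# Simulating the car rental semi-Markov process: on each step the process
# picks the destination state first, then holds in the current state for a
# geometric time governed by h_ij before moving.
import random

random.seed(1)

P = {1: ([1, 2], [0.8, 0.2]),
     2: ([1, 2], [0.3, 0.7])}               # successors and p_ij
A = {(1, 1): 2/3, (1, 2): 5/6,
     (2, 1): 3/4, (2, 2): 11/12}            # geometric parameter a of h_ij

def geometric(a):
    """Draw m with P(m) = (1 - a) * a**(m - 1), m = 1, 2, ..."""
    m = 1
    while random.random() < a:
        m += 1
    return m

def simulate(start, transitions):
    """Total time units spent in each state over a number of transitions."""
    time_in = {1: 0, 2: 0}
    state = start
    for _ in range(transitions):
        nxt = random.choices(*P[state])[0]            # destination chosen first
        time_in[state] += geometric(A[(state, nxt)])  # holding time t_ij
        state = nxt
    return time_in

t = simulate(1, 100_000)
total = t[1] + t[2]
print(round(t[1] / total, 2), round(t[2] / total, 2))  # close to 0.36 0.64
```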
Car rental example: solution
If hij(m) is the geometric distribution (1−a)a^(m−1), m = 1, 2, 3, …, then the cumulative and complementary cumulative distributions ≤hij(n) and >hij(n) are

  ≤hij(n) = Σ_{m=1}^{n} hij(m) = 1 − a^n,  n = 0, 1, 2, …

  >hij(n) = Σ_{m=n+1}^{∞} hij(m) = a^n,  n = 0, 1, 2, …

Therefore, the matrix forms of these distributions for the example are

  ≤H(n) = | 1 − (2/3)^n   1 − (5/6)^n  |
          | 1 − (3/4)^n   1 − (11/12)^n |

  >H(n) = | (2/3)^n   (5/6)^n  |
          | (3/4)^n   (11/12)^n |

The results show, for example, that the chance that a car rented in town 1 and returned to town 2 will be rented for n or fewer time periods is 1 − (5/6)^n. A car rented in town 2 and returned to town 1 has probability (3/4)^n of being rented for more than n periods.
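Summing the geometric pmf reproduces the closed form; a quick check for h12 (a = 5/6):

```python
# Check that sum_{m=1}^{n} (1 - a) * a**(m - 1) equals 1 - a**n,
# here for h12 with a = 5/6 and n = 10.

a = 5 / 6
n = 10
cum = sum((1 - a) * a ** (m - 1) for m in range(1, n + 1))   # <=h12(n)
comp = 1 - cum                                               # >h12(n)

print(abs(cum - (1 - a ** n)) < 1e-12, abs(comp - a ** n) < 1e-12)  # True True
```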
Car rental example: solution
Waiting time
  t̄1 = p11 t̄11 + p12 t̄12 = 0.8(3) + 0.2(6) = 3.6
  t̄2 = p21 t̄21 + p22 t̄22 = 0.3(4) + 0.7(12) = 9.6

The mean time that a car rented in town 1 will be rented, destination unknown, is 3.6 periods. If the car is rented in town 2, the mean is 9.6 periods.

The distribution of the waiting time (the probability that a car rented in each town will be rented for m periods, destination unknown) is:

  w1(m) = p11 h11(m) + p12 h12(m) = 0.8(1/3)(2/3)^(m−1) + 0.2(1/6)(5/6)^(m−1)
  w2(m) = p21 h21(m) + p22 h22(m) = 0.3(1/4)(3/4)^(m−1) + 0.7(1/12)(11/12)^(m−1)
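The mean computed directly from the pmf w1(m) agrees with the weighted mean of the holding times; a short check:

```python
# Mean waiting time in state 1 from the pmf
# w1(m) = 0.8*(1/3)*(2/3)**(m-1) + 0.2*(1/6)*(5/6)**(m-1);
# the series is truncated far into the (negligible) geometric tail.

def w1(m):
    return 0.8 * (1/3) * (2/3) ** (m - 1) + 0.2 * (1/6) * (5/6) ** (m - 1)

mean = sum(m * w1(m) for m in range(1, 1000))
print(round(mean, 6))   # 3.6
```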
Car rental example: solution
Cumulative and complementary cumulative distributions of waiting time:
  ≤w1(n) = 1 − 0.8(2/3)^n − 0.2(5/6)^n
  ≤w2(n) = 1 − 0.3(3/4)^n − 0.7(11/12)^n

and

  >w1(n) = 0.8(2/3)^n + 0.2(5/6)^n
  >w2(n) = 0.3(3/4)^n + 0.7(11/12)^n
The expression for >w2(n), for example, shows the probability that a car rented in town 2 will be rented for more than n periods if its destination is unknown
Interval transition probabilities, Φij(n)
• Corresponds to the multistep transition probabilities of the Markov process
• Φij(n): probability that a discrete-time semi-Markov process will be in state j at time n given that it entered state i at time zero; this is the interval transition probability from state i to state j in the interval (0, n)
  ▫ Note that an essential part of the definition is that the system entered state i at time zero, as opposed to simply being in state i at time zero
Limiting behavior
• The chain structure of a semi-Markov process is the same as that of its imbedded Markov process
• We deal here with monodesmic semi-Markov processes
  ▫ Monodesmic process: a Markov process whose limiting matrix Φ has equal rows
  ▫ Sufficient condition: every state is able to make a transition to every other state
  ▫ Necessary condition: there exists only one subset of states that must be occupied after infinitely many transitions
Limiting behavior (cont’d)
• Limiting interval probabilities Φij for a monodesmic semi-Markov process:

  Φij = Φj = πj τ̄j / ( Σ_{j=1}^{N} πj τ̄j )

with:
  πj: limiting state probability of the imbedded Markov process for state j
  τ̄j: mean waiting time for state j
Consider: car rental example
• Transition probability matrix:

  P = | 0.8  0.2 |
      | 0.3  0.7 |

• Limiting state probabilities of the imbedded chain: π1 = 0.6, π2 = 0.4
• Mean waiting times: τ̄1 = 3.6, τ̄2 = 9.6
• Limiting interval probabilities:

  Φ1 = 0.6(3.6) / [0.6(3.6) + 0.4(9.6)] = 2.16/6.0 = 0.36
  Φ2 = 0.4(9.6) / [0.6(3.6) + 0.4(9.6)] = 3.84/6.0 = 0.64
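These limiting probabilities follow mechanically from P and the mean waiting times; a minimal sketch:

```python
# Limiting interval probabilities for the car rental example:
# Phi_j = pi_j * tau_j / sum_k pi_k * tau_k, where pi solves pi = pi P
# for the imbedded Markov chain.

P = [[0.8, 0.2],
     [0.3, 0.7]]
tau = [3.6, 9.6]                         # mean waiting times

# For a two-state chain, pi_1 = p21 / (p12 + p21)
pi1 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi1, 1 - pi1]

norm = sum(p * t for p, t in zip(pi, tau))
phi = [p * t / norm for p, t in zip(pi, tau)]

print([round(x, 2) for x in pi])        # [0.6, 0.4]
print([round(x, 2) for x in phi])       # [0.36, 0.64]
```

Note how much the mean waiting times shift the answer: the imbedded chain spends 60% of its transitions in state 1, but the process itself spends only 36% of its time there.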