
Assignment 3

• Chapter 3: Problems 7, 11, 14
• Chapter 4: Problems 5, 6, 14
• Due date: Monday, March 15, 2004

Example

Inventory system: Inventory at a store is reviewed daily. If the inventory drops below 3 units, an order is placed with the supplier, and it is delivered the next day. The order size should bring the inventory position up to 6 units. Daily demand D is i.i.d. with distribution P(D = 0) = 1/3, P(D = 1) = 1/3, P(D = 2) = 1/3.

Let Xn denote the inventory level on the nth day. Is the process {Xn} a Markov chain? Assume we start with 6 units.
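To make the dynamics concrete, here is a minimal Python simulation sketch of this chain. It assumes one particular reading of the timing (the order placed when the level falls below 3 arrives before the next day's demand, and demand in excess of stock is lost); the helper name next_level and the run length are illustrative, not part of the problem.

import random

def next_level(x):
    # Order-up-to-6 policy: if the level has dropped below 3, the
    # overnight delivery restores the position to 6 units (assumption).
    start = 6 if x < 3 else x
    d = random.choice([0, 1, 2])      # daily demand, uniform on {0, 1, 2}
    return max(start - d, 0)          # assumption: excess demand is lost

x, counts = 6, {}                     # start with 6 units
for _ in range(100000):
    y = next_level(x)
    counts[(x, y)] = counts.get((x, y), 0) + 1
    x = y

# Empirical one-step transition counts; under the assumptions above,
# the next level depends only on the current level, so {Xn} is Markov.
for (i, j), c in sorted(counts.items()):
    print(i, "->", j, c)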

Markov Chains

• {Xn: n = 0, 1, 2, ...} is a discrete-time stochastic process.

• If Xn = i, the process is said to be in state i at time n.

• {i: i = 0, 1, 2, ...} is the state space.

• If P(Xn+1 = j | Xn = i, Xn-1 = in-1, ..., X0 = i0) = P(Xn+1 = j | Xn = i) = Pij, the process is said to be a discrete-time Markov chain (DTMC).

• Pij is the transition probability from state i to state j.

The transition probabilities satisfy

$$P_{ij} \ge 0, \quad i, j \ge 0, \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, \ldots$$

$$\mathbf{P} = \begin{bmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{bmatrix}$$

P: transition matrix
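These two constraints (nonnegative entries, each row summing to 1) are easy to verify numerically. A small sketch, assuming numpy is available:

import numpy as np

def is_stochastic(P, tol=1e-9):
    # Checks P_ij >= 0 for all i, j and sum_j P_ij = 1 for every row i.
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all() and np.allclose(P.sum(axis=1), 1, atol=tol))

print(is_stochastic([[0.7, 0.3], [0.4, 0.6]]))   # True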

Example 1: The probability that it will rain tomorrow depends only on whether or not it rains today:

P(rain tomorrow | rain today) = α
P(rain tomorrow | no rain today) = β

State 0 = rain, State 1 = no rain

$$\mathbf{P} = \begin{bmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{bmatrix}$$

Example 4: A gambler wins $1 with probability p and loses $1 with probability 1 - p. She starts with $N and quits if she reaches either $M or $0. Xn is the amount of money the gambler has after playing n rounds.

P(Xn = i+1 | Xn-1 = i, Xn-2 = in-2, ..., X0 = N) = P(Xn = i+1 | Xn-1 = i) = p (i ≠ 0, M)

P(Xn = i-1 | Xn-1 = i, Xn-2 = in-2, ..., X0 = N) = P(Xn = i-1 | Xn-1 = i) = 1 - p (i ≠ 0, M)

Hence

Pi,i+1 = P(Xn = i+1 | Xn-1 = i) = p and Pi,i-1 = P(Xn = i-1 | Xn-1 = i) = 1 - p, for i ≠ 0, M

P0,0 = 1 and PM,M = 1 (0 and M are called absorbing states)

Pi,j = 0 otherwise

Random walk: A Markov chain whose state space is the integers 0, ±1, ±2, ..., and for which Pi,i+1 = p = 1 - Pi,i-1 for all i, where 0 < p < 1, is said to be a random walk.
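A short simulation sketch of the gambler's chain in Example 4; the values of p, N, and M below are illustrative.

import random

def reaches_M(p, N, M):
    # Play until hitting an absorbing state, 0 or M.
    x = N
    while x not in (0, M):
        x += 1 if random.random() < p else -1
    return x == M

p, N, M = 0.4, 3, 10
trials = 100000
wins = sum(reaches_M(p, N, M) for _ in range(trials))
print(wins / trials)   # estimate of P(reach $M before going broke)

For p ≠ 1/2, the classical gambler's-ruin formula P(reach M) = (1 - ((1-p)/p)^N) / (1 - ((1-p)/p)^M) evaluates to about 0.0419 for these values, which the estimate should approach.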

Chapman-Kolmogorov Equations

$$P_{ij}^{n} = P\{X_{n+m} = j \mid X_m = i\}, \quad n \ge 0, \; i, j \ge 0$$

$$P_{ij}^{1} = P_{ij}$$

$$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^{n} P_{kj}^{m} \quad \text{for all } n, m \ge 0 \text{ and all } i, j \qquad \text{(Chapman-Kolmogorov equations)}$$

Proof:

$$\begin{aligned} P_{ij}^{n+m} &= P\{X_{n+m} = j \mid X_0 = i\} \\ &= \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\} \\ &= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\}\, P\{X_n = k \mid X_0 = i\} \\ &= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k\}\, P\{X_n = k \mid X_0 = i\} \\ &= \sum_{k=0}^{\infty} P_{kj}^{m} P_{ik}^{n} = \sum_{k=0}^{\infty} P_{ik}^{n} P_{kj}^{m} \end{aligned}$$

$$\mathbf{P}^{(n)} = \big[ P_{ij}^{n} \big] : \text{the matrix of } n\text{-step transition probabilities}$$

$$\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)} \times \mathbf{P}^{(m)}$$

(Note: if $A = [a_{ij}]$ and $B = [b_{ij}]$, then $A \times B = \big[ \sum_{k=1}^{M} a_{ik} b_{kj} \big]$.)
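The matrix identity P^(n+m) = P^(n) × P^(m) can be checked numerically. A minimal sketch, assuming numpy and using an arbitrary row-normalized matrix as the transition matrix:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
P = A / A.sum(axis=1, keepdims=True)   # normalize rows: a valid transition matrix

n, m = 3, 2
lhs = np.linalg.matrix_power(P, n + m)                              # P^(n+m)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)   # P^(n) x P^(m)
print(np.allclose(lhs, rhs))   # True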

Example 1: The probability that it will rain tomorrow depends only on whether or not it rains today:

P(rain tomorrow | rain today) = α
P(rain tomorrow | no rain today) = β

State 0 = rain, State 1 = no rain

What is the probability that it will rain four days from today, given that it is raining today? Let α = 0.7 and β = 0.4. That is, what is $P_{00}^{4}$?

$$\mathbf{P} = \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{bmatrix}$$

$$\mathbf{P}^{(2)} = \mathbf{P} \times \mathbf{P} = \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{bmatrix} \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{bmatrix} = \begin{bmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{bmatrix}$$

$$\mathbf{P}^{(4)} = \mathbf{P}^{(2)} \times \mathbf{P}^{(2)} = \begin{bmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{bmatrix} \begin{bmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{bmatrix} = \begin{bmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{bmatrix}$$

$$P_{00}^{4} = 0.5749$$
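The same computation with numpy, as a quick check:

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(np.linalg.matrix_power(P, 4)[0, 0])   # 0.5749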

Unconditional probabilities

How do we calculate $P(X_n = j)$?

Let $\alpha_i = P(X_0 = i)$. Then

$$P(X_n = j) = \sum_{i=0}^{\infty} P(X_n = j \mid X_0 = i)\, P(X_0 = i) = \sum_{i=0}^{\infty} P_{ij}^{n}\, \alpha_i$$
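In matrix form, the row vector of unconditional probabilities after n steps is α P^(n). A sketch for Example 1, assuming an illustrative initial distribution α = (0.4, 0.6):

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha = np.array([0.4, 0.6])                  # assumed P(X0 = 0), P(X0 = 1)
print(alpha @ np.linalg.matrix_power(P, 4))   # [P(X4 = 0), P(X4 = 1)]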

Classification of States

State $j$ is accessible from state $i$ if $P_{ij}^{n} > 0$ for some $n \ge 0$.

Two states that are accessible to each other are said to communicate ($i \leftrightarrow j$).

Any state communicates with itself, since $P_{ii}^{0} = P\{X_0 = i \mid X_0 = i\} = 1$.

Communicating states

• If state i communicates with state j, then state j communicates with state i.

• If state i communicates with state j, and state j communicates with state k, then state i communicates with state k.

Proof: If $i$ communicates with $j$ and $j$ communicates with $k$, then there exist some $n$ and $m$ for which $P_{ij}^{n} > 0$ and $P_{jk}^{m} > 0$. Then

$$P_{ik}^{n+m} = \sum_{r=0}^{\infty} P_{ir}^{n} P_{rk}^{m} \ge P_{ij}^{n} P_{jk}^{m} > 0.$$

Classification of States (continued)

• Two states that communicate are said to belong to the same class.

• Two classes are either identical or disjoint (have no communicating states).

• A Markov chain is said to be irreducible if it has only one class (all states communicate with each other).

Example: Consider the Markov chain with transition probability matrix

$$\mathbf{P} = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/4 & 1/4 \\ 0 & 1/3 & 2/3 \end{bmatrix}$$

The Markov chain with transition probability matrix P is irreducible.

Example: Consider the Markov chain with transition probability matrix

$$\mathbf{P} = \begin{bmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The classes of this Markov chain are {0, 1}, {2}, and {3}.
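For a chain with finitely many states, the classes can be found mechanically: put a directed edge i → j whenever Pij > 0, take the transitive closure, and group mutually reachable states. A sketch (the helper name communicating_classes is ours):

import numpy as np

def communicating_classes(P):
    P = np.asarray(P, dtype=float)
    n = len(P)
    R = (P > 0) | np.eye(n, dtype=bool)     # accessible in 0 or 1 steps
    for k in range(n):                      # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    comm = R & R.T                          # i and j communicate
    seen, classes = set(), []
    for i in range(n):
        if i not in seen:
            c = [j for j in range(n) if comm[i, j]]
            seen.update(c)
            classes.append(c)
    return classes

P = [[1/2, 1/2, 0, 0],
     [1/2, 1/2, 0, 0],
     [1/4, 1/4, 1/4, 1/4],
     [0, 0, 0, 1]]
print(communicating_classes(P))   # [[0, 1], [2], [3]]

Applied to the 3-state matrix above, the same function returns a single class, confirming that that chain is irreducible.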

Recurrent and transient states

• f_i: probability that, starting in state i, the process will eventually re-enter state i.

• State i is recurrent if f_i = 1.

• State i is transient if f_i < 1.

• The probability that the process will be in state i for exactly n periods is $f_i^{\,n-1}(1 - f_i)$, $n \ge 1$.

State $i$ is recurrent if $\sum_{n=1}^{\infty} P_{ii}^{n} = \infty$ and transient if $\sum_{n=1}^{\infty} P_{ii}^{n} < \infty$.

Proof: Let

$$I_n = \begin{cases} 1, & \text{if } X_n = i \\ 0, & \text{if } X_n \ne i \end{cases}$$

Then $\sum_{n=0}^{\infty} I_n$ is the number of periods the process is in state $i$, given that it starts in $i$, and

$$E\Big[\sum_{n=0}^{\infty} I_n \,\Big|\, X_0 = i\Big] = \sum_{n=0}^{\infty} E[I_n \mid X_0 = i] = \sum_{n=0}^{\infty} P\{X_n = i \mid X_0 = i\} = \sum_{n=0}^{\infty} P_{ii}^{n}$$

If $i$ is recurrent it is re-entered infinitely often, so this expectation is infinite; if $i$ is transient, the number of visits is geometric with finite mean $1/(1 - f_i)$, so the sum is finite.
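The criterion can be illustrated numerically by truncating the sum with matrix powers. A sketch using the earlier four-state chain with classes {0, 1}, {2}, and {3} (the n = 0 term is included; it does not affect whether the sum diverges):

import numpy as np

P = np.array([[1/2, 1/2, 0, 0],
              [1/2, 1/2, 0, 0],
              [1/4, 1/4, 1/4, 1/4],
              [0, 0, 0, 1]])
total = np.zeros(4)
Q = np.eye(4)                 # Q holds P^n, starting at n = 0
for n in range(2000):
    total += np.diag(Q)       # accumulate P^n_ii for each state i
    Q = Q @ P
print(total)   # states 0, 1, 3 grow with the horizon (recurrent);
               # state 2 settles near 4/3 (transient)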

• Not all states can be transient.

• If state i is recurrent, and state i communicates with state j, then state j is recurrent: recurrence is a class property.

Proof: Since $i$ communicates with $j$, there exist $k$ and $m$ for which $P_{ij}^{k} > 0$ and $P_{ji}^{m} > 0$. For any $n$,

$$P_{jj}^{m+n+k} \ge P_{ji}^{m} P_{ii}^{n} P_{ij}^{k}$$

so

$$\sum_{n=1}^{\infty} P_{jj}^{m+n+k} \ge \sum_{n=1}^{\infty} P_{ji}^{m} P_{ii}^{n} P_{ij}^{k} = P_{ji}^{m} P_{ij}^{k} \sum_{n=1}^{\infty} P_{ii}^{n} = \infty.$$

• If state i is transient, and state i communicates with state j, then state j is transient: transience is also a class property.

• All states in an irreducible Markov chain are recurrent.

Example: Consider the Markov chain with transition probability matrix

$$\mathbf{P} = \begin{bmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

All states communicate. Therefore all states are recurrent.

Example: Consider the Markov chain with transition probability matrix

$$\mathbf{P} = \begin{bmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 1/4 & 1/4 & 0 & 0 & 1/2 \end{bmatrix}$$

There are three classes: {0, 1}, {2, 3}, and {4}. The first two are recurrent and the third is transient.
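For a finite chain, this classification can be automated: a communicating class is recurrent exactly when it is closed (no positive transition probability leaves the class) and transient otherwise. A sketch reusing communicating_classes from the earlier example:

import numpy as np

def classify(P):
    P = np.asarray(P, dtype=float)
    result = []
    for c in communicating_classes(P):   # defined in the earlier sketch
        outside = [j for j in range(len(P)) if j not in c]
        closed = all(P[i, j] == 0 for i in c for j in outside)
        result.append((c, "recurrent" if closed else "transient"))
    return result

P = [[1/2, 1/2, 0, 0, 0],
     [1/2, 1/2, 0, 0, 0],
     [0, 0, 1/2, 1/2, 0],
     [0, 0, 1/2, 1/2, 0],
     [1/4, 1/4, 0, 0, 1/2]]
print(classify(P))   # [([0, 1], 'recurrent'), ([2, 3], 'recurrent'), ([4], 'transient')]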
