G12: Management Science Markov Chains
Transcript
Page 1:

G12: Management Science

Markov Chains

Page 2:

Outline

• Classification of stochastic processes
• Markov processes and Markov chains

• Transition probabilities

• Transition networks and classes of states

• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 3:

Analysing Uncertainty

• Computer Models of Uncertainty:
  – Building blocks: Random number generators
  – Simulation Models
    • Static (product launch example)
    • Dynamic (inventory example and queuing models)

• Mathematical Models of Uncertainty:
  – Building blocks: Random Variables
  – Mathematical Models
    • Static: Functions of Random Variables
    • Dynamic: Stochastic (Random) Processes

Page 4:

Stochastic Processes

• Collection of random variables Xt, t in T
  – The Xt's are typically statistically dependent
• State space: set of possible values of the Xt's
  – The state space is the same for all Xt's
  – Discrete space: the Xt's are discrete RVs
  – Continuous space: the Xt's are continuous RVs
• Time domain:
  – Discrete time: T = {0, 1, 2, 3, …}
  – Continuous time: T is an interval (possibly unbounded)

Page 5:

Examples from Queuing Theory

• Discrete time, discrete space

– Ln: queue length upon arrival of nth customer

• Discrete time, continuous space– Wn: waiting time of nth customer

• Continuous time, discrete space– Lt: queue length at time t

• Continuous time, continuous space– Wt: waiting time for a customer arriving at time t

Page 6:

A gambling example

• Game: Flip a coin. You win £10 if the coin shows heads and lose £10 otherwise

• You start with £10 and keep playing until you are broke

• Typical questions
  – What is the expected amount of money after t flips?
  – What is the expected length of the game?
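As an aside, a minimal Monte Carlo sketch of these questions in Python (my addition, not part of the slides); the cap on the number of flips is an artificial assumption, needed because a fair game with no upper barrier can last arbitrarily long:

```python
import random

def game_length(start=10, stake=10, max_flips=10_000):
    """Flip a fair coin until broke; return the number of flips (capped)."""
    money, flips = start, 0
    while money > 0 and flips < max_flips:
        money += stake if random.random() < 0.5 else -stake
        flips += 1
    return flips

# Caveat: the true expected length is infinite for this fair game,
# so the Monte Carlo average below depends on the chosen cap.
runs = [game_length() for _ in range(1_000)]
print("average capped game length:", sum(runs) / len(runs))
```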

Page 7:

A Branching Process

[Diagram: a random walk over the wealth levels £0, £10, £20, £30, …, where each flip moves £10 up or down with probability 0.5 and £0 is absorbing]

Page 8:

Discrete Time - Discrete State Stochastic Processes

• Xt: Amount of money you own after t flips
• Stochastic Process: X1, X2, X3, …
• Each Xt has its own probability distribution
• The RVs are dependent: the probability of having £k after t flips depends on what you had after t' (< t) flips
  – Knowing Xt' changes the probability distribution of Xt (conditional probability)

Page 9:

Outline

• Classification of stochastic processes
• Markov processes and Markov chains
• Transition probabilities

• Transition networks and classes of states

• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 10:

Markovian Property

• Example: the waiting time at time t depends on the waiting times at times t' < t
  – Knowing the waiting time at some time t' < t changes the probability distribution of the waiting time at time t (conditional probability)
  – Knowledge of the history generally improves the probability distribution (smaller variance)
• Generally: the distribution of states at time t depends on the whole history of the process
  – Knowing the states of the system at times t1, …, tn < t changes the distribution of states at time t
• Markov property: the distribution of states at time t, given the states at times t1 < … < tn < t, is the same as the distribution of states at time t given only the state at time tn
  – The distribution depends only on the last observed state
  – Knowledge about earlier states does not improve the probability distribution

Page 11:

Discrete time, discrete space

• P(Xt+1= j | X0=i0,…,Xt=it) = P(Xt+1= j | Xt=it)

• In words: the probabilities that govern a transition from state i at time t to state j at time t+1 depend only on the state i at time t, and not on the states the process was in before time t

Page 12:

Transition Probabilities

• The transition probabilities are

P(Xt+1= j | Xt=i)

• Transition probabilities are called stationary if

P(Xt+1= j | Xt=i) = P(X1= j | X0=i)

• If there are only finitely many possible states of the RVs Xt then the stationary transition probabilities are conveniently stored in a transition matrix

Pij= P(X1= j | X0=i)

• Find the transition matrix for our first example if the game ends when the gambler is either broke or has £30
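As a check (my addition, since the slide leaves this as an exercise), one way to write the matrix, assuming the four states £0, £10, £20, £30 in that order, with £0 and £30 absorbing:

```python
import numpy as np

# States ordered £0, £10, £20, £30. The game stops at £0 and £30,
# so those states are absorbing; the interior states move £10 up
# or down with probability 0.5 each.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # £0: broke, game over
    [0.5, 0.0, 0.5, 0.0],   # £10 -> £0 or £20
    [0.0, 0.5, 0.0, 0.5],   # £20 -> £10 or £30
    [0.0, 0.0, 0.0, 1.0],   # £30: target reached, game over
])
assert np.allclose(P.sum(axis=1), 1.0)  # rows of a stochastic matrix sum to 1
```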

Page 13:

Markov Chains

• Stochastic process with a finite number, say n, of possible states that has the Markov property
• Transitions between states happen in discrete time steps
• A MC is completely characterised by its transition probabilities Pij from state i to state j, which are stored in an n x n transition matrix P
  – The rows of the transition matrix sum to 1; such a matrix is called a stochastic matrix
• The initial distribution of states is given by an initial probability vector p(0) = (p1(0), …, pn(0))
• We are interested in the change of the probability distribution of the states over time

Page 14:

Markov Chains as Modelling Templates

• Lawn mower example:
  – Weekly demand D for lawn mowers has distribution P(D=0) = 1/3, P(D=1) = 1/2, P(D=2) = 1/6
  – Mowers can be ordered at the end of each week and are delivered right at the beginning of the next week
  – Inventory policy: order two new mowers if the stock is empty at the end of the week
  – Currently (beginning of week 0) there are two lawn mowers in stock

• Determine the transition matrix (a sketch of one reading follows)
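A sketch of one possible reading (my assumption, since the slides leave the answer to the exercise): take the state to be the stock at the beginning of a week, which under this policy is always 1 or 2, and assume that demand exceeding stock is lost:

```python
import numpy as np

# State: stock at the beginning of the week (after any delivery).
# From 2 mowers: D=0 (1/3) -> 2 left; D=1 (1/2) -> 1 left;
#                D=2 (1/6) -> 0 left, reorder -> 2 next week.
# From 1 mower:  D=0 (1/3) -> 1 left; D>=1 (2/3) -> 0 left, reorder -> 2.
# States ordered (1, 2):
P = np.array([
    [1/3, 2/3],   # from stock 1
    [1/2, 1/2],   # from stock 2: P(to 2) = 1/3 + 1/6
])
assert np.allclose(P.sum(axis=1), 1.0)
```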

Page 15:

Market Shares

• Two software packages, B and C, enter a market that has so far been dominated by software A
• C is more powerful than B, which is more powerful than A
• C is a big departure from A, while B has some elements in common with both A and C
• Market research shows that about 65% of A-users are satisfied with the product and won't change over the next three months
• 30% of A-users are willing to move to B, 5% are willing to move to C…

Page 16:

Transition Matrix

• All transition probabilities over the next three months can be found in the following transition matrix
• What are the approximate market shares going to be?

From \ To     A      B      C
A            65%    30%     5%
B            10%    75%    15%
C             0%    10%    90%

Page 17:

Machine Replacement

• Many identical machines are used in a manufacturing environment
• They deteriorate over time with the following monthly transition probabilities:

From \ To    New    OK    Worn   Fail
New           0     0.9    0.1    0
OK            0     0.6    0.3    0.1
Worn          0     0      0.6    0.4
Fail          1     0      0      0

Page 18:

Outline

• Classification of stochastic processes
• Markov processes and Markov chains
• Transition probabilities
• Transition networks and classes of states

• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 19:

2-step transition probability (graphically)

[Diagram: arcs from state i to intermediate states k = 0, 1, 2 with probabilities Pi0, Pi1, Pi2, then arcs from each k to state j with probabilities P0j, P1j, P2j; summing over the intermediate states gives the 2-step probability]

Page 20:

2-step transition probabilities (formally)

Pij(2) = P(X2 = j | X0 = i)
       = Σk P(X2 = j | X1 = k, X0 = i) P(X1 = k | X0 = i)    (Bayes' formula)
       = Σk P(X2 = j | X1 = k) P(X1 = k | X0 = i)            (Markov property)
       = Σk P(X1 = j | X0 = k) P(X1 = k | X0 = i)            (stationarity)
       = Σk Pik Pkj

Page 21:

Chapman-Kolmogorov Equations

• Similarly, one shows that the n-step transition probabilities Pij(n) = P(Xn = j | X0 = i) obey the following law (for arbitrary m < n):

Pij(n) = Σk Pik(m) Pkj(n-m)

• The n-step transition probability matrix P(n) is the n-th power of the 1-step transition matrix P:

P(n) = P^n = P ··· P (n times)

Page 22:

Example

Given the transition matrix

P = | 0.3  0.7 |
    | 0.1  0.9 |

find the 3-step transition probabilities (see spreadsheet Markov.xls).
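A quick way to reproduce the spreadsheet calculation, sketched in Python with numpy (my addition):

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [0.1, 0.9]])

# By the Chapman-Kolmogorov equations, the 3-step transition matrix
# is the third matrix power of P.
P3 = np.linalg.matrix_power(P, 3)
print(P3)  # entry (i, j) is Pij(3)
```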

Page 23:

Distribution of Xn

• Given
  – Markov chain with m states (1, …, m) and transition matrix P
  – Probability vector for the initial state (t=0): p(0) = (p1(0), …, pm(0))
• What is the probability that the process is in state i after n transitions?
• Bayes' formula:

P(Xn = i) = P(Xn = i | X0 = 1) p1(0) + … + P(Xn = i | X0 = m) pm(0)

• Probability vector for Xn: p(n) = p(0) P^n
• Iteratively: p(n+1) = p(n) P
• Open spreadsheet Markov.xls for the lawn mower, market share, and machine replacement examples (a sketch of the same calculation follows)
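A sketch of the same iteration in Python, using the market-share matrix from the earlier slide; the initial vector (1, 0, 0), i.e. the whole market on A, is my assumption for illustration:

```python
import numpy as np

P = np.array([[0.65, 0.30, 0.05],
              [0.10, 0.75, 0.15],
              [0.00, 0.10, 0.90]])

p = np.array([1.0, 0.0, 0.0])   # assumed start: everyone uses A
for n in range(1, 9):           # eight 3-month periods
    p = p @ P                   # p(n+1) = p(n) P
    print(f"after {n} periods: {np.round(p, 3)}")
```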

Page 24:

Outline

• Classification of stochastic processes
• Markov processes and Markov chains
• Transition probabilities
• Transition networks and classes of states
• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 25:

An Alternative Representation of the Machine Replacement Example

[Transition network diagram, with arcs matching the matrix on page 17: New -> OK (0.9), New -> Worn (0.1), OK -> OK (0.6), OK -> Worn (0.3), OK -> Fail (0.1), Worn -> Worn (0.6), Worn -> Fail (0.4), Fail -> New (1)]

Page 26:

The transition network

• The nodes of the network correspond to the states

• There is an arc from node i to node j if Pij > 0, and this arc has the associated value Pij

• State j is accessible from state i if there is a path in the network from node i to node j

• A stochastic matrix is said to be irreducible if each state is accessible from every other state

Page 27:

Classes of States

• States i and j communicate if i is accessible from j and j is accessible from i
• Communicating states form classes
• A class is called absorbing if it is not possible to escape from it
• A class A is said to be accessible from a class B if each state in A is accessible from each state in B
  – Equivalently: …if some state in A is accessible from some state in B
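Communicating classes are exactly the strongly connected components of the transition network. A self-contained sketch (my addition) that finds them by comparing forward and backward reachability:

```python
import numpy as np

def communicating_classes(P):
    """Group the states of a transition matrix into communicating classes."""
    A = (np.asarray(P, dtype=float) > 0).astype(int)
    n = len(A)
    R = ((np.eye(n, dtype=int) + A) > 0).astype(int)  # reachable in <= 1 step
    for _ in range(n):                                # transitive closure
        R = ((R @ R) > 0).astype(int)
    mutual = (R == 1) & (R.T == 1)     # i ~ j: each reaches the other
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if mutual[i, j])
            classes.append(cls)
            seen.update(cls)
    return classes

# Machine replacement example: every state is reachable from every other,
# so all four states form a single (irreducible) class.
P = [[0, .9, .1, 0], [0, .6, .3, .1], [0, 0, .6, .4], [1, 0, 0, 0]]
print(communicating_classes(P))   # [[0, 1, 2, 3]]
```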

Page 28:

Find all classes in this example and indicate their accessibility from other classes

[Transition network diagram: seven states (1–7) joined by arcs with probabilities 1/3, 2/3, 1, 1/3, 1/2, 1/6, 1, 1/2, 1/2, 1, 1/2, 2/3; the arc structure is not recoverable from this transcript]

Page 29:

Return to Gambling Example

• Draw the transition network

• Find all classes

• Is the Markov chain irreducible?

• Indicate the accessibility of the classes

• Is there an absorbing class?

Page 30:

Outline

• Classification of stochastic processes

• Markov processes and Markov chains

• Transition probabilities

• Transition networks and classes of states

• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 31:

First passage times

• The first passage time from state i to state j is the number of transitions until the process first hits state j, given that it starts in state i

• First passage time is a random variable

• Define fij(k) = probability that the first passage from state i to state j takes exactly k transitions

Page 32:

Calculating fij(k)

• Use Bayes’ formula

P(A)=P(A|B1)P(B1)+…+ P(A|Bn)P(Bn)

• Event A: starting from state i, the process is in state j after n transitions (P(A) = Pij(n))

• Event Bk: the first passage from i to j takes exactly k transitions

Page 33:

Calculating fij(k) (cont.)

Bayes' formula gives:

Pij(n) = P(A) = P(A|B1) P(B1) + … + P(A|Bn) P(Bn)
       = Pjj(n-1) fij(1) + Pjj(n-2) fij(2) + … + Pjj(1) fij(n-1) + fij(n)

This results in the recursion formula:

fij(1) = Pij
fij(n) = Pij(n) - fij(1) Pjj(n-1) - fij(2) Pjj(n-2) - … - fij(n-1) Pjj(1)
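A sketch of the recursion in Python (my addition); it returns fij(1), …, fij(N) for a given pair of states:

```python
import numpy as np

def first_passage_probs(P, i, j, N):
    """fij(n) for n = 1..N via fij(n) = Pij(n) - sum_k fij(k) Pjj(n-k)."""
    P = np.asarray(P, dtype=float)
    powers = [np.linalg.matrix_power(P, n) for n in range(N + 1)]
    f = [0.0]   # dummy entry so that f[k] is fij(k) with 1-based k
    for n in range(1, N + 1):
        f.append(powers[n][i, j]
                 - sum(f[k] * powers[n - k][j, j] for k in range(1, n)))
    return f[1:]

P = [[0.3, 0.7], [0.1, 0.9]]
print(first_passage_probs(P, 0, 1, 5))
```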

Page 34:

Alternative: Simulation

• Do a number of simulations, starting from state i and stopping when you have reached state j

• Estimate fij(k) as the percentage of runs of length k

• BUT: This may take a long time if you want to do this for all state combinations (i,j) and many k’s
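For completeness, a minimal Monte Carlo sketch of this estimate (my addition); the cap on steps is an assumption to keep runs finite:

```python
import random
from collections import Counter

def estimate_fij(P, i, j, runs=100_000, max_steps=1_000):
    """Estimate fij(k) as the fraction of runs whose first passage takes k steps."""
    counts = Counter()
    for _ in range(runs):
        state, steps = i, 0
        while steps < max_steps:
            state = random.choices(range(len(P)), weights=P[state])[0]
            steps += 1
            if state == j:
                counts[steps] += 1
                break
    return {k: c / runs for k, c in sorted(counts.items())}

P = [[0.3, 0.7], [0.1, 0.9]]
print(estimate_fij(P, 0, 1))
```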

Page 35:

Expected first passage time

• If Xij = time of first passage from i to j, then

E(Xij) = fij(1) + 2 fij(2) + 3 fij(3) + …

• Use the conditional expectation formula

E(Xij) = E(Xij|B1) P(B1) + … + E(Xij|Bn) P(Bn)

• Event Bk: the first transition goes from i to k

• Notice: E(Xij|Bj) = 1 and E(Xij|Bk) = 1 + E(Xkj) for k ≠ j

Page 36:

Hence

E(Xij) = Σk E(Xij|Bk) Pik
       = Pij + Σ(k≠j) (1 + E(Xkj)) Pik
       = 1 + Σ(k≠j) Pik E(Xkj)

Fixing j, this gives m equations in the m unknowns E(X1j), …, E(Xmj)

Page 37:

Example

P = | 1/3  2/3 |
    | 1/4  3/4 |

E(X11) = 1 + (2/3) E(X21)
E(X21) = 1 + (3/4) E(X21)

E(X11) = 11/3,  E(X21) = 4
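The same system solved numerically, as a sketch (my addition): for a fixed target j, the equations E(Xij) = 1 + Σ(k≠j) Pik E(Xkj) become (I - Q) e = 1, where Q is P with column j zeroed:

```python
import numpy as np

def expected_first_passage(P, j):
    """E(Xij) for all i, from (I - Q) e = 1 with Q = P, column j zeroed."""
    Q = np.asarray(P, dtype=float).copy()
    Q[:, j] = 0.0                        # transitions into j end the passage
    m = len(Q)
    return np.linalg.solve(np.eye(m) - Q, np.ones(m))

P = [[1/3, 2/3], [1/4, 3/4]]
print(expected_first_passage(P, 0))      # ~ [3.667, 4.0] = [11/3, 4]
```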

Page 38:

Outline

• Classification of stochastic processes

• Markov processes and Markov chains

• Transition probabilities

• Transition networks and classes of states

• First passage time probabilities and expected first passage time

• Long-term behaviour and steady state distribution

Page 39:

Long term behaviour

• We are interested in the distribution of Xn as n tends to infinity:

lim p(n) = lim p(0) P(n) = p(0) lim P(n)

• If lim P(n) exists, then P is called ergodic
• The limit may not exist, though:

P = | 0  1 |
    | 1  0 |

• See Markov.xls
• Problem: the process has periodic behaviour
  – The process can only recur to state i after t, 2t, 3t, … steps
  – There exists t such that Pii(n) = 0 whenever n is not in {t, 2t, 3t, …}
• Period of a state i: the maximal such t
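A heuristic sketch (my addition) that estimates each state's period as the gcd of the return times n with Pii(n) > 0, checked up to a finite horizon N; this is only reliable when N is large enough for the chain at hand:

```python
import numpy as np
from functools import reduce
from math import gcd

def periods(P, N=50):
    """Estimate each state's period: gcd of all n <= N with Pii(n) > 0."""
    P = np.asarray(P, dtype=float)
    m = len(P)
    return_times = [[] for _ in range(m)]
    Pn = np.eye(m)
    for n in range(1, N + 1):
        Pn = Pn @ P
        for i in range(m):
            if Pn[i, i] > 1e-12:
                return_times[i].append(n)
    return [reduce(gcd, ts, 0) for ts in return_times]  # 0 = no return seen

print(periods([[0, 1], [1, 0]]))   # [2, 2]: the two-state cycle has period 2
```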

Page 40:

Find the periods of the states

[Transition network diagram: the same seven-state network as on page 28]

Page 41:

Aperiodicity

• A state with period 1 is called aperiodic

• State i is aperiodic if and only if there exists N such that Pii(N) > 0 and Pii(N+1) > 0

• The Chapman-Kolmogorov equations then imply that Pii(n) > 0 for all sufficiently large n

• Aperiodicity is a class property, i.e. if one state in a class is aperiodic, then so are all others

Page 42:

Regular matrices

• A stochastic matrix P is called regular if there exists a number n such that all entries of Pn are positive

• A Markov chain with a regular transition matrix is aperiodic (i.e. all states are aperiodic) and irreducible (i.e. all states communicate)

Page 43:

Back to long-term behaviour

• Mathematical Fact: If a Markov chain is irreducible and aperiodic, then it is ergodic, i.e., all limits

P̄ij = lim(n→∞) Pij(n)

exist

Page 44:

Finding the long term probabilities

• Mathematical Result: If a Markov chain is irreducible and aperiodic, then all rows of its long term transition probability matrix are identical to the unique solution π = (π1, …, πm) of the equations

πP = π
π1 + … + πm = 1

Page 45:

However,...

• …the latter system is of the form πP = π, π1 + … + πm = 1 and has m+1 equations and m unknowns
  – It has a solution because P is a stochastic matrix and therefore has 1 as an eigenvalue (with right eigenvector x = (1, …, 1)). Hence π is just a left eigenvector of P for the eigenvalue 1, and the additional equation normalises the eigenvector
• Calculation: solve the system without the first equation, then check the first equation

Page 46:

Example

• Find the steady state probabilities for

P = | 2/3  1/3 |
    | 1/2  1/2 |

• Solution: (π1, π2) = (0.6, 0.4)
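A sketch of the calculation in Python (my addition), solving πP = π together with the normalisation equation as one overdetermined linear system:

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 (least squares handles the extra row)."""
    P = np.asarray(P, dtype=float)
    m = len(P)
    A = np.vstack([P.T - np.eye(m),   # (P^T - I) pi = 0  <=>  pi P = pi
                   np.ones(m)])       # normalisation: sum(pi) = 1
    b = np.append(np.zeros(m), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

print(steady_state([[2/3, 1/3], [1/2, 1/2]]))   # ~ [0.6, 0.4]
```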

Page 47:

Steady state probabilities

• The probability vector π with πP = π and π1 + … + πm = 1 is called the steady state (or stationary) probability distribution of the Markov chain
• A Markov chain does not necessarily have a unique steady state distribution
• Mathematical result: an irreducible Markov chain has a unique steady state distribution

Page 48:

Tending towards steady state

• If we start with the steady state distribution then the probability distribution of the states does not change over time

• More importantly: If the Markov chain is irreducible and aperiodic then, independently of the initial distribution, the distribution of states gets closer and closer to the steady state distribution

• Illustration: see spreadsheet Markov.xls

Page 49:

More on steady state distributions

• πj can be interpreted as the long-run proportion of time the process is in state j
• Alternatively: πj = 1/E(Xjj), where Xjj is the time of the first recurrence to j
  – E.g. if the expected recurrence time to state j is 2 transitions, then, in the long run, the process is in state j on every second transition, i.e. 1/2 of the time

Page 50:

Average Payoff Per Unit Time

• Setting: If the process hits state i, a payoff of g(i) is realized (costs = negative payoffs)

• Average payoff per period after n transitions:

Yn = (g(X1) + … + g(Xn)) / n

• Long-run expected average payoff per time period: lim E(Yn) as n tends to infinity

Page 51:

Calculating long-run average pay-offs

Mathematical Fact: If a Markov chain is irreducible and aperiodic, then

lim(n→∞) E(Yn) = Σ(j=1..m) πj g(j)

Page 52:

Example

P = | 2/3  1/3 |
    | 1/2  1/2 |

A transition takes place every week. A weekly cost of £1 has to be paid if the process is in state 1, while a weekly profit of £1 is obtained if the process is in state 2. Find the average payoff per week. (Solution: £-0.2 per week)
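The same calculation as a Python sketch (my addition), combining the steady state distribution with the payoff vector g = (-1, +1):

```python
import numpy as np

P = np.array([[2/3, 1/3], [1/2, 1/2]])
g = np.array([-1.0, 1.0])     # £1 cost in state 1, £1 profit in state 2

# Steady state: solve (P^T - I) pi = 0 with normalisation sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.append(np.zeros(2), 1.0), rcond=None)[0]

print("long-run average payoff per week:", pi @ g)   # -> -0.2
```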

Page 53:

Key Learning Points

• Markov chains are a template for the analysis of systems with finitely many states where random transitions between states happen at discrete points in time
• We have seen how to calculate n-step transition probabilities, first passage time probabilities, and expected first passage times
• We have discussed the steady state behaviour of a Markov chain and how to calculate steady state distributions

