
12.5 Markov Chains 2 (or Models)


Lecture 12.5: Additional Issues Concerning Discrete-Time Markov Chains

Topics
• Review of DTMCs
• Classification of states
• Economic analysis
• First passage times
• Absorbing states


Discrete-Time Markov Chain

A stochastic process $\{X_n\}$, where $n \in N = \{0, 1, 2, \ldots\}$, is called a discrete-time Markov chain if

$\Pr\{X_{n+1} = j \mid X_0 = k_0, \ldots, X_{n-1} = k_{n-1}, X_n = i\} = \Pr\{X_{n+1} = j \mid X_n = i\}$ (the transition probabilities)

for every $i, j, k_0, \ldots, k_{n-1}$ and for every $n$.

The future behavior of the system depends only on the current state $i$ and not on any of the previous states.
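
To make the Markov property concrete, here is a minimal Python simulation sketch (not part of the original slides): the next state is drawn using only the current state's row of the transition matrix. The matrix is the three-state example that appears later in these notes.

import numpy as np

rng = np.random.default_rng(seed=1)

# Three-state example (same P as the ergodic example later in these notes);
# row i holds Pr{X_{n+1} = j | X_n = i} and must sum to 1.
P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])

def simulate(P, x0, n_steps):
    # The Markov property: each step uses only the current state, not the past.
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, x0=0, n_steps=10))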


Stationary Transition Probabilities

$\Pr\{X_{n+1} = j \mid X_n = i\} = \Pr\{X_1 = j \mid X_0 = i\}$ for all $n$

(The transition probabilities don't change over time.)

We will only consider stationary Markov chains.

The one-step transition matrix for a Markov chain with states $S = \{0, 1, 2\}$ is

$$P = \begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix}$$

where $p_{ij} = \Pr\{X_1 = j \mid X_0 = i\}$.


    Classification of States

Accessible: It is possible to go from state $i$ to state $j$ (a path exists in the network from $i$ to $j$).

[State-transition network: states 0, 1, 2, 3, 4 with forward (winning) arcs $a_0, a_1, a_2, a_3$ and backward (losing) arcs $d_1, d_2, d_3, d_4$.]

Two states communicate if both are accessible from each other. A system is irreducible if all states communicate.

State $i$ is recurrent if the system, after leaving it, will return to it at some time in the future.

If a state is not recurrent, it is transient.
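
These definitions translate directly into a reachability computation. The sketch below (assuming only numpy) treats any positive entry of P as an arc and groups states into communicating classes; it uses the five-state matrix from the classification example later in these notes, with the states relabeled 0 through 4 for zero-based indexing.

import numpy as np

def communicating_classes(P, tol=1e-12):
    """Group states that are accessible from each other (i.e., communicate)."""
    n = len(P)
    R = (P > tol) | np.eye(n, dtype=bool)            # one-step accessibility (and self)
    for _ in range(n):                               # transitive closure by squaring
        R = (R.astype(int) @ R.astype(int)) > 0
    C = R & R.T                                      # i and j communicate
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(np.flatnonzero(C[i]).tolist())
            classes.append(cls)
            seen.update(cls)
    return classes

P = np.array([[0.4, 0.6, 0,   0,   0  ],
              [0.5, 0.5, 0,   0,   0  ],
              [0,   0,   0.3, 0.7, 0  ],
              [0,   0,   0.5, 0.4, 0.1],
              [0,   0,   0,   0.8, 0.2]])
print(communicating_classes(P))   # [[0, 1], [2, 3, 4]]: two communicating classes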


    Classification of States (continued)

A state is periodic if it can only return to itself after a fixed number of transitions greater than 1 (or a multiple of a fixed number).

A state that is not periodic is aperiodic.

[Two three-state networks: (a) each state is visited every 3 iterations; (b) each state is visited in multiples of 3 iterations.]


    Classification of States (continued)

An absorbing state is one that locks the system in once it is entered.

[State-transition network: states 0 through 4 with winning arcs $a_i$ and losing arcs $d_i$.]

This diagram might represent the wealth of a gambler who begins with $2 and makes a series of wagers for $1 each. Let $a_i$ be the event of winning in state $i$ and $d_i$ the event of losing in state $i$.

There are two absorbing states: 0 and 4.


    Classification of States (continued)

Class: a set of states that communicate with each other.

A class is either all recurrent or all transient, and may be either all periodic or all aperiodic.

States in a transient class communicate only with each other, so no arcs enter any of the corresponding nodes in the network diagram from outside the class. Arcs may leave, though, passing from a node in the class to one outside.

[Network diagram on seven states, 0 through 6, illustrating transient and recurrent classes.]


Illustration of Concepts

Example 1 (X marks a positive transition probability):

State   0   1   2   3
  0     0   X   0   X
  1     X   0   0   0
  2     X   0   0   0
  3     0   0   X   X

[State-transition network for states 0 through 3.]

Every pair of states communicates, forming a single recurrent class; moreover, the states are not periodic (state 3 can return to itself in one step). Thus the stochastic process is aperiodic and irreducible.


Illustration of Concepts

Example 2

[Transition matrix (X marks a positive entry) and state-transition network for states 0 through 4.]

States 0 and 1 communicate and form a recurrent class. States 3 and 4 form separate transient classes. State 2 is an absorbing state and forms a recurrent class.


Illustration of Concepts

Example 3 (X marks a positive transition probability):

State   0   1   2   3
  0     0   0   0   X
  1     X   0   0   0
  2     X   0   0   0
  3     0   X   X   0

[State-transition network for states 0 through 3.]

Every state communicates with every other state, so we have an irreducible stochastic process.

Periodic? Yes: every return to a state takes a multiple of 3 transitions. So the Markov chain is irreducible and periodic.


Example: Classification of States

State   1     2     3     4     5
  1    0.4   0.6    0     0     0
  2    0.5   0.5    0     0     0
  3     0     0    0.3   0.7    0
  4     0     0    0.5   0.4   0.1
  5     0     0     0    0.8   0.2

P = the matrix above.

[State-transition network for states 1 through 5 with the arc probabilities shown in P.]


A state $j$ is accessible from state $i$ if $p_{ij}^{(n)} > 0$ for some $n > 0$.

In the example, state 2 is accessible from state 1, and state 3 is accessible from state 5, but state 3 is not accessible from state 2.

States $i$ and $j$ communicate if $i$ is accessible from $j$ and $j$ is accessible from $i$.

States 1 & 2 communicate; also, states 3, 4 & 5 communicate.

States 2 & 4 do not communicate.

States 1 & 2 form one communicating class. States 3, 4 & 5 form a second communicating class.


If all states in a Markov chain communicate (i.e., all states are members of the same communicating class), then the chain is irreducible.

The current example is not an irreducible Markov chain. Neither is the Gambler's Ruin example, which has 3 classes: {0}, {1, 2, 3} and {4}.

First Passage Times

Let $f_{ii}$ = probability that the process will return to state $i$ (eventually), given that it starts in state $i$.

If $f_{ii} = 1$ then state $i$ is called recurrent.

If $f_{ii} < 1$ then state $i$ is called transient.


If $p_{ii} = 1$ then state $i$ is called an absorbing state.

The example above has no absorbing states; states 0 & 4 are absorbing in the Gambler's Ruin problem.

The period of a state $i$ is the smallest $k > 1$ such that all paths leading back to $i$ have a length that is a multiple of $k$; i.e., $p_{ii}^{(n)} = 0$ unless $n = k, 2k, 3k, \ldots$

If a process can be in state $i$ at time $n$ or time $n+1$, having started at state $i$, then state $i$ is aperiodic.

Each of the states in the current example is aperiodic.


If all states in a Markov chain are recurrent and aperiodic, and the chain is irreducible, then it is ergodic.

Example of Periodicity: Gambler's Ruin

State   0     1     2     3     4
  0     1     0     0     0     0
  1    1-p    0     p     0     0
  2     0    1-p    0     p     0
  3     0     0    1-p    0     p
  4     0     0     0     0     1

States 1, 2 and 3 each have period 2.
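
As a rough numerical check (a sketch, not the slides' method), the period of state $i$ can be estimated as the gcd of the step counts $n$ with $p_{ii}^{(n)} > 0$ up to some horizon. With a win probability of $p = 0.75$ (a value assumed here just for illustration), this reports period 2 for the interior states and period 1 for the absorbing ones.

import numpy as np
from math import gcd
from functools import reduce

def state_period(P, i, max_n=40):
    # gcd of all n <= max_n with p_ii^(n) > 0; max_n is a truncation heuristic
    ns, Q = [], np.eye(len(P))
    for n in range(1, max_n + 1):
        Q = Q @ P
        if Q[i, i] > 1e-12:
            ns.append(n)
    return reduce(gcd, ns) if ns else 0

p = 0.75  # assumed win probability for this illustration
P = np.array([[1,   0,   0,   0, 0],
              [1-p, 0,   p,   0, 0],
              [0,   1-p, 0,   p, 0],
              [0,   0,   1-p, 0, p],
              [0,   0,   0,   0, 1]])
print([state_period(P, i) for i in range(5)])  # [1, 2, 2, 2, 1]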


Existence of Steady-State Probabilities

A Markov chain is ergodic if it is aperiodic and allows the attainment of any future state from any initial state after one or more transitions. If these conditions hold, then

$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)}$ = steady-state probability for state $j$.

For example,

$$P = \begin{pmatrix} 0.8 & 0 & 0.2 \\ 0.4 & 0.3 & 0.3 \\ 0 & 0.9 & 0.1 \end{pmatrix}$$

[State-transition network for states 1 through 3.]

Conclusion: the chain is ergodic.
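
A quick numerical check of this limit (a numpy sketch): for an ergodic chain, every row of $P^n$ converges to the same steady-state vector, so raising P to a large power reveals the limiting probabilities regardless of the starting state.

import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])

# Every row of P^n approaches the same limit for an ergodic chain.
Pn = np.linalg.matrix_power(P, 100)
print(Pn.round(4))   # all rows ~ [0.5294, 0.2647, 0.2059] = (18/34, 9/34, 7/34)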


Economic Analysis

Two kinds of economic effects:

(i) those incurred when the system is in a specified state, and
(ii) those incurred when the system makes a transition from one state to another.

The cost (profit) of being in a particular state is represented by the $m$-dimensional column vector $C^S = (c_1^S, c_2^S, \ldots, c_m^S)^T$, where each component is the cost associated with state $i$.

The cost of a transition is embodied in the $m \times m$ matrix $C^R = (c_{ij}^R)$, where each component specifies the cost of going from state $i$ to state $j$ in a single step.


Expected Cost for a Markov Chain

Expected cost of being in state $i$:

$c_i = c_i^S + \sum_{j=1}^{m} p_{ij} c_{ij}^R$

Let $C = (c_1, \ldots, c_m)^T$, let $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ be the $i$th row of the $m \times m$ identity matrix, and let $f_n$ = a random variable representing the economic return associated with the stochastic process at time $n$.

Property 3: Let $\{X_n : n = 0, 1, \ldots\}$ be a Markov chain with finite state space S, state-transition matrix P, and expected state cost (profit) vector C. Assuming that the process starts in state $i$, the expected cost (profit) at the $n$th step is given by

$E[f_n(X_n) \mid X_0 = i] = e_i P^{(n)} C$.
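
A minimal sketch of Property 3 (the cost vector C below is made up for illustration; only P is from the ergodic example above): the expected cost at step n, starting in state i, is the i-th entry of P^(n) C.

import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])
C = np.array([100.0, 50.0, 25.0])   # hypothetical state costs c_i

def expected_step_cost(P, C, i, n):
    # Property 3: E[f_n(X_n) | X_0 = i] = e_i P^(n) C
    return np.linalg.matrix_power(P, n)[i] @ C

print(expected_step_cost(P, C, i=0, n=4))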


Additional Cost Results

What if the initial state is not known?

Property 5: Let $\{X_n : n = 0, 1, \ldots\}$ be a Markov chain with finite state space S, state-transition matrix P, initial probability vector q(0), and expected state cost (profit) vector C. The expected economic return at the $n$th step is given by

$E[f_n(X_n) \mid q(0)] = q(0) P^{(n)} C$.

Property 6: Let $\{X_n : n = 0, 1, \ldots\}$ be a Markov chain with finite state space S, state-transition matrix P, steady-state vector $\pi$, and expected state cost (profit) vector C. Then the long-run average return per unit time is given by

$\sum_{i \in S} \pi_i c_i = \pi C$.
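
Continuing the same made-up sketch, Properties 5 and 6 translate directly: weight the step-n costs by the initial distribution q(0), and weight the state costs by the steady-state vector pi, found here by solving pi P = pi together with sum(pi) = 1 as an overdetermined least-squares system (one of several standard ways to compute pi).

import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])
C = np.array([100.0, 50.0, 25.0])       # hypothetical state costs
q0 = np.array([0.5, 0.5, 0.0])          # hypothetical initial distribution

# Property 5: E[f_n(X_n) | q(0)] = q(0) P^(n) C
print(q0 @ np.linalg.matrix_power(P, 4) @ C)

# Property 6: long-run average return = pi C, where pi P = pi and sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
print(pi @ C)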


Insurance Company Example

An insurance company charges customers annual premiums based on their accident history in the following fashion:

• No accident in last 2 years: $250 annual premium
• Accidents in each of last 2 years: $800 annual premium
• Accident in only 1 of last 2 years: $400 annual premium

Historical statistics:

1. If a customer had an accident last year, then they have a 10% chance of having one this year;
2. If they had no accident last year, then they have a 3% chance of having one this year.


Problem: Find the steady-state probabilities and the long-run average annual premium paid by the customer.

Solution approach: Construct a Markov chain with four states: (N, N), (N, Y), (Y, N), (Y, Y), where these indicate (accident last year, accident this year).

            (N, N)  (N, Y)  (Y, N)  (Y, Y)
  (N, N)     0.97    0.03    0       0
  (N, Y)     0       0       0.90    0.10
  (Y, N)     0.97    0.03    0       0
  (Y, Y)     0       0       0.90    0.10

P = the matrix above.


State-Transition Network for the Insurance Company

[Network diagram on states (N, N), (N, Y), (Y, N), (Y, Y) with arc probabilities 0.97, 0.03, 0.90, 0.10 as in P.]

This is an ergodic Markov chain:
• All states communicate (irreducible).
• Each state is recurrent (you will return, eventually).
• Each state is aperiodic.


Solving the Steady-State Equations

In general, $\pi_j = \sum_i \pi_i p_{ij}$ for $j = 0, \ldots, m$, together with $\sum_j \pi_j = 1$ and $\pi_j \ge 0$ for all $j$. Here:

$\pi_{(N,N)} = 0.97 \pi_{(N,N)} + 0.97 \pi_{(Y,N)}$
$\pi_{(N,Y)} = 0.03 \pi_{(N,N)} + 0.03 \pi_{(Y,N)}$
$\pi_{(Y,N)} = 0.9 \pi_{(N,Y)} + 0.9 \pi_{(Y,Y)}$
$\pi_{(N,N)} + \pi_{(N,Y)} + \pi_{(Y,N)} + \pi_{(Y,Y)} = 1$

Solution:

$\pi_{(N,N)} = 0.939$, $\pi_{(N,Y)} = 0.029$, $\pi_{(Y,N)} = 0.029$, $\pi_{(Y,Y)} = 0.003$,

and the long-run average annual premium is

$0.939 \times 250 + 0.029 \times 400 + 0.029 \times 400 + 0.003 \times 800 = 260.5$.
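
The same solution can be checked numerically (a numpy sketch): stack the balance equations with the normalization constraint and solve by least squares.

import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
premium = np.array([250.0, 400.0, 400.0, 800.0])

# Solve pi P = pi together with sum(pi) = 1 (overdetermined; least squares).
A = np.vstack([P.T - np.eye(4), np.ones((1, 4))])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi.round(6))        # ~ [0.93871, 0.029032, 0.029032, 0.003226]
print(pi @ premium)       # ~ 260.48, the long-run average annual premium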


Markov Chain Add-in Matrix

Transition matrix (regular matrix; rows sum to 1): 4 recurrent states, 1 recurrent state class, 0 transient states.

  Index  State    (N, N)  (N, Y)  (Y, N)  (Y, Y)  Sum  Status
  0      (N, N)   0.97    0.03    0       0       1    Class-1
  1      (N, Y)   0       0       0.9     0.1     1    Class-1
  2      (Y, N)   0.97    0.03    0       0       1    Class-1
  3      (Y, Y)   0       0       0.9     0.1     1    Class-1


Economic Data and Solution

Measure: cost; discount rate: 0. The expected transition cost matrix is all zeros; the state costs are:

  State    Cost
  (N, N)    250
  (N, Y)    400
  (Y, N)    400
  (Y, Y)    800

Steady-state analysis (the vector shows the long-run probabilities):

                (N, N)   (N, Y)    (Y, N)    (Y, Y)    Expected cost per period
  Steady state  0.93871  0.029032  0.029032  0.003226  260.483871


Transient Analysis for the Insurance Company

Average cost: 260.1622; discounted cost: 5203.243. Starting in state (Y, Y):

  Step  (N, N)    (N, Y)    (Y, N)    (Y, Y)    Step cost  Cum. cost  Present worth
  0     0         0         0         1         0          0          0
  1     0         0         0.9       0.1       440        440        440
  2     0.873     0.027     0.09      0.01      273.05     713.05     713.05
  3     0.93411   0.02889   0.0333    0.0037    261.3635   974.4135   974.4135
  4     0.938388  0.029022  0.029331  0.003259  260.5454   1234.959   1234.959
  5     0.938687  0.029032  0.029053  0.003228  260.4882   1495.447   1495.447
  6     0.938708  0.029032  0.029034  0.003226  260.4842   1755.931   1755.931
  7     0.93871   0.029032  0.029032  0.003226  260.4839   2016.415   2016.415
  8     0.93871   0.029032  0.029032  0.003226  260.4839   2276.899   2276.899
  9     0.93871   0.029032  0.029032  0.003226  260.4839   2537.383   2537.383
  10    0.93871   0.029032  0.029032  0.003226  260.4839   2797.867   2797.867
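
The probability columns and step costs above can be reproduced with a few lines (a sketch; the discounting column is omitted): propagate q(n) = q(n-1) P from the initial state (Y, Y) and price each step's distribution.

import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
cost = np.array([250.0, 400.0, 400.0, 800.0])

q = np.array([0.0, 0.0, 0.0, 1.0])   # start in state (Y, Y)
cum = 0.0
for n in range(1, 6):
    q = q @ P                        # q(n) = q(n-1) P
    step = q @ cost                  # expected cost at step n
    cum += step
    print(n, q.round(6), round(step, 4), round(cum, 4))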


First Passage Times

Let $\mu_{ij}$ = expected number of steps to transition from state $i$ to state $j$.

If the probability that we will eventually visit state $j$, given that we start in $i$, is less than 1, then we will have $\mu_{ij} = +\infty$.

For example, in the Gambler's Ruin problem, $\mu_{20} = +\infty$ because there is a positive probability that we will be absorbed in state 4, given that we start in state 2 (and hence never visit state 0).


Computations for All States Recurrent

If the probability of eventually visiting state $j$ given that we start in $i$ is 1, then the expected number of steps until we first visit $j$ is given by

$\mu_{ij} = 1 + \sum_{r \ne j} p_{ir} \mu_{rj}$, for $i = 0, 1, \ldots, m-1$.

(It will always take at least one step; we go from $i$ to $r$ in the first step with probability $p_{ir}$, and it takes $\mu_{rj}$ steps from $r$ to $j$.)

For $j$ fixed, we have a linear system of $m$ equations in the $m$ unknowns $\mu_{ij}$, $i = 0, 1, \ldots, m-1$.


First-Passage Analysis for the Insurance Company

Suppose that we start in state (N, N) and want to find the expected number of years until we have accidents in two consecutive years, i.e., until we first reach (Y, Y). This transition will occur with probability 1, eventually.

For convenience, number the states:

  0: (N, N)   1: (N, Y)   2: (Y, N)   3: (Y, Y)

Then:

$\mu_{03} = 1 + p_{00} \mu_{03} + p_{01} \mu_{13} + p_{02} \mu_{23}$
$\mu_{13} = 1 + p_{10} \mu_{03} + p_{11} \mu_{13} + p_{12} \mu_{23}$
$\mu_{23} = 1 + p_{20} \mu_{03} + p_{21} \mu_{13} + p_{22} \mu_{23}$


First-Passage Computations

Using

            (N, N)  (N, Y)  (Y, N)  (Y, Y)
  (N, N)     0.97    0.03    0       0
  (N, Y)     0       0       0.90    0.10
  (Y, N)     0.97    0.03    0       0
  (Y, Y)     0       0       0.90    0.10

(states 0, 1, 2, 3 = (N, N), (N, Y), (Y, N), (Y, Y)), the equations become

$\mu_{03} = 1 + 0.97 \mu_{03} + 0.03 \mu_{13}$
$\mu_{13} = 1 + 0.9 \mu_{23}$
$\mu_{23} = 1 + 0.97 \mu_{03} + 0.03 \mu_{13}$

Solution: $\mu_{03} = 343.3$, $\mu_{13} = 310$, $\mu_{23} = 343.3$.

So, on average it takes 343.3 years to transition from (N, N) to (Y, Y).

Note that $\mu_{03} = \mu_{23}$. Why? (Rows 0 and 2 of P are identical.) Note also that $\mu_{13} < \mu_{03}$.
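
Rearranged as a linear system, these three equations can be solved mechanically (a numpy sketch):

import numpy as np

#  (1 - 0.97) mu_03 - 0.03 mu_13             = 1
#                     mu_13 - 0.9 mu_23      = 1
# -0.97 mu_03       - 0.03 mu_13 + mu_23     = 1
A = np.array([[ 0.03, -0.03,  0.0],
              [ 0.00,  1.00, -0.9],
              [-0.97, -0.03,  1.0]])
mu = np.linalg.solve(A, np.ones(3))
print(mu.round(4))   # ~ [343.3333, 310.0, 343.3333]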


Expected number of steps until the first passage into state 3:

  From:  0 (N, N)   1 (N, Y)   2 (Y, N)   3 (Y, Y)
         343.3333   310        343.3333   310



Game of Craps

Probability of win = Pr{7 or 11} = 0.167 + 0.056 ≈ 0.222
Probability of loss = Pr{2, 3, 12} = 0.028 + 0.056 + 0.028 ≈ 0.111

(These match the Start row of P below.)

          Start  Win    Lose   P4     P5     P6     P8     P9     P10
  Start   0      0.222  0.111  0.083  0.111  0.139  0.139  0.111  0.083
  Win     0      1      0      0      0      0      0      0      0
  Lose    0      0      1      0      0      0      0      0      0
  P4      0      0.083  0.167  0.75   0      0      0      0      0
  P5      0      0.111  0.167  0      0.722  0      0      0      0
  P6      0      0.139  0.167  0      0      0.694  0      0      0
  P8      0      0.139  0.167  0      0      0      0.694  0      0
  P9      0      0.111  0.167  0      0      0      0      0.722  0
  P10     0      0.083  0.167  0      0      0      0      0      0.75

P = the matrix above.


First Passage Probabilities for Craps

  Rolls  Start-Win  Start-Lose  Sum    Cumulative
  1      0.222      0.111       0.333  0.333
  2      0.077      0.111       0.188  0.522
  3      0.055      0.080       0.135  0.656
  4      0.039      0.057       0.097  0.753
  5      0.028      0.041       0.069  0.822
  6      0.020      0.030       0.050  0.872
  7      0.014      0.021       0.036  0.908
  8      0.010      0.015       0.026  0.933
  9      0.007      0.011       0.018  0.952
  10     0.005      0.008       0.013  0.965
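
These first passage probabilities can be recovered from the transition matrix (a sketch): the probability that the game first ends at roll n is the increase in an absorbing state's cumulative probability between step n-1 and step n.

import numpy as np

# Craps transition matrix; rows/columns: Start, Win, Lose, P4, P5, P6, P8, P9, P10
P = np.array([
    [0, 0.222, 0.111, 0.083, 0.111, 0.139, 0.139, 0.111, 0.083],
    [0, 1,     0,     0,     0,     0,     0,     0,     0],
    [0, 0,     1,     0,     0,     0,     0,     0,     0],
    [0, 0.083, 0.167, 0.75,  0,     0,     0,     0,     0],
    [0, 0.111, 0.167, 0,     0.722, 0,     0,     0,     0],
    [0, 0.139, 0.167, 0,     0,     0.694, 0,     0,     0],
    [0, 0.139, 0.167, 0,     0,     0,     0.694, 0,     0],
    [0, 0.111, 0.167, 0,     0,     0,     0,     0.722, 0],
    [0, 0.083, 0.167, 0,     0,     0,     0,     0,     0.75]])

q = np.zeros(9)
q[0] = 1.0                    # start in state Start
win_prev = lose_prev = 0.0
for n in range(1, 11):
    q = q @ P
    first_win, first_lose = q[1] - win_prev, q[2] - lose_prev
    print(n, round(first_win, 3), round(first_lose, 3), round(first_win + first_lose, 3))
    win_prev, lose_prev = q[1], q[2]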


Absorbing States

An absorbing state is a state $j$ with $p_{jj} = 1$.

Given that we start in state $i$, we can calculate the probability of being absorbed in state $j$.

We essentially performed this calculation for the Gambler's Ruin problem by finding $P^{(n)} = (p_{ij}^{(n)})$ for large $n$. But we can use a more efficient analysis, like that used for calculating first passage times.


Let states $0, 1, \ldots, k$ be transient and states $k+1, \ldots, m-1$ be absorbing.

Let $q_{ij}$ = probability of being absorbed in state $j$ given that we start in transient state $i$. Then for each $j$ we have the following relationship:

$q_{ij} = p_{ij} + \sum_{r=0}^{k} p_{ir} q_{rj}$, for $i = 0, 1, \ldots, k$

(the first term covers going directly to $j$; the sum covers going to a transient state $r$ and then from $r$ to $j$).

For fixed $j$ (an absorbing state) we have $k+1$ linear equations in the $k+1$ unknowns $q_{ij}$, $i = 0, 1, \ldots, k$.


Absorbing States: Gambler's Ruin

Suppose that we start with $2 and want to calculate the probability of going broke, i.e., of being absorbed in state 0.

We know $p_{00} = 1$ and $p_{40} = 0$; thus

$q_{10} = p_{10} + p_{11} q_{10} + p_{12} q_{20} + p_{13} q_{30}$
$q_{20} = p_{20} + p_{21} q_{10} + p_{22} q_{20} + p_{23} q_{30}$
$q_{30} = p_{30} + p_{31} q_{10} + p_{32} q_{20} + p_{33} q_{30}$

where

State   0     1     2     3     4
  0     1     0     0     0     0
  1    1-p    0     p     0     0
  2     0    1-p    0     p     0
  3     0     0    1-p    0     p
  4     0     0     0     0     1

P = the matrix above.


Solution to the Gambler's Ruin Example

Now we have three equations in three unknowns. Using $p = 0.75$ (the probability of winning a single bet), we have

$q_{20} = 0 + 0.25 q_{10} + 0.75 q_{30}$
$q_{10} = 0.25 + 0.75 q_{20}$
$q_{30} = 0 + 0.25 q_{20}$

Solving yields $q_{10} = 0.325$, $q_{20} = 0.1$, $q_{30} = 0.025$.

(This is consistent with the values found earlier.)
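
A numpy sketch that solves the same three equations and confirms the numbers:

import numpy as np

p = 0.75
#  q10          - p q20            = 1 - p
# -(1-p) q10 +    q20 - p q30      = 0
#         -(1-p) q20 +   q30       = 0
A = np.array([[ 1.0,    -p,    0.0],
              [-(1-p),  1.0,   -p ],
              [ 0.0,  -(1-p),  1.0]])
q = np.linalg.solve(A, np.array([1 - p, 0.0, 0.0]))
print(q.round(4))   # ~ [0.325, 0.1, 0.025] = (q10, q20, q30)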


What You Should Know About the Mathematics of DTMCs

• How to classify states.
• What an ergodic process is.
• How to perform economic analysis.
• How to compute first passage times.
• How to compute absorption probabilities.

