Set3 Markov Chains


Chapter 17: Markov Chains


    Description

Sometimes we are interested in how a random variable changes over time.

The study of how a random variable evolves over time is carried out using stochastic processes.

This chapter explains stochastic processes and, in particular, a type of stochastic process known as a Markov chain.

    We begin by defining the concept of a stochastic process.


    5.1 What is a Stochastic Process?

Suppose we observe some characteristic of a system at discrete points in time.

Let X_t be the value of the system characteristic at time t. In most situations, X_t is not known with certainty before time t and may be viewed as a random variable.

A discrete-time stochastic process is simply a description of the relation between the random variables X_0, X_1, X_2, …

Example: observing the price of a share of Intel at the beginning of each day.

Application areas: education, marketing, health services, finance, accounting, and production.


A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time.

For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.


    5.2 What is a Markov Chain?

One special type of discrete-time stochastic process is called a Markov chain.

Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,

P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, \ldots, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t)

Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (i_t) and does not depend on the states the chain passed through on the way to i_t at time t.


In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(X_{t+1} = j | X_t = i) is independent of t.

This assumption allows us to write P(X_{t+1} = j | X_t = i) = p_ij, where p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t+1.

If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.


The p_ij's are often referred to as the transition probabilities for the Markov chain.

This equation implies that the probability law relating the next period's state to the current state does not change over time.

It is often called the Stationary Assumption, and any Markov chain that satisfies it is called a stationary Markov chain.

We also must define q_i to be the probability that the chain is in state i at time 0; in other words, P(X_0 = i) = q_i.


We call the vector q = [q_1, q_2, …, q_s] the initial probability distribution for the Markov chain.

In most applications, the transition probabilities are displayed as an s x s transition probability matrix P. The transition probability matrix P may be written as

P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1s} \\ p_{21} & p_{22} & \cdots & p_{2s} \\ \vdots & \vdots & & \vdots \\ p_{s1} & p_{s2} & \cdots & p_{ss} \end{bmatrix}


For each i,

\sum_{j=1}^{s} p_{ij} = 1

We also know that each entry in the P matrix must be nonnegative.

Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
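Given a transition matrix P and an initial distribution q, the chain itself is easy to simulate. Below is a minimal Python/NumPy sketch; the function name simulate_chain and the two-state matrix at the bottom are illustrative assumptions, not values from the text:

```python
import numpy as np

def simulate_chain(P, q, n_steps, rng=None):
    """Simulate a stationary Markov chain for n_steps transitions.

    P : (s, s) transition matrix whose rows each sum to 1
    q : (s,) initial distribution, q[i] = P(X0 = i)
    Returns the list of visited states X0, X1, ..., X_{n_steps}.
    """
    rng = np.random.default_rng() if rng is None else rng
    state = rng.choice(len(q), p=q)             # draw X0 from q
    path = [int(state)]
    for _ in range(n_steps):
        state = rng.choice(len(q), p=P[state])  # draw X_{t+1} from row X_t of P
        path.append(int(state))
    return path

# Illustrative two-state chain
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
q = np.array([0.5, 0.5])
print(simulate_chain(P, q, n_steps=10))
```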


The Gambler's Ruin Problem

At time 0, I have $2. At times 1, 2, …, I play a game in which I bet $1. With probability p I win the game, and with probability 1 - p I lose the game. My goal is to increase my capital to $4, and as soon as I do, the game is over. The game is also over if my capital is reduced to $0.

Let X_t represent my capital position after the time t game (if any) is played.

X_0, X_1, X_2, … may be viewed as a discrete-time stochastic process.


The Gambler's Ruin Problem

With the states ordered $0, $1, $2, $3, $4, the transition matrix is

P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1-p & 0 & p & 0 & 0 \\ 0 & 1-p & 0 & p & 0 \\ 0 & 0 & 1-p & 0 & p \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}
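As a quick check of this structure, the matrix can be built for any win probability p and any target capital. A small NumPy sketch (the function name is my own; the default goal of $4 follows the example above):

```python
import numpy as np

def gamblers_ruin_matrix(p, goal=4):
    """Transition matrix for the gambler's ruin chain with states $0, ..., $goal.

    States 0 and goal are absorbing; from any other state i the capital
    moves to i + 1 with probability p and to i - 1 with probability 1 - p.
    """
    P = np.zeros((goal + 1, goal + 1))
    P[0, 0] = 1.0        # ruined: capital stays at $0
    P[goal, goal] = 1.0  # goal reached: game is over
    for i in range(1, goal):
        P[i, i + 1] = p
        P[i, i - 1] = 1 - p
    return P

print(gamblers_ruin_matrix(p=0.55))
```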


    5.3 n-Step Transition Probabilities

A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j?

This probability is independent of m, so we may write

P(X_{m+n} = j | X_m = i) = P(X_n = j | X_0 = i) = P_{ij}(n)

where P_ij(n) is called the n-step probability of a transition from state i to state j.

For n > 1, P_ij(n) is the ijth element of P^n.

P_ij(2) is the (i, j)th element of the matrix P^2 = P \cdot P.

P_ij(n) is the (i, j)th element of the matrix P^n = P \cdot P^{n-1}.


    The Cola Example

Suppose the entire cola industry produces only two colas.

Given that a person last purchased cola 1, there is a 90% chance that their next purchase will be cola 1.

Given that a person last purchased cola 2, there is an 80% chance that their next purchase will be cola 2.

1. If a person is currently a cola 2 purchaser, what is the probability that they will purchase cola 1 two purchases from now?

2. If a person is currently a cola 1 purchaser, what is the probability that they will purchase cola 1 three purchases from now?


    The Cola Example

We view each person's purchases as a Markov chain with the state at any given time being the type of cola the person last purchased.

Hence, each person's cola purchases may be represented by a two-state Markov chain, where

State 1 = person has last purchased cola 1
State 2 = person has last purchased cola 2

If we define X_n to be the type of cola purchased by a person on her nth future cola purchase, then X_0, X_1, … may be described as the Markov chain with the following transition matrix:


    The Cola Example

    We can now answer questions 1 and 2.

1. We seek P(X_2 = 1 | X_0 = 2) = P_21(2) = element 21 of P^2. With rows and columns ordered Cola 1, Cola 2,

P = \begin{bmatrix} .90 & .10 \\ .20 & .80 \end{bmatrix}

P^2 = \begin{bmatrix} .90 & .10 \\ .20 & .80 \end{bmatrix} \begin{bmatrix} .90 & .10 \\ .20 & .80 \end{bmatrix} = \begin{bmatrix} .83 & .17 \\ .34 & .66 \end{bmatrix}


    The Cola Example

Hence, P_21(2) = .34. This means that the probability is .34 that two purchases in the future a cola 2 drinker will purchase cola 1.

2. We seek P_11(3) = element 11 of P^3:

P^3 = P(P^2) = \begin{bmatrix} .90 & .10 \\ .20 & .80 \end{bmatrix} \begin{bmatrix} .83 & .17 \\ .34 & .66 \end{bmatrix} = \begin{bmatrix} .781 & .219 \\ .438 & .562 \end{bmatrix}

Therefore, P_11(3) = .781.
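Both answers are easy to check numerically with matrix powers. A short NumPy sketch (NumPy indices are zero-based, so state 1 corresponds to row/column 0):

```python
import numpy as np

P = np.array([[0.90, 0.10],   # row 1: last purchase was cola 1
              [0.20, 0.80]])  # row 2: last purchase was cola 2

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

print(P2[1, 0])   # P_21(2) = 0.34
print(P3[0, 0])   # P_11(3) = 0.781
```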


Many times we do not know the state of the Markov chain at time 0. Then we can determine the probability that the system is in state j at time n by using the following reasoning.

Probability of being in state j at time n = \sum_{i=1}^{s} q_i P_{ij}(n)

where q = [q_1, q_2, …, q_s] is the initial probability distribution.

Hence, q_n = q_0 P^n = q_{n-1} P.

Example: for the cola chain, if q_0 = (.4, .6), then

q_1 = q_0 P = (.4, .6) \begin{bmatrix} .90 & .10 \\ .20 & .80 \end{bmatrix} = (.48, .52)
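A sketch of the same computation, q_n = q_0 P^n, in NumPy:

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])
q0 = np.array([0.4, 0.6])          # initial distribution over cola 1, cola 2

q1 = q0 @ P                        # distribution after one purchase
q10 = q0 @ np.linalg.matrix_power(P, 10)

print(q1)    # [0.48 0.52]
print(q10)   # close to [0.667 0.333], the long-run split discussed next
```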


To illustrate the behavior of the n-step transition probabilities for large values of n, we have computed several of the n-step transition probabilities for the Cola example.

    This means that for large n, no matter what the initial

    state, there is a .67 chance that a person will be a cola

    1 purchaser.


5.4 Classification of States in a Markov Chain

To understand the n-step transition probabilities in more detail, we need to study how mathematicians classify the states of a Markov chain.

The following transition matrix illustrates most of the following definitions. A graphical representation is shown in the book (state-transition diagram).

P = \begin{bmatrix} .4 & .6 & 0 & 0 & 0 \\ .5 & .5 & 0 & 0 & 0 \\ 0 & 0 & .3 & .7 & 0 \\ 0 & 0 & .5 & .4 & .1 \\ 0 & 0 & 0 & .8 & .2 \end{bmatrix}


Definition: Given two states i and j, a path from i to j is a sequence of transitions that begins in i and ends in j, such that each transition in the sequence has a positive probability of occurring.

Definition: A state j is reachable from state i if there is a path leading from i to j.

Definition: Two states i and j are said to communicate if j is reachable from i, and i is reachable from j.

Definition: A set of states S in a Markov chain is a closed set if no state outside of S is reachable from any state in S.


Definition: A state i is an absorbing state if p_ii = 1.

Definition: A state i is a transient state if there exists a state j that is reachable from i, but state i is not reachable from state j.

Definition: If a state is not transient, it is called a recurrent state.

Definition: A state i is periodic with period k > 1 if k is the smallest number such that all paths leading from state i back to state i have a length that is a multiple of k. If a recurrent state is not periodic, it is referred to as aperiodic.

If all states in a chain are recurrent, aperiodic, and communicate with each other, the chain is said to be ergodic.

The importance of these concepts will become clear after the next two sections.


5.5 Steady-State Probabilities and Mean First Passage Times

Steady-state probabilities are used to describe the long-run behavior of a Markov chain.

Theorem 1: Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector π = [π_1 π_2 … π_s] such that

\lim_{n \to \infty} P^n = \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_s \\ \pi_1 & \pi_2 & \cdots & \pi_s \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_s \end{bmatrix}


Theorem 1 tells us that for any initial state i,

\lim_{n \to \infty} P_{ij}(n) = \pi_j

The vector π = [π_1 π_2 … π_s] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain. Hence, the steady-state probabilities are independent of the initial probability distribution defined over the states.


Transient Analysis & Intuitive Interpretation

The behavior of a Markov chain before the steady state is reached is often called transient (or short-run) behavior.

An intuitive interpretation can be given to the steady-state probability equations:

\pi_j (1 - p_{jj}) = \sum_{k \neq j} \pi_k p_{kj}

This equation may be viewed as saying that in the steady state, the flow of probability into each state must equal the flow of probability out of each state.


Steady-State Probabilities

The vector π = [π_1, π_2, …, π_s] is often known as the steady-state distribution for the Markov chain.

For large n and all i,

P_{ij}(n+1) ≈ P_{ij}(n) ≈ π_j

Since P_{ij}(n+1) = \sum_{k=1}^{s} P_{ik}(n) p_{kj}, letting n grow large gives

\pi_j = \sum_{k=1}^{s} \pi_k p_{kj}

or, in matrix form, π = πP.

For any n and any i,

P_{i1}(n) + P_{i2}(n) + … + P_{is}(n) = 1

As n → ∞, this gives π_1 + π_2 + … + π_s = 1.
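In practice the system π = πP together with the normalization \sum_j π_j = 1 is solved as a set of linear equations. A minimal NumPy sketch (the helper name steady_state is my own); applied to the cola matrix it returns (2/3, 1/3):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1 for an ergodic chain.

    Rewritten as (P^T - I) pi = 0, with the last equation replaced by
    the normalization constraint.
    """
    s = P.shape[0]
    A = P.T - np.eye(s)
    A[-1, :] = 1.0          # replace one redundant equation by sum(pi) = 1
    b = np.zeros(s)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P_cola = np.array([[0.90, 0.10],
                   [0.20, 0.80]])
print(steady_state(P_cola))   # [0.6667 0.3333], i.e. pi = (2/3, 1/3)
```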


An Intuitive Interpretation of Steady-State Probabilities

Consider

\pi_j = \sum_{k=1}^{s} \pi_k p_{kj}

Subtracting π_j p_jj from both sides of the above equation, we have

\pi_j (1 - p_{jj}) = \sum_{k \neq j} \pi_k p_{kj}

Probability that a particular transition enters state j = probability that a particular transition leaves state j.


Use of Steady-State Probabilities in Decision Making

In the Cola Example, suppose that each customer makes one purchase of cola during any week.

Suppose there are 100 million cola customers.

One selling unit of cola costs the company $1 to produce and is sold for $2.

For $500 million/year, an advertising firm guarantees to decrease from 10% to 5% the fraction of cola 1 customers who switch after a purchase.

    Should the company that makes cola 1 hire the firm?


At present, a fraction π_1 = 2/3 of all purchases are cola 1 purchases, since the steady-state probabilities satisfy

π_1 = .90π_1 + .20π_2
π_2 = .10π_1 + .80π_2

Replacing the second equation by π_1 + π_2 = 1 and solving, we obtain π_1 = 2/3 and π_2 = 1/3.

Each purchase of cola 1 earns the company a $1 profit. We can calculate the annual profit as $3,466,666,667 [(2/3)(100 million)(52 weeks)($1)].

The advertising firm is offering to change the P matrix to

P1 = \begin{bmatrix} .95 & .05 \\ .20 & .80 \end{bmatrix}


For P1, the steady-state equations become

π_1 = .95π_1 + .20π_2
π_2 = .05π_1 + .80π_2

Replacing the second equation by π_1 + π_2 = 1 and solving, we obtain π_1 = .8 and π_2 = .2.

Now the cola 1 company's annual profit will be $3,660,000,000 [(.8)(100 million)(52 weeks)($1) - ($500 million)].

    Hence, the cola 1 company should hire the ad

    agency.
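The whole comparison can be reproduced in a few lines of NumPy, reusing a steady-state solver like the one sketched earlier (variable names are illustrative):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P with the normalization sum(pi) = 1."""
    s = P.shape[0]
    A = P.T - np.eye(s)
    A[-1, :] = 1.0
    b = np.zeros(s)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P_now = np.array([[0.90, 0.10],   # current switching behavior
                  [0.20, 0.80]])
P_ad = np.array([[0.95, 0.05],    # behavior promised by the advertising firm
                 [0.20, 0.80]])

customers = 100e6   # cola customers
weeks = 52          # one purchase per customer per week
ad_cost = 500e6     # advertising cost per year

profit_now = steady_state(P_now)[0] * customers * weeks * 1.0
profit_ad = steady_state(P_ad)[0] * customers * weeks * 1.0 - ad_cost

print(round(profit_now))   # ~3,466,666,667
print(round(profit_ad))    # 3,660,000,000 -> hiring the firm increases profit
```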


Inventory Example

A camera store stocks a particular model camera that can be ordered weekly. Let D_1, D_2, … represent the demand for this camera (the number of units that would be sold if the inventory is not depleted) during the first week, second week, …, respectively. It is assumed that the D_i's are independent and identically distributed random variables having a Poisson distribution with a mean of 1. Let X_0 represent the number of cameras on hand at the outset, X_1 the number of cameras on hand at the end of week 1, X_2 the number of cameras on hand at the end of week 2, and so on.

Assume that X_0 = 3.

On Saturday night the store places an order that is delivered in time for the next opening of the store on Monday.

The store uses the following ordering policy: if there are no cameras in stock, 3 cameras are ordered; otherwise, no order is placed.

Sales are lost when demand exceeds the inventory on hand.


    Inventory Example

X_t is the number of cameras in stock at the end of week t (as defined earlier), where X_t represents the state of the system at time t.

Given that X_t = i, X_{t+1} depends only on D_{t+1} and X_t (Markovian property).

D_t has a Poisson distribution with mean equal to one. This means that

P(D_{t+1} = n) = e^{-1} 1^n / n!  for n = 0, 1, …

P(D_t = 0) = e^{-1} = 0.368
P(D_t = 1) = e^{-1} = 0.368
P(D_t = 2) = (1/2) e^{-1} = 0.184
P(D_t >= 3) = 1 - P(D_t <= 2) = 1 - (.368 + .368 + .184) = 0.080

X_{t+1} = max(3 - D_{t+1}, 0) if X_t = 0, and X_{t+1} = max(X_t - D_{t+1}, 0) if X_t >= 1, for t = 0, 1, 2, …


Inventory Example: (One-Step) Transition Matrix P

p_03 = P(D_{t+1} = 0) = 0.368
p_02 = P(D_{t+1} = 1) = 0.368
p_01 = P(D_{t+1} = 2) = 0.184
p_00 = P(D_{t+1} >= 3) = 0.080

With rows and columns ordered by the states 0, 1, 2, 3, the matrix has the form

P = \begin{bmatrix} p_{00} & p_{01} & p_{02} & p_{03} \\ p_{10} & p_{11} & p_{12} & p_{13} \\ p_{20} & p_{21} & p_{22} & p_{23} \\ p_{30} & p_{31} & p_{32} & p_{33} \end{bmatrix}


    Inventory Example: Transition Diagram

(State-transition diagram with nodes 0, 1, 2, and 3; the arc probabilities are the entries of the matrix below.)


Inventory Example: (One-Step) Transition Matrix

With rows and columns ordered by the states 0, 1, 2, 3:

P = \begin{bmatrix} .080 & .184 & .368 & .368 \\ .632 & .368 & 0 & 0 \\ .264 & .368 & .368 & 0 \\ .080 & .184 & .368 & .368 \end{bmatrix}
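This matrix can be generated directly from the Poisson demand distribution and the order-up-to-3 policy. A NumPy sketch (the helper names are illustrative choices, not from the text):

```python
import math
import numpy as np

def poisson_pmf(n, mean=1.0):
    """P(D = n) for Poisson demand with the given mean."""
    return math.exp(-mean) * mean**n / math.factorial(n)

def inventory_matrix(order_up_to=3, mean=1.0):
    """One-step transition matrix for the camera inventory policy.

    State i = cameras on hand at the end of the week.  If i = 0, three
    cameras are ordered (stock becomes 3); otherwise nothing is ordered.
    Next state j = max(stock - demand, 0).
    """
    s = order_up_to + 1
    P = np.zeros((s, s))
    for i in range(s):
        stock = order_up_to if i == 0 else i
        for j in range(1, stock + 1):
            P[i, j] = poisson_pmf(stock - j, mean)   # demand exactly stock - j
        P[i, 0] = 1.0 - P[i, 1:].sum()               # demand >= stock empties the shelf
    return P

print(np.round(inventory_matrix(), 3))   # matches the matrix above
```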


Transition Matrix: Two-Step

P^{(2)} = P \cdot P = \begin{bmatrix} .249 & .286 & .300 & .165 \\ .283 & .252 & .233 & .233 \\ .351 & .319 & .233 & .097 \\ .249 & .286 & .300 & .165 \end{bmatrix}  (states 0, 1, 2, 3)


Transition Matrix: Four-Step

P^{(4)} = P^{(2)} \cdot P^{(2)} = \begin{bmatrix} .289 & .286 & .261 & .164 \\ .282 & .285 & .268 & .166 \\ .284 & .283 & .263 & .171 \\ .289 & .286 & .261 & .164 \end{bmatrix}


Transition Matrix: Eight-Step

P^{(8)} = P^{(4)} \cdot P^{(4)} = \begin{bmatrix} .286 & .285 & .264 & .166 \\ .286 & .285 & .264 & .166 \\ .286 & .285 & .264 & .166 \\ .286 & .285 & .264 & .166 \end{bmatrix}


    Steady-State Probabilities

The steady-state probabilities uniquely satisfy the following steady-state equations:

π_0 = π_0 p_00 + π_1 p_10 + π_2 p_20 + π_3 p_30
π_1 = π_0 p_01 + π_1 p_11 + π_2 p_21 + π_3 p_31
π_2 = π_0 p_02 + π_1 p_12 + π_2 p_22 + π_3 p_32
π_3 = π_0 p_03 + π_1 p_13 + π_2 p_23 + π_3 p_33
1 = π_0 + π_1 + π_2 + π_3

In general, for j = 0, 1, 2, …, s:

\pi_j = \sum_{i=0}^{s} \pi_i p_{ij},   \sum_{j=0}^{s} \pi_j = 1


    Steady-State Probabilities: Inventory Example

π_0 = .080π_0 + .632π_1 + .264π_2 + .080π_3
π_1 = .184π_0 + .368π_1 + .368π_2 + .184π_3
π_2 = .368π_0 + .368π_2 + .368π_3
π_3 = .368π_0 + .368π_3
1 = π_0 + π_1 + π_2 + π_3

Solving: π_0 = .286, π_1 = .285, π_2 = .263, π_3 = .166

    The numbers in each row of matrix P(8) match the

    corresponding steady-state probability
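A NumPy sketch that solves this system and compares the result with the rows of P^(8):

```python
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

# Solve pi = pi P together with pi_0 + pi_1 + pi_2 + pi_3 = 1
A = P.T - np.eye(4)
A[-1, :] = 1.0                     # replace the last equation by the normalization
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(np.round(pi, 3))                            # ~[0.286 0.285 0.263 0.166]
print(np.round(np.linalg.matrix_power(P, 8), 3))  # each row is close to pi
```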


    Mean First Passage Times

For an ergodic chain, let m_ij = expected number of transitions before we first reach state j, given that we are currently in state i; m_ij is called the mean first passage time from state i to state j.

Assume we are currently in state i. Then with probability p_ij, it will take one transition to go from state i to state j. For k ≠ j, we next go with probability p_ik to state k; in this case, it will take an average of 1 + m_kj transitions to go from i to j.


This reasoning implies

m_{ij} = 1 + \sum_{k \neq j} p_{ik} m_{kj}

By solving the linear equations given above, we find all the mean first passage times. It can be shown that

m_{ii} = 1 / \pi_i


For the cola example, π_1 = 2/3 and π_2 = 1/3.

Hence, m_11 = 1.5 and m_22 = 3.

m_12 = 1 + p_11 m_12 = 1 + .9 m_12
m_21 = 1 + p_22 m_21 = 1 + .8 m_21

Solving these two equations yields m_12 = 10 and m_21 = 5.
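The same first-passage-time system can be solved for all pairs of states at once. A NumPy sketch (the helper name mean_first_passage is illustrative), shown reproducing the cola results:

```python
import numpy as np

def mean_first_passage(P):
    """Mean first passage times m[i, j] for an ergodic chain.

    For each target state j this solves m_ij = 1 + sum_{k != j} p_ik * m_kj
    for all i; the diagonal m[j, j] is the mean recurrence time 1 / pi_j.
    """
    s = P.shape[0]
    M = np.zeros((s, s))
    for j in range(s):
        Pj = P.copy()
        Pj[:, j] = 0.0   # drop the k = j terms from the sum
        M[:, j] = np.linalg.solve(np.eye(s) - Pj, np.ones(s))
    return M

P_cola = np.array([[0.90, 0.10],
                   [0.20, 0.80]])
print(mean_first_passage(P_cola))
# [[ 1.5 10. ]
#  [ 5.   3. ]]
```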


Solving for Steady-State Probabilities and Mean First Passage Times on the Computer

Since we find steady-state probabilities and mean first passage times by solving a system of linear equations, we may use LINDO to determine them.

Simply type in an objective function of 0, and type the equations you need to solve as your constraints.


    5.6 Absorbing Chains

Many interesting applications of Markov chains involve chains in which some of the states are absorbing and the rest are transient states.

This type of chain is called an absorbing chain.

To see why we are interested in absorbing chains, we consider the following accounts receivable example.


Accounts Receivable Example

The accounts receivable situation of a firm is often modeled as an absorbing Markov chain.

Suppose a firm assumes that an account is uncollected if the account is more than three months overdue.

Then at the beginning of each month, each account may be classified into one of the following states:

State 1 - New account
State 2 - Payment on account is one month overdue
State 3 - Payment on account is two months overdue
State 4 - Payment on account is three months overdue
State 5 - Account has been paid
State 6 - Account is written off as bad debt


Suppose that past data indicate that the following Markov chain describes how the status of an account changes from one month to the next month (rows and columns ordered New, 1 month, 2 months, 3 months, Paid, Bad Debt):

P = \begin{bmatrix} 0 & .6 & 0 & 0 & .4 & 0 \\ 0 & 0 & .5 & 0 & .5 & 0 \\ 0 & 0 & 0 & .4 & .6 & 0 \\ 0 & 0 & 0 & 0 & .7 & .3 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}


To simplify our example, we assume that after three months, a debt is either collected or written off as a bad debt.

Once a debt is paid up or written off as a bad debt, the account is closed, and no further transitions occur.

Hence, Paid and Bad Debt are absorbing states. Since every account will eventually be paid or written off as a bad debt, New, 1 month, 2 months, and 3 months are transient states.


A typical new account will be absorbed as either a collected debt or a bad debt.

What is the probability that a new account will eventually be collected?

To answer this question we must write the transition matrix in a particular form. We assume there are s - m transient states and m absorbing states. The transition matrix is then written as

P = \begin{bmatrix} Q & R \\ 0 & I \end{bmatrix}

where the first s - m rows and columns correspond to the transient states and the last m rows and columns correspond to the absorbing states.


The transition matrix for this example is the one given earlier, with the states ordered New, 1 month, 2 months, 3 months, Paid, Bad Debt. Then s = 6, m = 2, and Q and R are as shown:

Q = \begin{bmatrix} 0 & .6 & 0 & 0 \\ 0 & 0 & .5 & 0 \\ 0 & 0 & 0 & .4 \\ 0 & 0 & 0 & 0 \end{bmatrix}  (transient-to-transient block),   R = \begin{bmatrix} .4 & 0 \\ .5 & 0 \\ .6 & 0 \\ .7 & .3 \end{bmatrix}  (transient-to-absorbing block)


1. What is the probability that a new account will eventually be collected? (.964)

2. What is the probability that a one-month overdue account will eventually become a bad debt? (.06)

3. If the firm's sales average $100,000 per month, how much money per year will go uncollected? From answer 1, only 3.6% of all debts go uncollected. Since yearly accounts receivable average $1,200,000, (0.036)($1,200,000) = $43,200 per year will be uncollected.
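One standard way to obtain these numbers (not shown explicitly in the excerpt above) is the matrix calculation (I - Q)^{-1} R, whose (i, k) entry is the probability that transient state i is eventually absorbed into absorbing state k. A NumPy sketch for this example:

```python
import numpy as np

# Transient states (in order): New, 1 month, 2 months, 3 months
Q = np.array([[0.0, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0, 0.0]])
# Absorbing states (in order): Paid, Bad Debt
R = np.array([[0.4, 0.0],
              [0.5, 0.0],
              [0.6, 0.0],
              [0.7, 0.3]])

absorb = np.linalg.solve(np.eye(4) - Q, R)   # (I - Q)^{-1} R
print(np.round(absorb, 3))
# New account:     [0.964 0.036]  -> answer 1
# 1 month overdue: [0.940 0.060]  -> answer 2
print(0.036 * 1_200_000)   # 43200.0, the uncollected dollars per year in answer 3
```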

