Markovian Queues and Stochastic Networks
Lecture 1
Richard J. Boucherie
Stochastic Operations Research
Overview MQSN
I Background on Markov chains
I Reversibility, output theorem, tandem networks, feedforward networks
I Partial balance, Markovian routing, Kelly-Whittle networks
I Kelly’s lemma, time-reversed process, networks with fixed routes
I Advanced topics
Markovian Queues and Stochastic Networks 2 / 40
Literature
I R.D. Nelson, Probability, Stochastic Processes, and Queueing Theory, 1995, chapter 10
I F.P. Kelly, Reversibility and Stochastic Networks, 1979, chapters 1–4, www.statslab.cam.ac.uk/~frank/BOOKS/kelly_book.html
I R.W. Wolff, Stochastic Modeling and the Theory of Queues, Prentice Hall, 1989
I R.J. Boucherie, N.M. van Dijk (editors), Queueing Networks: A Fundamental Approach, International Series in Operations Research and Management Science Vol. 154, Springer, 2011
I Reader: R.J. Boucherie, Markovian Queueing Networks, 2018 (work in progress)
Internet of Things: optimal route in Jackson network
I Jobs arrive at outside nodes with a given destination
I Each node is a single server queue; minimize sojourn time
I Optimal route selection
I Inform jobs in neighbouring node of an alternative route
Internet of Things: optimal route in Jackson network
I Tandem of M|M|1 queues
I Sojourn time
I Average sojourn time at queue i:

    ES_i = (µ_i − λ_i)^{-1}

I On the route:

    ES = ∑_i (µ_i − λ_i)^{-1}
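These formulas translate directly into a few lines of Python. A minimal sketch (the function names and the rates below are hypothetical example values, not part of the course material):

```python
def mean_sojourn(mu_i, lam_i):
    """E[S_i] = (mu_i - lam_i)^(-1) for a stable M|M|1 queue."""
    if lam_i >= mu_i:
        raise ValueError("queue unstable: lam_i >= mu_i")
    return 1.0 / (mu_i - lam_i)

def route_sojourn(route, mu, lam):
    """Mean sojourn time on a route: ES = sum over queues i on the route."""
    return sum(mean_sojourn(mu[i], lam[i]) for i in route)

# Hypothetical tandem of three M|M|1 queues
mu = {1: 4.0, 2: 5.0, 3: 3.0}
lam = {1: 2.0, 2: 2.0, 3: 2.0}
es = route_sojourn([1, 2, 3], mu, lam)  # 1/2 + 1/3 + 1/1
```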
Internet of Things: optimal route in Jackson network
I For fixed routes via a set of queues
I On route r:

    ES_r = ∑_i (µ_i − λ_i)^{-1} 1(i on r)
Challenge
I Grid N × N
I On each side k flows arrive from sources at randomly selected (but fixed) nodes, with destination a randomly selected (but fixed) node on one of the 4 sides
I At each gridpoint a single server queue handles and forwards packets
I Packets select their route from source to destination to minimize their travelling time (no travelling time on links)
I Packets may communicate with neighbours to avoid congestion and change their route accordingly
I Poisson arrivals of packets; general processing times at nodes; one destination on each side
I Develop a decentralized routing algorithm to minimize mean travelling times and demonstrate that it outperforms shortest (and fixed) route selection
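A centralized baseline for shortest (expected-delay) route selection can be sketched with Dijkstra's algorithm, weighting each grid node by its expected M|M|1 sojourn time (µ − λ)^{-1} and assuming, as above, zero travelling time on links. The grid size and rates below are hypothetical:

```python
import heapq

def expected_delay(mu, lam):
    """Expected sojourn time at a node, (mu - lam)^(-1); infinite if unstable."""
    return float("inf") if lam >= mu else 1.0 / (mu - lam)

def best_route(mu, lam, source, dest, N):
    """Dijkstra on the N x N grid: node weight = expected sojourn time,
    zero travelling time on links, 4-connected neighbours."""
    dist = {source: expected_delay(mu[source], lam[source])}
    prev = {}
    heap = [(dist[source], source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < N and 0 <= nb[1] < N:
                nd = d + expected_delay(mu[nb], lam[nb])
                if nd < dist.get(nb, float("inf")):
                    dist[nb], prev[nb] = nd, node
                    heapq.heappush(heap, (nd, nb))
    route, node = [dest], dest
    while node != source:
        node = prev[node]
        route.append(node)
    return route[::-1], dist[dest]

# 2 x 2 grid; node (1, 0) is heavily loaded, so the route avoids it
mu = {(x, y): 2.0 for x in range(2) for y in range(2)}
lam = {(x, y): 1.0 for x in range(2) for y in range(2)}
lam[(1, 0)] = 1.9
route, delay = best_route(mu, lam, (0, 0), (1, 1), 2)
```

A decentralized algorithm, as asked for in the challenge, would instead let each node exchange its current load with neighbours; this sketch only provides the static benchmark.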
Continuous-time Markov chain
I Stochastic process {N(t), t ∈ T} records the evolution of a random variable, T = R
I State space S ⊆ N_0^J, state n = (n_1, ..., n_J)
I Stationary process if (N(t_1), N(t_2), ..., N(t_k)) has the same distribution as (N(t_1 + τ), N(t_2 + τ), ..., N(t_k + τ)) for all k ∈ N, t_1, t_2, ..., t_k ∈ T, τ ∈ T
I A Markov process satisfies the Markov property: for every k ≥ 1, 0 ≤ t_1 < ··· < t_k < t_{k+1}, and any n_1, ..., n_{k+1} in S, the joint distribution of (N(t_1), ..., N(t_{k+1})) is such that

    P{N(t_{k+1}) = n_{k+1} | N(t_1) = n_1, ..., N(t_k) = n_k} = P{N(t_{k+1}) = n_{k+1} | N(t_k) = n_k},

whenever the conditioning event (N(t_1) = n_1, ..., N(t_k) = n_k) has positive probability.
Continuous-time Markov chain – 2
I A Markov process is time-homogeneous if the conditional probability P{N(t+s) = n′ | N(t) = n} is independent of t for all s > 0, n, n′ ∈ S.
I For a time-homogeneous Markov process the transition probability from state n to state n′ in time s is defined as

    P(n, n′; s) = P{N(t+s) = n′ | N(t) = n},  s > 0.
Continuous-time Markov chain – 3
I The transition matrix P(t) = (P(n,n′; t), n,n′ ∈ S) has non-negative entries (1) and row sums equal to one (2).
I The Markov property implies that the transition probabilities satisfy the Chapman-Kolmogorov equations (3). Assume that the transition matrix is standard (4). For all n,n′ ∈ S, s, t ∈ T, a standard transition matrix satisfies:

    P(n,n′; t) ≥ 0;   (1)
    ∑_{n′∈S} P(n,n′; t) = 1;   (2)
    P(n,n′′; t+s) = ∑_{n′∈S} P(n,n′; t) P(n′,n′′; s);   (3)
    lim_{t↓0} P(n,n′; t) = δ_{n,n′}.   (4)
Continuous-time Markov chain – 4
I For a standard transition matrix the transition rate from state n to state n′ can be defined as

    q(n,n′) = lim_{h↓0} [ P(n,n′; h) − δ_{n,n′} ] / h.

I For all n,n′ ∈ S this limit exists.
I The Markov process is called a continuous-time Markov chain if for all n,n′ ∈ S the limit exists and is finite (5).
I Assume that the rate matrix Q = (q(n,n′), n,n′ ∈ S) is stable (6) and conservative (7):

    0 ≤ q(n,n′) < ∞,  n′ ≠ n;   (5)
    q(n) := −q(n,n) < ∞;   (6)
    ∑_{n′∈S} q(n,n′) = 0.   (7)
Continuous-time Markov chain – 5
I If the rate matrix is stable, the transition probabilities can be expressed in the transition rates: for n,n′ ∈ S,

    P(n,n′; h) = δ_{n,n′} + q(n,n′) h + o(h),  h ↓ 0,   (8)

where o(h) denotes a function g(h) with the property that g(h)/h → 0 as h → 0.
I For small positive values of h and n′ ≠ n, q(n,n′) h may be interpreted as the conditional probability that the Markov chain {N(t)} makes a transition to state n′ during (t, t+h) given that the process is in state n at time t.
Continuous-time Markov chain – 6
I For every initial state N(0) = n, {N(t), t ∈ T} is a pure-jump process: the process jumps from state to state and remains in each state for a strictly positive sojourn time with probability 1.
I The Markov chain remains in state n for a negative-exponential sojourn time with mean q(n)^{-1}.
I Conditional on the process departing from state n, it jumps to state n′ with probability p(n,n′) = q(n,n′)/q(n).
I The Markov chain represented via the holding rates q(n) and transition probabilities p(n,n′), n,n′ ∈ S, is referred to as the Markov jump chain.
I The Markov chain with transition rates q(n,n′) is obtained from the Markov jump chain with holding times with mean q(n)^{-1} and transition probabilities p(n,n′) as q(n,n′) = q(n) p(n,n′), n,n′ ∈ S.
Continuous-time Markov chain – 7
I From the Chapman-Kolmogorov equations

    P(n,n′′; t+s) = ∑_{n′∈S} P(n,n′; t) P(n′,n′′; s)

two systems of differential equations for the transition probabilities can be obtained:
I Conditioning on the first jump of the Markov chain in (0, t] yields the so-called Kolmogorov backward equations (9), whereas conditioning on the last jump in (0, t] gives the Kolmogorov forward equations (10): for n,n′ ∈ S, t ≥ 0,

    dP(n,n′; t)/dt = ∑_{n′′∈S} q(n,n′′) P(n′′,n′; t),   (9)
    dP(n,n′; t)/dt = ∑_{n′′∈S} P(n,n′′; t) q(n′′,n′).   (10)
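For a finite state space both systems are solved by P(t) = exp(Qt). A small numerical sketch using uniformization (the example chain below is a hypothetical M|M|1|2 queue with λ = 1, µ = 2, chosen for illustration):

```python
import math

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_matrix(Q, t, terms=200):
    """P(t) = exp(Qt) by uniformization: with L >= max_n q(n) and
    R = I + Q/L (a stochastic matrix), P(t) = sum_k e^{-Lt}(Lt)^k/k! R^k."""
    n = len(Q)
    L = max(-Q[i][i] for i in range(n)) or 1.0
    R = [[(1.0 if i == j else 0.0) + Q[i][j] / L for j in range(n)]
         for i in range(n)]
    P = [[0.0] * n for _ in range(n)]
    Rk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    w = math.exp(-L * t)  # Poisson(Lt) weight at k = 0
    for k in range(terms):
        for i in range(n):
            for j in range(n):
                P[i][j] += w * Rk[i][j]
        Rk = matmul(Rk, R)
        w *= L * t / (k + 1)
    return P

# Hypothetical M|M|1|2 queue (states 0, 1, 2) with lam = 1, mu = 2
Q = [[-1.0, 1.0, 0.0],
     [2.0, -3.0, 1.0],
     [0.0, 2.0, -2.0]]
P = transition_matrix(Q, 10.0)
```

For large t the rows of P(t) approach the equilibrium distribution, here (4/7, 2/7, 1/7).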
Continuous-time Markov chain – 8
I Derivation of the Kolmogorov forward equations (regular case):

    P(n,n′; t+h) = ∑_{n′′} P(n,n′′; t) P(n′′,n′; h)

    P(n,n′; t+h) − P(n,n′; t)
      = ∑_{n′′≠n′} P(n,n′′; t) P(n′′,n′; h) + P(n,n′; t)[P(n′,n′; h) − 1]
      = ∑_{n′′≠n′} { P(n,n′′; t) P(n′′,n′; h) − P(n,n′; t) P(n′,n′′; h) }

    dP(n,n′; t)/dt = ∑_{n′′≠n′} { P(n,n′′; t) q(n′′,n′) − P(n,n′; t) q(n′,n′′) }
Explosion in a pure birth process
I Consider the Markov chain on state space S = N_0 with transition rates

    q(n,n′) = { q(n),   if n′ = n + 1,
                −q(n),  if n′ = n,
                0,      otherwise,

with initial distribution P(N(0) = n) = δ(n, 0).
I Let ξ(n) denote the time spent in state n; ξ = ∑_{n=0}^∞ ξ(n).
I Let q(n) = 2^n; then

    E{ξ} = ∑_{n=0}^∞ E{ξ(n)} = ∑_{n=0}^∞ 2^{-n} = 2.

As E{ξ} < ∞, ξ is finite with probability one: the chain makes infinitely many jumps in the finite time ξ (explosion).
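The explosion is easy to see by simulation: the total time needed for any number of jumps concentrates around E{ξ} = 2. A sketch (the seed, jump count, and sample size below are arbitrary choices):

```python
import random

def time_of_first_jumps(jumps, rng):
    """Time the pure birth chain with q(n) = 2^n needs for `jumps` jumps:
    a sum of independent Exp(2^n) sojourn times, n = 0, 1, 2, ..."""
    return sum(rng.expovariate(2.0 ** n) for n in range(jumps))

rng = random.Random(42)
# E{xi} = sum_n 2^{-n} = 2, so xi < infinity almost surely: explosion
samples = [time_of_first_jumps(60, rng) for _ in range(2000)]
mean_xi = sum(samples) / len(samples)
```

After 60 jumps the remaining expected time ∑_{n≥60} 2^{-n} is negligible, so the sample mean is close to 2.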
Continuous-time Markov chain – 9
Theorem (1.1.2)
For a conservative, stable, regular continuous-time Markov chain the forward equations (10) and the backward equations (9) have the same unique solution {P(n,n′; t), n,n′ ∈ S, t ≥ 0}. Moreover, this unique solution is the transition matrix of the Markov chain.
I The transient distribution P(n, t) = P{N(t) = n} can be obtained from the Kolmogorov forward equations: for n ∈ S, t ≥ 0,

    dP(n, t)/dt = ∑_{n′≠n} { P(n′, t) q(n′,n) − P(n, t) q(n,n′) },
    P(n, 0) = P_0(n).
Continuous-time Markov chain – 10
I A measure m = (m(n), n ∈ S) such that 0 ≤ m(n) < ∞ for all n ∈ S and m(n) > 0 for some n ∈ S is called a stationary measure if for all n ∈ S, t ≥ 0,

    m(n) = ∑_{n′∈S} m(n′) P(n′,n; t),

and is called an invariant measure if for all n ∈ S,

    ∑_{n′≠n} { m(n) q(n,n′) − m(n′) q(n′,n) } = 0.

I {N(t)} is ergodic if it is positive-recurrent with a stationary measure having finite mass
I Global balance; interpretation
Continuous-time Markov chain – 11
Theorem (1.1.4 Equilibrium distribution)
Let {N(t), t ≥ 0} be a conservative, stable, regular, irreducible continuous-time Markov chain.
(i) If a positive finite mass invariant measure m exists, then the Markov chain is positive-recurrent (ergodic). In this case π(n) = m(n) [ ∑_{n∈S} m(n) ]^{-1}, n ∈ S, is the unique stationary distribution and π is the equilibrium distribution, i.e., for all n,n′ ∈ S,

    lim_{t→∞} P(n′,n; t) = π(n).

(ii) If a positive finite mass invariant measure does not exist, then for all n,n′ ∈ S,

    lim_{t→∞} P(n′,n; t) = 0.
The birth-death process – 1
I A birth-death process is a Markov chain {N(t), t ∈ T}, T = [0,∞) or T = R, with state space S ⊆ N_0 and transition rates, for λ, µ : S → [0,∞),

    q(n,n′) = { λ(n),          if n′ = n + 1,  (birth rate)
                µ(n) 1(n > 0), if n′ = n − 1,  (death rate)
                −λ(n) − µ(n),  if n′ = n, n > 0,
                −λ(0),         if n′ = n = 0.

I Kolmogorov forward equations:

    dP(n, t)/dt = P(n−1, t) λ(n−1) − P(n, t)[λ(n) + µ(n)] + P(n+1, t) µ(n+1),  n > 0,
    dP(0, t)/dt = −P(0, t) λ(0) + P(1, t) µ(1).
The birth-death process – 2
I Global balance equations:

    0 = π(n−1) λ(n−1) − π(n)[λ(n) + µ(n)] + π(n+1) µ(n+1),  n > 0,
    0 = −π(0) λ(0) + π(1) µ(1).

I π satisfies the detailed balance equations:

    π(n) λ(n) = π(n+1) µ(n+1),  n ∈ S.

Theorem (2.1.1)
Let {N(t)} be a birth-death process with state space S = N_0, birth rates λ(n) and death rates µ(n). If

    π(0) := [ ∑_{n=0}^∞ ∏_{r=0}^{n−1} λ(r)/µ(r+1) ]^{-1} > 0,

then the equilibrium distribution is

    π(n) = π(0) ∏_{r=0}^{n−1} q(r, r+1)/q(r+1, r) = π(0) ∏_{r=0}^{n−1} λ(r)/µ(r+1),  n ∈ S.
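Theorem 2.1.1 translates directly into a numerical routine. A sketch, truncating the infinite sum at a level where the tail is negligible (the truncation level and test rates are arbitrary):

```python
def bd_equilibrium(lam, mu, nmax):
    """pi(n) = pi(0) * prod_{r=0}^{n-1} lam(r)/mu(r+1), normalised over 0..nmax."""
    w = [1.0]
    for n in range(1, nmax + 1):
        w.append(w[-1] * lam(n - 1) / mu(n))
    total = sum(w)
    return [x / total for x in w]

# M|M|1 with lam = 1, mu = 2: pi(n) = (1 - rho) rho^n with rho = 1/2
pi = bd_equilibrium(lambda n: 1.0, lambda n: 2.0, 60)
```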
Example: The M|M|1 queue
I Customers arrive to a queue according to a Poisson process (the arrival process) with rate λ.
I A single server serves the customers in order of arrival.
I Customers’ service times have a negative-exponential distribution with mean µ^{-1} and are independent of each other and of the arrival process.
I {N(t), t ∈ T}, T = [0,∞), recording the number of customers in the queue is a birth-death process on S = N_0 with

    q(n,n′) = { λ(n) = λ,          if n′ = n + 1,  (birth rate)
                µ(n) = µ 1(n > 0), if n′ = n − 1,  (death rate)

and equilibrium distribution

    π(n) = (1 − ρ) ρ^n,  n ∈ S,

provided that the queue is stable: ρ := λ/µ < 1.
Example: The M|M|1|c queue
I M|M|1 queue, but now with a finite waiting room that may contain at most c − 1 customers.
I {N(t), t ∈ T}, T = [0,∞), recording the number of customers in the queue is a birth-death process on S = {0, 1, 2, ..., c} with

    q(n,n′) = { λ(n) = λ 1(n < c), if n′ = n + 1,  (birth rate)
                µ(n) = µ 1(n > 0), if n′ = n − 1,  (death rate).

I Detailed balance equations are truncated at state c
I The equilibrium distribution is truncated to S:

    π(n) = π(0) ρ^n,  n ∈ {0, 1, ..., c},

with

    π(0) = [ ∑_{n=0}^c ρ^n ]^{-1} = (1 − ρ)/(1 − ρ^{c+1}).
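The truncated geometric distribution is easy to verify numerically. A sketch with arbitrary example rates (λ = 1, µ = 2, c = 4):

```python
def mm1c_pi(lam, mu, c):
    """Equilibrium distribution of M|M|1|c: pi(n) = pi(0) rho^n on {0, ..., c}."""
    rho = lam / mu
    pi0 = 1.0 / (c + 1) if rho == 1.0 else (1.0 - rho) / (1.0 - rho ** (c + 1))
    return [pi0 * rho ** n for n in range(c + 1)]

pi = mm1c_pi(1.0, 2.0, 4)
```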
Detailed balance – 1
Definition (2.2.1 Detailed balance)
A Markov chain {N(t)} on state space S with transition rates q(n,n′), n,n′ ∈ S, satisfies detailed balance if a distribution π = (π(n), n ∈ S) exists that satisfies for all n,n′ ∈ S the detailed balance equations:
π(n)q(n,n′)− π(n′)q(n′,n) = 0.
Theorem (2.2.2)If the distribution π satisfies the detailed balance equationsthen π is the equilibrium distribution.
I The detailed balance equations state that the probabilityflow between each pair of states is balanced.
Detailed balance – 2
Lemma (2.2.3, 2.2.4 Kolmogorov’s criterion)
{N(t)} satisfies detailed balance if and only if for all r ∈ N and any finite sequence of states n_1, n_2, ..., n_r ∈ S, n_r = n_1,

    ∏_{i=1}^{r−1} q(n_i, n_{i+1}) = ∏_{i=1}^{r−1} q(n_{r−i+1}, n_{r−i}).

If {N(t)} satisfies detailed balance, then

    π(n) = π(n′) [ q(n_1,n_2) q(n_2,n_3) ··· q(n_{r−1},n_r) ] / [ q(n_2,n_1) q(n_3,n_2) ··· q(n_r,n_{r−1}) ],

for arbitrary n′ ∈ S, for all r ∈ N, and any path n_1, n_2, ..., n_r ∈ S such that n_1 = n′, n_r = n, for which the denominator is positive.
I Direct generalisation of the result for birth-death process.
Detailed balance – 3
Theorem (2.2.5 Truncation)
Consider {N(t)} on state space S with transition rates q(n,n′), n,n′ ∈ S, and equilibrium distribution π. Let V ⊂ S and let r > 0. If the transition rates are altered from q(n,n′) to r q(n,n′) for n ∈ V, n′ ∈ S \ V, then the resulting Markov chain {N_r(t)} satisfies detailed balance and has equilibrium distribution (G is the normalising constant)

    π_r(n) = G π(n),    n ∈ V,
    π_r(n) = G r π(n),  n ∈ S \ V.

If r = 0 then the Markov chain is truncated to V and

    π_0(n) = π(n) [ ∑_{n∈V} π(n) ]^{-1},  n ∈ V.
I Direct generalisation of the result for M|M|1|c from M|M|1.
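Theorem 2.2.5 can be checked directly on a small reversible chain. The sketch below scales the rates out of V = {0, ..., 3} in an M|M|1|6 queue by r and verifies that π_r satisfies detailed balance (the rates and the value of r are arbitrary example choices):

```python
lam, mu, r = 1.0, 2.0, 0.3
rho = lam / mu
S = list(range(7))        # M|M|1|6: pi(n) proportional to rho^n
V = set(range(4))         # subset V = {0, 1, 2, 3}

def q(n, m):
    """Birth-death rates, with rates out of V scaled by r (Theorem 2.2.5)."""
    rate = lam if m == n + 1 else (mu if m == n - 1 else 0.0)
    if n in V and m not in V:
        rate *= r
    return rate

w = [rho ** n if n in V else r * rho ** n for n in S]   # pi_r up to G
G = 1.0 / sum(w)
pi_r = [G * x for x in w]

balanced = all(abs(pi_r[n] * q(n, m) - pi_r[m] * q(m, n)) < 1e-12
               for n in S for m in S)
```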
Example: Network of parallel M|M|1 queues – 1
I Network of two M|M|1 queues in parallel.
I Queue j has arrival rate λ_j and service rate µ_j, j = 1, 2.
I {N_j(t)}, j = 1, 2, are assumed independent.
I {N(t) = (N_1(t), N_2(t))}, state space S = N_0^2, n = (n_1, n_2).
I Transition rates, for n,n′ ∈ S, n′ ≠ n:

    q(n,n′) = { λ_j, if n′ = n + e_j,  j = 1, 2,
                µ_j, if n′ = n − e_j,  j = 1, 2.

I The random variables N_j := N_j(∞) recording the equilibrium number of customers in queue j are independent:

    π(n) = ∏_{j=1}^2 π_j(n_j),  n ∈ S,

with π_j(n_j) = (1 − ρ_j) ρ_j^{n_j}, n_j ∈ N_0, provided ρ_j := λ_j/µ_j < 1.
Example: Network of parallel M|M|1 queues – 2
I Common capacity restriction n_1 + n_2 ≤ c.
I Customers arriving to the network with c customers present are discarded.
I The Markov chain {N(t) = (N_1(t), N_2(t))} has state space S_c = {(n_1, n_2) : n_j ≥ 0, j = 1, 2, n_1 + n_2 ≤ c} and transition rates truncated to S_c.
I Invoking the Truncation Theorem:

    π(n) = G ∏_{j=1}^2 ρ_j^{n_j},  n ∈ S_c,

with normalising constant

    G = [ ∑_{n_1=0}^c ∑_{n_2=0}^{c−n_1} ∏_{i=1}^2 ρ_i^{n_i} ]^{-1}.
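The normalising constant G is a finite sum and can be computed by enumerating S_c. A sketch (the loads ρ_1, ρ_2 and capacity c below are arbitrary):

```python
def joint_pi(rho1, rho2, c):
    """pi(n1, n2) = G * rho1^n1 * rho2^n2 on S_c = {(n1, n2) : n1 + n2 <= c}."""
    states = [(n1, n2) for n1 in range(c + 1) for n2 in range(c + 1 - n1)]
    w = {(n1, n2): rho1 ** n1 * rho2 ** n2 for (n1, n2) in states}
    G = 1.0 / sum(w.values())
    return {s: G * x for s, x in w.items()}

pi = joint_pi(0.5, 0.25, 3)
```

The geometric ratios of the unrestricted product form survive the truncation; only the normalisation changes.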
Reversibility – 1
Definition (Stationary process)
A stochastic process {N(t), t ∈ R} is stationary if (N(t_1), N(t_2), ..., N(t_k)) has the same distribution as (N(t_1 + τ), N(t_2 + τ), ..., N(t_k + τ)) for all k ∈ N, t_1, t_2, ..., t_k ∈ R, τ ∈ R.
Definition (2.4.1 Reversibility)
A stochastic process {N(t), t ∈ R} is reversible if (N(t_1), N(t_2), ..., N(t_k)) has the same distribution as (N(τ − t_1), N(τ − t_2), ..., N(τ − t_k)) for all k ∈ N, t_1, t_2, ..., t_k ∈ R, τ ∈ R.
Theorem (2.4.2)
If {N(t)} is reversible then {N(t)} is stationary.
Reversibility – 2
Theorem (2.4.3 Reversibility and detailed balance)
Let {N(t), t ∈ R} be a stationary Markov chain with transition rates q(n,n′), n,n′ ∈ S. {N(t)} is reversible if and only if there exists a distribution π = (π(n), n ∈ S) that satisfies the detailed balance equations. When such a distribution π exists, then π is the equilibrium distribution of {N(t)}.
Example: Departures from the M|M|1 queue
I The arrival process to the M|M|1 queue is a Poisson process with rate λ.
I If λ < µ, the departure process from the M|M|1 queue has rate λ.
I {N(t)} recording the number of customers in the M|M|1 queue with arrival rate λ and service rate µ satisfies detailed balance.
I The Markov chain {N^r(t)} in reversed time has Poisson arrivals at rate λ and service rate µ.
I Therefore {N^r(t)} is the Markov chain of an M|M|1 queue with Poisson arrivals at rate λ and negative-exponential service at rate µ.
I Since the epochs of the arrival process of the reversed queue coincide with the epochs of the departure process of the original queue, the departure process from the M|M|1 queue is a Poisson process with rate λ.
Reversibility – 3
Theorem (2.4.3 Reversibility and detailed balance)
Let {N(t), t ∈ R} be a stationary Markov chain with transition rates q(n,n′), n,n′ ∈ S. {N(t)} is reversible if and only if there exists a distribution π = (π(n), n ∈ S) that satisfies the detailed balance equations. When such a distribution π exists, then π is the equilibrium distribution of {N(t)}.
Proof. If {N(t)} is reversible, then for all t, h ∈ R, n,n′ ∈ S:

    P(N(t+h) = n′, N(t) = n) = P(N(t) = n′, N(t+h) = n).

{N(t), t ∈ R} is stationary. Let π(n) = P(N(t) = n), t ∈ R. Then

    [ P(N(t+h) = n′ | N(t) = n) / h ] π(n) = [ P(N(t+h) = n | N(t) = n′) / h ] π(n′).

Letting h → 0 yields the detailed balance equations.
Proof continued
Assume π = (π(n), n ∈ S) satisfies detailed balance. Consider {N(t)} for t ∈ [−H, H]. Suppose {N(t)} moves along the sequence of states n_1, ..., n_k, has sojourn time h_i in n_i, i = 1, ..., k−1, and remains in n_k for at least h_k until time H. With probability π(n_1) = P(N(−H) = n_1) the process {N(t)} starts in n_1. The probability density with respect to h_1, ..., h_k for this sequence is

    π(n_1) q(n_1) e^{−q(n_1)h_1} p(n_1,n_2) ··· q(n_{k−1}) e^{−q(n_{k−1})h_{k−1}} p(n_{k−1},n_k) e^{−q(n_k)h_k}.

Kolmogorov’s criterion implies that

    π(n_1) q(n_1,n_2) ··· q(n_{k−1},n_k) = π(n_k) q(n_k,n_{k−1}) ··· q(n_2,n_1),

so this probability density equals the probability density for the reversed path that starts in n_k at time H.
Thus (N(t_1), N(t_2), ..., N(t_k)) ∼ (N(−t_1), N(−t_2), ..., N(−t_k)). Stationarity completes the proof.
Burke’s theorem and feedforward networks – 1
Theorem (2.5.1 Burke’s theorem)
Let {N(t)} record the number of customers in the M|M|1 queue with arrival rate λ and service rate µ, λ < µ. Let {D(t)} record the customers’ departure process from the queue. In equilibrium the departure process {D(t)} is a Poisson process with rate λ, and N(t) is independent of {D(s), s < t}.

Proof. M|M|1 is reversible: the epochs at which {N(−t)} jumps up form a Poisson process with rate λ. If {N(−t)} jumps up at time t∗, then {N(t)} jumps down at t∗. Hence the departure process forms a Poisson process with rate λ.
{N(t)} is reversible: the departure process up to t∗ and N(t∗) have the same distribution as the arrival process after −t∗ and N(−t∗). The arrival process is a Poisson process: the arrival process after −t∗ is independent of N(−t∗). Hence the departure process up to t∗ is independent of N(t∗).
Burke’s theorem and feedforward networks – 2
I Tandem network of two M|M|1 queues
I Poisson(λ) arrival process to queue 1, service rates µ_i.
I Provided ρ_i = λ/µ_i < 1, marginal distributions π_i(n_i) = (1 − ρ_i) ρ_i^{n_i}, n_i ∈ N_0.
I Burke’s theorem: the departure process from queue 1 before t∗ and N_1(t∗) are independent.
I Hence, in equilibrium, at time t∗ the random variables N_1(t∗) and N_2(t∗) are independent:

    π(n) = ∏_{i=1}^2 π_i(n_i),  n ∈ S = N_0^2.
Burke’s theorem and feedforward networks – 3
I A customer leaving queue j can route to any of the queues j+1, ..., J, or may leave the network.
I p_ij is the fraction of customers routed from queue i to queue j > i; p_i0 the fraction leaving the network.
I The arrival process is a Poisson process with rate µ_0.
I A fraction p_0j of these customers is routed to queue j.
I The service rate at queue j is µ_j.
I Burke’s theorem implies that all flows of customers among the queues are Poisson flows.
I The arrival rate λ_j of customers to queue j is obtained from superposition and random splitting of Poisson processes:

    λ_j = µ_0 p_0j + ∑_{i=1}^{j−1} λ_i p_ij,  j = 1, ..., J,

I the traffic equations: the mean flow of customers.
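Because customers only route to higher-numbered queues, the traffic equations can be solved by forward substitution. A sketch (the routing probabilities and rates below are hypothetical example values):

```python
def traffic_rates(mu0, p0, P):
    """Solve lam_j = mu0 p0j + sum_{i<j} lam_i pij by forward substitution;
    in a feedforward network customers route only to higher-numbered queues."""
    J = len(p0)
    lam = [0.0] * J
    for j in range(J):
        lam[j] = mu0 * p0[j] + sum(lam[i] * P[i][j] for i in range(j))
    return lam

# Hypothetical 3-queue feedforward network
mu0 = 2.0                     # external Poisson arrival rate
p0 = [0.5, 0.5, 0.0]          # p_{0j}: external routing probabilities
P = [[0.0, 0.4, 0.3],         # P[i][j] = p_{ij}, nonzero only for j > i
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
lam = traffic_rates(mu0, p0, P)
```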
Burke’s theorem and feedforward networks – 4
Theorem (2.5.4 Equilibrium distribution)
Let {N(t) = (N_1(t), ..., N_J(t))} on state space S = N_0^J, where n = (n_1, ..., n_J) and n_j is the number of customers in queue j, j = 1, ..., J, record the number of customers in the feedforward network of J M|M|1 queues described above. If ρ_j = λ_j/µ_j < 1, with λ_j the solution of the traffic equations, j = 1, ..., J, then the equilibrium distribution is the product of the marginal distributions of the queues:

    π(n) = ∏_{j=1}^J (1 − ρ_j) ρ_j^{n_j},  n_j ∈ N_0, j = 1, ..., J.   (11)
I Next time: networks of M|M|1 queues.