Flows and Networks (158052)
Richard Boucherie, Stochastische Operations Research -- TW
wwwhome.math.utwente.nl/~boucherierj/onderwijs/158052/158052.html
Introduction to the theory of flows in complex networks: both stochastic and deterministic aspects
Size: 5 ECTS
16 lectures: 8 by R.J. Boucherie focusing on stochastic networks, 8 by W. Kern focusing on deterministic networks
Common problem: How to optimize resource allocation so as to maximize the flow of items through the nodes of a complex network
Material: handouts / downloads
Exam: exercises / (take home) exam
References: see website
Motivation and main question
Motivation
Production / storage system
C:\Flexsim Demo\tutorial\Tutorial 3.fsm
Internet: Thomas Bonald's animation of TCP (www-sop.inria.fr/mistral/personnel/Thomas.Bonald/tcp_eng.html)
http://www.warriorsofthe.net/
trailer
Main questions
How to allocate servers / capacity to nodes, or how to route jobs through the system, to maximize system performance such as throughput, sojourn time, utilization?
QUESTIONS
Aim: Optimal design of Jackson network
• Consider an open Jackson network with transition rates
  q(n, T_{jk} n) = μ_j(n_j) p_{jk}
  q(n, T_{j0} n) = μ_j(n_j) p_{j0}
  q(n, T_{0j} n) = λ p_{0j}
• Assume that the service rates μ_j and arrival rates λ are given
• Let the costs per time unit for a job residing at queue j be a_j
• Let the costs for routing a job from station j to station k be b_{jk}
• (i) Formulate the design problem (allocation of routing probabilities) as an optimisation problem.
• (ii) Provide the solution to this problem
Flows and networks: stochastic networks
Contents
1. Introduction; Markov chains
2. Birth-death processes; Poisson process, simple queue; reversibility; detailed balance
3. Output of simple queue; Tandem network; equilibrium distribution
4. Jackson networks; Partial balance
5. Sojourn time simple queue and tandem network
6. Performance measures for Jackson networks: throughput, mean sojourn time, blocking
7. Application: service rate allocation for throughput optimisation; Application: optimal routing
Today:
• Introduction / motivation course
• Discrete-time Markov chain
• Continuous-time Markov chain
• Next
• Exercises
• AEX
• Continuous, per minute, per day
• Random process: reason for increase / decrease?
• Probability of level 300 or 400 in Dec 2004?
• Given level 350: buy or sell?
• Markov chain: random walk
Gambler’s ruin
• Gambling game: on any turn
  – Win €1 w.p. p = 0.4
  – Lose €1 w.p. 1-p = 0.6
  – Continue to play until €N
  – If fortune reaches €0 you must stop
– X_n = amount after n plays
– For 0 < i < N:
  P(X_{n+1} = i+1 | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = 0.4
– X_n has the Markov property: the conditional probability of X_{n+1} = j given the entire history X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0 depends only on X_n = i:
  P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)
– X_n is a discrete-time Markov chain
Markov chain
• X_n is time-homogeneous
• Transition probability p(i,j) = P(X_{n+1} = j | X_n = i)
• State space S = {0, 1, ..., N}: all possible states
• For gambler's ruin
  p(i, i+1) = 0.4, p(i, i-1) = 0.6, 0 < i < N
  p(0,0) = 1, p(N,N) = 1
• For N = 5: transition matrix

        | 1.0  0    0    0    0    0   |
        | 0.6  0    0.4  0    0    0   |
    P = | 0    0.6  0    0.4  0    0   |
        | 0    0    0.6  0    0.4  0   |
        | 0    0    0    0.6  0    0.4 |
        | 0    0    0    0    0    1.0 |

• Property: p(i,j) ≥ 0 for 0 ≤ i, j ≤ N, and Σ_{j=0}^{N} p(i,j) = 1 for 0 ≤ i ≤ N
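The N = 5 gambler's ruin transition matrix can be reproduced and checked in a few lines; the sketch below (the function name and the p = 0.4 default are illustrative, not from the course material) builds the matrix and verifies the row-sum property:

```python
def gambler_matrix(N, p=0.4):
    """Transition matrix of the gambler's ruin chain on S = {0, ..., N}."""
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    P[0][0] = 1.0            # broke: must stop (absorbing)
    P[N][N] = 1.0            # target fortune reached (absorbing)
    for i in range(1, N):
        P[i][i + 1] = p      # win EUR 1 w.p. p
        P[i][i - 1] = 1 - p  # lose EUR 1 w.p. 1 - p
    return P

P = gambler_matrix(5)
# Property from the slide: p(i,j) >= 0 and every row sums to 1
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```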
Markov chain : equilibrium distribution
• n-step transition probability
  P_n(i,j) = P(X_{n+m} = j | X_m = i) = P(X_n = j | X_0 = i)
• Evaluate:
  P_2(i,j) = Σ_{k=1}^{N} p(i,k) p(k,j)
• Chapman-Kolmogorov equation
  P_{m+n}(i,j) = Σ_{k=1}^{N} P_m(i,k) P_n(k,j)
• n-step transition matrix
  P_n(i,j) = (P^n)(i,j), i.e. P_n = P^n
• Initial distribution
  P(X_0 = i) = q(i), q = (q(1), ..., q(N))
• Distribution at time n
  p_n(j) = P(X_n = j) = Σ_{k=1}^{N} q(k) P_n(k,j)
• Matrix form
  p_n = (p_n(1), ..., p_n(N)) = q P^n
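The distribution at time n and the Chapman-Kolmogorov equation can be illustrated numerically; the sketch below (helper names are made up) uses a small N = 3 gambler's ruin chain started with €1:

```python
# Gambler's ruin with N = 3: states 0..3, win prob 0.4 (0 and 3 absorbing)
P = [[1.0, 0.0, 0.0, 0.0],
     [0.6, 0.0, 0.4, 0.0],
     [0.0, 0.6, 0.0, 0.4],
     [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    """Matrix product, i.e. the Chapman-Kolmogorov sum P_{m+n} = P_m P_n."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dist_at_time(q0, P, n):
    """p_n(j) = sum_k q0(k) P_n(k,j), computed by n one-step updates."""
    p = list(q0)
    for _ in range(n):
        p = [sum(p[k] * P[k][j] for k in range(len(p))) for j in range(len(p))]
    return p

q0 = [0.0, 1.0, 0.0, 0.0]       # start with EUR 1
p2 = dist_at_time(q0, P, 2)      # distribution after two plays

# Consistency with the 2-step transition matrix P_2 = P P:
P2 = mat_mul(P, P)
assert all(abs(p2[j] - sum(q0[k] * P2[k][j] for k in range(4))) < 1e-12
           for j in range(4))
```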
Markov chain: classification of states
• j is reachable from i if there exists a path from i to j
• i and j communicate when j is reachable from i and i is reachable from j
• State i is absorbing if p(i,i) = 1
• State i is transient if there exists j such that j is reachable from i and i is not reachable from j
• State i is recurrent if the process returns to i infinitely often (= non-transient state)
• State i is periodic with period k > 1 if k is the smallest number such that all paths from i to i have a length that is a multiple of k
• Aperiodic state: recurrent state that is not periodic
• Ergodic Markov chain: all states communicate, and are recurrent and aperiodic (irreducible, aperiodic)
        | 0.4  0.6  0    0    0   |
        | 0.5  0.5  0    0    0   |
    P = | 0    0    0.3  0.7  0   |
        | 0    0    0.5  0.4  0.1 |
        | 0    0    0    0.8  0.2 |
Markov chain : equilibrium distribution
• Assume: Markov chain ergodic
• Equilibrium distribution
  lim_{n→∞} P(X_n = j | X_0 = i) = lim_{n→∞} P_n(i,j) = π(j)
  independent of the initial state
• stationary distribution: from
  P_{n+1}(i,j) = Σ_{k=1}^{N} P_n(i,k) p(k,j)
  letting n→∞ gives
  π(j) = Σ_{k=1}^{N} π(k) p(k,j)
• normalising
  Σ_{k=1}^{N} π(k) = 1
• matrix form: π = π P, π = lim_{n→∞} q P^n
• interpretation: probability flux; moreover, if P(X(0) = j) = π(j), j ∈ S, then P(X(t) = j) = π(j), j ∈ S, for all t
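Convergence to the equilibrium distribution can be illustrated numerically. The two-state chain below is not from the slides, just a convenient ergodic example with a known stationary distribution:

```python
# Illustrative two-state ergodic chain (made up): a = p(0,1), b = p(1,0)
a, b = 0.3, 0.2
P = [[1 - a, a], [b, 1 - b]]

# Iterate p_{n+1}(j) = sum_k p_n(k) p(k,j) from a degenerate start
p = [1.0, 0.0]
for _ in range(200):
    p = [sum(p[k] * P[k][j] for k in range(2)) for j in range(2)]

# Closed-form stationary distribution of the two-state chain: (b, a)/(a+b)
pi = [b / (a + b), a / (a + b)]

# The iterates converge to pi, and pi is stationary: pi P = pi
assert all(abs(p[j] - pi[j]) < 1e-9 for j in range(2))
assert all(abs(sum(pi[k] * P[k][j] for k in range(2)) - pi[j]) < 1e-12
           for j in range(2))
```

Starting from [0.0, 1.0] instead gives the same limit, illustrating independence of the initial state.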
Discrete-time Markov chain: summary
• stochastic process X(t), countable or finite state space S
• Markov property
  P(X(t_{n+1}) = j_{n+1} | X(t_1) = j_1, ..., X(t_n) = j_n) = P(X(t_{n+1}) = j_{n+1} | X(t_n) = j_n)
• time homogeneous: P(X(t+1) = k | X(t) = j) independent of t
• irreducible: each state in S reachable from any other state in S
• transition probabilities
  p(j,k) = P(X(t+1) = k | X(t) = j), Σ_{k ∈ S} p(j,k) = 1
• Assume ergodic (irreducible, aperiodic); global balance equations (equilibrium eqns)
  π(j) = Σ_{k ∈ S} π(k) p(k,j)
• a solution that can be normalised is the equilibrium distribution; if an equilibrium distribution exists, then it is unique and equals the limiting distribution
  lim_{t→∞} P(X(t) = k | X(0) = j) = π(k)
Random walk
http://www.math.uah.edu/stat/
• Gambling game over infinite time horizon: on any turn
  – Win €1 w.p. p
  – Lose €1 w.p. 1-p
  – Continue to play
– Xn= amount after n plays
– State space S = {…,-2,-1,0,1,2,…}
– Time-homogeneous Markov chain with transition probabilities
  p(i, i+1) = p, p(i, i-1) = 1 - p, p(i,j) = P(X_{n+1} = j | X_n = i)
– For each finite time n: P(X_n = j | X_0 = i) is well defined
– But equilibrium?
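For the random walk started at 0, P(X_n = j | X_0 = 0) is a binomial probability: j = 2w - n when w of the n steps are wins. The sketch below (function name is made up) computes this for finite n; the shrinking maximum shows the mass spreading out, which hints at why the equilibrium question is delicate:

```python
from math import comb

def rw_dist(n, p):
    """P(X_n = j | X_0 = 0) for the random walk; j = 2w - n, w winning steps."""
    return {2 * w - n: comb(n, w) * p**w * (1 - p)**(n - w)
            for w in range(n + 1)}

d10 = rw_dist(10, 0.5)
assert abs(sum(d10.values()) - 1.0) < 1e-12

# The largest point probability shrinks as n grows: the mass keeps
# spreading over S = {..., -1, 0, 1, ...} instead of settling down
assert max(rw_dist(1000, 0.5).values()) < max(d10.values())
```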
Continuous-time Markov chain
• stochastic process X(t), countable or finite state space S
• Markov property
  P(X(t+s) = j | X(t) = i, X(t_n) = j_n, ..., X(t_1) = j_1) = P(X(t+s) = j | X(t) = i)
• transition probability
  P_t(i,j) = P(X(t) = j | X(0) = i)
• irreducible: each state in S reachable from any other state in S
• Chapman-Kolmogorov equation
  P_{t+s}(i,j) = Σ_k P_t(i,k) P_s(k,j)
• transition rates or jump rates
  q(i,j) = lim_{h↓0} P_h(i,j)/h, i ≠ j
  so that P_h(i,j) = q(i,j) h + o(h), i ≠ j
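The definition of the jump rates can be checked against a chain whose transition probabilities are known in closed form. The two-state chain below, with rates q(0,1) = alpha and q(1,0) = beta, is an illustrative addition (the expression for P_t(0,1) is the standard two-state result):

```python
from math import exp

# Illustrative two-state chain with jump rates q(0,1) = alpha, q(1,0) = beta
alpha, beta = 2.0, 3.0

def P_01(t):
    """Closed-form P_t(0,1) for the two-state chain."""
    return alpha / (alpha + beta) * (1.0 - exp(-(alpha + beta) * t))

# q(0,1) = lim_{h -> 0} P_h(0,1)/h, i.e. P_h(0,1) = q(0,1) h + o(h):
h = 1e-6
assert abs(P_01(h) / h - alpha) < 1e-4
```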
Continuous-time Markov chain
• Chapman-Kolmogorov equation
  P_{t+h}(i,j) = Σ_k P_t(i,k) P_h(k,j)
  transition rates or jump rates
  q(i,j) = lim_{h↓0} P_h(i,j)/h, i ≠ j
• Kolmogorov forward equations (REGULAR chain): isolating the k = j term,
  P_{t+h}(i,j) - P_t(i,j) = Σ_{k≠j} P_t(i,k) P_h(k,j) + P_t(i,j) [P_h(j,j) - 1]
  dividing by h and letting h↓0,
  P'_t(i,j) = Σ_{k≠j} [P_t(i,k) q(k,j) - P_t(i,j) q(j,k)]
• Global balance equations: letting t→∞,
  0 = Σ_{k≠j} [π(k) q(k,j) - π(j) q(j,k)]
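A small numerical check of the global balance equations; the three-state birth-death chain below is an illustrative addition (rates are made up), anticipating the birth-death processes of the next lecture:

```python
# Illustrative birth-death chain on states 0..N:
# q(i, i+1) = lam for i < N, q(i, i-1) = mu for i > 0, all other rates 0
lam, mu, N = 1.0, 2.0, 2

def q(i, j):
    if j == i + 1 and i < N:
        return lam
    if j == i - 1 and i > 0:
        return mu
    return 0.0

rho = lam / mu
pi = [rho**i for i in range(N + 1)]
Z = sum(pi)
pi = [x / Z for x in pi]          # normalised candidate distribution

# Global balance: 0 = sum_{k != j} [pi(k) q(k,j) - pi(j) q(j,k)] for every j
for j in range(N + 1):
    flux = sum(pi[k] * q(k, j) - pi[j] * q(j, k)
               for k in range(N + 1) if k != j)
    assert abs(flux) < 1e-12
```

Here the probability flux into each state exactly cancels the flux out of it, which is what the global balance equations express.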
Continuous-time Markov chain: summary
• stochastic process X(t), countable or finite state space S
• Markov property
  P(X(t+s) = j | X(t) = i, X(t_n) = j_n, ..., X(t_1) = j_1) = P(X(t+s) = j | X(t) = i)
• transition rates
  q(i,j) = lim_{h↓0} P_h(i,j)/h, i ≠ j
  independent of t
• irreducible: each state in S reachable from any other state in S
• Assume ergodic and regular; global balance equations (equilibrium eqns)
  0 = Σ_{k≠j} [π(k) q(k,j) - π(j) q(j,k)]
  π is the stationary distribution
• a solution that can be normalised is the equilibrium distribution; if an equilibrium distribution exists, then it is unique and equals the limiting distribution
  lim_{t→∞} P(X(t) = k | X(0) = j) = π(k)
Next time:
• [R+SN] section 1.1 – 1.3
• Continuous-time Markov chains:
  Birth-death processes; Poisson process, simple queue; reversibility; detailed balance
Exercises:
• [R+SN] 1.1.2, 1.1.4, 1.1.5
• Give proof of Chapman-Kolmogorov equation
• For the random walk, let P(X_0 = 0) = 1. Determine the possible states for N = 10, and compute P(X_{10} = j | X_0 = 0) for all feasible j.
• Consider the random walk with reflecting boundary, which has the same transition probabilities as the random walk except in state 0: when the process attempts to jump to the left in state 0, it stays at 0. The transition probabilities are
  p(i, i+1) = p, p(i, i-1) = 1 - p, i = 1, 2, 3, ...
  p(0, 1) = p, p(0, 0) = 1 - p
  Show that a solution of the global balance equations is
  π(i) = c (p/(1-p))^i
  For which values of p is this an equilibrium distribution?
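A numerical sanity check of the reflecting-random-walk exercise, not a substitute for the algebraic derivation; p = 0.3 and the truncation at 50 states are arbitrary choices made only for this check:

```python
p = 0.3                             # any 0 < p < 1 works for the balance check

def pr(i, j):
    """Transition probabilities of the reflecting random walk."""
    if i == 0:
        if j == 1: return p
        if j == 0: return 1 - p     # blocked jump to the left: stay at 0
        return 0.0
    if j == i + 1: return p
    if j == i - 1: return 1 - p
    return 0.0

r = p / (1 - p)
pi = [r**i for i in range(50)]      # candidate pi(i) = c (p/(1-p))**i with c = 1

# Global balance pi(j) = sum_k pi(k) pr(k,j) at the boundary and inside
assert abs(pi[0] - (pi[0] * pr(0, 0) + pi[1] * pr(1, 0))) < 1e-12
for j in range(1, 49):
    assert abs(pi[j] - (pi[j - 1] * pr(j - 1, j) + pi[j + 1] * pr(j + 1, j))) < 1e-15
```

Whether this candidate can be normalised into an equilibrium distribution depends on whether the geometric series Σ_i (p/(1-p))^i converges, which is the final part of the exercise.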