How to Model a TCP/IP Network Using Only 20 Parameters
K. Mills (NIST), E. Schwartz (CMU) & J. Yuan (Tsinghua U.)
[Image: visual hash of MesoNet source code, generated at http://www.wordle.net/]
Winter Simulation Conference – Dec 8, 2010
1
Outline
• Goal – Problem – Solution
• Scale Reduction: Theory and Practice
• Overview of the 20 MesoNet Parameters
• Parameter Explanations in 5 Categories
  – Network (4 parameters)
  – Sources & Receivers (4 parameters)
  – User Behavior (6 parameters)
  – Protocols (3 parameters)
  – Simulation & Measurement Control (3 parameters)
• Describe Sample Use of Model
• Discuss Simulation Resource Requirements
• Conclusions
2
Goal – Problem – Solution
• Goal – compare proposed Internet congestion control algorithms under a wide range of controlled, repeatable conditions
• Problem – real networks are neither controllable nor repeatable; test beds are currently too small; most network simulation models have large search spaces and require infeasible memory and processing resources for large, fast networks; more tractable fluid-flow simulators are currently inaccurate
• Solution – design a reduced-scale network simulation model (MesoNet) that is easy to configure and tractable to compute
3
Scale Reduction: Theory & Practice

Simulating large, fast networks across many conditions and congestion control algorithms requires scale reduction in both model parameters & responses:

y1, …, yz = f(x1|[1,…,l], …, xp|[1,…,l])

Stimulus (parameter) state-space reduction:
• Full model: ~1000 parameters, each a 32-bit value – (2^32)^1000 ≈ O(10^9633) states [cf. ~10^80 atoms in the visible universe]
• Parameter reduction: discard parameters not germane to the study (reduce by 944 parameters) – (2^32)^56 ≈ O(10^539)
• Group related remaining parameters (reduce by 36 parameters) – (2^32)^20 ≈ O(10^192); these 20 parameters are the subject of this talk
• Level reduction: select only 2 values for each parameter – 2^20 ≈ O(10^6)
• Use experiment design theory to reduce parameter combinations to 2^(20-12) = 256
• Use sensitivity analysis to identify the six most significant parameters, then use experiment design theory again to reduce parameter combinations to 2^(6-1) = 32

Multidimensional response reduction:
• 22 responses → correlation analysis & clustering → 7 responses → principal components analysis & domain analysis → 4 responses

(Related talks given Dec. 6 @ 1:30 PM and @ 2:00 PM)
4
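The state-space sizes above follow directly from counting: p parameters with l levels each yield l^p combinations. A quick sketch (not part of the talk) that checks the exponents:

```python
import math

def log10_states(n_params, levels_per_param):
    """Base-10 exponent of the stimulus state-space size levels**n_params."""
    return n_params * math.log10(levels_per_param)

# ~1000 parameters, each a 32-bit quantity (2**32 levels per parameter)
print(round(log10_states(1000, 2**32)))  # ≈ 9633
print(round(log10_states(56, 2**32)))    # ≈ 539
print(round(log10_states(20, 2**32)))    # ≈ 193 (the slide truncates to 192)
print(round(log10_states(20, 2)))        # ≈ 6 (two levels per parameter)
```

The last line shows why level reduction matters: dropping from 2^32 levels to 2 levels per parameter collapses the space from O(10^192) to about a million combinations.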
Model Reduction for MesoNet Simulator
• Need to identify and retain only parameters germane to the topic being studied (we identified 56 such parameters)
• Need to examine the retained parameters to identify groups of related parameters defining aspects of the same macro-parameter (after grouping we identified 20 parameters relevant to a study of Internet congestion control)

For a full explanation of our reasoning and our entire study report, see NIST Special Publication 500-282: Study of Proposed Internet Congestion Control Mechanisms. Available online at http://www.nist.gov/itl/antd/Congestion_Control_Study.cfm
5
MesoNet – a TCP/IP network model using only 20 parameters

Category                          Identifier  Name
Network Configuration             X1          Topology
                                  X2          Propagation Delay
                                  X3          Network Speed
                                  X4          Buffer Provisioning
Sources & Receivers               X5          Number of Sources & Receivers
                                  X6          Distribution of Sources
                                  X7          Distribution of Receivers
                                  X8          Source & Receiver Interface Speeds
User Behavior                     X9          Think Time
                                  X10         Patience
                                  X11         Web Object Size for Browsing
                                  X12         Proportion & Size of Larger File Downloads
                                  X13         Selected Spatiotemporal Congestion
                                  X14         Long-lived Flows
Protocols                         X15         Congestion Control Algorithms
                                  X16         Initial Congestion Window Size
                                  X17         Initial Slow Start Threshold
Simulation & Measurement Control  X18         Measurement Interval Size
                                  X19         Simulation Duration
                                  X20         Startup Pattern
6
Parameter X1 is the Topology = Routers + Links + Routes + Propagation Delays

[Figure: the MesoNet reference topology – backbone routers A through P, each connecting PoP routers (e.g., A1, A2) that in turn connect access routers (e.g., A1a–A1c, A2a–A2c)]

3 Router Tiers: Backbone – Point of Presence (PoP) – Access
3 Access Router Classes: Typical – Fast – Directly Connected
1 ingress/egress path from access routers to backbone routers
7
Topology Link Characteristics and Scaling Propagation Delay with Parameter X2

[Figure: the reference topology annotated with its 24 backbone links]

• Packets incur propagation delay when transiting a link
• The cost metric is used to compute routes from a source backbone router to a destination backbone router

Link#  Endpoints  Cost Metric  Prop. Delay (ms)  X2 = 0.5  X2 = 2
 1     A-B         50          21                10.5       42
 2     B-C         10          25                12.5       50
 3     B-D         50           8                 4         16
 4     B-L        223          75                37.5      150
 5     C-H        100          12                 6         24
 6     D-E         10          10                 5         20
 7     D-F        108          33                16.5       66
 8     E-G        100          33                16.5       66
 9     F-G         10           7                 3.5       14
10     F-H         50          12                 6         24
11     F-I         55          22                11         44
12     G-O        104          23                11.5       46
13     G-P        110          19                 9.5       38
14     I-H         10          14                 7         28
15     I-J         50           8                 4         16
16     I-K        147          22                11         44
17     J-L         60          20                10         40
18     K-L         50           7                 3.5       14
19     L-M         50          12                 6         24
20     L-N         39           6                 3         12
21     L-O         10          14                 7         28
22     M-O         10           6                 3         12
23     N-O         10           8                 4         16
24     O-P         10          14                 7         28

(The two rightmost columns show each link's propagation delay scaled by X2.)
8
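The routing rule above (a least-cost path between backbone routers, using the cost metric column) can be illustrated with a standard Dijkstra search over the link table. A minimal sketch; the link list is transcribed from the table, and treating links as symmetric is an assumption:

```python
import heapq

# (endpoint, endpoint, cost metric) rows transcribed from the link table
LINKS = [("A","B",50), ("B","C",10), ("B","D",50), ("B","L",223), ("C","H",100),
         ("D","E",10), ("D","F",108), ("E","G",100), ("F","G",10), ("F","H",50),
         ("F","I",55), ("G","O",104), ("G","P",110), ("I","H",10), ("I","J",50),
         ("I","K",147), ("J","L",60), ("K","L",50), ("L","M",50), ("L","N",39),
         ("L","O",10), ("M","O",10), ("N","O",10), ("O","P",10)]

def least_cost_route(src, dst):
    """Dijkstra over the backbone graph, assuming symmetric link costs."""
    adj = {}
    for a, b, cost in LINKS:
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:          # first pop of dst carries the minimum cost
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj[node]:
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))

print(least_cost_route("A", "P"))  # (293, ['A', 'B', 'L', 'O', 'P'])
```

Note that the cost metric is distinct from propagation delay: the high-cost B-L link (223) still lies on the cheapest A-to-P route.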
Defined Speed Relationships among Router Classes, used to Scale Router Speeds with Parameter X3
(MesoNet simplification – only routers have speeds)

Parameter values: s1 = X3, s2 = 4, s3 = 10, BBspeedup = 2, Bfast = 2, Bdirect = 10

Router Class   Speed                    X3 = 800   X3 = 1600
Backbone       s1 × BBspeedup           1600       3200
PoP            s1 / s2                  400        800
N-Class        s1 / s2 / s3             40         80
F-Class        s1 / s2 / s3 × Bfast     80         160
D-Class        s1 / s2 / s3 × Bdirect   400        800

Parameter X4 selects the Buffer Provisioning Algorithm, which generally interacts with network speed and propagation delay.
9
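One way to read the scaling chain is that each divisor applies to the backbone speed in turn; that reading reproduces the numbers in the table, but it is our interpretation rather than the official formula. A sketch:

```python
def router_speeds(x3, s2=4, s3=10, bb_speedup=2, b_fast=2, b_direct=10):
    """Per-class speeds derived from X3. The divide-the-backbone-speed
    interpretation is an assumption chosen to match the slide's table."""
    backbone = x3 * bb_speedup       # Backbone = s1 x BBspeedup
    pop = backbone / s2              # PoP is s2 times slower
    n_class = pop / s3               # typical access routers, s3 times slower
    return {"Backbone": backbone, "PoP": pop, "N-Class": n_class,
            "F-Class": n_class * b_fast,      # fast access routers
            "D-Class": n_class * b_direct}    # directly connected routers

print(router_speeds(800))
```

Doubling X3 from 800 to 1600 doubles every class speed, which is what makes X3 a single "network speed" knob.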
Three Parameters Determine the Number (X5) and Distribution of Sources (X6) and Receivers (X7)

The combination of parameters X5, X6, and X7 determines the distribution of flows in the topology during the simulation.

Sample Computation of the Number and Distribution of Sources and Receivers (given the Topology on Slide 7 and base # Sources = 100, X5 = 3, probNs = 0.1, probNsf = 0.6, probNr = 0.8, probNrf = 0.1):

Class    #routers  srcs/router  #srcs   %srcs  rcvrs/router  #rcvrs   %rcvrs  Flow class  %flows
N-class  122       90           10,980  31.6   960           117,120  95.3    NN-flows    30.1
                                                                              FN-flows    60.5
F-class  40        540          21,600  62.2   120           4,800    3.9     FF-flows    2.4
                                                                              DN-flows    6.1
D-class  8         270          2,160   6.2    120           960      0.8     DF-flows    0.74
                                                                              DD-flows    0.05

Parameter X8 defines the probability that sources and receivers connect to the topology at 1 Gbps or 100 Mbps.
10
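The table's totals and percentage columns follow directly from the per-router counts. A checking sketch (counts transcribed from the sample table above):

```python
# class -> (routers, sources per router, receivers per router)
CLASSES = {"N-class": (122, 90, 960),
           "F-class": (40, 540, 120),
           "D-class": (8, 270, 120)}

def totals_and_shares():
    """Totals and percentage shares of sources and receivers per class."""
    srcs = {c: n * s for c, (n, s, _) in CLASSES.items()}
    rcvrs = {c: n * r for c, (n, _, r) in CLASSES.items()}
    total_s, total_r = sum(srcs.values()), sum(rcvrs.values())
    pct_s = {c: round(100 * v / total_s, 1) for c, v in srcs.items()}
    pct_r = {c: round(100 * v / total_r, 1) for c, v in rcvrs.items()}
    return srcs, pct_s, rcvrs, pct_r
```

The asymmetry is deliberate: most sources sit behind fast (F-class) access routers, while the overwhelming majority of receivers sit behind typical (N-class) routers, so FN-flows dominate.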
User Behavior Represented via Sources^

[State diagram: each source alternates between Thinking and Sending. When its think time expires, the source selects a receiver and a file size (Pareto distribution) and begins sending. It returns to Thinking – selecting a new think time (exponential distribution) – when the transfer is Finished or, for reactive sources only, when the transfer is Too Slow or takes Too Long.]

Parameter X9 specifies the average Think Time
Parameter X10 specifies User Patience (the probability that a source is reactive)

^Note: this simplified diagram omits a flow connection phase that occurs before sending, and also the potential for the connection phase to fail – after which the source enters Thinking.
11
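The Thinking-state delay is exponentially distributed with mean X9, and X10 gives the probability that a source is reactive. A sampling sketch (the helper name is ours, not MesoNet's):

```python
import random

def sample_source(avg_think, patience, rng):
    """Draw one source's next think time (X9: exponential with the given
    mean) and whether the source is reactive (X10: a reactive source
    abandons transfers that are Too Slow or take Too Long)."""
    think_time = rng.expovariate(1.0 / avg_think)
    reactive = rng.random() < patience
    return think_time, reactive

rng = random.Random(42)
draws = [sample_source(5000, 0.3, rng) for _ in range(100_000)]
```

With patience = 0, as in the sample experiments later in the talk, no source is reactive and every transfer runs to completion.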
User Traffic Characterization

Parameter X11 characterizes Web Objects (on, shape):
  on      average size of a web object (packets)
  shape   shape of the Pareto distribution of web object sizes
  Probability of a Web Object = (1 – Fp – Sp – Mp)

Parameter X12 characterizes Larger Files [(Fx, Fp), (Sx, Sp), (Mx, Mp)]:
  Larger File Size Multipliers:  Fx documents, Sx software downloads, Mx movies
  Larger File Probabilities:     Fp documents, Sp software downloads, Mp movies

Parameter X13 characterizes Jumbo Files (Jx, Jon, Joff):
  Jx     size multiplier for jumbo files
  Jon    fraction of simulated time after which jumbo file transfers begin
  Joff   fraction of simulated time after which jumbo file transfers end

Parameter X14 characterizes the number, location, and start and stop times of Long-Lived Flows
12
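The X11/X12 traffic mix can be sketched as a two-stage draw: first pick the file type by probability, then its size. In this sketch the larger-file sizes are fixed multiples of the average web-object size; whether MesoNet applies the multipliers this way or to a distribution is an assumption on our part:

```python
import random

def sample_file_size(on, shape, Fx, Fp, Sx, Sp, Mx, Mp, rng):
    """Hypothetical X11/X12 sampler: choose document / software download /
    movie with probabilities Fp / Sp / Mp, otherwise an ordinary web object."""
    u = rng.random()
    if u < Fp:                    # document, Fx times the average web object
        return on * Fx
    if u < Fp + Sp:               # software download
        return on * Sx
    if u < Fp + Sp + Mp:          # movie
        return on * Mx
    # ordinary web object: Pareto with minimum xm chosen so the mean is `on`
    xm = on * (shape - 1) / shape
    return xm * rng.paretovariate(shape)

rng = random.Random(0)
sizes = [sample_file_size(100, 2.5, 10, 0.04, 100, 0.004, 1000, 0.0004, rng)
         for _ in range(200_000)]
```

The multiplier/probability pairs here (10/0.04, 100/0.004, 1000/0.0004) are illustrative values echoing the X12 settings in the sample experiments, not prescribed constants.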
Assignment of Three Protocol Parameters

Parameter X15 specifies (prTCP, prHSTCP, prCTCP, prSTCP, prFAST, prHTCP, prBIC):

Congestion Control Algorithm          Identifier  Probability of Source Implementation
Transmission Control Protocol (TCP)   1           prTCP
High Speed TCP (HSTCP)                2           prHSTCP
Compound TCP (CTCP)                   3           prCTCP
Scalable TCP (STCP)                   4           prSTCP
FAST AQM Scalable TCP (FAST)          5           prFAST
Hamilton TCP (HTCP)                   6           prHTCP
Binary Increase Congestion (BIC)      7           prBIC

[Chart: cwnd vs. time – exponential increase from the initial cwnd up to the initial slow start threshold, followed by linear increase]

Parameter X16 specifies the initial congestion window (cwnd)
Parameter X17 specifies the initial slow start threshold (sst)
13
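The chart's two growth regimes can be sketched as an idealized, loss-free per-RTT update (a simplification that ignores losses and receiver windows):

```python
def cwnd_trace(initial_cwnd, initial_sst, rtts):
    """Idealized per-RTT congestion-window growth with no losses:
    exponential doubling below the slow start threshold (sst),
    then linear +1-segment-per-RTT congestion avoidance."""
    cwnd, trace = initial_cwnd, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = min(cwnd * 2, initial_sst) if cwnd < initial_sst else cwnd + 1
    return trace

print(cwnd_trace(2, 64, 10))  # [2, 4, 8, 16, 32, 64, 65, 66, 67, 68]
```

This makes the roles of X16 and X17 concrete: X16 sets where the curve starts, and X17 sets where it bends from exponential to linear growth.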
Simulation Measurement & Control

Parameter X18 specifies the measurement interval size
Parameter X19 specifies the number of measurement intervals to simulate
Parameter X20 specifies the startup pattern for sources

[Chart: count of flows in the Sending state, measured every M = 200 ms for MI = 250 intervals – simulation duration (0.2 s × 250 =) 50 s
– 0.25 of sources start at t = 0, 0.08 start after an average delay of 33% of think time, 0.17 start after an average delay of 66% of think time, and the remaining sources start after an average delay of one think time]
14
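The control parameters combine by simple arithmetic: X18 × X19 gives the simulated duration, and the X20 fractions must account for all sources. A checking sketch (the rule that leftover sources start after one full think time is as described above):

```python
def simulation_duration_s(m_ms, intervals):
    """X18 (measurement interval, ms) x X19 (interval count) = simulated time (s)."""
    return m_ms / 1000.0 * intervals

def startup_fractions(pr_on, pr_on_second, pr_on_third):
    """X20 startup pattern; the remainder starts after an average delay
    of one full think time."""
    return {"t=0": pr_on,
            "after 33% of think time": pr_on_second,
            "after 66% of think time": pr_on_third,
            "after full think time": 1 - pr_on - pr_on_second - pr_on_third}

print(simulation_duration_s(200, 250))    # the 50 s example above
print(simulation_duration_s(200, 18000))  # the 3600 s sample experiments
```

With the sample-experiment values prON = 0.25, prONsecond = 0.08, prONthird = 0.17, exactly half the sources wait a full think time before starting.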
Combining MesoNet Parameters with 2-Level Orthogonal Fractional Factorial (OFF) Experiment Design

[Diagram: example comparing an OFF design vs. a one-factor-at-a-time (FAT) design for 7 parameters (X1–X7), each shown as 2-level (+/–) cubes over global and local factors]

Comparing 7 congestion control algorithms with a 2-level design for 9 MesoNet parameters requires (2^9 × 7 =) 3584 runs.

At 28 processor hours per run and with 48 available processors, these runs would require about 2090 hours (87 days).

Adopting a 2^(9-4) OFF experiment design reduces the resource requirement to only (32 × 7 =) 224 runs, which could be completed in about 130 hours (1 week).

Cost: misses 2^9 – 2^5 parameter combinations.
15
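The run-count and wall-clock figures above are straightforward to verify; a sketch assuming runs pack perfectly onto the available processors:

```python
def wall_clock_hours(runs, cpu_hours_per_run, processors):
    """Total processor hours spread across the available processors
    (assumes perfect packing of runs onto processors)."""
    return runs * cpu_hours_per_run / processors

full_factorial_runs = 2**9 * 7        # 2-level full factorial x 7 algorithms
off_runs = 2**(9 - 4) * 7             # 2^(9-4) orthogonal fractional factorial

print(full_factorial_runs)                                 # 3584
print(wall_clock_hours(full_factorial_runs, 28, 48) / 24)  # ≈ 87 days
print(off_runs)                                            # 224
print(wall_clock_hours(off_runs, 28, 48))                  # ≈ 130.7 hours
```

The fractional design trades 2^9 – 2^5 = 480 untested parameter combinations for a 16x reduction in runs.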
Two Sample Experiments using a 2^(9-4) Orthogonal Fractional Factorial Design

One Experiment Design – Two Experiments

Definition of the 32 parameter configurations used to simulate a modest size, moderate speed network in Experiment #1. For Experiment #2, the values of X3 and the base number of sources were multiplied by 10 to simulate a larger, faster network.

Condition  X2  X3    X4   X5  X7   X9    X11  X12                X15
    1      1   800   0.5  3   0.7  5000  100  0.04/0.004/0.0004  0.7
    2      1   1600  0.5  2   0.3  5000  100  0.04/0.004/0.0004  0.3
    3      2   800   0.5  2   0.7  5000  100  0.02/0.002/0.0002  0.3
    4      2   1600  0.5  3   0.3  5000  100  0.02/0.002/0.0002  0.7
    5      1   800   1    2   0.3  5000  100  0.02/0.002/0.0002  0.7
    6      1   1600  1    3   0.7  5000  100  0.02/0.002/0.0002  0.3
    7      2   800   1    3   0.3  5000  100  0.04/0.004/0.0004  0.3
    8      2   1600  1    2   0.7  5000  100  0.04/0.004/0.0004  0.7
    9      1   800   0.5  3   0.3  7500  100  0.02/0.002/0.0002  0.3
   10      1   1600  0.5  2   0.7  7500  100  0.02/0.002/0.0002  0.7
   11      2   800   0.5  2   0.3  7500  100  0.04/0.004/0.0004  0.7
   12      2   1600  0.5  3   0.7  7500  100  0.04/0.004/0.0004  0.3
   13      1   800   1    2   0.7  7500  100  0.04/0.004/0.0004  0.3
   14      1   1600  1    3   0.3  7500  100  0.04/0.004/0.0004  0.7
   15      2   800   1    3   0.7  7500  100  0.02/0.002/0.0002  0.7
   16      2   1600  1    2   0.3  7500  100  0.02/0.002/0.0002  0.3
   17      1   800   0.5  2   0.3  5000  150  0.02/0.002/0.0002  0.3
   18      1   1600  0.5  3   0.7  5000  150  0.02/0.002/0.0002  0.7
   19      2   800   0.5  3   0.3  5000  150  0.04/0.004/0.0004  0.7
   20      2   1600  0.5  2   0.7  5000  150  0.04/0.004/0.0004  0.3
   21      1   800   1    3   0.7  5000  150  0.04/0.004/0.0004  0.3
   22      1   1600  1    2   0.3  5000  150  0.04/0.004/0.0004  0.7
   23      2   800   1    2   0.7  5000  150  0.02/0.002/0.0002  0.7
   24      2   1600  1    3   0.3  5000  150  0.02/0.002/0.0002  0.3
   25      1   800   0.5  2   0.7  7500  150  0.04/0.004/0.0004  0.7
   26      1   1600  0.5  3   0.3  7500  150  0.04/0.004/0.0004  0.3
   27      2   800   0.5  3   0.7  7500  150  0.02/0.002/0.0002  0.3
   28      2   1600  0.5  2   0.3  7500  150  0.02/0.002/0.0002  0.7
   29      1   800   1    3   0.3  7500  150  0.02/0.002/0.0002  0.7
   30      1   1600  1    2   0.7  7500  150  0.02/0.002/0.0002  0.3
   31      2   800   1    2   0.3  7500  150  0.04/0.004/0.0004  0.3
   32      2   1600  1    3   0.7  7500  150  0.04/0.004/0.0004  0.7

Values of the 11 Fixed Parameters

Parameter  Assigned Value
X1         Abilene Topology (Backbone: 11 routers and 14 links; 22 PoP routers; 139 Access routers)
X6         probNs = 0.1, probNsf = 0.6
X7         probNr = 0.6, probNrf = 0.2
X10        0 (all users have infinite patience)
X13        Jon = 1; Joff = 1; Jx = 1 (no explicit spatiotemporal congestion)
X14        no long-lived flows
X16        initial cwnd = 2 (default Microsoft Windows™ value)
X17        initial sst = 2^31/2 (arbitrarily large value)
X18        M = 200 ms
X19        MI = 18,000 intervals (× M = 0.2 s) = 3600 s
X20        prON = 0.25, prONsecond = 0.08, prONthird = 0.17

Each of the 32 parameter combinations was run against 7 congestion control protocols – requiring 7 × 32 = 224 simulations.
16
Is MesoNet Computationally Tractable?

                              Experiment #1   Experiment #2
                              (32-bit SLX)    (64-bit SLX)
CPU hours (224 runs)          5,857.18        94,355.28
Avg. CPU hours/Run            26.15           421.23
Min. CPU hours/Run            12.58           203.04
Max. CPU hours/Run            43.97           739.04
Avg. Memory Usage (Mbytes)    196.56          2,392.41

Experiment #1 required 35 processor weeks; Experiment #2 required 11 processor years. Parallel simulation of the configurations reduced this to 1 week using 48 processors (Experiment #1) and 31 days using 48 processors (Experiment #2).

                 Experiment #1 – Slow, Small Network     Experiment #2 – Large, Fast Network
Statistic        Flows Completed  Data Packets Sent      Flows Completed  Data Packets Sent
Avg./Run         11,466,429       3,414,017,482          116,317,093      33,351,040,358
Min./Run         7,258,056        2,138,998,764          72,944,797       21,069,357,409
Max./Run         17,390,781       5,048,119,166          175,947,632      50,932,067,100
Total All Runs   2,568,480,122    764,739,915,978        26,055,028,851   7,470,633,040,199
17
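The processor-week and processor-year figures follow from the CPU-hour totals; a quick checking sketch:

```python
def per_run(total_cpu_hours, runs=224):
    """Average CPU hours per run over the 224 simulations."""
    return total_cpu_hours / runs

def processor_weeks(total_cpu_hours):
    return total_cpu_hours / (7 * 24)

def processor_years(total_cpu_hours):
    return total_cpu_hours / (365 * 24)

print(round(processor_weeks(5857.18)))   # Experiment #1: ≈ 35 processor weeks
print(round(processor_years(94355.28)))  # Experiment #2: ≈ 11 processor years
```

The 16x jump in per-run cost between the experiments (26.15 → 421.23 CPU hours) reflects the 10x larger, faster network of Experiment #2.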
Comparing MesoNet with RossNet* Parallel Network Simulator (Throughput/Latency Tradeoff)

MesoNet is limited to 1 processor per simulation, using the sequential SLX simulator; RossNet can use 2-4 processors per simulation.

Simulation Experiment                                                       Event Rate (events/second)
MesoNet Experiment #1 – 32-bit SLX – 1 processor per simulation             725,359
MesoNet Experiment #2 – 64-bit SLX – 1 processor per simulation             439,864
RossNet simulation of small topologies – 2 to 4 processors per simulation   256,244
RossNet simulation of the AT&T topology – 2 to 4 processors per simulation  150,720

• RossNet speedups from parallel simulation averaged just under 1.7 (max. 3.2) when using 4 processors
• Given 48 processors, MesoNet can run 48 simulations in parallel, while RossNet simulations using 4 processors each can run only 12 simulations in parallel
• RossNet requires a speedup of 4 to equal the throughput of MesoNet
• If sufficient processors exist to run all RossNet simulations in parallel, then RossNet might provide superior latency to MesoNet

*Yaun, G., D. Bauer, H. Bhutada, C. Carothers, M. Yuksel and S. Kalyanaraman. 2003. Large-Scale Network Simulation Techniques: Examples of TCP and OSPF Models. In SIGCOMM Computer Communications Review, 33:3, 27-41.
18
Conclusions
• Defined a concise TCP/IP model using only 20 parameters
• Showed how the model can be combined with 2-level orthogonal fractional factorial techniques to design efficient experiments
• Demonstrated how to carefully explore a parameter space using parallel instances of a sequential simulator
• Found our model and approach competitive with a parallel TCP/IP simulator, which required additional processors to achieve the same throughput
19
Related Work

• More Parallel Simulators – (throughput/latency tradeoff)
  Riley, G., M. Ammar, R. Fujimoto, A. Park, K. Perumalla and D. Xu. 2004. A Federated Approach to Distributed Network Simulation. In ACM Transactions on Modeling and Computer Simulation, 14:2, 116-148.
  Zeng, X., R. Bagrodia and M. Gerla. 1998. GloMoSim: a Library for Parallel Simulation of Large-scale Wireless Networks. In Proceedings of the 12th Workshop on Parallel and Distributed Simulations, 154-161.
• Fluid-Flow Simulators – (inaccurate)
  Towsley, D., V. Misra and W. Gong. 2000. Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED. In Proceedings of SIGCOMM, 30:4, 151-160.
  Yi, Y. and S. Shakkottai. 2007. FluNet: A hybrid internet simulator for fast queue regimes. In Computer Networks: The International Journal of Computer and Telecommunications Networking, 51:18, 4919-4937.
• Hybrid Continuous-Time/Discrete-Event Simulators – (promising)
  Lee, J., S. Bohacek, J. Hespanha and K. Obraczka. 2007. Modeling Communication Networks with Hybrid Systems. In IEEE/ACM Transactions on Networking, 15:3, 630-643.
20