An Efficient Gigabit Ethernet Switch Model for Large-Scale Simulation
Dong (Kevin) Jin
Overview and Motivation
Gigabit Ethernet is widely used to build large-scale networks with many applications
High bandwidth, low latency
Packet delay and packet loss are now caused mainly inside the switch
Overview and Motivation
Use simulation to study applications running on large-scale Gigabit Ethernet
Need an efficient switch model in RINSE
[Diagram: control station, data aggregator, and RTU/relays modeled in the RINSE simulator]
Expand the network
Explore different architectures
Existing Switch Models
Detailed models (OPNET, OMNeT++): different models for different types of switches, high computational cost, require constant updating and validation
Simple queuing model (ns-2, DETER): simple FIFO queue, one model for everything
Queuing model based on data collected from a real switch [Roman2008] [Hohn2004]: device-independent, model parameters derived from experimental data
Model Requirements
Fast simulation speed; no internal details
Accurate packet delay and loss
Device-independent: parameters derived from experimental data on real switches, using the same model derivation process
[Figure: models compared on two axes, packet delay/loss accuracy (less accurate to more accurate) and simulation speed (slow to fast): the simple queue model, the queue model based on experiments, the detailed model, and the expected black-box switch model]
Model Design Approach
Perform experiments on a real switch
Build an analytical model
Build the RINSE model
Evaluate simulation speed and accuracy
Experiment
Data to collect: one-way packet delay sequence in the switch, packet loss sequence in the switch
Challenges in the Gigabit environment: high bit rate (1 Gb/s), low latency in the switch (microseconds)
Experiment Difficulties
Accurate timestamps for one-way delay
One-way delay = transmission delay + wire propagation delay + delay in switch + delay in end host
Software timestamps at the NIC driver: microsecond resolution
Large delay at end hosts at high bit rates (>500 Mb/s)
Have to use hardware timestamps (NetFPGA): 4 on-board Gigabit Ethernet ports, 10 ns resolution
Eliminates end-host delay by processing and timestamping packets on the card
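Spelling the decomposition out: with on-card hardware timestamps the end-host term drops out, and the transmission and propagation terms can be computed from the packet size and the cable length, leaving the in-switch delay:

\[
d_{\text{owd}} = d_{\text{tx}} + d_{\text{prop}} + d_{\text{switch}} + d_{\text{host}}
\;\Longrightarrow\;
d_{\text{switch}} \approx d_{\text{owd}} - d_{\text{tx}} - d_{\text{prop}}
\quad (d_{\text{host}} \approx 0 \text{ with NetFPGA timestamps})
\]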
Experiment Setup
[Diagram: NetFPGA card with 4 Gigabit Ethernet ports attached to the switch under test; input pcap; timestamps Time 2 and Time 4 taken on the card]
Constant-bit-rate UDP flows; vary packet size, sending rate, and number of background flows
Time_2 - Time_4 = delay per packet inside the switch
Problem: can capture only about 2000 packets without a miss at 1 Gb/s
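A minimal sketch of the reduction from timestamps to the two measured sequences, assuming each packet carries a UDP sequence number and has one hardware timestamp on each side of the switch; the field names and the matching scheme are illustrative, not the authors' tool:

```python
# Sketch: derive the per-packet in-switch delay sequence and the loss sequence
# from two hardware-timestamp streams keyed by UDP sequence number.
def switch_delay_and_loss(sent, received):
    """sent/received: dicts mapping sequence number -> NetFPGA timestamp (ns)."""
    delays_ns = []   # one entry per delivered packet
    loss_seq = []    # 0 = received, 1 = lost, in sending order
    for seq in sorted(sent):
        if seq in received:
            loss_seq.append(0)
            delays_ns.append(received[seq] - sent[seq])  # the Time_2 - Time_4 arithmetic
        else:
            loss_seq.append(1)
    return delays_ns, loss_seq
```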
Experimental Results - Packet Delay (Low Load)
Single flow: delay does not depend on the sending rate
Sufficient processing power to handle a single flow up to 1 Gb/s
Model packet delay as a constant
[Plot: packet delay vs. sending rate, packet size = 100 bytes]
Experimental Results - Packet Delay (High Load)
[Plot: mean delay vs. sending rate, packet size = 100 bytes; 3 extra non-cross-interface UDP flows at 950 Mb/s each]
NetGear: low delay with small variance; sufficient processing power to handle 4 flows
3COM: uses processor-sharing scheduling, assigning each flow a weight according to its bit rate
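The 3COM observation suggests a simple processor-sharing delay estimate. This is a rough sketch under that reading; the slide only states that a flow's weight follows its bit rate, so the base service time and the exact weighting rule here are assumptions:

```python
# Rough sketch of a processor-sharing delay estimate for the 3COM-style switch.
def ps_packet_delay(flow_rate_bps, active_rates_bps, base_service_s):
    """flow_rate_bps: bit rate of the tagged flow; active_rates_bps: rates of all
    active flows (including the tagged one); base_service_s: per-packet service
    time when the flow has the switch to itself."""
    total = sum(active_rates_bps)
    share = flow_rate_bps / total if total > 0 else 1.0
    return base_service_s / share   # a smaller share stretches the service time
```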
Experimental Results - Packet Loss
[Plot: a sample packet loss pattern (0 = received, 1 = lost), 3COM switch]
Loss rate: NetGear 0.4%, 3COM 0.6%
Strong autocorrelation exists among neighboring packets
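For concreteness, the autocorrelation claim refers to the usual lag-k statistic computed over the 0/1 loss sequence; a plain-Python sketch (a real analysis would more likely use numpy or statsmodels):

```python
# Lag-k autocorrelation of a 0/1 loss sequence; assumes 0 < lag < len(loss_seq).
def autocorrelation(loss_seq, lag):
    n = len(loss_seq)
    mean = sum(loss_seq) / n
    var = sum((x - mean) ** 2 for x in loss_seq) / n
    if var == 0:
        return 0.0   # constant sequence: correlation undefined, report 0
    cov = sum((loss_seq[i] - mean) * (loss_seq[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var
```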
Model Design Approach
Perform experiments on a real switch
Build an analytical model
Build the RINSE model
Evaluate simulation speed and accuracy
Packet Loss Model
[Diagram: three-state model over the 0/1 loss sequence (0 = received, 1 = lost), with transitions among states 1, 2, and 3]
A Kth-order Markov chain would require 2^K states
Our model's state space: state 1 - long burst of 0s, state 2 - short burst of 0s, state 3 - burst of 1s
The next state depends on the current state and the number of successive packets in the current state
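A minimal sketch of how such a run-length-dependent three-state model can generate a synthetic loss sequence. The transition probabilities below are placeholders for illustration; in the actual model they would be estimated from the measured traces:

```python
# States: 1 = long burst of 0s (received), 2 = short burst of 0s, 3 = burst of 1s (lost).
# The next state depends on the current state and on how many successive packets
# have stayed in it; the numeric probabilities are made up for illustration.
import random

def next_state(state, run_length):
    if state == 1:                               # long burst of received packets
        p_leave = min(0.001 * run_length, 0.05)
        return 3 if random.random() < p_leave else 1
    if state == 2:                               # short burst of received packets
        p_leave = min(0.05 * run_length, 0.5)
        return 3 if random.random() < p_leave else 2
    p_leave = min(0.3 * run_length, 0.9)         # state 3: burst of losses
    if random.random() < p_leave:
        return 1 if random.random() < 0.7 else 2
    return 3

def generate_loss_sequence(n, start_state=1):
    state, run, out = start_state, 0, []
    for _ in range(n):
        out.append(1 if state == 3 else 0)       # states 1 and 2 emit 0 (received)
        new_state = next_state(state, run)
        run = run + 1 if new_state == state else 0
        state = new_state
    return out
```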
Conclusion
Experimental results justified our approach as necessary
Approach: build models based on experimental data from real switches
Created a packet loss model based on experimental data
Ongoing Work
Experiment
Collect long data traces with an Endace DAG card
Cross-interface traffic
Model
Create a packet delay model
Use copulas to model both the distribution and the autocorrelation of delay (see the sketch after this list)
Study the correlation between packet delay and loss
Evaluation
Compare simulation speed with the simple output queue model
Compare simulation-generated traces with real data traces
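As a rough illustration of the copula item above: a Gaussian copula with AR(1) latent correlation can impose autocorrelation while preserving an arbitrary (here empirical) delay distribution. The correlation value and the empirical inverse-CDF mapping are assumptions, not the authors' design:

```python
# Sketch: autocorrelated delays via a Gaussian copula with AR(1) latent correlation.
import math
import random

def gaussian_copula_delays(empirical_delays, n, rho=0.8):
    sorted_delays = sorted(empirical_delays)
    z = random.gauss(0.0, 1.0)                                # latent N(0,1) state
    out = []
    for _ in range(n):
        z = rho * z + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # Phi(z): correlated uniform
        idx = min(int(u * len(sorted_delays)), len(sorted_delays) - 1)
        out.append(sorted_delays[idx])                        # empirical inverse CDF
    return out
```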
Thank You
Experimental Results - Packet Delay (High Load)
[Plot: packet delay at the beginning of the experiment under different sending rates (Mb/s)]
3COM - processor sharing: the switch has no estimate of a flow's bit rate until enough packets have passed, so it assigns the maximum weight at the beginning; as packets pass, the measured bit rate determines the weight, which determines the delay
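A tiny sketch of the weight-ramp behavior described on this slide; the warm-up threshold and the normalization by link rate are illustrative assumptions:

```python
# Sketch: 3COM-style weight ramp - maximum weight until enough packets have been
# seen to estimate the flow's bit rate, then a weight that follows that rate.
MAX_WEIGHT = 1.0

def flow_weight(packets_seen, measured_rate_bps,
                link_rate_bps=1_000_000_000, warmup_packets=1000):
    if packets_seen < warmup_packets:
        return MAX_WEIGHT                        # not enough history yet
    return measured_rate_bps / link_rate_bps     # weight tracks the bit rate
```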
Experiment Setup I
[Diagram: a host with NIC 1 as traffic sender and NIC 2 as traffic receiver, both attached to the switch (ports 1-8); packets are sent to self, timestamped at the NIC driver, and captured to measure the NIC-to-NIC overhead]
RINSE - Architecture
• Scalable, parallel and distributed simulations
• Incorporates hosts, routers, links, interfaces, protocols, etc.
• Domain Modeling Language (DML) for configuration
• A range of implemented network protocols
• Emulation support
[Diagram: a DML configuration configures SSFNet, which is built on the SSF simulation kernel and the SSF standard/API; a host's protocol graph contains interfaces (MAC, PHY), IPv4, ICMP, socket and emulation layers, and protocols such as TCP, UDP, DNP3, MODBUS, BGP, and OSPF]
RINSE - Switch Model
The Switch layer implements the black-box model, a simple output queue model, and a flip-coin model (random delay and packet loss)
Simulation time: complex queuing model > simple output queuing model > our black-box model ≥ coin model
[Diagram: Host A and Host B protocol stacks (APP, UDP, IP, Ethernet MAC, Ethernet PHY) connected through the switch, whose stack consists of the Switch layer over Ethernet MAC and Ethernet PHY]
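For orientation, an illustrative sketch (not RINSE's actual API) of what the black-box switch layer's per-packet handling could look like, combining the delay model with the state-based loss model; the class, method, and scheduler interfaces are assumed:

```python
# Sketch: black-box switch layer - draw a loss decision, and if the packet survives,
# schedule its departure after the modeled in-switch delay.
class BlackBoxSwitch:
    def __init__(self, delay_model, loss_model, scheduler):
        self.delay_model = delay_model    # returns a per-packet delay in seconds
        self.loss_model = loss_model      # returns True when the packet is lost
        self.scheduler = scheduler        # simulator event queue (assumed interface)

    def receive(self, packet, now):
        if self.loss_model.is_lost():
            return                        # drop silently, as a real switch would
        delay = self.delay_model.delay_for(packet)
        self.scheduler.schedule(now + delay, lambda: self.forward(packet))

    def forward(self, packet):
        pass                              # hand the packet to the output interface
```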
Outline
Overview and Motivation
Our Approach
Measurement
Experimental Results and Model
Conclusion and Ongoing Work
Our Approach
Black-Box Switch Model
Focus on packet delay and packet loss
No detailed architecture, no queues
Explore the statistical relation between data-in and data-out
Parameters derived from data collected on real switches
Monitor traffic on every port
Long trace (one day), synchronized clocks, real ISP network
Model based on experiments, with no assumptions about traffic or internals
Trace-driven Traffic Model
Basic question: how to introduce different traffic sources into the simulation while retaining end-to-end congestion control
Problem with trace-driven simulation: rate adaptation from end-to-end congestion control causes shaping
Example: a connection observed on a high-speed unloaded link might still send packets at a rate much lower than what the link could sustain, because insufficient resources are available somewhere else along the path
Solution: trace-driven source-level simulation is preferable to trace-driven packet-level simulation, because the data volume and the application-level pattern are not shaped by the network's current properties
Latency Expectations on Wired Ethernet
End-to-end latency in switched Ethernet is a function of the scheduling and call admission procedures in use; this is not specific to Ethernet
Hard guarantees: the Guaranteed Service specification (RFC 2212)
Statistical guarantees: earliest-deadline-first, virtual clock, and service-curve-based schedulers (INFOCOM 2000 paper by Liebeherr)
Black-Box Testing
RFCs 2544 and 2889 - guidelines that describe the steps to determine the capabilities of a router; they use homogeneous traffic for profiling and do not discuss creating models based on measurements
[Shaikh 2001] Experience in black-box OSPF measurement: focuses on measuring reaction times to OSPF routing messages
[Hohn 2004] Bridging router performance and queuing theory: simple queuing models, no loss events, interactions among ports ignored; 12 DAG cards synchronized by GPS
[Roman 2007] A black-box router profiler: software testbed (ns-2, Click modular router); focuses on a single UDP flow and multiple TCP flows