An Evaluation of Fairness Among Heterogeneous TCP Variants Over 10Gbps High-speed Networks


An Evaluation of Fairness Among Heterogeneous TCP Variants Over 10Gbps High-speed Networks

Lin Xue*, Suman Kumar†, Cheng Cui* and Seung-Jong Park*

*School of Electrical Engineering and Computer Science, Center for Computation & Technology, Louisiana State University, USA

†Department of Computer Science, Troy University, USA

CRON (http://www.cron.loni.org): Cyberinfrastructure of Reconfigurable Optical Networks
Experimental networking testbed for 10Gbps high-speed networks

10Gbps Testbed Setup

References

[1] P. Yang, W. Luo, L. Xu, J. Deogun, and Y. Lu, "TCP congestion avoidance algorithm identification," in Proc. 31st IEEE International Conference on Distributed Computing Systems (ICDCS), 2011, pp. 310-321.

[2] A. Tang, X. Wei, S. H. Low, and M. Chiang, "Equilibrium of heterogeneous congestion control: Optimality and stability," IEEE/ACM Transactions on Networking, vol. 18, no. 3, pp. 844-857, June 2010.

Introduction and Background

Components

Hardware:
- Cisco N5000 switch with 48 x 10Gbps ports
- High-end servers with 10GE NICs
- 10Gbps hardware emulators

Software:
- Emulab-based interface & controller
- 10Gbps software emulators (optimized 10Gbps Dummynet)

Experimental Design

Dumbbell Topology:
- 2 x 10Gbps Linux routers (4 x 10Gbps NICs)
- 3 x 10Gbps links (120ms delay)
- 3 senders and 3 receivers running heterogeneous TCP flows

Evaluation of Fairness Among Heterogeneous TCP Variants

Heterogeneous TCP variants
The traditional network is rapidly evolving into a heterogeneous one. According to a recent study of the 5000 most popular web servers [1]:

TCP variant    Percentage of web servers
AIMD           16.85 ~ 25.58%
CUBIC/BIC      44.51%
HSTCP/CTCP     10.27 ~ 19%

Fairness problem for heterogeneous TCP flows
Every TCP variant employs its own congestion control mechanism. Fairness among heterogeneous TCP variants depends on router parameters such as the queue management scheme and the buffer size [2]

Software
- OS: optimized Ubuntu 64-bit & FreeBSD 64-bit
- Measurement S/W: zero-copy Iperf
- TCP variants: TCP-SACK, HSTCP, CUBIC, etc.
- Patched queue management schemes: Drop-tail, RED, CHOKe, etc.

High-speed TCP congestion control variants

Algorithm   Detect   Probing / Back-off      Parameters
RENO        Loss     AI/MD                   α = 1, β = 0.5
CUBIC       Loss     Concave-convex AI/MD    α = f(t), β = 0.2
HSTCP       Loss     Convex AI/MD            α = f(W), β = f(W)
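The growth functions above are only summarized by their parameters. As an illustration only (not the kernel implementations used on the testbed), a minimal Python sketch of per-RTT window updates in the spirit of this table, with simplified stand-ins for f(t) and f(W), could look like this:

```python
# Illustrative sketch: congestion-window updates in the spirit of the table above.
# The alpha/beta functions for HSTCP and the CUBIC constants are simplified
# placeholders, not the standardized values.

def reno_update(cwnd, loss, alpha=1.0, beta=0.5):
    """Classic AIMD: add alpha per RTT, multiply by (1 - beta) on loss."""
    return cwnd * (1.0 - beta) if loss else cwnd + alpha

def hstcp_update(cwnd, loss):
    """HSTCP-style AIMD where alpha = f(W) and beta = f(W): larger windows
    probe more aggressively and back off less (qualitative shape only)."""
    alpha = max(1.0, 0.01 * cwnd)
    beta = max(0.1, 0.5 - 0.0001 * cwnd)
    return cwnd * (1.0 - beta) if loss else cwnd + alpha

def cubic_window(w_max, t, c=0.4, beta=0.2):
    """CUBIC-style concave-convex growth: window as a cubic function of the
    time t (seconds) since the last loss, centered on the previous maximum."""
    k = ((w_max * beta) / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max
```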

Queue Management Schemes

Algorithm   Drop scheme
Drop-tail   Drops incoming packets when the queue is full
RED         Randomly drops packets early, with probability based on the average queue length
CHOKe       Extends RED: compares the incoming packet with a randomly chosen queued packet and drops both if they belong to the same flow
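As a rough sketch of the drop decisions summarized above (not the patched queue-management code used on the testbed), RED's probabilistic early drop and CHOKe's flow-comparison step could be expressed as follows; the packet objects are assumed to expose a flow_id attribute:

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    """RED: drop probability ramps from 0 to max_p as the average queue
    length moves between min_th and max_th; always drop above max_th."""
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

def choke_drop(packet, queue, avg_qlen, min_th, max_th, max_p):
    """CHOKe (simplified): compare the arriving packet with a randomly chosen
    queued packet; if both belong to the same flow, drop both, otherwise
    fall back to the RED decision."""
    if queue and avg_qlen >= min_th:
        victim = random.choice(queue)
        if victim.flow_id == packet.flow_id:
            queue.remove(victim)   # drop the matching queued packet
            return True            # and drop the arriving packet as well
    return red_drop(avg_qlen, min_th, max_th, max_p)
```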

Homogeneous TCP vs. Heterogeneous TCP
Fairness index for buffer size = 20% BDP, RTT = 120ms

                     Drop-tail   RED     CHOKe
CUBIC                0.988       0.994   0.991
HSTCP                0.978       0.987   0.990
SACK                 0.936       0.977   0.970
Heterogeneous TCP    0.681       0.732   0.747
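Fairness indices of this kind are commonly computed as Jain's fairness index, J = (Σxᵢ)² / (n · Σxᵢ²). Assuming that is the metric used here, a short Python sketch for reproducing such a number from per-flow throughputs (the sample values below are made up, not measurements from this work) is:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    Equals 1.0 when all flows receive identical throughput."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Hypothetical per-flow throughputs (Gbps) sharing a 10Gbps bottleneck:
print(jain_fairness([3.3, 3.3, 3.4]))   # close to 1.0 -> fair
print(jain_fairness([6.5, 2.5, 1.0]))   # well below 1.0 -> unfair
```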


Fairness for Heterogeneous TCP flows
Three scenarios:
- 1 TCP-SACK, 1 CUBIC, and 1 HSTCP flow
- 10 TCP-SACK, 10 CUBIC, and 10 HSTCP flows
- 10 TCP-SACK, 10 CUBIC, and 10 HSTCP flows with short-lived TCP flows

Active queue management schemes (e.g. RED, CHOKe) achieve better fairness than Drop-tail for large buffer sizes

All three queue management schemes show the same fairness behavior for small buffer sizes

Short-lived TCP flows improve fairness in heterogeneous TCP networks

Throughput for Heterogeneous TCP flows
Tradeoff between fairness and throughput: Drop-tail achieves the best throughput among the three queue management schemes

Loss Synchronization Effect
AQM schemes de-synchronize losses across TCP flows
Very small buffer sizes (e.g. 1% BDP) cause TCP loss synchronization

[Plots: buffer size = 20% BDP shows de-synchronization; buffer size = 1% BDP shows synchronization]
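For scale, the buffer sizes quoted as percentages of BDP translate into concrete byte counts on the 10Gbps, 120ms path. A back-of-the-envelope computation (arithmetic only, not taken from the poster) is:

```python
# Bandwidth-delay product for the testbed path and the buffer sizes it implies.
link_rate_bps = 10e9      # 10 Gbps bottleneck
rtt_s = 0.120             # 120 ms round-trip time

bdp_bytes = link_rate_bps * rtt_s / 8          # 1.2 Gbit = 150 MB
print(f"BDP            = {bdp_bytes / 1e6:.0f} MB")
print(f"20% BDP buffer = {0.20 * bdp_bytes / 1e6:.0f} MB")
print(f" 1% BDP buffer = {0.01 * bdp_bytes / 1e6:.1f} MB")
```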