Improving the Performance of TCP Vegas and TCP SACK: Investigations and Solutions


Improving the Performance of TCP Vegas and TCP SACK:

Investigations and Solutions

By

Krishnan Nair Srijith

Supervisor: A/P Dr. A.L. Ananda

School of Computing

National University of Singapore

Outline

Research Objectives
Motivation
Background Study

Transmission Control Protocol (TCP)
TCP SACK

Section 1 - TCP variants over satellite links

Outline (Cont.)

Section 2 - Solving issues of TCP Vegas (TCP Vegas-A)

Section 3 - Improving TCP SACK’s performance

Conclusion

Research Objectives

Study performance of TCP over satellite links.

Study TCP Vegas and suggest mechanisms to overcome limitations.

Study TCP SACK and suggest mechanisms to overcome limitations.

Motivation

TCP is the most widely used transport control protocol.

TCP SACK was proposed to solve issues with New Reno when multiple packets are lost in a window.

However, under some conditions SACK too performs badly.

Overcoming this can enhance SACK’s efficiency.

Motivation (Cont.)

TCP Vegas is very different from New Reno, the most commonly used variant of TCP.

Vegas shows greater efficiency, but there are several unresolved issues.

Solving these issues could produce a better alternative to New Reno.

Transmission Control Protocol

The most widely used transport protocol, used in applications such as FTP and Telnet.

It is a connection-oriented, reliable byte-stream service on top of the IP layer.

Uses a 3-way handshake to establish connections. Each byte of data is assigned a unique sequence number, which has to be acknowledged.

TCP (Cont.)

Major control mechanisms of TCP:

Slow Start: used by a new connection to estimate the available bandwidth.

Congestion Avoidance: used to avoid losing packets and, if and when packets are lost, to deal with the situation (see the sketch below).
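As an illustration (not from the slides), a minimal sketch of these two cwnd update rules, assuming segment-based counting and a simple Tahoe-style loss reaction:

```python
# Minimal sketch of slow start and congestion avoidance.
# cwnd and ssthresh are in segments; real TCP counts bytes.

def on_ack(cwnd: float, ssthresh: float) -> float:
    """Called for each newly acknowledged segment."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: cwnd doubles every RTT
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~1 segment per RTT

def on_loss(cwnd: float) -> tuple[float, float]:
    """Packet loss: halve the threshold and restart from a small window."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return 1.0, ssthresh         # (new cwnd, new ssthresh)

cwnd, ssthresh = 1.0, 32.0
for _ in range(100):             # pretend 100 ACKs arrive
    cwnd = on_ack(cwnd, ssthresh)
print(round(cwnd, 1))
```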

TCP SACK

Was proposed to overcome New Reno's problems when multiple packets are lost within a single window.

In SACK, the TCP receiver informs the sender of the packets that were successfully received.

It thus allows selective retransmission of lost packets alone.

Section 1

Studied performance of TCP New Reno and SACK over satellite links.

Paper: “Effectiveness of TCP SACK, TCP HACK and TCP Trunk over Satellite Links”, Proceedings of the IEEE International Conference on Communications (ICC 2002), Vol. 5, pp. 3038-3043, New York, April 28 - May 2, 2002.

TCP over Satellite

There are several factors that limit the efficiency of TCP over satellite links:

Long RTT: increases the time spent in slow start mode and decreases throughput.

Large bandwidth-delay product: small window sizes cause under-utilization (see the sketch below).

High bit error rates: TCP assumes congestion and decreases its window.
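To make the window-size limit concrete, here is a small sketch (ours, not the slides') of the throughput ceiling window/RTT, using the 510 ms RTT from the experiments that follow:

```python
# Sketch: regardless of link rate, a TCP connection cannot exceed
# window_size / RTT. With a 510 ms satellite RTT, small windows
# starve even a 10 Mbps link.

RTT = 0.510                                # seconds
for win_kb in (32, 64, 128, 512):
    limit = win_kb * 1024 / RTT            # bytes per second
    print(f"{win_kb:4d} KB window -> at most {limit / 1024:6.1f} KB/s")
# 32 KB window -> at most ~62.7 KB/s, far below the link capacity
```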

Experimental Setup

Experiment testbed 1: Client 1 and Client 2 connected through an Error/Delay Box and a Router to the Server.

Experimental Setup (Cont.)

Experiment testbed 2: Client 1 and Client 2 connected to the Server through a Router and a satellite link.

Results - SACK

Emulator setup with no corruption: an RTT of 510 ms was introduced by the error/delay box to simulate the long latency of the 10 Mbps satellite link.

The TCP maximum window size was varied from 32 KB to 1024 KB.

Files of different sizes were sent from client to server.

Results - SACK (Contd.)

[Chart: Goodput (KBytes/s) vs. window size (32-1024 KB) for New Reno and SACK, 1 MB and 10 MB transfers]

Goodput for 1MB and 10MB file transfers for different window sizes - no corruption

Results – SACK (Contd.)

Goodput generally increases with increasing window size.

However, for the window size of 1024 KB, the goodput decreases in both cases, and more so for New Reno.

This is because when the window size is set larger than the bandwidth-delay product of the link (652.8 KB), congestion sets in and the goodput falls.
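For reference, a hedged reconstruction of that 652.8 KB figure: it matches BDP = bandwidth x RTT if 10 Mbps is read as 10 x 1024 kbit/s and 1 KB as 1000 bytes (our assumption, not stated on the slide):

```python
# Sketch: bandwidth-delay product of the emulated satellite link.
# Unit conventions below are an assumption chosen to reproduce the
# slide's 652.8 KB figure.

bandwidth_bps = 10 * 1024 * 1000           # 10 Mbps read as 10,240,000 bit/s
rtt_s = 0.510                              # 510 ms
bdp_bytes = bandwidth_bps * rtt_s / 8
print(bdp_bytes / 1000)                    # -> 652.8 (KB of 1000 bytes)
```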

Results – SACK (Contd.)

Emulator setup with corruption: packet errors of 0.5%, 1.0% and 2% were introduced. RTT was kept at 510 ms.

File transfers of 1 MB and 10 MB were carried out with varying window sizes.

Results – SACK (Contd.)

[Chart: Goodput (KBytes/s) vs. window size (32-1024 KB) at 1% corruption, New Reno vs. SACK, 1 MB and 10 MB transfers]

[Chart: Goodput (KBytes/s) vs. window size (32-1024 KB) for 10 MB transfers at 0.5%, 1% and 2% corruption, New Reno vs. SACK]

Goodput at 1% corruption; goodput for 10MB file at different corruption levels.

Results – SACK (Contd.)

Again, the 10MB file transfer goodput decreases when the window size is increased beyond 652.8 KB, because congestion is now present in addition to corruption.

SACK is able to handle this situation better and provides a better goodput.

Results - SACK (Contd.)

The goodput increases as window size is increased, as long as the window size is kept less than the bandwidth-delay product.

SACK performs better than New Reno for both the file sizes as well as for all the window sizes used.

Satellite Link

| Window size | 1MB New Reno | 1MB SACK | 10MB New Reno | 10MB SACK |
| 64KB        | 13           | 14       | 16.5          | 17.6      |
| 128KB       | 13.75        | 15       | 16.5          | 18.5      |
| 256KB       | 12.5         | 13       | 15.75         | 17.75     |

Goodput in KBps for 1MB and 10MB file transfers for varying window size – satellite link

Summary

The performance of TCP SACK was compared with New Reno in a GEO satellite environment.

It was shown that SACK performs better than New Reno unless the level of corruption is very high.

Section 2

Studied the limitations of TCP Vegas and proposed changes to overcome them (TCP Vegas-A).

Paper: “TCP Vegas-A: Solving the Fairness and Rerouting Issues of TCP Vegas”, accepted for Proceedings of the 22nd IEEE International Performance, Computing, and Communications Conference (IPCCC 2003), Phoenix, Arizona, 9-11 April 2003.

TCP Vegas

Proposed by Brakmo et al. as a different approach to TCP congestion control.

It uses a different bandwidth estimation scheme based on fine-grained measurement of RTTs.

The increment of cwnd in TCP Vegas is governed by the following algorithm:

TCP Vegas (Cont.)

Calculate:

expected_rate = cwnd / base_rtt
actual_rate = cwnd / rtt
diff = expected_rate − actual_rate

cwnd = cwnd + 1, if diff < α
cwnd = cwnd − 1, if diff > β
cwnd unchanged, otherwise

where α = 1 and β = 3.
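A minimal sketch of this update rule in code (ours, following the slide's formulas; rates in packets per second, base_rtt the smallest RTT seen so far):

```python
# Sketch of the Vegas cwnd update described above.

ALPHA, BETA = 1, 3

def vegas_update(cwnd: int, base_rtt: float, rtt: float) -> int:
    expected_rate = cwnd / base_rtt
    actual_rate = cwnd / rtt
    diff = expected_rate - actual_rate
    if diff < ALPHA:
        return cwnd + 1    # little queuing observed: probe for more
    if diff > BETA:
        return cwnd - 1    # queue building up: back off before loss
    return cwnd            # inside the [alpha, beta] band: hold steady
```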

Issues with TCP Vegas

Fairness: Vegas uses a conservative scheme, while New Reno is more aggressive. New Reno thus attains more bandwidth than Vegas when competing against it.

Furthermore, New Reno aims to fill up the link's buffer space, which Vegas interprets as a sign of congestion, so it reduces its cwnd.

Issues with Vegas (Cont.)

Vegas+ was proposed by Hasegawa et al. to tackle this issue.

However, this method assumes that an increase in RTT is always due to the presence of competing traffic.

Furthermore, it introduces another parameter, count(max), whose chosen value is not explained.

Issues with TCP Vegas (Cont.)

Re-routing: Vegas calculates the expected rate using the smallest RTT observed on the connection (baseRTT).

When routes change during the connection, this smallest RTT can change, but Vegas cannot adapt if the new smallest RTT is larger than the original one, since it cannot know whether the increase is due to congestion or a route change.

Issues with Vegas (Cont.)

Vegas assumes the RTT increase is due to congestion and decreases cwnd, just the opposite of what it should be doing.

La et al. proposed a modification to Vegas to counter this problem, but their solution adds more variables (K, N, L, δ and γ) whose optimum values are still open to debate.

Issues with Vegas (Cont.)

Unfair treatment of old connections: it has been shown that Vegas is inherently unfair towards older connections.

The critical window size that triggers a reduction in cwnd is smaller for older connections and larger for newer ones.

Similarly, the critical cwnd that triggers an increase in the congestion window is smaller for newer connections.

Vegas-A: Solving Vegas’ Problems

To solve these issues, a modification to the algorithm is proposed, named Vegas-A.

The main idea is to make the values of the parameters α and β adaptive and not fixed at 1 and 3.

The modified algorithm is as follows:

Vegas-A algorithm

if α < diff < β {
    if Th(t) > Th(t − rtt) { cwnd = cwnd + 1, α = α + 1, β = β + 1 }
    else (i.e. if Th(t) <= Th(t − rtt)) { no update of cwnd, α, β }
}
else if diff < α {
    if α > 1 and Th(t) > Th(t − rtt) { cwnd = cwnd + 1 }
    else if α > 1 and Th(t) < Th(t − rtt) { cwnd = cwnd − 1, α = α − 1, β = β − 1 }
    else if α = 1 { cwnd = cwnd + 1 }
}

Vegas-A Algorithm (Cont.)

else if diff > β {
    cwnd = cwnd − 1, α = α − 1, β = β − 1
}
else {
    no update of cwnd, α, β
}
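The same logic as a runnable sketch (ours); th_now and th_prev stand for Th(t) and Th(t − rtt), the measured throughput now and one RTT ago:

```python
# Sketch of the Vegas-A update above; alpha and beta now adapt.

def vegas_a_update(cwnd, alpha, beta, diff, th_now, th_prev):
    if alpha < diff < beta:
        if th_now > th_prev:                  # throughput still rising
            cwnd, alpha, beta = cwnd + 1, alpha + 1, beta + 1
        # else: no update of cwnd, alpha, beta
    elif diff < alpha:
        if alpha > 1 and th_now > th_prev:
            cwnd += 1
        elif alpha > 1 and th_now < th_prev:  # earlier raise was too greedy
            cwnd, alpha, beta = cwnd - 1, alpha - 1, beta - 1
        elif alpha == 1:
            cwnd += 1                         # plain Vegas behaviour
    elif diff > beta:
        cwnd, alpha, beta = cwnd - 1, alpha - 1, beta - 1
    # else: no update of cwnd, alpha, beta
    return cwnd, alpha, beta
```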

Simulation of Vegas vs. Vegas-A

Simulations used the Network Simulator (NS 2).

Wired and satellite (GEO and LEO) links were simulated.

The NS 2 Vegas agent was modified to work as a Vegas-A agent.

Wired link simulation

[Figure: Simulated wired network topology - sources S1 … Sn connected through routers R1 and R2 to destinations D1 … Dn]

Wired simulation (Cont.)

Re-routing condition: a route change was simulated by changing the RTT of S1-R1 from 20 ms to 200 ms, 20 s into the simulation.

Bandwidth of S1-R1, R1-R2 and R2-D1 was 1 Mbps; the RTTs of R1-R2 and R2-D1 were 10 ms.

The simulation was run for 200 seconds.

Re-routing simulation

|                  | Vegas  | Vegas-A | Diff.  | % diff |
| Throughput (bps) | 217320 | 940240  | 772920 | +333%  |

[Figure: cwnd variation for Vegas and Vegas-A due to RTT change]
[Figure: Throughput variation for Vegas due to RTT change]
[Figure: Throughput variation for Vegas-A due to RTT change]

Bandwidth sharing with New Reno

S1 uses Vegas/Vegas-A while S2 uses New Reno.

S1-R1 and S2-R1 = 8 Mbps, 20 ms (RTT)
R2-D1 and R2-D2 = 8 Mbps, 20 ms (RTT)
R1-R2 = 800 Kbps, 80 ms (RTT)
S1 started at 0 s and S2 at 10 s.

[Figure: Throughput of TCP New Reno and Vegas over congested link]
[Figure: Throughput of TCP New Reno and Vegas-A connections over congested link]

Competing against New Reno

When 3 Vegas/Vegas-A connections competed against a New Reno connection, Vegas-A was again found to obtain a fairer share of the bandwidth than Vegas.

| New Reno/Vegas throughput ratio | New Reno/Vegas-A throughput ratio |
| 5.33                            | 3.17                              |

Old vs. New Vegas/Vegas-A

5 Vegas/Vegas-A connections were simulated starting at intervals of 50 seconds.

| Source         | Vegas (bps) | Vegas-A (bps) |
| S1             | 218531      | 221447        |
| S2             | 191533      | 199760        |
| S3             | 206176      | 247431        |
| S4             | 247585      | 229577        |
| S5             | 266913      | 234662        |
| Std. deviation | 33217.1     | 17711.1       |

Bias against high BW flows

It has been shown that Vegas is biased against connections with higher bandwidth.

Simulations were conducted to check whether Vegas-A fares better.

3 sources: S1, S2, S3. S1-R1 = 128 Kbps, S2-R1 = 256 Kbps, S3-R1 = 512 Kbps, R1-R2 = 400 Kbps.

High BW flows bias (Cont.)

The table below shows that Vegas-A does indeed perform better than Vegas. (The "Expected" values match a proportional split of the 400 Kbps bottleneck by access-link rate; see the sketch after the table.)

|          | S1 (Kbps) | S2 (Kbps) | S3 (Kbps) |
| Expected | 57.14     | 114.29    | 228.57    |
| Vegas    | 123.34    | 146.85    | 120.46    |
| Vegas-A  | 98.90     | 134.54    | 158.25    |
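A small sketch verifying that reading of the "Expected" row as a proportional-fair split of the bottleneck:

```python
# Sketch: share the 400 Kbps bottleneck in proportion to each
# source's access-link rate.

access = {"S1": 128, "S2": 256, "S3": 512}   # access rates in Kbps
bottleneck = 400                             # Kbps
total = sum(access.values())
for src, rate in access.items():
    print(src, round(rate / total * bottleneck, 2))
# -> S1 57.14, S2 114.29, S3 228.57, matching the table
```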

Retaining properties of Vegas

While trying to overcome the problems of Vegas, Vegas-A should not lose the good properties of Vegas.

One Vegas/Vegas-A connection was simulated:
S1-R1 = 1 Mbps, 45 ms RTT
R1-R2 = 250 Kbps, 45 ms RTT
R2-D1 = 1 Mbps, 10 ms RTT

Retaining properties of Vegas (Cont.)

| Source   | Avg. queue (5MB) | Rtx. pkts. (5MB) | Avg. queue (10MB) | Rtx. pkts. (10MB) |
| New Reno | 11.29            | 59               | 23.0              | 62                |
| Vegas    | 0.82             | 0                | 1.63              | 0                 |
| Vegas-A  | 4.85             | 0                | 10.84             | 0                 |

Comparison of New Reno, Vegas and Vegas-A connections over a 100 ms RTT link

Retaining properties of Vegas(Cont.)

The effect of changing the buffer size on the performance of New Reno, Vegas and Vegas-A was studied next.

RTT was set to 40 ms and the bottleneck link bandwidth was set to 500 Kbps.

Retaining properties of Vegas(Cont.)

| Buffer size (packets) | 10  | 15 | 20 | 25 | 30 |
| New Reno              | 106 | 74 | 61 | 56 | 55 |
| Vegas                 | 0   | 0  | 0  | 0  | 0  |
| Vegas-A               | 2   | 1  | 1  | 0  | 0  |

Comparison of New Reno, Vegas and Vegas-A connections with different router buffer queue sizes

Vegas-A on satellite links

GEO satellite links: uplink and downlink were 1.5 Mbps each. Terminals at New York and San Francisco. Different PERs were simulated on the link.

Vegas-A on GEO Satellite

| PER   | New Reno Thrpt. | New Reno Retx. | Vegas Thrpt. | Vegas Retx. | Vegas-A Thrpt. | Vegas-A Retx. |
| 0     | 1.15M           | 163            | 1.37M        | 0           | 1.37M          | 0             |
| .0005 | 659.5K          | 189            | 1.04M        | 53          | 1.04M          | 52            |
| .005  | 257.7K          | 112            | 398.1K       | 149         | 398.2K         | 149           |
| .05   | 63.3K           | 239            | 100.7K       | 371         | 100.1K         | 371           |

Performance on a GEO link

Vegas-A on GEO satellite

At 0.0005 PER:

| Pairing               | Flow     | Throughput | Goodput | Lost |
| New Reno vs New Reno  | New Reno | 526491     | 523993  | 187  |
|                       | New Reno | 660461     | 658807  | 122  |
| Vegas vs New Reno     | Vegas    | 552440     | 552146  | 22   |
|                       | New Reno | 734386     | 732285  | 155  |
| Vegas-A vs New Reno   | Vegas-A  | 592854     | 592520  | 25   |
|                       | New Reno | 724678     | 722997  | 124  |

Vegas-A on LEO

Simulated using NS 2. 780 km altitude, orbital period = 6206.9 s, inter-satellite separation = 32.72 degrees. Terminals at Berkeley and Boston.

Vegas-A on LEO links

At 0.0 PER:

|          | Throughput | Lost packets |
| Vegas    | 1348086    | 0            |
| Vegas-A  | 1459432    | 52           |
| New Reno | 1455693    | 189          |

[Figure: RTT changes over LEO satellite link]

Summary

Vegas-A was proposed to mitigate problems associated with Vegas.

It was shown that Vegas-A performs better than Vegas when competing with New Reno.

Vegas-A is able to overcome re-routing limitation of Vegas.

Summary

Vegas-A does not suffer from Vegas' unfairness towards old connections and high-bandwidth connections.

Vegas-A performs better than Vegas over LEO and GEO satellite links.

At the same time, Vegas-A retains all good properties of Vegas.

Section 3

Studied the worst-case limitation of TCP SACK and proposed a change in the packet format to overcome the problem.

Paper: “Worst-case Performance Limitation of TCP SACK and a Feasible Solution”, Proceedings of the 8th IEEE International Conference on Communications Systems (ICCS 2002), Singapore, 25-28 November 2002.

Limitation of SACK

TCP Options field can have a maximum length of 40 bytes.

This limits the number of SACK blocks whose information the receiver can send, to 4.

Under certain error scenarios this limitation of TCP SACK leads to retransmission of successfully received packets.
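The arithmetic behind the limit of 4, as a small sketch (the 40-byte figure is from the slide; the per-block cost follows SACK's two 32-bit edges):

```python
# Sketch: the SACK option costs 2 bytes for kind and length,
# then 8 bytes (two 32-bit edges) per block.

OPTION_SPACE = 40     # max bytes for all TCP options
HEADER = 2            # kind=5 + length byte
PER_BLOCK = 8         # 4-byte left edge + 4-byte right edge

print((OPTION_SPACE - HEADER) // PER_BLOCK)   # -> 4 blocks at most
# With the 10-byte timestamp option also in use, only 3 blocks fit.
```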

Example

| Data packet | ACK packet                            | Sender reaction |
| 1           | Normal ACK - 2                        | Send 12         |
| 2 (lost)    |                                       |                 |
| 3           | DUPACK - 2, 3-3                       | No action       |
| 4           | DUPACK - 2, 3-4                       | No action       |
| 5           | DUPACK - 2, 3-5                       | Retrx. pkt. 2   |
| 6           | DUPACK - 2, 3-6 (lost)                |                 |
| 7 (lost)    |                                       |                 |
| 8           | DUPACK - 2, 8-8, 3-6 (lost)           |                 |
| 9 (lost)    |                                       |                 |
| 10          | DUPACK - 2, 10-10, 8-8, 3-6 (lost)    |                 |
| 11 (lost)   |                                       |                 |
| 12          | DUPACK - 2, 12-12, 10-10, 8-8         | Retrx. pkt. 6   |

(Packets 3-6 were received, but their SACK block no longer fits in the final ACK, so the sender needlessly retransmits packet 6.)

Present SACK option format:

| Kind=5 | Length |
| Left Edge of 1st Block |
| Right Edge of 1st Block |
| ... |
| Left Edge of nth Block |
| Right Edge of nth Block |

The proposal

Send the 32-bit sequence number for only the right edge of the 1st block (call it A).

Represent each other edge as an offset from edge A. We denote them O12, O21, O22, …, On1, On2, where O12 is the offset of the left edge of the first block from A, and O21 and O22 are respectively the right and left edges of the second block, and so on.

Find the largest of these offsets (denote it Omax). Let X be ⌈log2(Omax)⌉, where ⌈x⌉ is the smallest integer not less than x.

Thus, we can represent all the offsets using X bits each.

This number X needs to be sent to the data sender within the SACK option fields.

Since sequence numbers range from 0 to 2^32 − 1, the maximum value X can take is 32.

Only 5 bits are needed to send the value of X; to keep it simple, we allocate 1 byte for this purpose. This is the extra byte that the new format has after the 'Length' field, labeled 'X'.

The proposal (Cont.)

The first field after 'X' will be the right edge of the 1st block - a 32-bit sequence number.

The next field (O12) is the offset of the left edge of the 1st block with respect to the right edge. We represent this number using X bits instead of the usual 32 bits.

All the offsets are computed with respect to the right edge of the 1st block, as this is the only absolute 32-bit sequence number that will be sent to the data sender.
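A hypothetical sketch of this encoding (function and variable names are ours, not the paper's):

```python
# Sketch: send the absolute right edge of block 1, then every other
# edge as an X-bit offset from it.
import math

def encode_sack(blocks):
    """blocks: list of (left, right) sequence numbers, block 1 first."""
    anchor = blocks[0][1]                    # right edge of 1st block (A)
    offsets = [anchor - blocks[0][0]]        # O12
    for left, right in blocks[1:]:
        offsets += [anchor - right, anchor - left]      # On1, On2
    x = max(1, math.ceil(math.log2(max(offsets) + 1)))  # bits per offset
    return anchor, x, offsets

anchor, x, offs = encode_sack([(36, 40), (28, 32), (20, 24)])
print(anchor, x, offs)   # 40, 5 bits, [4, 8, 12, 16, 20]
# Option payload: 4 (anchor) + 1 ('X' byte) + ceil(5*5/8) = 9 bytes,
# versus 24 bytes for the same three blocks in the present format.
```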

The proposal (Cont.)

Proposed SACK option format:

| Kind=5 | Length | X |
| Right Edge of 1st Block |
| Offset for Left Edge of 1st Block (O12) |
| ... |
| Offset for Right Edge of nth Block (On1) |
| Offset for Left Edge of nth Block (On2) |

The proposal (Cont.)

The scenario explained earlier was simulated using NS and the ‘List’ error model.

Simulation 1

[Chart: sequence number (30-50) vs. time (1.0-1.8 s), showing packets, dropped packets, ACKs and dropped ACKs]

Simulation 1 (Cont.)

[Chart: sequence number (30-52) vs. time (0.9-1.7 s), showing packets, dropped packets, ACKs and dropped ACKs]

The two-state Markov error model of NS was used to simulate the second scenario.

The values of the Markov matrix used are:

Simulation 2

| Variable | Source to destination | Destination to source |
| t1       | 25.0                  | 25.0                  |
| t2       | 5.0                   | 15.0                  |
| p        | 0.05                  | 0.45                  |
| q        | 0.55                  | 0.45                  |
|          | 0.05                  | 0.05                  |
|          | 0.25                  | 0.75                  |
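For illustration, a sketch of a two-state (Gilbert-Elliott style) error model like the one NS provides; how t1, t2, p and q map onto the transition and loss probabilities below is our assumption, not taken from the slide:

```python
# Sketch: channel flips between a "good" and a "bad" state,
# each with its own packet loss rate.
import random

def channel(n, p_good_to_bad, p_bad_to_good, loss_good, loss_bad):
    """Return the number of packets dropped over n transmissions."""
    state, dropped = "good", 0
    for _ in range(n):
        loss = loss_good if state == "good" else loss_bad
        if random.random() < loss:
            dropped += 1
        flip = p_good_to_bad if state == "good" else p_bad_to_good
        if random.random() < flip:
            state = "bad" if state == "good" else "good"
    return dropped

print(channel(10_000, 0.05, 0.55, 0.0, 0.25))
```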

The table below shows the throughput of SACK connections when using the present and the proposed implementations.

Simulation 2 (Cont.)

|                 | 40s    | 60s   | 120s  | 300s  | 600s  |
| Present (Kbps)  | 162.64 | 141.5 | 72.4  | 93.44 | 56.64 |
| Proposed (Kbps) | 169.84 | 156.0 | 124.2 | 94.48 | 72.72 |

Summary

The current SACK implementation has the limitation of being able to send a maximum of only 3 or 4 SACK blocks with each ACK.

In this paper we proposed an alternate representation of the SACK blocks in the TCP option field to overcome this limitation.

Using examples and simulations, we showed that the modified SACK implementation produces better TCP performance in terms of the throughput obtained.

Conclusions

Analyzed performance of TCP New Reno and SACK over satellite links.

Studied and suggested mechanisms to overcome the limitations of TCP Vegas.

Analyzed the performance of Vegas-A and showed that it works better than Vegas over wired and satellite links.

Conclusion (Cont.)

Studied SACK's worst-case limitation and proposed a mechanism to overcome it.

Analyzed the new mechanism and showed that it performs better than standard SACK.

Thank You

Questions?