Date posted: 31-Dec-2015
Page 1: Transport Layer 1 Flow and Congestion Control Ram Dantu (compiled from various text books)

Transport Layer 1

Flow and Congestion Control

Ram Dantu (compiled from various text books)

Page 2:

TCP Flow Control

receive side of TCP connection has a receive buffer

flow control: sender won't overflow receiver's buffer by transmitting too much, too fast

speed-matching service: matching the send rate to the receiving app's drain rate; the app process may be slow at reading from the buffer

Page 3:

TCP Flow control: how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

Receiver advertises spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow
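The spare-room formula above can be written as a small sketch (variable names follow the slide; this is an illustration, not kernel code):

```python
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room in the receive buffer, per the slide's formula:
    RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# A receiver with a 64 KB buffer that has received up to byte 5000
# while its application has only read up to byte 1000:
print(rcv_window(65536, 5000, 1000))  # 61536
```

The sender would then keep its amount of unACKed data at or below this advertised value.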

Page 4:

Principles of Congestion Control

Congestion: informally, "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)

a top-10 problem!

Page 5:

Congestion: A Close-up View

knee: point after which throughput increases very slowly and delay increases fast

cliff: point after which throughput starts to decrease very fast to zero (congestion collapse) and delay approaches infinity

Note (in an M/M/1 queue) delay = 1/(1 – utilization)

[Figure: throughput vs. load and delay vs. load, marking the knee, the cliff, packet loss, and congestion collapse]
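The M/M/1 delay note above can be tabulated to show how delay blows up near full utilization (a sketch; utilization is offered load divided by capacity):

```python
def normalized_delay(utilization):
    # delay = 1 / (1 - utilization), per the M/M/1 note on the slide
    if utilization >= 1.0:
        return float("inf")  # queue grows without bound
    return 1.0 / (1.0 - utilization)

# Delay stays modest until utilization nears 1, then explodes:
for u in (0.5, 0.9, 0.99, 1.0):
    print(u, normalized_delay(u))
```

This is the "delay approaches infinity" behaviour at the cliff.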

Page 6:

Congestion Control vs. Congestion Avoidance

Congestion control goal: stay left of cliff

Congestion avoidance goal: stay left of knee

Right of cliff: congestion collapse

[Figure: throughput vs. load, marking the knee, the cliff, and congestion collapse]

Page 7:

Congestion Collapse: How Bad is It?

Definition: increase in network load results in a decrease of useful work done

Many possible causes:

Spurious retransmissions of packets still in flight

Undelivered packets
• Packets consume resources and are dropped elsewhere in the network

Fragments
• Mismatch of transmission and retransmission units

Control traffic
• Large percentage of traffic is for control

Stale or unwanted packets
• Packets that are delayed on long queues

Page 8:

Solution Directions….

Problem: demand outstrips available capacity

Demand: λ1 … λn from the users; Capacity: μ

If information about λi and μ is known in a central location where control of λi or μ can be effected with zero time delays, the congestion problem is solved!

Capacity (μ) cannot be provisioned very fast => demand must be managed

Perfect callback: Admit packets into the network from the user only when the network has capacity (bandwidth and buffers) to get the packet across.

Page 9:

Causes/costs of congestion: scenario 3

four senders; multihop paths; timeout/retransmit

Q: what happens as λin and λ′in increase?

finite shared output link buffers

Host A: λin = original data; λ′in = original data plus retransmitted data. Host B receives λout.

Page 10:

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

[Figure: multihop paths between Hosts A and B; λout collapses as upstream capacity is wasted on packets dropped downstream]

Page 11:

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate at which the sender should send

Page 12:

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly, rate = CongWin / RTT bytes/sec

CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
AIMD
slow start
conservative after timeout events
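The rough rate formula on this slide, rate = CongWin / RTT, in sketch form (illustrative names and values):

```python
def send_rate(cong_win_bytes, rtt_seconds):
    # rate = CongWin / RTT, in bytes per second, per the slide
    return cong_win_bytes / rtt_seconds

# e.g. a 10,000-byte congestion window and a 100 ms RTT:
print(send_rate(10_000, 0.1))  # 100000.0 bytes/sec
```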

Page 13:

TCP AIMD (Additive increase and multiplicative decrease)

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: sawtooth congestion window (8, 16, 24 Kbytes) over time for a long-lived TCP connection]
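The AIMD sawtooth can be reproduced with a toy loop (loss rounds are simply given as input here; window is in MSS units; an illustrative sketch, not a real TCP implementation):

```python
def aimd(rounds, loss_rounds, mss=1, initial=1):
    """Additive increase (CongWin += 1 MSS per RTT),
    multiplicative decrease (CongWin halved at a loss event)."""
    win, history = initial, []
    for rtt in range(rounds):
        if rtt in loss_rounds:
            win = max(mss, win // 2)   # multiplicative decrease
        else:
            win += mss                 # additive increase
        history.append(win)
    return history

# One loss at round 4 produces the sawtooth pattern:
print(aimd(8, {4}))  # [2, 3, 4, 5, 2, 3, 4, 5]
```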

Page 14:

TCP Slow Start

When connection begins, CongWin = 1 MSS

Example: MSS = 500 bytes & RTT = 200 msec, so initial rate = 20 kbps

available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until first loss event

Page 15:

TCP Slow Start (more)

When connection begins, increase rate exponentially until first loss event: double CongWin every RTT, done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment, then two, then four segments in successive RTTs to Host B]
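Doubling CongWin per RTT by adding 1 MSS per ACK can be sketched as follows (assumes every segment in a round is ACKed and no losses occur):

```python
def slow_start(rtts, mss=1):
    """Window size (in MSS units) at the start of each RTT round."""
    win = mss
    sizes = [win]
    for _ in range(rtts):
        acks = win // mss        # one ACK per segment sent this RTT
        win += acks * mss        # +1 MSS per ACK => window doubles
        sizes.append(win)
    return sizes

print(slow_start(4))  # [1, 2, 4, 8, 16]: exponential ramp-up
```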

Page 16:

Refinement

After 3 dup ACKs: CongWin is cut in half; window then grows linearly

But after a timeout event: CongWin is instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly

Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"

Page 17:

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation: variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round, showing the Threshold and the TCP Tahoe and TCP Reno trajectories]

Page 18:

Summary: TCP Congestion Control

When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.

When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.

When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
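The four summary rules above can be combined into a toy model of Reno behaviour (windows in MSS units; events supplied externally; a sketch, not a real TCP stack):

```python
def tcp_reno_step(cong_win, threshold, event):
    """One update of (CongWin, Threshold) per the summary rules.
    event is 'ack_rtt' (one loss-free RTT), 'dup3', or 'timeout'."""
    if event == "dup3":            # triple duplicate ACK
        threshold = cong_win // 2
        cong_win = threshold       # restart from Threshold
    elif event == "timeout":
        threshold = cong_win // 2
        cong_win = 1               # back to 1 MSS
    elif cong_win < threshold:     # slow start: exponential growth
        cong_win *= 2
    else:                          # congestion avoidance: linear growth
        cong_win += 1
    return cong_win, threshold

win, thresh = 1, 8
for ev in ["ack_rtt"] * 4 + ["dup3", "ack_rtt"]:
    win, thresh = tcp_reno_step(win, thresh, ev)
print(win, thresh)  # 5 4
```

Replacing the "dup3" branch with the "timeout" behaviour (CongWin back to 1 MSS) gives the Tahoe trajectory from the earlier figure.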

Page 19:

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]

Page 20:

Why is TCP fair?

Two competing sessions: additive increase gives slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, each axis bounded by R; cycles of additive increase and window-halving on loss converge toward the equal bandwidth share line]
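The convergence argument can be illustrated with a toy two-flow simulation (an idealized sketch: both flows see a synchronized loss whenever their total demand exceeds the bottleneck capacity):

```python
def aimd_two_flows(x1, x2, capacity, steps):
    """Both flows add 1 per round; when their sum exceeds capacity,
    both halve. The halving shrinks the gap between them each cycle."""
    for _ in range(steps):
        if x1 + x2 > capacity:   # shared loss at the bottleneck
            x1, x2 = x1 / 2, x2 / 2
        else:                    # additive increase for both
            x1, x2 = x1 + 1, x2 + 1
    return x1, x2

# Start very unequal; after many cycles the shares are nearly equal:
x1, x2 = aimd_two_flows(1.0, 30.0, 40.0, 200)
print(round(x1, 1), round(x2, 1))
```

Additive increase moves the pair along a slope-1 line, while halving moves it toward the origin along a ray, so repeated cycles drift toward the equal-share line.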

Page 21:

Fairness (more)

Fairness and UDP

Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control

Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss

Research area: TCP friendly

Fairness and parallel TCP connections

nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this

Example: link of rate R supporting 9 connections
new app asks for 1 TCP, gets rate R/10
new app asks for 11 TCPs, gets R/2 !
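The arithmetic in the example can be checked directly (assuming the link splits its rate R equally per connection):

```python
def new_app_share(existing_conns, new_conns):
    # fraction of R the new app gets if bandwidth is split per connection
    total = existing_conns + new_conns
    return new_conns / total

print(new_app_share(9, 1))   # 0.1  -> R/10
print(new_app_share(9, 11))  # 0.55 -> roughly R/2
```

Per-connection fairness thus rewards whoever opens the most connections, which is why R/K fairness per connection is not the same as fairness per application.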
