Computer Networks
Transport layer (Part 1)
Chapter 3: Transport Layer
• provides logical communication between application processes running on different hosts
• transport protocols run in end systems
• transport vs. network layer services:
– network layer: data transfer between end systems
– transport layer: data transfer between processes; relies on, and enhances, network layer services
[Figure: protocol stacks of two end hosts and the routers between them; only the end systems implement the application and transport layers, giving logical end-end transport over the network, data link, and physical layers]
Transport layer outline
• Transport layer functions
• Specific Internet transport layers
Transport Layer Functions
• Demux to upper layer
• Quality of service
• Security
• Delivery semantics
• Flow control
• Congestion control
• Reliable data transfer
TL: Demux to upper layer (application)
• Recall: segment – unit of data exchanged between transport layer entities (aka TPDU: transport protocol data unit)
• Demultiplexing: delivering received segments to the correct application-layer process
• A segment consists of a segment header plus application-layer data
[Figure: sender and receiver protocol stacks; the receiver's transport layer delivers each segment's data to the right process (P1, P2, P3, P4)]
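The demultiplexing step above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`Segment`, `TransportLayer`), not any real socket API: the transport layer keeps a map from port numbers to bound processes and uses the segment header's destination port to pick the right receive queue.

```python
# Minimal sketch of transport-layer demultiplexing (hypothetical names):
# each received segment carries a destination port in its header, and the
# transport layer hands the payload to whichever process bound that port.

class Segment:
    def __init__(self, dst_port, payload):
        self.dst_port = dst_port      # header field used for demux
        self.payload = payload        # application-layer data

class TransportLayer:
    def __init__(self):
        self.bindings = {}            # port -> per-process receive queue

    def bind(self, port):
        queue = []
        self.bindings[port] = queue
        return queue                  # the "socket" the process reads from

    def demux(self, segment):
        queue = self.bindings.get(segment.dst_port)
        if queue is None:
            return False              # no process bound: segment is dropped
        queue.append(segment.payload)
        return True

tl = TransportLayer()
p1_sock = tl.bind(5000)               # process P1 binds port 5000
tl.demux(Segment(5000, b"hello"))
print(p1_sock)                        # [b'hello']
```

Real transport protocols demux on more than the destination port (e.g., the full 4-tuple for TCP), but the lookup idea is the same.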
TL: Quality of service
• Provide predictability and guarantees in the transport layer
– Operating system issues
• Protocol handler scheduling
• Buffer resource allocation
• Process/application scheduling
• Support for signaling (setup, management, teardown)
– L4 (transport) switches, L5 (application) switches, and NAT devices
• Issues in supporting QoS at the end systems and end clusters
TL: Security
• Provide at the transport level:
– Secrecy
  • No eavesdropping
– Integrity
  • No man-in-the-middle attacks
– Authenticity
  • Ensure identity of source
• What is the difference between transport layer security and network layer security?
• Does the end-to-end principle apply?
TL: Delivery semantics
• Reliable vs. unreliable
• Unicast vs. multicast
• Ordered vs. unordered
• Any others?
TL: Flow control
• Do not allow sender to overrun receiver’s buffer resources
• Similar to data-link layer flow control, but done on an end-to-end basis
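The idea can be sketched with hypothetical names: the receiver advertises how much free buffer space it has, and the sender never keeps more unacknowledged data in flight than that advertised window. This is an illustration of the principle, not any particular protocol's window mechanics.

```python
# Sketch of end-to-end flow control (illustrative, not a real protocol):
# the receiver advertises remaining buffer space, and the sender limits
# its unacknowledged in-flight data to that window.

class Receiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0             # bytes received but not yet read by app

    def advertised_window(self):
        return self.buffer_size - self.buffered

    def receive(self, nbytes):
        assert nbytes <= self.advertised_window(), "sender overran buffer"
        self.buffered += nbytes

rcv = Receiver(buffer_size=4096)

def sendable(in_flight):
    # sender side: bytes it may still send without overrunning the receiver
    return max(0, rcv.advertised_window() - in_flight)

rcv.receive(3000)                     # receiver buffers 3000 bytes
print(sendable(in_flight=0))          # 1096 bytes of window remain
```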
TL: Congestion control
Congestion:
• informally: “too many sources sending too much data too fast for the network to handle”
• sources compete for resources inside network
• different from flow control!
• manifestations:
– lost packets (buffer overflow at routers)
– long delays (queueing in router buffers)
TL: Congestion
• Why is it a problem?
– Sources are unaware of the current state of resources
– Sources are unaware of each other
– In many situations this results in < 1.5 Mbps of “goodput” (congestion collapse)
[Figure: a 10 Mbps and a 100 Mbps source competing for a shared 1.5 Mbps bottleneck link]
TL: Congestion Collapse
• Increase in network load results in a decrease of useful work done
– Spurious retransmissions of packets still in flight
  • Classical congestion collapse
  • Solution: better timers and congestion control
– Undelivered packets
  • Packets consume resources and are dropped elsewhere in the network
  • Solution: congestion control for ALL traffic
– Fragments
  • Mismatch of transmission and retransmission units
  • Solutions:
    – Make network drop all fragments of a packet (early packet discard in ATM)
    – Do path MTU discovery
– Control traffic
  • Large percentage of traffic is for control: headers, routing messages, DNS, etc.
– Stale or unwanted packets
  • Packets that are delayed on long queues
  • “Push” data that is never used
TL: Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission
• large delays when congested
• maximum achievable throughput
TL: Causes/costs of congestion: scenario 2
• one router, finite buffers
• sender retransmission of lost packet
• network with bottleneck link capacity C
TL: Causes/costs of congestion: scenario 2
• “perfect” retransmission only when loss
• “imperfect” retransmission
– Retransmission of delayed (not lost) packets
– Duplicate packets travel over links and arrive at receiver
– Goodput decreases
– More work for given “goodput” = “costs” of congestion
[Figure: per-sender throughput λout vs. offered load λ′in; the curves are bounded by the shared link capacity C, and goodput falls further below the bound as retransmissions increase]
TL: Causes/costs of congestion: scenario 3
• four senders
• multihop paths
• timeout/retransmit
• worst-case goodput = 0
• Q: what happens as λin and λ′in increase?
TL: Causes/costs of congestion: scenario 3
Another “cost” of congestion:
• when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
• congestion collapse
• known as “the cliff”
TL: Preventing Congestion Collapse
• End-host vs. network controlled
– Trust hosts to do the right thing
  • Hosts adjust rate based on detected congestion (TCP)
– Don’t trust hosts and enforce within network
  • Network adjusts rates at congestion points
    – Scheduling
    – Queue management
  • Hard to prevent global collapse conditions locally
• Implicit vs. explicit rate control
– Infer congestion from packet loss or delay
  • Increase rate in absence of loss, decrease on loss (TCP Tahoe/Reno)
  • Increase rate based on delay behavior (TCP Vegas, packet pair)
– Explicit signaling from network
  • Congestion notification (DECbit, ECN)
  • Rate signaling (ATM ABR)
TL: Goals for congestion control mechanisms
• Use network resources efficiently
– 100% link utilization, 0% packet loss, low delay
– Maximize network power: throughput/delay
– Efficiency/goodput at the knee: X_knee = Σi xi(t)
• Preserve fair network resource allocation
– Jain’s fairness index: F = (Σi xi)² / (n · Σi xi²)
– Max-min fair sharing
  • Small flows get all of the bandwidth they require
  • Large flows evenly share the leftover
– Example
  • 100 Mbps link
  • S1 and S2 are 1 Mbps streams; S3 and S4 are infinite greedy streams
  • S1 and S2 each get 1 Mbps; S3 and S4 each get 49 Mbps
• Convergence and stability
• Distributed operation
• Simple router and end-host behavior
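The fairness index and the max-min example above can be checked numerically. The sketch below uses hypothetical helper names (`fairness`, `max_min_share`); the max-min routine is one simple way to compute such an allocation, not a standard library function.

```python
# Jain's fairness index and the slide's max-min sharing example.
# fairness(x) = (sum x_i)^2 / (n * sum x_i^2): equals 1.0 when all shares
# are equal, and approaches 1/n as one flow dominates.

def fairness(rates):
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

def max_min_share(capacity, demands):
    # Repeatedly offer every unsatisfied flow an equal share of what is
    # left; flows that need less than the share keep only what they demand.
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    while active and capacity > 1e-9:
        share = capacity / len(active)
        satisfied = [i for i in active if demands[i] - alloc[i] <= share]
        if not satisfied:
            for i in active:          # all remaining flows are greedy:
                alloc[i] += share     # split the leftover evenly
            capacity = 0
        else:
            for i in satisfied:
                capacity -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active = [i for i in active if i not in satisfied]
    return alloc

# Slide example: 100 Mbps link, S1/S2 want 1 Mbps, S3/S4 are greedy.
print(max_min_share(100, [1, 1, float("inf"), float("inf")]))  # [1, 1, 49.0, 49.0]
print(round(fairness([1, 1, 49, 49]), 3))
```

Note that the max-min allocation is efficient (it uses the whole link) but its Jain index is well below 1, since the shares are deliberately unequal.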
TL: Congestion Control vs. Avoidance
• Avoidance keeps the system performing at the knee/cliff
• Control kicks in once the system has reached a congested state
[Figure: throughput vs. load and delay vs. load curves; the “knee” is where delay starts to rise sharply, the “cliff” where throughput collapses]
TL: Basic Control Model
• Of all the ways to do congestion control, the Internet chooses…
– Mainly end-host, window-based congestion control
  • Only place to really prevent collapse is at the end host
  • Reduce sender window when congestion is perceived
  • Increase sender window otherwise (probe for bandwidth)
– Congestion signaling and detection
  • Mark/drop packets when queues fill or overflow
  • Will cover this separately in a later lecture
• Given this, how does one design a windowing algorithm which best meets the goals of congestion control?
TL: Linear Control
• Many different possibilities for reaction to congestion and probing
– Examine simple linear controls: W(t+1) = a + b·W(t)
– Different constants ai/bi for increase and ad/bd for decrease
• Supports various reactions to signals
– Increase/decrease additively (b = 1, a ≠ 0)
– Increase/decrease multiplicatively (a = 0, b ≠ 1)
– Which of the four combinations is optimal?
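A small simulation of the linear control W(t+1) = a + b·W(t) hints at why the AIMD combination is attractive: with additive increase and multiplicative decrease, two competing windows converge toward an equal share of the bottleneck. The capacity and constants below are illustrative, not from the source.

```python
# Sketch of the linear control W(t+1) = a + b*W(t) applied as AIMD:
# additive increase (a=1, b=1) while the network is uncongested,
# multiplicative decrease (a=0, b=0.5) when total load exceeds capacity.

CAPACITY = 100.0

def aimd_step(w1, w2):
    if w1 + w2 > CAPACITY:            # congestion signal: both back off
        return w1 * 0.5, w2 * 0.5     # multiplicative decrease
    return w1 + 1, w2 + 1             # additive increase (probe for bw)

w1, w2 = 10.0, 70.0                   # start far from the fairness line
for _ in range(500):
    w1, w2 = aimd_step(w1, w2)

# The gap between the two windows halves on every congestion event and is
# unchanged by additive increase, so the allocation converges to fairness.
print(round(w1, 3), round(w2, 3))
```

This is exactly the phase-plot argument of the following slides: additive moves run parallel to the fairness line, multiplicative moves run toward the origin, and alternating them spirals in on the fair, efficient point.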
TL: Phase plots
• Simple way to visualize behavior of competing connections over time
[Figure: phase plot with User 1’s allocation x1 on the horizontal axis and User 2’s allocation x2 on the vertical axis, showing the efficiency line (x1 + x2 = capacity) and the fairness line (x1 = x2)]
TL: Phase plots
• What are desirable properties?
• What if flows are not equal?
[Figure: phase plot; the optimal point is the intersection of the efficiency and fairness lines, with overload above the efficiency line and underutilization below it]
TL: Additive Increase/Decrease
[Figure: phase plot; additive increase/decrease moves the allocation from T0 to T1 at 45°, parallel to the fairness line]
• Both x1 and x2 increase/decrease by the same amount over time
– Additive increase improves fairness and additive decrease reduces fairness
TL: Multiplicative Increase/Decrease
• Both x1 and x2 increase by the same factor over time
– Extension from the origin: constant fairness
[Figure: phase plot; multiplicative change moves the allocation from T0 to T1 along a ray through the origin]
TL: Convergence to Efficiency & Fairness
• From any point, want to converge quickly to intersection of fairness and efficiency lines
[Figure: phase plot; from any starting point xH, the allocation should converge to the intersection of the efficiency and fairness lines]
TL: What is the Right Choice?
• Constraints limit us to AIMD– Can have multiplicative term in increase
– AIMD moves towards optimal point
[Figure: phase plot; AIMD traces a path x0 → x1 → x2 that converges toward the optimal point]
TL: Reliable data transfer
• Error detection, correction
• Retransmission
• Duplicate detection
• Connection integrity
TL: Principles of Reliable data transfer
• important in app., transport, link layers
• characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)
TL: Reliable data transfer: getting started
• rdt_send(): called from above (e.g., by the app.); passes data to deliver to the receiver’s upper layer
• udt_send(): called by rdt to transfer a packet over the unreliable channel to the receiver
• rdt_rcv(): called when a packet arrives on the receive side of the channel
• deliver_data(): called by rdt to deliver data to the upper layer
[Figure: send side of rdt sits above the unreliable channel at the sender, receive side at the receiver]
TL: Reliable data transfer: getting started
We’ll:
• incrementally develop sender, receiver sides of reliable data transfer protocol (rdt)
• consider only unidirectional data transfer– but control info will flow on both directions!
• use finite state machines (FSM) to specify sender, receiver
[FSM notation: an arc from state1 to state2 is labeled “event causing state transition / actions taken on state transition”; when in a state, the next state is uniquely determined by the next event]
TL: Rdt1.0: reliable transfer over a reliable channel
• underlying channel perfectly reliable
– no bit errors
– no loss of packets
• separate FSMs for sender, receiver:
– sender sends data into underlying channel
– receiver reads data from underlying channel
TL: Rdt2.0: channel with bit errors
• underlying channel may flip bits in packet
• the question: how to recover from errors?
– acknowledgements (ACKs): receiver explicitly tells sender that pkt received OK
– negative acknowledgements (NAKs): receiver explicitly tells sender that pkt had errors
– sender retransmits pkt on receipt of NAK
• new mechanisms in rdt2.0 (beyond rdt1.0):
– error detection
– receiver feedback: control msgs (ACK, NAK) rcvr → sender
TL: rdt2.0: FSM specification
sender FSM receiver FSM
TL: rdt2.0: in action (no errors)
sender FSM receiver FSM
TL: rdt2.0: in action (error scenario)
sender FSM receiver FSM
TL: rdt2.0 has a fatal flaw!
What happens if ACK/NAK corrupted?
• sender doesn’t know what happened at receiver!
• can’t just retransmit: possible duplicate
What to do?
• sender ACKs/NAKs the receiver’s ACK/NAK? But what if that ACK/NAK is lost?
• retransmit, but this might cause retransmission of a correctly received pkt!
Handling duplicates:
• sender adds sequence number to each pkt
• sender retransmits current pkt if ACK/NAK garbled
• receiver discards (doesn’t deliver up) duplicate pkt
Stop and wait: sender sends one packet, then waits for receiver response
TL: rdt2.1: sender, handles garbled ACK/NAKs
TL: rdt2.1: receiver, handles garbled ACK/NAKs
TL: rdt2.1: discussion
Sender:
• seq # added to pkt
• two seq. #’s (0, 1) will suffice. Why?
• must check if received ACK/NAK corrupted
• twice as many states
– state must “remember” whether “current” pkt has 0 or 1 seq. #
Receiver:
• must check if received packet is duplicate
– state indicates whether 0 or 1 is expected pkt seq #
• note: receiver can not know if its last ACK/NAK was received OK at sender
TL: rdt2.2: a NAK-free protocol
• same functionality as rdt2.1, using ACKs only
• instead of NAK, receiver sends ACK for last pkt received OK
– receiver must explicitly include seq # of pkt being ACKed
• duplicate ACK at sender results in same action as NAK: retransmit current pkt
[Figure: sender FSM fragment]
TL: rdt3.0: channels with errors and loss
New assumption: underlying channel can also lose packets (data or ACKs)
– checksum, seq. #, ACKs, retransmissions will be of help, but not enough
Q: how to deal with loss?
– sender waits until certain data or ACK lost, then retransmits
– yuck: drawbacks?
Approach: sender waits a “reasonable” amount of time for ACK
• retransmits if no ACK received in this time
• if pkt (or ACK) just delayed (not lost):
– retransmission will be duplicate, but use of seq. #’s already handles this
– receiver must specify seq # of pkt being ACKed
• requires countdown timer
TL: rdt3.0 sender
TL: rdt3.0 in action
TL: rdt3.0 in action
TL: Performance of rdt3.0
• rdt3.0 works, but performance stinks
• example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:
T_transmit = 8000 b/pkt ÷ 10⁹ b/s = 8 µs
U = fraction of time sender is busy sending = 8 µs / 30.008 ms ≈ 0.00027
– 1 KB pkt every ~30 ms → ~33 kB/s throughput over a 1 Gbps link
– the network protocol, not the link, limits use of physical resources!
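The arithmetic above generalizes to a one-line stop-and-wait utilization formula, U = (L/R) / (RTT + L/R); a quick sketch with the slide's numbers:

```python
# Stop-and-wait utilization from the slide's numbers: 1 Gbps link,
# 15 ms one-way propagation delay, 1 KB (8000-bit) packets.

def stop_and_wait_utilization(pkt_bits, rate_bps, one_way_delay_s):
    t_transmit = pkt_bits / rate_bps          # time to push packet onto link
    rtt = 2 * one_way_delay_s                 # wait for ACK before next pkt
    return t_transmit / (rtt + t_transmit)

u = stop_and_wait_utilization(8000, 1e9, 15e-3)
print(f"utilization = {u:.5f}")                        # about 0.00027
print(f"throughput  = {u * 1e9 / 8 / 1000:.1f} kB/s")  # about 33 kB/s
```

The formula makes the pipelining argument of the next slide concrete: utilization rises only if more packets can be in flight per RTT.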
TL: Pipelined protocols
Pipelining: sender allows multiple “in-flight”, yet-to-be-acknowledged pkts
– range of sequence numbers must be increased
– buffering at sender and/or receiver
• Two generic forms of pipelined protocols: Go-Back-N, selective repeat
TL: Go-Back-N
Sender:
• k-bit seq # in pkt header
• “window” of up to N consecutive unACKed pkts allowed
• ACK(n): ACKs all pkts up to, and including, seq # n (“cumulative ACK”)
– may receive duplicate ACKs (see receiver)
• timer for the oldest in-flight pkt
• timeout(n): retransmit pkt n and all higher seq # pkts in window
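The sender rules above can be sketched as a toy class. The names are hypothetical and there are no real timers or channels; the point is only the window bookkeeping: sends are refused when the window is full, a cumulative ACK slides the base, and a timeout resends everything still in the window.

```python
# Toy Go-Back-N sender (sketch): window of N unACKed packets, cumulative
# ACKs, and a timeout that retransmits everything still in the window.

class GBNSender:
    def __init__(self, N):
        self.N = N
        self.base = 0                 # oldest unACKed seq #
        self.nextseq = 0              # next seq # to assign
        self.log = []                 # (event, seq) log, for illustration

    def send(self, n_packets):
        for _ in range(n_packets):
            if self.nextseq < self.base + self.N:    # room in window
                self.log.append(("send", self.nextseq))
                self.nextseq += 1
            else:
                self.log.append(("refused", self.nextseq))

    def ack(self, n):
        # cumulative ACK: everything up to and including n is acknowledged
        if n >= self.base:
            self.base = n + 1

    def timeout(self):
        for seq in range(self.base, self.nextseq):   # "go back N"
            self.log.append(("resend", seq))

s = GBNSender(N=4)
s.send(5)            # fifth packet refused: window 0..3 is full
s.ack(1)             # cumulatively ACKs pkts 0 and 1, slides window to 2
s.send(2)            # now room for seq 4 and 5
s.timeout()          # resends 2, 3, 4, 5
```

A real implementation would also wrap sequence numbers modulo 2^k; that is omitted here for clarity.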
TL: GBN: sender extended FSM
TL: GBN: receiver extended FSM
receiver simple:
• ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
– may generate duplicate ACKs
– need only remember expectedseqnum
• out-of-order pkt:
– discard (don’t buffer) → no receiver buffering!
– ACK pkt with highest in-order seq #
TL: GBN in action
TL: Selective Repeat
• receiver individually acknowledges all correctly received pkts
– buffers pkts, as needed, for eventual in-order delivery to upper layer
• sender only resends pkts for which ACK not received
– sender keeps a timer for each unACKed pkt
• sender window
– N consecutive seq #’s
– again limits seq #s of sent, unACKed pkts
TL: Selective repeat: sender, receiver windows
TL: Selective repeat
sender:
• data from above: if next available seq # is in window, send pkt
• timeout(n): resend pkt n, restart its timer
• ACK(n) in [sendbase, sendbase+N-1]:
– mark pkt n as received
– if n is smallest unACKed pkt, advance window base to next unACKed seq #
receiver:
• pkt n in [rcvbase, rcvbase+N-1]:
– send ACK(n)
– out-of-order: buffer
– in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
• pkt n in [rcvbase-N, rcvbase-1]:
– ACK(n)
• otherwise: ignore
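The receiver rules above can be sketched as a toy class (hypothetical names; string payloads stand in for packets): every packet in the window is ACKed individually, out-of-order packets are buffered, in-order data is delivered as the window slides, and old packets below the window are re-ACKed so the sender can advance.

```python
# Toy selective-repeat receiver (sketch): ACK each correctly received pkt
# individually, buffer out-of-order pkts, deliver in order as the window
# slides, and re-ACK pkts below the window.

class SRReceiver:
    def __init__(self, N):
        self.N = N
        self.rcvbase = 0
        self.buffer = {}              # seq # -> payload, held out of order
        self.delivered = []           # data passed up, in order

    def receive(self, seq, payload):
        if self.rcvbase <= seq < self.rcvbase + self.N:
            self.buffer[seq] = payload
            # deliver any now-in-order pkts and advance the window
            while self.rcvbase in self.buffer:
                self.delivered.append(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1
            return ("ack", seq)
        if self.rcvbase - self.N <= seq < self.rcvbase:
            return ("ack", seq)       # re-ACK old pkts so sender advances
        return ("ignore", seq)

r = SRReceiver(N=3)
print(r.receive(1, "b"))   # ('ack', 1): buffered out of order, still ACKed
print(r.receive(0, "a"))   # ('ack', 0): delivers "a" and "b", base -> 2
print(r.delivered)         # ['a', 'b']
```

The re-ACK branch for [rcvbase-N, rcvbase-1] is exactly why the window/sequence-number relationship of the next slide matters: with too large a window, a re-ACKed old packet and a new packet can share a sequence number.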
TL: Selective repeat in action
TL: Selective repeat: dilemma
Example: • seq #’s: 0, 1, 2, 3
• window size=3
• receiver sees no difference in two scenarios!
• incorrectly passes duplicate data as new in (a)
Q: what relationship between seq # size and window size?