A Measurement Study of Internet Bottlenecks

N. Hu (CMU), L. Li (Bell Labs), Z. M. Mao (U. Michigan), P. Steenkiste (CMU), J. Wang (AT&T)
Infocom 2005

Presented by Mohammad Malli
PhD student seminar, Planete Project
Goals

Recently, many active probing tools have been developed for measuring and locating bandwidth bottlenecks, but:

Q1. How persistent are Internet bottlenecks?
  – Important for choosing the measurement frequency
Q2. Are bottlenecks shared by end users within the same prefix?
  – Useful for path bandwidth inference
Q3. What relationship exists between bottlenecks and packet loss / queuing delay?
  – Useful for congestion identification
Q4. What relationship exists between bottlenecks and router / link properties?
  – Important for traffic engineering
Related Work

• Persistence of Internet path properties
  – Zhang [IMW-01], Paxson [TR-2000], Labovitz [TON-1998, Infocom-1999]
    • loss, delay, packet ordering, ...
    • The persistence of the bottleneck location is not considered
• Sharing of congestion points
  – Katabi [TR-2001], Rubenstein [Sigmetrics-2000]
    • Flow-based studies, not end-to-end path-based
• Correlation among Internet path properties
  – Paxson [1996]
    • At the end-to-end level, not at the location level
• Correlation between router and link properties
  – Agarwal [PAM 2004]
Data collection
• Probing
  – Source: a CMU host
  – Destinations: 960 IP addresses
  – 10 consecutive probings for each destination (about 1.5 minutes)
• Repeated for 38 days (for the persistence study); see the schedule sketch after the figure
[Figure: the CMU source S probes the 960 Internet destinations, and the run is repeated on Day-1, Day-2, ..., Day-38.]
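As a rough illustration of this schedule, a minimal sketch follows; the `probe` callback and `measurement_campaign` name are hypothetical placeholders for a single Pathneck run, not the actual measurement scripts.

```python
# Minimal sketch of the measurement schedule described above.
# `probe` stands in for one Pathneck run; it is a hypothetical placeholder.

def measurement_campaign(probe, destinations, days=38, probings_per_dest=10):
    """Yield (day, destination, results-of-10-probings) for the whole campaign."""
    for day in range(1, days + 1):
        for dst in destinations:
            # 10 consecutive probings of the same destination (~1.5 minutes)
            yield day, dst, [probe(dst) for _ in range(probings_per_dest)]

# Example with a dummy probe that just echoes the destination:
for day, dst, results in measurement_campaign(lambda d: d, ["198.51.100.7"], days=1):
    print(day, dst, len(results))
```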
Pathneck
• An active probing tool that detects the Internet bottleneck location
  – For details, refer to "Locating Internet Bottlenecks: Algorithms, Measurements, and Implications" [SIGCOMM'04]
  – Source code: www.cs.cmu.edu/~hnn/pathneck
• Pathneck characteristics
  – Low overhead (on the order of tens to hundreds of KB)
  – Single-end control (sender only)
• Pathneck output used in this work
  – Bottleneck link location
  – Route
Recursive Packet Train (RPT) in Pathneck
[Figure: structure of the Recursive Packet Train. 60 load packets of 500 B each, with TTL 255, sit in the middle of the train; they are preceded and followed by 30 measurement packets of 60 B each, whose TTLs run 1, 2, ..., 30 at the head and 30, ..., 2, 1 at the tail. All packets are UDP packets.]

Load packets are used to measure available bandwidth; measurement packets are used to obtain location information.
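A minimal sketch of the train layout shown above, assuming the head/tail TTL ordering in the figure; the function name and return format are illustrative, not Pathneck's source code.

```python
# Minimal sketch of the Recursive Packet Train layout described above.
# Counts and sizes follow the slide; build_rpt() is illustrative only.

def build_rpt(num_hops=30, num_load=60, load_size=500, meas_size=60):
    """Return the train as a list of (ttl, size_in_bytes) tuples."""
    head = [(ttl, meas_size) for ttl in range(1, num_hops + 1)]   # TTLs 1..30
    load = [(255, load_size) for _ in range(num_load)]            # 60 load packets
    tail = [(ttl, meas_size) for ttl in range(num_hops, 0, -1)]   # TTLs 30..1
    return head + load + tail

train = build_rpt()
print(len(train), "packets,", sum(size for _, size in train), "bytes")  # 120 packets, 33600 bytes
```

The total of roughly 34 KB per train is consistent with the "low overhead" claim on the previous slide.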
Gap value

[Animated figure: the sender emits the packet train toward the destination. Each router along the path drops the two measurement packets whose TTL expires there and returns an ICMP message for each; the sender measures the gap value, i.e., the time between the two ICMP replies, which reflects the length of the packet train at that router.]

RPT probing is repeated 10 times for each pair of nodes.
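As a rough illustration (not Pathneck's actual detection algorithm, which also filters noise across the repeated probings), here is a minimal sketch of turning per-hop gap values into a candidate bottleneck location, assuming the two ICMP arrival times per hop are already available.

```python
# Minimal sketch: find the candidate bottleneck hop from per-hop gap values.
# icmp_times[i] holds the (first, second) ICMP arrival times (seconds) for hop i.
# The simple "largest gap increase" rule below is a simplification of Pathneck.

def gap_values(icmp_times):
    """Gap value per hop = time between the two ICMP replies from that hop."""
    return [second - first for first, second in icmp_times]

def candidate_bottleneck(icmp_times):
    gaps = gap_values(icmp_times)
    increases = [gaps[i] - gaps[i - 1] for i in range(1, len(gaps))]
    # increases[i-1] is the growth of the gap value from hop i-1 to hop i;
    # the hop where the train length grows most is the candidate bottleneck.
    return 1 + max(range(len(increases)), key=lambda j: increases[j])

# Example with made-up timestamps: the gap value jumps at list index 3.
times = [(0.00, 0.004), (0.10, 0.104), (0.20, 0.205), (0.30, 0.312), (0.40, 0.412)]
print("candidate bottleneck at hop index", candidate_bottleneck(times))  # 3
```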
Terminology
A persistent probing set is a probing set in which all n probings follow the same route.
Route Persistence
• Route change is very common and must be considered in the bottleneck persistence analysis
  – Consistent with the results of Zhang et al. [IMW-01] on route persistence
[Figure: route persistence at the AS level and at the location level, over 9 days.]
Bottleneck Persistence
• Persistence of a bottleneck R:

  Persist(R) = (# of persistent probing sets in which R is the bottleneck) / (# of persistent probing sets in which R appears)

• Bottleneck persistence of a path: max(Persist(R)) over all bottlenecks R on the path
• Two views:
  1. End-to-end view ― per (src, dst) pair
     – Includes the impact of route change
  2. Route-based view ― per route
     – Removes the impact of route change
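A minimal sketch of these definitions in the end-to-end view (the route-based view would first group the probing sets by route); the (route, bottleneck) data layout is illustrative, not the paper's trace format.

```python
from collections import Counter

# Minimal sketch of Persist(R) and path bottleneck persistence.
# Each persistent probing set is summarized as (route, bottleneck), where route
# is the sequence of routers and bottleneck is the router Pathneck reported.

def persistence(probing_sets):
    appears = Counter()        # sets in which router R appears on the route
    is_bottleneck = Counter()  # sets in which R is reported as the bottleneck
    for route, bottleneck in probing_sets:
        for router in route:
            appears[router] += 1
        is_bottleneck[bottleneck] += 1
    persist = {r: is_bottleneck[r] / appears[r] for r in is_bottleneck}
    # Bottleneck persistence of the path = max over all observed bottlenecks R.
    return persist, max(persist.values())

sets = [(("r1", "r2", "r3"), "r2"),
        (("r1", "r2", "r3"), "r2"),
        (("r1", "r2", "r4"), "r4")]
persist, path_persistence = persistence(sets)
print(persist)            # {'r2': 0.666..., 'r4': 1.0}
print(path_persistence)   # 1.0
```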
Bottleneck Persistence
1. Bottleneck persistence in the route-based view is higher than in the end-to-end view
2. AS-level bottleneck persistence is very similar to location-level persistence
3. 20% of bottlenecks have perfect persistence in the end-to-end view, and 30% in the route-based view
Results summary

• Only 20-30% of Internet bottlenecks have perfect persistence
  – Applications should be prepared for bottleneck location changes
• Bottleneck locations have a strong (60%) correlation with packet loss locations (within 2 hops)
  – Bottleneck and loss detection should be used together for congestion detection
• Fewer than 10% of the destinations in a prefix cluster share a bottleneck more than half of the time
  – End users cannot assume they share a common bottleneck
• Bottlenecks have no clear relationship with link capacity, router CPU load, or memory usage
• There is a clear correlation between bottlenecks and link loads
  – Network engineers should focus on traffic load to eliminate bottlenecks
Limitations

An interesting study, but ...

• How representative are the obtained statistics of the Internet as a whole?
  – The few probing sources are a CMU node, 8 PlanetLab nodes, and 13 RON nodes
  – The number of probed destinations is 960, far fewer than the number of Internet paths
• Pathneck limitations
  – Load packets are larger than what some firewalls permit
    • such firewalls only forward the 60-byte UDP packets
  – Pathneck cannot measure the packet train length on the last link because of ICMP rate limiting
    • in theory, the destination should send a 'destination port unreachable' message for each packet
Thank you for listening
Backup
Bottleneck vs. loss | delay
• Possible congestion indications
  – Large queuing delay
  – Packet loss
  – Bottleneck
• They do not always occur together
  – Packet scheduling algorithm → large queuing delay
  – Traffic burstiness or RED → packet loss
  – Small link capacity → bottleneck
• Does a bottleneck imply link loss or large link delay?
Trace
• Collected on the same set of 960 paths, but with independent measurements
  1. Detect the bottleneck location using Pathneck
  2. Detect the loss location using Tulip
     – Only the forward-path results are used
  3. Detect link queuing delay using Tulip
     – queuing delay = median RTT − min RTT (see the sketch below)
  • [Tulip was developed at the University of Washington, SOSP'03]
• The analysis is based on the 382 paths for which both the bottleneck location and packet loss were detected
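A minimal sketch of the queuing-delay estimate in step 3 (median RTT minus minimum RTT), with made-up RTT samples; the Tulip probing mechanics are not shown.

```python
import statistics

# Minimal sketch: estimate link queuing delay as median(RTT) - min(RTT),
# as in step 3 above. rtt_samples maps a hop to its RTT samples in ms;
# the sample values are made up for illustration.

def queuing_delay(rtt_samples_ms):
    return statistics.median(rtt_samples_ms) - min(rtt_samples_ms)

rtt_samples = {
    "hop3": [12.1, 12.4, 15.0, 12.2, 13.8],
    "hop7": [40.5, 61.0, 55.2, 42.3, 58.9],
}
for hop, samples in rtt_samples.items():
    print(hop, f"queuing delay = {queuing_delay(samples):.1f} ms")
```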
Bottleneck vs. Packet Loss
Bottleneck vs. Queueing Delay