Deriving IP Traffic Demands for an ISP Backbone Network

Page 1: Deriving IP Traffic Demands for an ISP Backbone Network

George Wai Wong, April 4th, 2002

Deriving IP Traffic Demands for an ISP Backbone Network

Prepared for EECE565 – Data Communications

Page 2: Deriving IP Traffic Demands for an ISP Backbone Network

Difficulties in configuring a large IP backbone network

Engineering a large IP backbone network without an accurate view of the traffic demand is difficult

Significant and sudden fluctuations in load can be caused by:

– Shifts in user behavior
– Changes in routing policies
– Failure of network elements

Page 3: Deriving IP Traffic Demands for an ISP Backbone Network

Difficulties in configuring a large IP backbone network (cont.)

IP network engineers do not have end-to-end control of the path from source to destination

The majority of traffic in an ISP network travels across multiple administrative domains

[Figure: a path crossing multiple administrative domains: ISP 1, ISP 2, ISP 3]

Page 4: Deriving IP Traffic Demands for an ISP Backbone Network

Point-to-multipoint IP traffic demand

A given destination network address is typically reachable from multiple edge routers

[Figure: a destination network reachable through multiple edge routers of the ISP backbone]

Page 5: Deriving IP Traffic Demands for an ISP Backbone Network

Point-to-multipoint IP traffic demand (cont.)

IP traffic demands are naturally modeled as point-to-multipoint volumes

In contrast, demands in connection-oriented networks, such as Frame Relay, are modeled as point-to-point volumes
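To make the demand model concrete, the following minimal Python sketch (illustrative only, not taken from the paper) represents a point-to-multipoint demand as a traffic volume keyed by the ingress link and the set of possible egress links; all link names are hypothetical.

from collections import defaultdict

# demand[(ingress_link, frozenset of egress links)] -> byte count
demands = defaultdict(int)

def add_flow(ingress_link, egress_links, byte_count):
    # Aggregate one measured flow into its point-to-multipoint demand.
    demands[(ingress_link, frozenset(egress_links))] += byte_count

# Example: a flow entering at access link "in-A" whose destination is
# reachable through either peering link "out-1" or "out-2".
add_flow("in-A", {"out-1", "out-2"}, 1500)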

Page 6: Deriving IP Traffic Demands for an ISP Backbone Network

Traffic flows in an ISP backbone

[Figure: ISP backbone with access links, peering links, and backbone links, showing inbound traffic, outbound traffic, transit flows, and internal flows]

Page 7: Deriving IP Traffic Demands for an ISP Backbone Network

Peering links in BC

Obtained from http://www.bc.net/oran_arch.htm

Page 8: Deriving IP Traffic Demands for an ISP Backbone Network

Tracing the route on Monday 12:32am March 18, 2002

ug15{georgew}104: traceroute www.hku.hk

traceroute to www.hku.hk (147.8.145.50), 30 hops max, 40 byte packets

1 ugrad-route (137.82.58.1) 6.171 ms 2.244 ms 2.240 ms

2 turing (137.82.52.142) 0.355 ms 0.361 ms 0.334 ms

3 ee-gw (137.82.52.254) 20.087 ms 33.088 ms 33.379 ms

4 142.103.204.153 (142.103.204.153) 3.297 ms 29.817 ms 33.154 ms

5 anguhub9.net.ubc.ca (142.103.204.50) 0.665 ms 0.653 ms 0.533 ms

6 c7507-a9.BC.net (207.23.240.6) 2.952 ms 2.669 ms 3.019 ms

7 ATM9-0-101.PEERB-VANCBC.IP.GROUPTELECOM.NET (216.18.31.201) 4.066 ms 3.058 ms 3.296 ms

8 GE3-0.WANB-VANCBC.IP.GROUPTELECOM.NET (66.59.190.13) 4.027 ms 3.707 ms 3.775 ms

9 GE4-0.PEERA-VANCBC.IP.GROUPTELECOM.NET (66.59.190.18) 3.903 ms 3.991 ms 3.618 ms

10 300.ATM3-0.GW3.VAN1.ALTER.NET (157.130.158.29) 3.897 ms 4.422 ms 4.005 ms

11 103.ATM2-0.XR1.VAN1.ALTER.NET (152.63.136.226) 3.215 ms 3.443 ms 4.358 ms

12 0.so-5-0-0.XL1.VAN1.ALTER.NET (152.63.138.65) 4.371 ms 3.457 ms 4.244 ms

13 0.so-7-0-0.TL1.VAN1.ALTER.NET (152.63.138.74) 4.657 ms 3.905 ms 4.754 ms

14 0.so-2-0-0.TL1.SAC1.ALTER.NET (152.63.8.1) 26.107 ms 27.551 ms 26.434 ms

15 0.so-7-0-0.XL1.PAO1.ALTER.NET (152.63.54.133) 29.844 ms 30.562 ms 30.478 ms

16 POS1-0.XR1.PAO1.ALTER.NET (152.63.54.74) 30.616 ms 32.291 ms 30.690 ms

Page 9: Deriving IP Traffic Demands for an ISP Backbone Network

Tracing the route on Monday 12:32am March 18, 2002 (cont.)

17 189.ATM6-0.GW10.PAO1.ALTER.NET (152.63.53.17) 29.963 ms 38.238 ms 31.828 ms

18 opentransit2-gw.customer.ALTER.NET (157.130.196.202) 37.560 ms 28.455 ms 28.250 ms

19 P0-1.TKYBB2.Tokyo.opentransit.net (193.251.241.254) 144.375 ms 145.050 ms 143.684 ms

20 P2-0.TKYBB1.Tokyo.opentransit.net (193.251.129.217) 145.109 ms 144.514 ms 144.840 ms

21 P0-0.HKGBB1.Hong-kong.opentransit.net (193.251.241.181) 193.580 ms 193.375 ms 193.330 ms

22 EquantHongKong.GW.opentransit.net (193.251.250.110) 195.168 ms 195.596 ms 194.291 ms

23 202.167.149.18 (202.167.149.18) 195.619 ms 195.109 ms 197.044 ms

24 192.245.196.9 (192.245.196.9) 197.480 ms 199.032 ms 196.964 ms

25 192.245.196.238 (192.245.196.238) 252.414 ms 234.571 ms 244.382 ms

26 147.8.239.1 (147.8.239.1) 242.594 ms 256.155 ms 241.645 ms

27 147.8.240.156 (147.8.240.156) 231.184 ms 147.8.240.155 (147.8.240.155) 219.269 ms *

28 147.8.240.169 (147.8.240.169) 271.172 ms 252.399 ms 290.202 ms

29 147.8.235.205 (147.8.235.205) 201.058 ms 199.361 ms 256.002 ms

30 www.hku.hk (147.8.145.50) 270.098 ms 287.309 ms 233.531 ms

Page 10: Deriving IP Traffic Demands for an ISP Backbone Network

Tracing the route on Monday 2:36am March 18, 2002

ug15{georgew}105: traceroute www.hku.hk

traceroute to www.hku.hk (147.8.145.50), 30 hops max, 40 byte packets

1 ugrad-route (137.82.58.1) 2.410 ms 2.245 ms 2.313 ms

2 turing (137.82.52.142) 0.346 ms 0.363 ms 0.339 ms

3 ee-gw (137.82.52.254) 12.882 ms 33.443 ms 33.013 ms

4 142.103.204.153 (142.103.204.153) 1.458 ms 26.628 ms 1.717 ms

5 anguhub9.net.ubc.ca (142.103.204.50) 0.560 ms 0.648 ms 0.490 ms

6 c7507-a9.BC.net (207.23.240.6) 2.336 ms 3.641 ms 2.635 ms

7 ATM9-0-101.PEERB-VANCBC.IP.GROUPTELECOM.NET (216.18.31.201) 3.583 ms 3.613 ms 4.118 ms

8 GE3-0.WANB-VANCBC.IP.GROUPTELECOM.NET (66.59.190.13) 3.148 ms 3.055 ms 3.762 ms

9 GE4-0.PEERA-VANCBC.IP.GROUPTELECOM.NET (66.59.190.18) 2.747 ms 3.625 ms 2.677 ms

10 300.ATM3-0.GW3.VAN1.ALTER.NET (157.130.158.29) 3.966 ms 4.187 ms 3.539 ms

11 103.ATM2-0.XR1.VAN1.ALTER.NET (152.63.136.226) 2.769 ms 3.281 ms 3.695 ms

12 0.so-5-0-0.XL1.VAN1.ALTER.NET (152.63.138.65) 3.436 ms 4.477 ms 3.452 ms

13 0.so-7-0-0.TL1.VAN1.ALTER.NET (152.63.138.74) 2.762 ms 3.804 ms 4.127 ms

14 0.so-2-0-0.TL1.SAC1.ALTER.NET (152.63.8.1) 27.003 ms 36.187 ms 25.512 ms

15 0.so-7-0-0.XL1.PAO1.ALTER.NET (152.63.54.133) 31.086 ms 30.262 ms 31.233 ms

16 POS1-0.XR1.PAO1.ALTER.NET (152.63.54.74) 30.319 ms 31.317 ms 29.795 ms

Page 11: Deriving IP Traffic Demands for an ISP Backbone Network

Tracing the route on Monday 2:36am March 18, 2002 (cont.)

17 189.ATM6-0.GW10.PAO1.ALTER.NET (152.63.53.17) 29.648 ms 30.832 ms 30.059 ms

18 opentransit2-gw.customer.ALTER.NET (157.130.196.202) 27.518 ms 27.803 ms 28.989 ms

19 P0-1.TKYBB2.Tokyo.opentransit.net (193.251.241.254) 143.800 ms 144.467 ms 144.476 ms

20 P2-0.TKYBB1.Tokyo.opentransit.net (193.251.129.217) 144.234 ms 143.530 ms 142.893 ms

21 P0-0.HKGBB1.Hong-kong.opentransit.net (193.251.241.181) 194.279 ms 193.948 ms 194.374 ms

22 EquantHongKong.GW.opentransit.net (193.251.250.110) 195.428 ms 194.704 ms 195.365 ms

23 202.167.149.18 (202.167.149.18) 194.290 ms 193.906 ms 195.206 ms

24 192.245.196.5 (192.245.196.5) 195.671 ms 196.055 ms 195.359 ms

25 192.245.196.238 (192.245.196.238) 202.629 ms 205.343 ms 202.853 ms

26 147.8.239.1 (147.8.239.1) 205.853 ms 203.016 ms 197.664 ms

27 147.8.240.155 (147.8.240.155) 200.551 ms 147.8.240.156 (147.8.240.156) 199.614 ms *

28 147.8.240.169 (147.8.240.169) 227.871 ms 204.711 ms 205.238 ms

29 147.8.235.205 (147.8.235.205) 218.286 ms 215.393 ms 218.853 ms

30 www.hku.hk (147.8.145.50) 201.211 ms 199.635 ms 198.119 ms

Conclusion: Internet paths and delays are highly unpredictable!

Page 12: Deriving IP Traffic Demands for an ISP Backbone Network

Measurement Methodology

To compute the traffic demands:

– Fine-grained (packet-level) measurements at ALL ingress links would be too expensive and impractical

– Instead, flow-level statistics should be collected at each ingress link; these can be gathered directly by the incident router using Netflow (Cisco's traffic measurement tool)

Page 13: Deriving IP Traffic Demands for an ISP Backbone Network

Netflow Data Record

A Netflow data record contains the following fields, grouped by the question they answer:

– From/To: Source IP Address, Destination IP Address
– Routing and Peering: Next Hop Address, Source AS Number, Dest. AS Number, Source Prefix Mask, Dest. Prefix Mask
– Port Utilization: Input Interface Port, Output Interface Port
– QoS: Type of Service, TCP Flags, Protocol
– Usage: Packet Count, Byte Count
– Time of Day: Start Timestamp, End Timestamp
– Application: Source TCP/UDP Port, Destination TCP/UDP Port

Together, these fields answer the who, what, where, when, and how much questions about IP traffic
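As a rough illustration (field names paraphrased from the slide, not Cisco's actual export format), the record could be modeled in Python as:

from dataclasses import dataclass

@dataclass
class NetflowRecord:
    # From/To
    src_ip: str
    dst_ip: str
    # Routing and peering
    next_hop: str
    src_as: int
    dst_as: int
    src_prefix_mask: int
    dst_prefix_mask: int
    # Port utilization
    input_interface: int
    output_interface: int
    # QoS
    type_of_service: int
    tcp_flags: int
    protocol: int
    # Usage
    packet_count: int
    byte_count: int
    # Time of day
    start_timestamp: float
    end_timestamp: float
    # Application
    src_port: int
    dst_port: int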

Page 14: Deriving IP Traffic Demands for an ISP Backbone Network

Set of egress links

The flow record collected at the ingress link contains sufficient information for computing the set of egress links:

– the IP destination address
– the routing table (the next-hop link(s) for a particular prefix)
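A minimal sketch of this lookup, assuming a hypothetical forwarding table that maps prefixes to sets of egress links (real routing data would come from the ISP's routers):

import ipaddress

# Hypothetical table: destination prefix -> set of candidate egress links.
routing_table = {
    ipaddress.ip_network("147.8.0.0/16"): {"peer-link-A", "peer-link-B"},
    ipaddress.ip_network("0.0.0.0/0"): {"default-peer"},
}

def egress_links(dst_ip):
    # Longest-prefix match on the flow record's destination address.
    addr = ipaddress.ip_address(dst_ip)
    matches = [p for p in routing_table if addr in p]
    if not matches:
        return set()
    best = max(matches, key=lambda p: p.prefixlen)
    return routing_table[best]

print(egress_links("147.8.145.50"))   # {'peer-link-A', 'peer-link-B'}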

Page 15: Deriving IP Traffic Demands for an ISP Backbone Network

Problems with this methodology

The routers that terminate these ingress links often vary in functionality and must already perform computationally intensive access-control functions such as packet filtering, leaving little capacity for fine-grained measurement

Hence, collecting flow-level statistics at every ingress link is not always feasible in practice

Page 16: Deriving IP Traffic Demands for an ISP Backbone Network

Measuring at peering links

We shall extend our methodology to measurements collected at a much smaller number of peering links.

A small number of high-end routers are used to connect to a neighboring provider

Page 17: Deriving IP Traffic Demands for an ISP Backbone Network

Measuring at peering links (cont.)

Monitoring both the ingress and egress links at the peering points can capture a large fraction of the traffic

– Ingress links for inbound and transit traffic

– Egress links for outbound traffic

Problems:
– Internal traffic is missing
– Ambiguous ingress point for outbound traffic
– Duplicate measurement of transit traffic

Page 18: Deriving IP Traffic Demands for an ISP Backbone Network

Measuring at peering links (cont.)

Fortunately, an ISP typically knows the IP addresses of its directly connected customers, which helps identify the ingress point of outbound traffic.

However, for customers that connect to two or more service providers, the first methodology is more appropriate.
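For example, the ingress point of outbound traffic observed at a peering link could be inferred from the flow's source address using the ISP's list of customer address blocks; a hedged sketch with made-up prefixes and link names:

import ipaddress

# Hypothetical map of directly connected customer prefixes to access links.
customer_prefixes = {
    ipaddress.ip_network("137.82.0.0/16"): "access-link-1",
    ipaddress.ip_network("142.103.0.0/16"): "access-link-2",
}

def ingress_for_outbound(src_ip):
    # Attribute an outbound flow to the access link of its source prefix.
    addr = ipaddress.ip_address(src_ip)
    matches = [p for p in customer_prefixes if addr in p]
    if not matches:
        return None   # unknown or multi-homed customer: use the first methodology
    return customer_prefixes[max(matches, key=lambda p: p.prefixlen)]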

Page 19: Deriving IP Traffic Demands for an ISP Backbone Network

Experimental results

Flow-level measurements are collected at the peering links of the AT&T IP backbone

The flow-level measurements were collected by enabling Netflow on each router that terminates peering links

Page 20: Deriving IP Traffic Demands for an ISP Backbone Network

Traffic Analysis

Traffic demands are ranked from largest to smallest, and the percentage of the total traffic attributable to each is plotted (~80% of the total traffic is shown)

Note that the plots are nearly linear on a log-log scale
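For instance, the ranking can be reproduced with a few lines of Python over a list of demand volumes (the numbers below are made up):

# Rank demand volumes from largest to smallest and compute the percentage
# of total traffic attributable to each (illustrative values only).
volumes = [900, 450, 300, 120, 80, 40, 25, 10, 5]
total = sum(volumes)
for rank, v in enumerate(sorted(volumes, reverse=True), start=1):
    print(rank, f"{100.0 * v / total:.1f}% of total traffic")
# Plotting rank vs. percentage on log-log axes shows the near-linear
# (Zipf-like) behavior noted above.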

Page 21: Deriving IP Traffic Demands for an ISP Backbone Network

Traffic Analysis (cont.)

The small number of heavy hitters has important implications for traffic engineering

Since the leading heavy hitters account for so much traffic, careful routing of just these demands should provide most of the benefit

Significant variation in demand sizes at the highest ranks

Page 22: Deriving IP Traffic Demands for an ISP Backbone Network

Conclusion

A model of traffic demands is proposed that captures
1. the volume of data
2. the entry point into the ISP network
3. destination reachability information

A methodology for populating the demand model from flow-level measurements is presented

The measured demands reveal significant variations in demand sizes and popularities by time of day

Page 23: Deriving IP Traffic Demands for an ISP Backbone Network

References

Anja Feldmann, Albert Greenberg, Carsten Lund, Nick Reingold, and Jennifer Rexford, "Deriving Traffic Demands for Operational IP Networks: Methodology and Experience," IEEE/ACM Transactions on Networking, vol. 9, no. 3, pp. 265-280, June 2001

Cisco NetFlow (2001). [Online]. Available: http://www.cisco.com/warp/public/732/netflow/index.html

Page 24: Deriving IP Traffic Demands for an ISP Backbone Network

Questions?

