1

Lecture 5 - Routing On the Flat Labels

M.Sc. Ilya Nikolaevskiy

Helsinki Institute for Information Technology (HIIT)

[email protected]

T-110.6120 – Special Course in Future Internet Technologies

2

Routing On the Flat Labels

Based on, and with pictures borrowed from:

Matthew Caesar, Tyson Condie, Jayanthkumar Kannan, Karthik Lakshminarayanan, and Ion Stoica. ROFL: Routing on Flat Labels. SIGCOMM Comput. Commun. Rev. 36(4), August 2006.

I. Stoica, R. Morris, D. Liben-Nowell, D. Karger, M. F. Kaashoek, F. Dabek, and H. Balakrishnan. Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications. IEEE/ACM Transactions on Networking, 11(1):17-32, 2003.

3

Flat Labels

Identifier/locator split: mobility, multihoming

In this architecture there is no location at all: routing is done directly on names

No network semantics in the identities, so any identifiers may be used

=> Flat Labels

4

Advantages

All advantages of location-identity split (multihoming, mobility, …)

No new infrastructure: no additional resolution step needed

Fate sharing: no need to contact a separate resolution service

Simple allocation and management

5

Motivation

Does scalable routing require structured location information in the packet header?

Prior to ROFL, all future Internet architectures (FIAs) relied on structured location information.

6

Chord

Chord is a scalable P2P lookup protocol: given a key, it maps that key to a node.

Consistent hashing: when the number of nodes changes, only a fraction of the keys map to a new node, so a node joining or leaving moves only a fraction of the keys.

The identifier space is a circle with 2^m points, numbered in clockwise order (sketched below).
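
A minimal Python sketch of the circle just described, assuming SHA-1 as the hash function and a small m for readability (both choices are illustrative, not from the lecture): nodes and keys are hashed onto the same circle of 2^m points, and a key belongs to its successor, the first node at or after the key's identifier in clockwise order.

```python
# Illustrative consistent hashing on a Chord-style identifier circle.
import hashlib
from bisect import bisect_left

M = 16                     # identifier space has 2**M points (toy value)
RING_SIZE = 2 ** M

def chord_id(name: str) -> int:
    """Hash an arbitrary name (node address or key) onto the identifier circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

def successor(node_ids: list, key_id: int) -> int:
    """First node identifier at or after key_id, wrapping around the circle."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i] if i < len(ids) else ids[0]   # wrap past 2**M - 1 back to 0

nodes = [chord_id(f"node-{n}") for n in range(8)]
print(successor(nodes, chord_id("some-key")))   # node responsible for "some-key"
```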

7

Chord: Consistent Hashing

8

Chord: Lookup

9

Chord: Lookup Optimization

10

Chord: Enhanced Lookup

11

Chord Conclusions

Each node stores only a small amount of state: O(log n) entries.

Queries are fast: O(log n) hops (see the finger-table sketch below).

Nodes can be added to and removed from the system easily.

Recovery techniques heal the ring after a node failure.
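
To make the two O(log n) figures concrete, here is a hedged sketch of the finger-table mechanism behind them (class and field names are illustrative, not the paper's interface): each node keeps m = O(log n) fingers, where finger i points to the successor of (id + 2^i), and a lookup repeatedly jumps to the closest finger preceding the key, roughly halving the remaining distance, so it completes in O(log n) hops.

```python
# Sketch of Chord lookup over finger tables (illustrative names and toy sizes).
from dataclasses import dataclass, field

RING = 2 ** 16   # toy identifier space with 2**m points, m = 16

def between(x: int, a: int, b: int) -> bool:
    """True if x lies in the half-open circular interval (a, b]."""
    return 0 < (x - a) % RING <= (b - a) % RING

@dataclass
class Node:
    ident: int
    successor: "Node" = None
    fingers: list = field(default_factory=list)   # finger i ~ successor(ident + 2**i)

    def closest_preceding_finger(self, key_id: int) -> "Node":
        for f in reversed(self.fingers):           # try the farthest finger first
            if between(f.ident, self.ident, key_id) and f.ident != key_id:
                return f
        return self

def lookup(start: Node, key_id: int) -> Node:
    """Return the node responsible for key_id, i.e. its successor on the circle."""
    n = start
    while not between(key_id, n.ident, n.successor.ident):
        nxt = n.closest_preceding_finger(key_id)
        if nxt is n:                # no finger precedes the key: stop at successor
            break
        n = nxt
    return n.successor
```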

12

ROFL Overview

Unique IDs for all nodes. Three types of nodes: routers, stable hosts, and ephemeral hosts.

Hosts are assigned to a gateway router.

Same idea as Chord: all labels are organized on a circle, and a packet is routed to the closest known node that does not overrun the destination label (a sketch of this rule follows below).
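
A tiny sketch of that greedy rule (not the paper's code; the 128-bit label size is taken from the evaluation slides): among the labels a router currently knows, the next hop is the one with the least remaining clockwise distance to the destination, which by construction never overruns it.

```python
# Greedy ROFL-style next-hop selection on the label circle (illustrative).
ROFL_RING = 2 ** 128          # flat 128-bit labels

def clockwise_distance(a: int, b: int) -> int:
    """Distance travelled when going clockwise on the circle from a to b."""
    return (b - a) % ROFL_RING

def next_hop(known_labels: list, dst: int) -> int:
    """Known label closest to dst without passing it on the circle."""
    return min(known_labels, key=lambda label: clockwise_distance(label, dst))
```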

13

Source Paths

14

Intra-domain Routing

Each AS runs a separate ROFL ring, and routing works much like a Chord lookup.

Packets are forwarded greedily to the known node closest to the destination along the ring; the search is similar to longest-prefix matching.

Source paths to successors and predecessors are stored at intermediate nodes in a pointer cache to shorten packet paths.

15

Host Join

A host registers with a gateway router.

The router searches for the host's predecessor on the ring and updates that predecessor's successor pointer.

The router stores a source path to the host's successor.

Ephemeral hosts cannot be successors (see the sketch below).
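
A rough sketch that follows the four bullets above, with plain dicts standing in for the real data structures (all names, including the pointer-cache helper, are assumptions for illustration, not the paper's API).

```python
RING = 2 ** 128   # flat 128-bit labels

def host_join(ring: dict, gateway, host_label: int, ephemeral: bool):
    # "Router searches for predecessor of the host": the stable node with the
    # smallest clockwise distance to the new host's label.
    stable = [l for l in ring if not ring[l]["ephemeral"]]
    pred = min(stable, key=lambda l: (host_label - l) % RING)

    # "... and updates its successor". Ephemeral hosts cannot be successors,
    # so only a stable host is spliced into the predecessor's pointer.
    old_succ = ring[pred]["successor"]
    if not ephemeral:
        ring[pred]["successor"] = host_label
    ring[host_label] = {"successor": old_succ, "ephemeral": ephemeral}

    # "Router stores source path to the successor of host".
    gateway.pointer_cache[host_label] = gateway.source_path_to(old_succ)  # hypothetical helper
```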

16

Inter-domain Routing

ASes form a hierarchical structure.

Isolation property: failures are contained within an AS.

Policies: provider-customer, peering, multihoming.

17

Hierarchical Ring Merging

18

Ring Merging Rules

id_b in Ring 2 is the external successor of id_a in Ring 1 iff id_b would be the successor of id_a in the merged ring, i.e. no node in either AS has an identifier between id_a and id_b (see the sketch below).

Merges are performed at all levels of the hierarchy, so each new host must be registered at all levels.
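
A small sketch of the rule above (names are illustrative; 128-bit labels as in the evaluation slides): id_b is the external successor of id_a exactly when no identifier from either AS falls strictly between them on the circle.

```python
RING = 2 ** 128   # flat 128-bit labels

def strictly_between(x: int, a: int, b: int) -> bool:
    """True if x lies strictly between a and b when going clockwise."""
    return 0 < (x - a) % RING < (b - a) % RING

def is_external_successor(id_a: int, id_b: int, ring1_ids, ring2_ids) -> bool:
    """External-successor test for merging two ROFL rings."""
    others = set(ring1_ids) | set(ring2_ids)
    return not any(strictly_between(x, id_a, id_b) for x in others)
```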

19

Packet Forwarding

Essentially the same as intra-domain routing: forward the packet towards the label closest to the destination without overrunning it.

20

Handling Policies

Peering: a virtual AS acts as a provider for the peering ASes, with Bloom filters storing all nodes in the peering ASes.

Multihoming: an external join is performed for each provider up the hierarchy.

Bloom filters summarizing all hosts joined below an AS are consulted before the pointer cache (a toy sketch follows below).
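
To illustrate the role of those Bloom filters, here is a toy sketch (the sizes and hash construction are my own choices, not the paper's parameters): an AS summarizes the set of labels joined below it, and a router queries the summary, "might the destination be somewhere beneath this AS?", before falling back to its pointer cache.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter over flat labels; false positives possible, no false negatives."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, label: bytes):
        # Derive k bit positions by hashing the label with different one-byte salts.
        for salt in range(self.k):
            digest = hashlib.sha1(bytes([salt]) + label).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, label: bytes) -> None:
        for p in self._positions(label):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, label: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(label))
```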

21

Virtual AS

22

Evaluation

Intra-domain: traces based on “Rocketfuel” topologies of 4 large ISPs, each with hundreds of routers and millions of hosts; 128-bit IDs and 9 Mbit of pointer-cache memory per router.

Inter-domain: the AS graph was derived from “Routeviews” traces; a simulation of 30,000 hosts was extrapolated to 600 million hosts.

23

Evaluation: Intra-domain

Hosts typically complete a join in less than 40 ms, using fewer than 45 control messages.

24

Evaluation: Intra-domain (contd)

Average stretch depends on pointer-cache memory: 1.2 to 2 with 9 Mbit of pointer cache.

25

Evaluation: Inter-domain

Each AS is emulated by a single node, and only 30,000 hosts were emulated.

A join across all providers requires ~445 messages.

Average stretch is 2.5.

26

ROFL Strengths

A redesign of the Internet architecture around the location/identity split.

Policy-aware inter-domain routing.

Cryptographic identities: spoofing attacks become impossible (at the cost of cryptographic signatures).

Implicit certificates instead of DNS.

27

ROFL Weaknesses

Not really scalable.

Possible hash collisions.

Needs a large pointer cache.

Inter-domain routing requires large Bloom filters covering all hosts in the ASes below; how should they be recalculated? By flooding?

Complicated failure recovery

28

Thank you for your attention! Questions? Comments?

[email protected]

