Distributed Hash Tables
Parallel and Distributed Computing, Spring 2011
anda@cse.usf.edu
Distributed Hash Tables
• Academic answer to p2p
• Goals
– Guaranteed lookup success
– Provable bounds on search time
– Provable scalability
• Makes some things harder
– Fuzzy queries / full-text search / etc.
• Hot Topic in networking since introduction in ~2000/2001
DHT: Overview
• Abstraction: a distributed “hash-table” (DHT) data structure supports two operations:
– put(id, item);
– item = get(id);
• Implementation: nodes in the system form a distributed data structure
– Can be Ring, Tree, Hypercube, Skip List, Butterfly Network, ...
What Is a DHT?
• A building block used to locate key-based objects over millions of hosts on the internet
• Inspired by the traditional hash table:
– key = Hash(name)
– put(key, value)
– get(key) -> value
• Challenges
– Decentralized: no central authority
– Scalable: low network traffic overhead
– Efficient: find items quickly (low latency)
– Dynamic: nodes fail, new nodes join
– General-purpose: flexible naming
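To make the interface concrete, here is a minimal single-process sketch of the put/get API above (the class name, the SHA-1 keying convention, and the in-memory dict standing in for the distributed key space are illustrative assumptions, not part of any particular DHT):

    import hashlib

    # Toy stand-in for the put/get interface. A real DHT would route each
    # call to the node responsible for the key; here one in-memory dict
    # plays that role.
    class ToyDHT:
        def __init__(self):
            self._store = {}                 # stand-in for the distributed key space

        @staticmethod
        def make_key(name):
            # key = Hash(name), as in the slide
            return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

        def put(self, key, value):
            self._store[key] = value         # really: store at the node owning `key`

        def get(self, key):
            return self._store.get(key)      # really: fetch from the node owning `key`

    dht = ToyDHT()
    k = ToyDHT.make_key("yellow-submarine.mp3")
    dht.put(k, b"file data ...")
    print(dht.get(k))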
The Lookup Problem
[Figure: a publisher issues put(key=“title”, value=file data…) and a client issues get(key=“title”); the problem is locating the data among nodes N1–N6 spread across the Internet.]
DHTs: Main Idea
[Figure: the publisher stores the item under key = H(audio data) with value = {artist, album title, track title}; the client later issues Lookup(H(audio data)) and is routed across nodes N1–N9 to the responsible node.]
DHT: Overview (2)
• Structured Overlay Routing:
– Join: On startup, contact a “bootstrap” node and integrate yourself into the distributed data structure; get a node id
– Publish: Route the publication for a file id toward a close node id along the data structure
– Search: Route a query for a file id toward a close node id. The data structure guarantees that the query will meet the publication.
– Fetch: Two options:
• Publication contains the actual file => fetch from where the query stops
• Publication says “I have file X” => the query tells you 128.2.1.3 has X; use IP routing to get X from 128.2.1.3
From Hash Tables to Distributed Hash Tables
Challenge: scalably distributing the index space
– Scalability issue with ordinary hash tables: adding a new node (hash bucket) => many items must be moved (rehashed)
– Solution: consistent hashing (Karger 97)
Consistent hashing:
– Circular ID space with a distance metric
– Objects and nodes mapped onto the same space
– A key is stored at its successor: the node with the next higher ID
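A small sketch of this successor rule, assuming SHA-1 identifiers truncated to an m-bit circular space (the node addresses and the value of m are made up for illustration):

    import hashlib
    from bisect import bisect_left

    M = 16                                       # bits of the circular ID space (illustrative)

    def ident(name):
        # Map a node address or an object name onto the same ID space
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

    node_ids = sorted(ident(addr) for addr in ["10.0.0.1", "10.0.0.2", "10.0.0.3"])

    def successor(key_id):
        # A key is stored at its successor: the first node ID >= key_id, wrapping around
        i = bisect_left(node_ids, key_id)
        return node_ids[i % len(node_ids)]

    print(successor(ident("yellow-submarine.mp3")))

Because only the keys between a joining (or leaving) node and its predecessor change owner, membership changes move only that node's share of the keys instead of forcing a global rehash.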
DHT: Consistent Hashing
[Figure: circular ID space containing nodes N32, N90, N105 and keys K5, K20, K80.]
A key is stored at its successor: node with next higher ID
What Is a DHT?
• Distributed Hash Table:
key = Hash(data)
lookup(key) -> IP address
put(key, value)
get(key) -> value
• API supports a wide range of applications
– DHT imposes no structure/meaning on keys
• Key/value pairs are persistent and global
– Can store keys in other DHT values
– And thus build complex data structures (see the sketch below)
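As one possible illustration of the last point, the sketch below stores a small linked list in a (simulated) DHT by keeping the key of the next cell inside each value; put/get and the key-derivation helper are placeholders for whatever DHT API is actually available:

    import hashlib

    store = {}                               # stand-in for the global DHT
    def put(key, value): store[key] = value
    def get(key): return store.get(key)

    def h(name):
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    # Build the list "a" -> "b" -> "c": each cell holds (payload, key of the next cell)
    next_key = None
    for item in ["c", "b", "a"]:
        key = h("cell:" + item)
        put(key, (item, next_key))
        next_key = key

    # Walk the structure starting from the head key
    cursor = next_key
    while cursor is not None:
        payload, cursor = get(cursor)
        print(payload)                       # prints a, b, c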
Approaches
• Different strategies
– Chord: constructing a distributed hash table
– CAN: routing in a d-dimensional space
– Many more…
• Commonalities
– Each peer maintains a small part of the index information (routing table)
– Searches are performed by directed message forwarding
• Differences
– Performance and qualitative criteria
DHT: Example - Chord
• Associate with each node and file a unique id in a uni-dimensional space (a ring)
– E.g., pick from the range [0...2^m − 1]
– Usually the hash of the file or of the IP address
• Properties:
– Routing table size is O(log N), where N is the total number of nodes
– Guarantees that a file is found in O(log N) hops
Example 1: Distributed Hash Tables (Chord)
• Hashing of search keys AND peer addresses on binary keys of length m
– Key identifier = SHA-1(key); Node identifier = SHA-1(IP address)
– SHA-1 distributes both uniformly
– e.g. m=8, key(“yellow-submarine.mp3”)=17, key(192.178.0.1)=3
• Data keys are stored at next larger node key
– A peer has hashed identifier p, a data item has hashed identifier k; k is stored at the node p such that p is the smallest node ID larger than k
[Figure: ring of identifiers (m = 8, 32 keys shown) with peers p, p2, p3; item k is stored at the first peer whose ID follows k on the ring (its successor); the peer just before k is its predecessor.]
Search possibilities?
1. Every peer knows every other peer: O(n) routing table size
2. Peers know only their successor: O(n) search cost
DHT: Chord Basic Lookup
[Figure: basic lookup on a ring of nodes N10, N32, N60, N90, N105, N120. The query “Where is key 80?” is forwarded around the ring until it reaches N90, and the answer “N90 has K80” is returned.]
DHT: Chord “Finger Table”
[Figure: node N80 with fingers reaching 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.]
• Entry i in the finger table of node n is the first node that succeeds or equals n + 2^i
• In other words, the ith finger points 2^i positions ahead, i.e. 1/2^(m−i) of the way around the ring
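A sketch of the finger-table rule, assuming the 0-indexed convention above (entry i = first node at or after n + 2^i); the node IDs are invented for the example:

    from bisect import bisect_left

    M = 7                                         # 2^7 = 128 ring positions, matching the 1/128 finger
    nodes = sorted([10, 20, 32, 60, 80, 96, 112]) # illustrative node IDs

    def successor(ident):
        i = bisect_left(nodes, ident % (2 ** M))
        return nodes[i % len(nodes)]              # wrap around the ring

    def finger_table(n):
        # Entry i is the first node that succeeds or equals n + 2^i
        return [successor(n + 2 ** i) for i in range(M)]

    print(finger_table(80))                       # finger i covers 2^i IDs = 1/2^(M-i) of the ring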
DHT: Chord Join
• Assume an identifier space [0..7] (m = 3)
• Node n1 joins
[Figure: ring with positions 0–7; only n1 is present]
Succ. Table of n1:
i  id+2^i  succ
0  2       1
1  3       1
2  5       1
DHT: Chord Join
• Node n2 joins
[Figure: ring with positions 0–7; n1 and n2 present]
Succ. Table of n1:
i  id+2^i  succ
0  2       2
1  3       1
2  5       1
Succ. Table of n2:
i  id+2^i  succ
0  3       1
1  4       1
2  6       1
DHT: Chord Join
• Nodes n0, n6 join
[Figure: ring with positions 0–7; n0, n1, n2, n6 present]
Succ. Table of n0:
i  id+2^i  succ
0  1       1
1  2       2
2  4       6
Succ. Table of n1:
i  id+2^i  succ
0  2       2
1  3       6
2  5       6
Succ. Table of n2:
i  id+2^i  succ
0  3       6
1  4       6
2  6       6
Succ. Table of n6:
i  id+2^i  succ
0  7       0
1  0       0
2  2       2
DHT: Chord Join
• Nodes: n0, n1, n2, n6
• Items: f7
[Figure: ring with positions 0–7; item f7 is stored at n0, the successor of id 7]
Succ. Table of n0 (stores item f7):
i  id+2^i  succ
0  1       1
1  2       2
2  4       6
Succ. Table of n1:
i  id+2^i  succ
0  2       2
1  3       6
2  5       6
Succ. Table of n2:
i  id+2^i  succ
0  3       6
1  4       6
2  6       6
Succ. Table of n6:
i  id+2^i  succ
0  7       0
1  0       0
2  2       2
DHT: Chord Routing
• Upon receiving a query for item id, a node:
• Checks whether it stores the item locally
• If not, forwards the query to the largest entry in its successor table that does not exceed id (see the sketch after the figure)
[Figure: the same ring and successor tables as above; query(7) is forwarded along successor-table entries until it reaches n0, which stores item f7.]
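A sketch of this lookup on the example ring above (m = 3, nodes {0, 1, 2, 6}, item f7 with id 7). The starting node and the resulting hop sequence are illustrative, and the local-store check simply uses the fact that a node stores an item iff it is the item's successor:

    from bisect import bisect_left

    M, NODES = 3, sorted([0, 1, 2, 6])
    RING = 2 ** M

    def successor(ident):
        i = bisect_left(NODES, ident % RING)
        return NODES[i % len(NODES)]

    def succ_table(n):
        # Entry i is successor(n + 2^i), as in the join slides above
        return [successor(n + 2 ** i) for i in range(M)]

    def between(x, a, b):
        # True if x lies in the ring interval (a, b], walking clockwise from a
        return x != a and (x - a) % RING <= (b - a) % RING

    def route(start, item_id):
        node, hops = start, [start]
        while successor(item_id) != node:             # "do I store the item locally?"
            candidates = [t for t in succ_table(node) if between(t, node, item_id)]
            if candidates:
                # farthest table entry that does not exceed the item id
                node = max(candidates, key=lambda t: (t - node) % RING)
            else:
                node = succ_table(node)[0]            # every entry overshoots: go to immediate successor
            hops.append(node)
        return hops

    print(route(1, 7))                                # e.g. [1, 6, 0]: n0 stores f7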
DHT: Chord Summary
• Routing table size?
– log N fingers
• Routing time?
– Each hop is expected to halve the distance to the desired id => expect O(log N) hops.
Load Balancing in Chord
[Plot: number of keys per node; network size n = 10^4, 5·10^5 keys.]
Length of Search Paths
[Plot: distribution of search path lengths; network size n = 2^12, 100·2^12 keys; average path length ≈ ½·log2(n).]
Chord Discussion
• Performance
– Search latency: O(log n) (with high probability, provable)
– Message bandwidth: O(log n) (selective routing)
– Storage cost: O(log n) (routing table)
– Update cost: low (like search)
– Node join/leave cost: O(log² n)
– Resilience to failures: replication to successor nodes
• Qualitative criteria
– Search predicates: equality of keys only
– Global knowledge: key hashing, network origin
– Peer autonomy: nodes have, by virtue of their address, a specific role in the network
Example 2: Topological Routing (CAN)
• Based on hashing of keys into a d-dimensional space (a torus)
– Each peer is responsible for the keys of a subvolume of the space (a zone)
– Each peer stores the addresses of the peers responsible for the neighboring zones, for routing
– Search requests are greedily forwarded to the peers in the closest zones (see the sketch below)
• Assignment of peers to zones depends on a random selection made by the peer
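A rough sketch of this greedy forwarding, with the zone structure simplified away: peers are points on a 2-dimensional unit torus, a peer's “neighbors” are just its nearest peers, and a query hops to whichever neighbor is closest to the key's point (all names and parameters here are illustrative):

    import hashlib
    import random

    D = 2                                              # dimensionality of the key space

    def to_point(name):
        # Hash a key onto a point of the unit d-torus
        digest = hashlib.sha1(name.encode()).digest()
        return tuple(int.from_bytes(digest[4 * i: 4 * i + 4], "big") / 2 ** 32 for i in range(D))

    def torus_dist(p, q):
        return sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, q)) ** 0.5

    random.seed(1)
    peers = [tuple(random.random() for _ in range(D)) for _ in range(50)]

    def neighbors(p, k=4):
        # Simplification: nearest peers stand in for the peers of adjacent zones
        return sorted((q for q in peers if q != p), key=lambda q: torus_dist(p, q))[:k]

    def route(start, key_point):
        node, hops = start, [start]
        while True:
            nxt = min(neighbors(node), key=lambda q: torus_dist(q, key_point))
            if torus_dist(nxt, key_point) >= torus_dist(node, key_point):
                return hops                            # no neighbor is closer: this peer's zone holds the key
            node, hops = nxt, hops + [nxt]

    print(route(peers[0], to_point("yellow-submarine.mp3")))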
Network Search and Join
Node 7 joins the network by choosing a coordinate in the volume of node 1
CAN Refinements
• Multiple realities
– We can have r different coordinate spaces
– Nodes hold a zone in each of them
– Creates r replicas of the (key, value) pairs
– Increases robustness
– Reduces path length, as a search can be continued in the reality where the target is closest
• Overloading zones
– Several peers are responsible for the same zone
– Splits are only performed if a maximum occupancy (e.g. 4) is reached
– Nodes know all other nodes in the same zone
– But know only one node of each neighboring zone
CAN Path Length
Increasing Dimensions and Realities
CAN Discussion
• Performance
– Search latency: O(d·n^(1/d)), depends on the choice of d (with high probability, provable)
– Message bandwidth: O(d·n^(1/d)) (selective routing)
– Storage cost: O(d) (routing table)
– Update cost: low (like search)
– Node join/leave cost: O(d·n^(1/d))
– Resilience to failures: realities and overloading
• Qualitative criteria
– Search predicates: spatial distance of multidimensional keys
– Global knowledge: key hashing, network origin
– Peer autonomy: nodes can decide on their position in the key space
Comparison of some P2P Solutions
          | Search paradigm               | Overlay maintenance cost | Search cost
Gnutella  | Breadth-first on search graph | O(1)                     | 2 · Σ_{i=0..TTL} C·(C−1)^i
Chord     | Implicit binary search trees  | O(log n)                 | O(log n)
CAN       | d-dimensional space           | O(d)                     | O(d·n^(1/d))
(C: number of connections per peer; TTL: time-to-live of a search message)
DHT Applications
Not only for sharing music anymore…
– Global file systems [OceanStore, CFS, PAST, Pastiche, UsenetDHT]
– Naming services [Chord-DNS, Twine, SFR]
– DB query processing [PIER, Wisc]
– Internet-scale data structures [PHT, Cone, SkipGraphs]
– Communication services [i3, MCAN, Bayeux]
– Event notification [Scribe, Herald]
– File sharing [OverNet]
DHT: Discussion
• Pros:
– Guaranteed lookup
– O(log N) per-node state and search scope
• Cons:
– No one uses them? (only one file-sharing app)
– Supporting non-exact-match search is hard
When are p2p / DHTs useful?
• Caching and “soft-state” data
– Works well! BitTorrent, KaZaA, etc., all use peers as caches for hot data
• Finding read-only data
– Limited flooding finds hay
– DHTs find needles
• BUT…
A Peer-to-peer Google?
• Complex intersection queries (“the” + “who”)
– Billions of hits for each term alone
• Sophisticated ranking
– Must compare many results before returning a subset to the user
• Very, very hard for a DHT / p2p system
– Need high inter-node bandwidth
– (This is exactly what Google does: massive clusters)
Writable, persistent p2p
• Do you trust your data to 100,000 monkeys?
• Node availability hurts
– Ex: store 5 copies of the data on different nodes
– When someone goes away, you must replicate the data they held
– Hard drives are *huge*, but cable-modem upload bandwidth is tiny: perhaps 10 GBytes/day
– It takes many days to upload the contents of a 200 GB hard drive. Very expensive leave/replication situation!
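(For scale: at 10 GBytes/day of upload, re-creating the lost copy of a single 200 GB drive already takes about 20 days.)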
Research Trends: A Superficial History Based on Articles in IPTPS
• In the early ’00s (2002-2004):
– DHT-related applications, optimizations, reevaluations… (more than 50% of IPTPS papers!)
– System characterization
– Anonymization
• 2005-…
– BitTorrent: improvements, alternatives, gaming it
– Security, incentives
• More recently:
– Live streaming
– P2P TV (IPTV)
– Games over P2P
What’s Missing?
• Very important lessons learned
– …but did we move beyond vertically-integrated applications?
• Can we distribute complex services on top of p2p overlays?
P2P: Summary
• Many different styles; remember the pros and cons of each
– centralized, flooding, swarming, unstructured and structured routing
• Lessons learned:
– Single points of failure are very bad
– Flooding messages to everyone is bad
– Underlying network topology is important
– Not all nodes are equal
– Need incentives to discourage freeloading
– Privacy and security are important
– Structure can provide theoretical bounds and guarantees