Date posted: 11-Apr-2017 · Category: Software · Uploaded by: james-salter
An Efficient Reactive Model for Resource Discovery in DHT-Based Peer-to-Peer Networks
James Salter
Supervisor: Dr Nick Antonopoulos
2 October 2006
Outline
- Introduction and Background
- ROME Architecture
- Node Processes
- Evaluation
- Conclusions and Future Work
Client/Server Networks
- Clients send requests to/via a central server
- Small number of messages
- Single point of failure

Peer-to-Peer Networks
- No central server; node-to-node connections
- Resilient
- Large number of messages
- Applications: file sharing, distributed computing, instant messaging
Napster
Gnutella
Chord
- Structured peer-to-peer architecture, well known in the research field
- Combines advantages of Napster- and Gnutella-like architectures: an index of resources (Napster) distributed over multiple nodes (Gnutella)
- Based on Distributed Hash Tables, with a simple lookup mechanism: given a key, it returns the associated value(s)
(Diagram: example Chord ring with nodes 10, 16, 28, 34)
Distributed Hash Tables
- Hash functions map data keys to buckets; keys are stored in buckets in an index table
- Example hash function: f(x) = x mod 6
  - f(15) = 15 mod 6 = 3, so key 15 goes in bucket 3
  - f(36) = 36 mod 6 = 0, so key 36 goes in bucket 0
- Example index table: bucket 0: {12, 24, 30}; bucket 1: {19, 61}; bucket 2: {20, 26, 56}; bucket 3: {3, 9}; bucket 4: {}; bucket 5: {23, 35, 65}
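The bucket mapping on this slide can be sketched in a few lines of Python (an illustration added for this transcript, not part of the original talk; the `bucket` helper and key list simply mirror the slide's f(x) = x mod 6 example):

```python
def bucket(key, num_buckets=6):
    """Map a data key to a bucket, as in the slide's example f(x) = x mod 6."""
    return key % num_buckets

# Rebuild the slide's index table from its example keys
index = {b: [] for b in range(6)}
for key in [12, 24, 30, 19, 61, 20, 26, 56, 3, 9, 23, 35, 65]:
    index[bucket(key)].append(key)

print(bucket(15))  # 3, matching f(15) on the slide
print(bucket(36))  # 0, matching f(36) on the slide
```

Note that bucket 4 ends up empty: a hash function spreads keys over buckets but does not guarantee every bucket is used.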
Distributed Hash Tables
- Each node hosts part of the index (one or more buckets)
Chord
(Diagram: Chord ring with nodes 0, 8, 12, 15, 20; a lookup is routed around the ring in successive hops)
Chord
- log2(n) hops worst case
- ½ log2(n) hops on average
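The hop bounds quoted above can be checked with a short Python sketch (added for illustration; `chord_hops` is a hypothetical helper, not code from the talk):

```python
import math

def chord_hops(n):
    """Lookup hop counts for a Chord ring of n nodes, using the bounds on the
    slide: log2(n) hops worst case, half that on average."""
    worst = math.log2(n)
    return worst, worst / 2

# Halving the ring size saves one worst-case hop per lookup,
# which is the intuition behind keeping the ring small (ROME, below):
for n in (1024, 512, 256):
    print(n, chord_hops(n))
```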
ROME Concept
- Message cost is proportional to the number of nodes in the network (n)
- Reduce n, reduce message cost
- Goal: keep the ring "just big enough": it must always support the current workload, but not be unnecessarily large
- Workload should determine ring size, not the number of nodes in the network
- Achieved by adding functionality to Chord
ROME Architecture
(Diagram: ROME layered on top of Chord and the lower layers; the ROME layer comprises a Traffic Analyser, Node Actions, and ROME Data)
ROME Node Process
(State diagram: Monitor Workload; if Overloaded, Attempt to Replace Node, falling back to Attempt to Add New Node on failure; if Underloaded, Attempt to Remove Node; if Normal, or after an action succeeds or fails, Pause, then return to Monitor Workload)
Node Workload Monitoring
(Diagram: workload scale from zero to the node's limit, with a lower threshold, target, and upper threshold marked; workload below the lower threshold is Underloaded, between the thresholds is Normal, and above the upper threshold is Overloaded)
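The classification in the monitoring diagram can be sketched as follows (an illustrative helper; the function and parameter names are not from the talk):

```python
def classify(workload, lower, upper):
    """Classify a node's workload against its lower and upper thresholds,
    mirroring the monitoring diagram: below lower is underloaded, above
    upper is overloaded, in between is normal."""
    if workload < lower:
        return "underloaded"
    if workload > upper:
        return "overloaded"
    return "normal"
```

With the simulation's settings (5% lower, 95% upper), a node at 50% of capacity is normal and triggers no action.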
Replace Action
(Diagram: the overloaded node reports its current workload and target to the server)
Replace Action
- Search the node pool to find a node with: NPNode.LowerThreshold < OLNode.Workload AND NPNode.UpperThreshold > OLNode.Workload
- Break ties by selecting the best-quality node: the percentage of heartbeats received from the node since initial registration, a measure of node reliability
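The selection rule above can be sketched in Python (an illustration added for this transcript; the dictionary field names `lower`, `upper`, and `quality` are assumptions, not identifiers from the talk):

```python
def select_replacement(pool, ol_workload):
    """Select a pool node whose thresholds bracket the overloaded node's
    current workload, breaking ties by heartbeat quality (the fraction of
    heartbeats received since initial registration)."""
    candidates = [n for n in pool
                  if n["lower"] < ol_workload < n["upper"]]
    if not candidates:
        return None  # triggers the fall-back Add action
    return max(candidates, key=lambda n: n["quality"])
```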
Replace Action
(Diagram: the server instructs the selected pool node to replace the overloaded node, passes it the Chord ID to take over, and sends it its target and thresholds)
Add Action
- Used when no node is found in the pool that can replace the overloaded node outright
- Add a node to take on enough workload to restore the overloaded node to its normal range:
  NPNode.LowerThreshold < (OLNode.Workload - OLNode.Target) AND NPNode.UpperThreshold > (OLNode.Workload - OLNode.Target)
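The add condition can be sketched the same way (illustrative only; field names are assumptions):

```python
def select_addition(pool, ol_workload, ol_target):
    """Find a pool node able to absorb the overloaded node's excess workload
    (its workload above target), per the condition above: the excess must
    fall between the candidate's lower and upper thresholds."""
    excess = ol_workload - ol_target
    for node in pool:
        if node["lower"] < excess < node["upper"]:
            return node
    return None
```

The difference from the replace action is that the candidate only needs to handle the excess, not the overloaded node's entire workload.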
Add Action
(Diagram: the server sends an add message for new node C, along with C's target and thresholds; the new node is assigned a Chord ID within the overloaded node's keyspace, chosen from the workload distribution so that it takes over the excess portion, e.g. ID 18 between existing nodes 14 and 21)
Remove Action
(Diagram: the underloaded node reports its current workload to the server)
Remove Action
- If a node is removed, its successor becomes responsible for its portion of the keyspace (and its workload)
- Must check that the successor will not become overloaded, to prevent chain reactions:
  Succ.UpperThreshold > Succ.Workload + ULNode.Workload
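The successor check above is a one-line predicate (illustrative sketch; parameter names are assumptions):

```python
def can_remove(ul_workload, succ_workload, succ_upper):
    """Check that the successor can absorb the underloaded node's workload
    without exceeding its own upper threshold, per the condition above."""
    return succ_upper > succ_workload + ul_workload
```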
Remove Action
(Diagram: the server replies Accept or Decline; on acceptance the node is removed and the successor receives its updated target and thresholds)
Additional Issues
- Node locking: prevents chain reactions and overcompensation
- Workload from a single key: the add action is not attempted, on the assumption that keys are atomic
- Ring collapse vulnerability: a monitoring interval stops too many concurrent changes
- High volume of replaces: the replace action is switched off if there are few nodes in the pool
Dynamic Ring Performance
- Reduction in hop count by controlling network size, based on theoretical results:
  - Max hops per lookup = log2(n)
  - Mean hops per lookup = ½ log2(n)
  - If ring A < ring B, then log2(A) < log2(B), in a static ring with correct routing information
- Does this hold in more realistic dynamic scenarios, with nodes joining, leaving, or failing?
Simulation: ROME vs Chord
- Two Chord rings, one running ROME
- 1000 available nodes
- Node capacity: 100-400 workload units
- 10 node joins per time tick; 10 node failures per tick
- 500 lookups per tick
- ROME thresholds: 5% lower, 95% upper; target workload: 50%
- Standard Chord maintenance/update routines run every clock tick
Workload Variation
(Chart: total workload vs time; y-axis 0 to 250,000 workload units, x-axis 0 to 12,000 time ticks)
Messages per Query
(Chart: mean messages per successful query vs time for ROME and Chord; y-axis 0 to 6, x-axis 0 to 12,000 time ticks)
Cumulative Message Savings
(Chart: cumulative lookup message savings vs time; y-axis 0 to 8 million messages, x-axis 0 to 12,000 time ticks)
Number of Nodes in Each Ring
(Chart: nodes in ring vs time for ROME and Chord; y-axis 0 to 1000 nodes, x-axis 0 to 12,000 time ticks)
Query Success Rate
(Chart: percentage of successful queries vs time for ROME and Chord; y-axis 0 to 100%, x-axis 0 to 12,000 time ticks)
Maintenance Cost
(Chart: maintenance messages vs time for ROME and Chord; y-axis 0 to 8000, x-axis 0 to 12,000 time ticks)
Thresholds
(Chart: percentage of successful queries vs time for three threshold settings: LT=5%, UT=95%; LT=25%, UT=75%; LT=45%, UT=55%)
Churn Rate (Failure is Good!)
(Charts: nodes in ring, mean messages per successful query, and percentage of successful queries, each plotted vs time for churn rates of 0, 1, 5, 10, 20, and 30 failures per tick)
Limitations of ROME
- Requires workload to be less than node capacity; may not be appropriate if the two are likely to be near-equal for the majority of the network's lifetime
- Optimisations occur at the node level: do they always yield a globally optimal solution?
- Based on Chord and DHT architectures: any issues found there are (probably) inherited by ROME
- Increasing workload with a failed bootstrap server: no more nodes will be added; these nodes would be present in a standard Chord ring, so the ROME ring would drop more workload
Potential Applications
- Similar to Chord applications: DNS lookup services, file storage systems, simple databases, messaging, service discovery, distributed processing
- Anywhere workload is likely to be lower than the capacity offered by connected nodes
Future Work
- Remove reliance on a single server: share node pools between multiple bootstrap servers using G-ROME
- Combinations of actions
- Apply ROME concepts to other P2P networks: use in unstructured networks?
- Applications in other domains: wireless/ad-hoc networks with dynamic machine joins/leaves
Conclusions
- Proposed ROME, a layer running on top of Chord
- Chord routes messages in O(log2 n) hops, where n is the number of nodes in the ring; ROME controls the size of the underlying Chord ring via a simple set of actions to add and remove nodes
- Simulations show ROME can reduce lookup cost versus a standard Chord ring
- Platform for further work: enhance ROME, use it as a building block for new services, apply it to other domains
Publications List
- J Salter and N Antonopoulos, "An Optimised Two-Tier P2P Architecture for Contextualised Keyword Searches", Future Generation Computer Systems, 2007.
- G Exarchakos, J Salter and N Antonopoulos, "G-ROME: A Semantic Driven Model for Capacity Sharing Among P2P Networks", to appear in Internet Research.
- G Exarchakos, J Salter and N Antonopoulos, "Semantic Cooperation and Node Sharing Among P2P Networks", Sixth International Network Conference (INC 2006).
- J Salter and N Antonopoulos, "The CinemaScreen Recommender Agent: A Film Recommender Combining Collaborative and Content-Based Filtering", IEEE Intelligent Systems, 2006.
- N Antonopoulos, J Salter and R Peel, "A Multi-Ring Method for Efficient Multi-Dimensional Data Lookup in P2P Networks", 2005 International Conference on Foundations of Computer Science (FCS '05).
- J Salter, N Antonopoulos and R Peel, "ROME: Optimising Lookup and Load Balancing in DHT-based P2P Networks", 2005 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA '05).
- J Salter and N Antonopoulos, "ROME: Optimising DHT-based Peer-to-Peer Networks", Fifth International Network Conference (INC 2005).
- N Antonopoulos and J Salter, "Efficient Resource Discovery in Grids and P2P Networks", Internet Research, 2004.
- J Salter and N Antonopoulos, "An Efficient Fault Tolerant Approach to Resource Discovery in P2P Networks", UniS Computing Sciences Report CS-04-02, 2004.
- N Antonopoulos and J Salter, "Improving Query Routing Efficiency in Peer-to-Peer Networks", UniS Computing Sciences Report CS-04-01, 2004.
- J Salter and N Antonopoulos, "An Efficient Mechanism for Adaptive Resource Discovery in Grids", Fourth International Network Conference (INC 2004).
- N Antonopoulos and J Salter, "Towards an Intelligent Agent Model for Resource Discovery in Grid Environments", IADIS International Conference Applied Computing 2004.