2010.12.20  Seungmi Choi
PlanetLab
- Overview, History, and Future Directions
- Using PlanetLab for Network Research: Myths, Realities, and Best Practices
Contents
- Introduction
- Concept of PlanetLab
- Architecture
- Myths, realities, and best practices
- Conclusion
- Q & A
Concept of PlanetLab
- A planetary-scale overlay network
- A testbed for developing and accessing network services
- Offers real-world experience
- Current (2010.12.20): 1,129 nodes at 517 sites
Node Architecture Goals
- Provide a virtual machine for each service running on a node
- Isolate virtual machines
- Allow maximal control over virtual machines
- Fair allocation of resources: network, CPU, memory, disk
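The last goal, fair allocation, can be illustrated with a small max-min fair-share sketch in Python. This is illustrative only: the function name and interface are hypothetical, and this is not PlanetLab's actual scheduler.

```python
def fair_shares(demands, capacity):
    """Max-min fair allocation (illustrative, not PlanetLab's scheduler):
    every VM whose demand fits within an equal share gets its full demand;
    the leftover capacity is split equally among the still-unsatisfied VMs."""
    alloc = {vm: 0.0 for vm in demands}
    remaining = dict(demands)          # VMs whose demand is not yet satisfied
    cap = capacity
    while remaining and cap > 1e-12:
        share = cap / len(remaining)   # equal share of what is left
        satisfied = {vm: d for vm, d in remaining.items() if d <= share}
        if not satisfied:              # nobody fits: split equally and stop
            for vm in remaining:
                alloc[vm] += share
            break
        for vm, d in satisfied.items():
            alloc[vm] += d
            cap -= d
            del remaining[vm]
    return alloc

# Example: three VMs competing for one unit of CPU
alloc = fair_shares({"a": 0.2, "b": 0.5, "c": 0.9}, 1.0)
```

Here `a`'s small demand is met in full, while `b` and `c` split the remaining capacity equally, which is the max-min fairness property the slide's goal implies.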
Node Architecture
- Virtual machines (VMs) allow several experiments to run on one node
- Vserver 1.9 (PlanetLab version 3.0)
  - Gives the illusion of multiple servers on a single machine
  - Each vserver has its own (safely restricted) superuser
  - Runs on Linux 2.6
[Diagram: VMs running on a VMM at Ring 0 over the hardware, versus VMs running on a host OS at Ring 0 over the hardware]
Network Architecture
- Node manager (one per node)
  - Creates slices for service managers when they present valid tickets
  - Allocates resources for vservers
- Resource monitor (one per node)
  - Tracks the node's available resources
  - Tells agents about available resources
[Diagram: agent, broker, service manager, and a per-node resource monitor]
Network Architecture
- Agents (centralized)
  - Track nodes' free resources
  - Advertise resources to resource brokers
  - Issue tickets to resource brokers; tickets may be redeemed with node managers to obtain the resources
Network Architecture
- Resource broker (per service)
  - Obtains tickets from agents on behalf of service managers
- Service managers (per service)
  - Obtain tickets from brokers
  - Redeem tickets with node managers to acquire resources
  - If the resources can be acquired, start the service
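The ticket flow described on the last three slides can be sketched in Python. All class and method names below are hypothetical illustrations of the roles (agent, broker, node manager, service manager), not PlanetLab's actual API.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    node: str
    resources: dict          # e.g. {"cpu": 0.1, "mem_mb": 256}

class Agent:
    """Centralized: tracks nodes' free resources, issues tickets to brokers."""
    def __init__(self):
        self.free = {}       # node -> resources, as reported by its monitor
    def advertise(self, node, resources):   # called by a resource monitor
        self.free[node] = resources
    def issue_ticket(self, node, resources):
        return Ticket(node, resources)

class NodeManager:
    """One per node: redeems valid tickets by creating a slice (vserver)."""
    def __init__(self, node):
        self.node, self.slices = node, []
    def redeem(self, ticket, slice_name):
        if ticket.node != self.node:
            raise ValueError("ticket not valid for this node")
        self.slices.append(slice_name)      # allocate resources for a vserver
        return True

class Broker:
    """Per service: obtains tickets from the agent for a service manager."""
    def __init__(self, agent):
        self.agent = agent
    def get_tickets(self, nodes, resources):
        return [self.agent.issue_ticket(n, resources) for n in nodes]

class ServiceManager:
    """Per service: redeems broker tickets with node managers, then starts."""
    def acquire(self, tickets, node_managers, slice_name):
        return all(node_managers[t.node].redeem(t, slice_name) for t in tickets)

# Example: acquire a two-node slice
agent = Agent()
agent.advertise("node-a", {"cpu": 0.1, "mem_mb": 256})
agent.advertise("node-b", {"cpu": 0.1, "mem_mb": 256})
managers = {n: NodeManager(n) for n in ("node-a", "node-b")}
tickets = Broker(agent).get_tickets(["node-a", "node-b"], {"cpu": 0.1})
ok = ServiceManager().acquire(tickets, managers, "my-slice")
print("slice created:", ok)
```

The point of the indirection is that the node manager only trusts tickets, so admission control can live in centralized agents and brokers while slice creation stays local to each node.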
Services Run in Slices
[Animation, slides 10-13: PlanetLab nodes host virtual machines; each service runs in its own slice (Slice A, Slice B, Slice C), a set of VMs spread across the nodes.]
Obtaining a Slice
[Animation, slides 14-26: each node's resource monitor reports its free resources to the agent; the agent issues tickets to the broker; the broker passes the tickets to the service manager; the service manager redeems the tickets with the node managers to obtain the slice.]
Realities
Widely cited criticisms of PlanetLab that are entirely true.
Reality 1: Results are not reproducible
- Load on the networks and on the machines varies on every time scale
- An experiment that runs for an hour reflects only the network conditions of that hour
- Use CoMon to monitor node and network conditions
- Alternatives: Emulab, ModelNet
- Producing an unexpected result over a short period is not a bug
Reality 2: PlanetLab nodes are not representative of peer-to-peer network nodes
- PlanetLab is a managed infrastructure and is not subject to the same churn as desktop systems
- It cannot scale to millions of machines
Myths that are no longer true
Some who tried early versions of PlanetLab ran into challenges that are no longer problems.
Myth 1: PlanetLab is too heavily loaded
- It may always be under-provisioned; load is especially high before conference deadlines
- Addressed by a newly deployed daemon and two brokerage services, Sirius and Bellagio
Myth 2: PlanetLab cannot guarantee resources
- With the release of PlanetLab version 3.0, resource guarantees became possible (currently being upgraded to version 4.3 over the next few weeks)
- Through Sirius and Bellagio, running slices can receive resource guarantees
Myths falsified by best practices
The following three myths about PlanetLab are not true if best practices are followed. The first two concern using PlanetLab for network measurement; the last, its potential for churn.
Myth 1: Load prevents accurate latency measurement
- PlanetLab cannot ensure that a slice is scheduled immediately upon receiving a packet
- Best practice: use the in-kernel timestamping features of Linux
Myth 2: Load prevents sending precise packet trains
- Sending packets at precise times is more difficult than timestamping received ones
- If the desired sending times were not achieved, retry: sending data on PlanetLab simply requires more attempts than on unloaded systems
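One way to follow that practice, sketched in Python (the function name, tolerance, and retry count are illustrative, not from the paper): emit the whole train, measure the achieved gaps, and retry the train whenever scheduling noise perturbed it.

```python
import time

def send_train(send, gaps_ms, tol_ms=0.5, max_attempts=10):
    """Send one packet per call to `send`, spaced by the requested gaps (ms).
    Returns the achieved gaps once they are all within tolerance, or None
    after max_attempts: better to report failure than use mistimed data."""
    for _ in range(max_attempts):
        stamps = []
        for seq, gap in enumerate([0.0] + list(gaps_ms)):
            target = stamps[-1] + gap / 1000.0 if stamps else time.monotonic()
            while time.monotonic() < target:   # busy-wait: sleep() is too coarse
                pass
            send(seq)
            stamps.append(time.monotonic())
        achieved = [(b - a) * 1000.0 for a, b in zip(stamps, stamps[1:])]
        if all(abs(a - g) <= tol_ms for a, g in zip(achieved, gaps_ms)):
            return achieved                    # every gap met within tolerance
    return None

# Example: an 11-packet train whose gaps are either 1 ms or 11 ms
gaps = [1, 11, 1, 1, 11, 1, 11, 1, 1, 11]     # 10 gaps -> 11 packets
result = send_train(lambda seq: None, gaps)   # `send` would emit packet `seq`
```

On a loaded node an attempt occasionally fails its timing check; discarding and resending the train trades attempts for precision, which matches the slide's point.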
[Figure, slides 37-38: a train of 11 packets whose inter-packet gaps are either 1 ms or 11 ms]
Myth 3: PlanetLab experiences excessive churn
- Nodes have been substantially off-line only three times in roughly two years (late 2003 through early 2005):
  - Dec. 2003: all nodes off-line for a week after a security incident
  - Nov. 2004: upgrade from version 1.0 to version 2.0
  - Feb. 2005: many nodes off-line for a weekend due to a kernel bug
Conclusion
- PlanetLab is a global research network that supports the development of new network services
- It has helped develop new technologies for distributed storage, network mapping, peer-to-peer systems, distributed hash tables, and query processing
Q & A