Multi-Tenancy and Virtualization in Cloud Computing

Page 1: Multi-Tenancy and Virtualization in Cloud Computing

2012-2013


IN4392 Cloud Computing: Multi-Tenancy, including Virtualization

Cloud Computing (IN4392)

D.H.J. Epema and A. Iosup

Parallel and Distributed Systems Group, TU Delft

Page 2: Multi-Tenancy and Virtualization in Cloud Computing


Terms for Today’s Discussion

mul·ti-te·nan·cy noun \ˌməl-ti-ˈte-nan(t)-sē\ an IT sharing model of how physical and virtual resources are used by possibly concurrent tenants

Page 3: Multi-Tenancy and Virtualization in Cloud Computing

Characteristics of Multi-Tenancy

1. Isolation = separation of services provided to each tenant (the noisy neighbor)
2. Scaling conveniently with the number and size of tenants (max weight in the elevator)
3. Meet SLAs for each tenant
4. Support for per-tenant service customization
5. Support for value-adding ops, e.g., backup, upgrade
6. Secure data processing and storage (the snoopy neighbor)
7. Support for regulatory law (per legislator, per tenant)

Page 4: Multi-Tenancy and Virtualization in Cloud Computing

Benefits of Multi-Tenancy (the Promise)

• Cloud operator
  • Economy of scale
  • Market share and branding (for the moment)
• Users
  • Flexibility
  • Focus on core expertise
  • Reduced cost
  • Reduced time-to-market
• Overall
  • Reduced cost of IT deployment and operation

Page 5: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 6: Multi-Tenancy and Virtualization in Cloud Computing

Problems with Multi-Tenancy [1/5]

A List of Concerns

• Users
  • Performance isolation (and variability) for all resources
  • Scalability with the number of tenants (per resource)
  • Support for value-added ops for each application type
  • Security concerns (too many to list)
• Owners
  • Up-front and operational costs
  • Human management of multi-tenancy
  • Development effort and required skills
  • Time-to-market
• The law: think of health-management applications

Page 7: Multi-Tenancy and Virtualization in Cloud Computing

Problems with Multi-Tenancy [2/5]

Load Imbalance

• Overall workload imbalance: normalized daily load (5:1)

• Temporary workload imbalance: hourly load (1000:1)

(Figure: overall imbalance vs. temporary imbalance.)

Page 8: Multi-Tenancy and Virtualization in Cloud Computing

Problems with Multi-Tenancy [3/5]

Practical Achievable Utilization

• Enterprise: <15% [McKinsey’12]
• Parallel production environments: 60-70% [Nitzberg’99]
• Grids: 15-30% average cluster, >90% busy clusters
• Today’s clouds: ???

Iosup and Epema: Grid Computing Workloads. IEEE Internet Computing 15(2): 19-26 (2011)

Page 9: Multi-Tenancy and Virtualization in Cloud Computing

Problems with Multi-Tenancy [4/5]

(Catastrophic) Cascading Failures

• Parallel production environments: one failure kills one or more parallel jobs
• Grids: correlated failures
• Today’s clouds: Amazon, Facebook, etc. had catastrophic failures in the past 2-3 years

Iosup et al.: On the dynamic resource availability in grids. GRID 2007: 26-33

(Figure: CDF of the size of correlated failures; average = 11 nodes, range = 1-339 nodes.)

Page 10: Multi-Tenancy and Virtualization in Cloud Computing

Problems with Multi-Tenancy [5/5]

Economics

• Up-front: a shared approach is more difficult to develop than an isolated approach; may also require expensive skills


Source: www.capcloud.org/TechGate/Multitenancy_Magic.pptx

Page 11: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 12: Multi-Tenancy and Virtualization in Cloud Computing


Page 13: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 14: Multi-Tenancy and Virtualization in Cloud Computing

Fairness

• Intuitively, the distribution of goods (distributive justice)
• Different people, different perceptions of justice:
  • Everyone pays the same, vs. the rich pay proportionally higher taxes
  • “I only need to pay a few years later than everyone else”

Page 15: Multi-Tenancy and Virtualization in Cloud Computing

The VL-e project: application areas

(Diagram: the Virtual Laboratory (VL) provides application-oriented services on top of Grid Services, which harness multi-domain distributed resources and manage communication & computing. Application areas: Data-Intensive Science, Bio-Diversity, Bio-Informatics, Food Informatics, Medical Diagnosis & Imaging, Dutch Telescience. Industry partners: Philips, Unilever, IBM. Workloads: Bags-of-Tasks.)

Page 16: Multi-Tenancy and Virtualization in Cloud Computing

The VL-e project: application areas

(Same diagram as on the previous slide.)

Fairness for all!

Task (groups of 5, 5 minutes): discuss fairness for this scenario.

Task (inter-group discussion): discuss fairness for this scenario.

Page 17: Multi-Tenancy and Virtualization in Cloud Computing

Research Questions

Q1: What is the design space for BoT scheduling in large-scale, distributed, fine-grained computing?

Q2: What is the performance of BoT schedulers in this setting?

Page 18: Multi-Tenancy and Virtualization in Cloud Computing

Scheduling Model [1/4]: Overview

• System Model
  1. Clusters execute jobs
  2. Resource managers coordinate job execution
  3. Resource management architectures route jobs among resource managers
  4. Task selection policies create the eligible set
  5. Task scheduling policies schedule the eligible set

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q1

Fairness for all!

Page 19: Multi-Tenancy and Virtualization in Cloud Computing

Scheduling Model [2/4]: Resource Management Architectures (route jobs among resource managers)

• Separated Clusters (sep-c)
• Centralized (csp)
• Decentralized (fcondor)

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q1

Page 20: Multi-Tenancy and Virtualization in Cloud Computing

Scheduling Model [3/4]: Task Selection Policies (create the eligible set)

• Age-based:
  1. S-T: Select Tasks in the order of their arrival.
  2. S-BoT: Select BoTs in the order of their arrival.
• User-priority-based:
  3. S-U-Prio: Select the tasks of the User with the highest Priority.
• Based on fairness in resource consumption (a sketch of one such policy follows this slide):
  4. S-U-T: Select the Tasks of the User with the lowest resource consumption.
  5. S-U-BoT: Select the BoTs of the User with the lowest resource consumption.
  6. S-U-GRR: Select the User Round-Robin / all tasks for this user.
  7. S-U-RR: Select the User Round-Robin / one task for this user.

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q1
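To make the fairness-based selection policies concrete, here is a minimal Python sketch of S-U-RR: pick users round-robin and take one eligible task per turn. The user names, task IDs, and per-user FIFO queues are invented for illustration; this is not the HPDC 2008 implementation.

    # S-U-RR sketch: one task per user per round-robin turn (data is made up).
    from collections import deque

    queues = {"alice": deque(["a1", "a2", "a3"]),
              "bob":   deque(["b1"]),
              "carol": deque(["c1", "c2"])}

    def s_u_rr(queues):
        """Yield tasks user-by-user, one task per turn, skipping empty queues."""
        users = deque(queues)
        while any(queues.values()):
            u = users[0]
            users.rotate(-1)                 # the next user gets the next turn
            if queues[u]:
                yield u, queues[u].popleft()

    for user, task in s_u_rr(queues):
        print(user, task)   # alice a1, bob b1, carol c1, alice a2, carol c2, alice a3

Note how no user gets a second task before every user with waiting work has received one, which is exactly the fairness property this family of policies trades performance for.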

Page 21: Multi-Tenancy and Virtualization in Cloud Computing

Scheduling Model [4/4]: Task Scheduling Policies (schedule the eligible set)

• Information availability:
  • Known
  • Unknown
  • Historical records
• Sample policies (a sketch of ECT follows this slide):
  • Earliest Completion Time, with Prediction of Runtimes (ECT(-P))
  • Fastest Processor First (FPF)
  • (Dynamic) Fastest Processor Largest Task ((D)FPLT)
  • Shortest Task First with Replication (STFR)
  • Work Queue with Replication (WQR)

(Table: sample policies mapped by Task Information × Resource Information, each Known (K), Historical (H), or Unknown (U); the cells list ECT, FPLT; FPF; ECT-P; DFPLT, MQD; STFR; RR, WQR.)

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q1
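As an illustration of the fully "known" corner of this design space, here is a minimal Python sketch of ECT. It assumes known task runtimes and known relative processor speeds; all values are invented, and tasks are taken in name order as a stand-in for arrival order.

    # ECT sketch: place each task on the processor that completes it earliest.
    task_runtimes = {"t1": 10.0, "t2": 4.0, "t3": 7.0}   # seconds on a speed-1.0 CPU
    proc_speed    = {"p1": 1.0, "p2": 1.75}              # relative performance
    proc_free_at  = {p: 0.0 for p in proc_speed}         # when each CPU goes idle

    schedule = []
    for task, base in sorted(task_runtimes.items()):
        # Completion time = when the processor frees up + speed-scaled runtime.
        best = min(proc_speed,
                   key=lambda p: proc_free_at[p] + base / proc_speed[p])
        finish = proc_free_at[best] + base / proc_speed[best]
        proc_free_at[best] = finish
        schedule.append((task, best, finish))

    for task, proc, finish in schedule:
        print(f"{task} -> {proc}, finishes at {finish:.2f}s")

ECT-P is the same loop with predicted runtimes (from historical records) substituted for the known ones.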

Page 22: Multi-Tenancy and Virtualization in Cloud Computing

Design Space Exploration [1/3]: Overview

• Design space exploration: time to understand how our solutions fit into the complete system.
• Study the impact of:
  • The Task Scheduling Policy (s policies)
  • The Workload Characteristics (P characteristics)
  • The Dynamic System Information (I levels)
  • The Task Selection Policy (S policies)
  • The Resource Management Architecture (A policies)

s × 7P × I × S × A × (environment) → more than 2M design points

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q2

Page 23: Multi-Tenancy and Virtualization in Cloud Computing

Design Space Exploration [2/3]: Experimental Setup

• Simulator: DGSim [IosupETFL SC’07, IosupSE EuroPar’08]
• System: DAS + Grid’5000 [Cappello & Bal CCGrid’07]; >3,000 CPUs, relative performance 1-1.75
• Metrics: makespan; Normalized Schedule Length (~ speed-up)
• Workloads: real (DAS + Grid’5000); realistic (system load 20-95%, from a workload model)

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q2

Page 24: Multi-Tenancy and Virtualization in Cloud Computing

Design Space Exploration [3/3]: Task Selection, including Fair Policies

• The task selection policy matters only for busy systems
• Naïve user priority can lead to poor performance
• Fairness, in general, reduces performance

(Figure: performance of S-U-Prio vs. the fair S-U-* policies: S-U-T, S-U-BoT, …)

Iosup et al.: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108

Q2

Page 25: Multi-Tenancy and Virtualization in Cloud Computing

Quincy: Microsoft’s Fair Scheduler

• Fairness in Microsoft’s Dryad data centers
• Large jobs (30 minutes or longer) should not monopolize the whole cluster (similar: Bounded Slowdown [Feitelson et al.’97])
• A job that takes t seconds in an exclusive-access run requires at most J × t seconds when J jobs run concurrently in the cluster (e.g., a 30-minute job among J = 4 concurrent jobs must finish within 2 hours)
• Challenges
  1. Support fairness
  2. Improve data locality: use the data center’s network and storage architecture to reduce job response time

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 26: Multi-Tenancy and Virtualization in Cloud Computing

Dryad Workloads

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 27: Multi-Tenancy and Virtualization in Cloud Computing


Dryad Workloads

Source: http://sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

Q: Worst-case scenario?

Page 28: Multi-Tenancy and Virtualization in Cloud Computing


Dryad Workloads

Source: http://sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

Q: Is this fair?

Page 29: Multi-Tenancy and Virtualization in Cloud Computing


Dryad Workloads

Source: http://sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

Q: Is this fair?

Page 30: Multi-Tenancy and Virtualization in Cloud Computing

Dryad Workloads

Source: http://sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

Q: Is this fair?

Page 31: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Cluster Architecture: Racks and Computers

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 32: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Main Idea: Graph Min-Cost Flow

• From scheduling to Graph Min-Cost Flow
  • Feasible schedule = min-cost flow
• Graph construction (see the sketch after this slide)
  • Graph from job tasks to computers, passing through cluster headnodes and racks
  • Edges weighted by a cost function (scheduling constraints, e.g., fairness)
• Pros and Cons
  • From per-job (local) decisions to workload (global) decisions
  • Complex graph construction
  • Edge weights assume all constraints can be normalized

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276
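A minimal sketch of the scheduling-as-min-cost-flow idea, not Quincy's actual implementation: each task supplies one unit of flow, which must reach the sink either through a computer (task scheduled) or through an "unscheduled" node (task left waiting). The graph layout and edge costs below are invented placeholders; the solver is the networkx library's min_cost_flow.

    # Scheduling as min-cost flow (toy graph, invented costs).
    import networkx as nx

    G = nx.DiGraph()
    tasks = ["t1", "t2", "t3"]
    computers = ["c1", "c2"]

    for t in tasks:
        G.add_node(t, demand=-1)              # each task pushes 1 unit of flow
    G.add_node("sink", demand=len(tasks))     # all flow must end at the sink

    for t in tasks:
        # Cost encodes scheduling preference (e.g., poor data locality = high cost);
        # leaving a task unscheduled carries the highest cost.
        G.add_edge(t, "cluster", capacity=1, weight=5)
        G.add_edge(t, "unsched", capacity=1, weight=20)
    G.add_edge("unsched", "sink", capacity=len(tasks), weight=0)

    for c in computers:
        G.add_edge("cluster", c, capacity=1, weight=0)   # one task per computer
        G.add_edge(c, "sink", capacity=1, weight=0)

    flow = nx.min_cost_flow(G)   # a feasible schedule = a min-cost flow
    for t in tasks:
        for dst, f in flow[t].items():
            if f:
                print(t, "->", dst)   # two tasks scheduled, one left unscheduled

Quincy's real graph adds per-rack aggregator nodes and recomputes the flow as tasks arrive and finish, turning each solver run into a globally optimal scheduling decision under the encoded costs.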

Page 33: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [1/6]

(Figure: the flow graph spans from the workflow tasks to the cluster, racks, and computers.)

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 34: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [2/6]

(Figure: tasks start in the unscheduled state.)

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 35: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [3/6]

(Figure: weighted edges.)

Q: How easy is it to encode heterogeneous resources?

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 36: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [4/6]

(Figure: the root task gets one computer.)

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 37: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [5/6]: Dynamic Schedule for One Job

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 38: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Operation [6/6]: Dynamic Schedule for Two Jobs

Q: How compute-intensive is the Quincy scheduler, for many jobs and/or computers?

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 39: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Setup

• Schedulers
  • Encoded two fair variants (with and without pre-emption)
  • Encoded two unfair variants (with and without pre-emption)
  • Comparison with the Greedy Algorithm (queue-based)
• Typical Dryad jobs; the workload includes a worst-case scenario
• Environment: 1 cluster, 8 racks, 240 nodes

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 40: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Results [1/5]

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 41: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Results [2/5]

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 42: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Results [3/5]: No Fairness

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 43: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Results [4/5]: Queue-Based Fairness

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 44: Multi-Tenancy and Virtualization in Cloud Computing

Quincy Experimental Results [5/5]: Quincy Fairness

Isard et al.: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276

Page 45: Multi-Tenancy and Virtualization in Cloud Computing

Mesos: Dominant Resource Fairness

• Multiple resource types
• Max-Min fairness = maximize the minimum per-user allocation (a sketch of the DRF allocation loop follows below)
• Paper 1 in Topic 4
• Dominant Resource Fairness is better explained in:

Ghodsi et al., Dominant Resource Fairness: Fair Allocation of Multiple Resource Types, Usenix NSDI 2011.
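A minimal Python sketch of the DRF allocation loop, using the running example from the NSDI 2011 paper: a cluster with 9 CPUs and 18 GB of memory, user A's tasks needing (1 CPU, 4 GB) and user B's needing (3 CPUs, 1 GB). The code repeatedly gives one task to the user with the smallest dominant share; the early break when the chosen task no longer fits is a simplification.

    # DRF sketch: serve the user with the smallest dominant share first.
    capacity = {"cpu": 9.0, "mem": 18.0}
    demand = {"A": {"cpu": 1.0, "mem": 4.0},   # A's dominant resource: memory
              "B": {"cpu": 3.0, "mem": 1.0}}   # B's dominant resource: CPU

    used = {r: 0.0 for r in capacity}
    alloc = {u: {r: 0.0 for r in capacity} for u in demand}

    def dominant_share(u):
        # A user's dominant share = max over resources of allocated fraction.
        return max(alloc[u][r] / capacity[r] for r in capacity)

    while True:
        u = min(demand, key=dominant_share)    # user with the lowest dominant share
        d = demand[u]
        if any(used[r] + d[r] > capacity[r] for r in capacity):
            break                              # next task does not fit (simplified)
        for r in capacity:
            used[r] += d[r]
            alloc[u][r] += d[r]

    for u in alloc:
        print(u, alloc[u], "dominant share = %.2f" % dominant_share(u))

Running this reproduces the paper's outcome: A ends with 3 tasks and B with 2, and both reach a dominant share of 2/3, which is max-min fairness applied to dominant shares.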

Page 46: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 47: Multi-Tenancy and Virtualization in Cloud Computing

Virtualization

• Merriam-Webster

• Popek and Goldberg, 1974

Source: Waldspurger, Introduction to Virtual Machines. http://labs.vmware.com/download/52/

Page 48: Multi-Tenancy and Virtualization in Cloud Computing

Characteristics of Virtualization

1. Fidelity* = ability to run applications unmodified
2. Performance* close to hardware ability
3. Safety* = all hardware resources are managed by the virtualization manager, never directly accessible to applications
4. Isolation of performance, failures, etc.
5. Portability = ability to run a VM on any hardware (support for value-adding ops, e.g., migration)
6. Encapsulation = ability to capture VM state (support for value-adding ops, e.g., backup, clone)
7. Transparency in operation

Q: Why not do all of these in the OS?

* Classic virtualization (Popek and Goldberg 1974)

Page 49: Multi-Tenancy and Virtualization in Cloud Computing

Benefits of Virtualization (the Promise)

• Simplified management of physical resources
• Increased utilization of physical resources (consolidation)
• Better isolation of (catastrophic) failures
• Better isolation of security leaks (?)
• Support for multi-tenancy
• Derived benefit: reduced cost of IT deployment and operation

Page 50: Multi-Tenancy and Virtualization in Cloud Computing

A List of Concerns

• Users
  • Performance isolation
• Owners
  • Performance loss vs. native hardware
  • Support for exotic devices, especially on the versatile x86
  • Porting the OS and applications, for some virtualization flavors
  • Implement VMM-application integration? (Loss of portability vs. increased performance.)
  • Install hardware with support for virtualization? (Certification of new hardware vs. increased performance.)
• The Law: security, reliability, …

Page 51: Multi-Tenancy and Virtualization in Cloud Computing

Depth of Virtualization

• NO virtualization (actually, virtual memory)
  • Most grids, enterprise data centers until 2000
  • Facebook now
• Single-level virtualization (we zoom into this next)
• Nested virtualization
  • A VM embedded in a VM embedded in a VM emb…
  • It’s all turtles all the way down…

Ben-Yehuda et al.: The Turtles Project: Design and Implementation of Nested Virtualization. OSDI 2010: 423-436

Q: Are all our machines virtualized anyway, by the modern OS?

Q: Why is this virtualization model useful?

Page 52: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization and The Full IaaS Stack

(Diagram: the full IaaS stack. In each VM Instance, Applications run on a Guest OS over Virtual Resources; multiple Virtual Machines run on a Virtual Machine Monitor; the Virtual Machine Monitors are coordinated by a Virtual Infrastructure Manager running on the Physical Infrastructure.)

Page 53: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization

(Diagram: a Virtual Machine Monitor (the hypervisor) runs on a Host OS, which may not exist in every model; applications such as MusicWave and OtherApp run next to Virtual Machines, each of which contains Applications on a Guest OS over Virtual Resources.)

Q: What to do now?

Page 54: Multi-Tenancy and Virtualization in Cloud Computing

Three VMM Models

(Diagram of the three models: Classic VMM*, where the VMM runs directly on the hardware; Hosted VMM, where the VMM runs on top of a Host OS; and Hybrid VMM, where the VMM runs alongside a Host OS that handles I/O on its behalf. Each model runs guest applications such as MWave and App2 on a Guest OS.)

* Classic virtualization (Popek and Goldberg 1974)

Page 55: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization

Implementing the Classic Virtualization Model

• General technique*, similar to simulation/emulation
  • Code for computer X runs on general-purpose machine G.
  • If X=G (virtualization), the slowdown of software simulation may be 20:1; if X≠G (emulation), the slowdown may be 1000:1.
  • If X=G (virtualization), code may execute directly on the hardware.
• Privileged vs. user code*
  • Trap-and-emulate as the main (but not necessary) approach
  • Ring deprivileging, ring aliasing, address-space compression, other niceties**
• Specific approaches for each virtualized resource***
  • Virtualized CPU, memory, I/O (disk, network, graphics, …)

* (Goldberg 1974)  ** (Intel 2006)  *** (Rosenblum and Garfinkel 2005)

Page 56: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization

Refinements to the Classic Virtualization Model*

• Enhancing the VMM-guest OS interface (paravirtualization)
  • The Guest OS is re-coded (ported) to the VMM, for performance gains (e.g., by avoiding some privileged operations)
  • The Guest OS can provide information to the VMM, for performance gains
  • Loses or loosens the “Fidelity” characteristic**
  • 2010 onwards: paravirtualization other than for I/O seems to wane
• Enhancing the hardware-VMM interface (hardware support)
  • New hardware execution modes for Guest OSs, so the VMM need not trap all privileged operations, yielding performance gains
  • IBM’s System 370 introduced interpretive execution (1972); Intel VT-x and VT-i (2006)
  • Passthrough I/O virtualization with low CPU overhead: isolated DMA (Intel VT-d and AMD IOMMU); I/O device partitions (PCI-SIG IOV spec)

* (Adams and Agesen 2006)  ** (Popek and Goldberg 1974)

Page 57: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization

Trap-and-Emulate

(Diagram: unprivileged Guest OS + Application code runs directly on the CPU; privileged events such as a page fault, an undefined instruction, or a virtual IRQ trap into the Virtual Machine Monitor, which handles them in its MMU, CPU, and I/O emulation modules. A toy sketch of this loop follows after the questions below.)

Source: Waldspurger, Introduction to Virtual Machines. http://labs.vmware.com/download/52/

Q: What are the challenges?

Q: What are the challenges for x86 architectures?
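A toy Python sketch of the trap-and-emulate control loop described above. The opcode names and virtual-CPU fields are invented for illustration; a real VMM dispatches on hardware traps, not Python tuples.

    # Trap-and-emulate sketch: privileged instructions trap to the VMM, which
    # emulates them against *virtual* CPU state; the rest would run directly.
    PRIVILEGED = {"lidt", "ltr", "out"}   # hypothetical privileged opcodes

    class VMM:
        def __init__(self):
            # Virtual CPU state the VMM maintains on behalf of the guest.
            self.vcpu = {"idt": None, "tr": None, "io_log": []}

        def emulate(self, opcode, operand):
            # Each handler updates virtual state; real hardware is never touched.
            if opcode == "lidt":
                self.vcpu["idt"] = operand           # guest loads its interrupt table
            elif opcode == "ltr":
                self.vcpu["tr"] = operand            # guest loads its task register
            elif opcode == "out":
                self.vcpu["io_log"].append(operand)  # guest I/O, emulated

        def run(self, guest_code):
            for opcode, operand in guest_code:
                if opcode in PRIVILEGED:
                    self.emulate(opcode, operand)    # trap -> emulate -> resume
                # else: unprivileged, would execute directly at native speed

    vmm = VMM()
    vmm.run([("add", 1), ("lidt", 0x1000), ("out", 0x3F8)])
    print(vmm.vcpu)   # {'idt': 4096, 'tr': None, 'io_log': [1016]}

The classic x86 challenge is that some sensitive instructions do not trap when deprivileged, which is what binary translation and the hardware extensions on the next slide address.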

Page 58: Multi-Tenancy and Virtualization in Cloud Computing

Single-Level Virtualization

Processor Virtualization Techniques*

• Binary Translation
  • Static BT: execute guest instructions in an interpreter, to prevent unlawful access to privileged-state instructions
  • Dynamic/Adaptive BT: detect instructions that trap frequently and adapt their translation, to eliminate traps from non-privileged instructions that access sensitive data (e.g., loads/stores in page tables)
• Hardware virtualization
  • Co-design of VM and hardware: hardware with a non-standard ISA, shadow memory, optimization of instructions for selected applications
  • Intel VT-*, AMD SVM: an in-memory data structure for state, a guest mode (a less privileged execution mode), vmrun, etc.

* (Adams and Agesen 2006)

Page 59: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 60: Multi-Tenancy and Virtualization in Cloud Computing

Support for Specific Services and/or Platforms

Database Multi-Tenancy [1/3]

1. Isolation = separation of services provided to each tenant
2. Scaling conveniently with the number and size of tenants
3. Meet SLAs for each tenant
4. Support for per-tenant service customization
5. Support for value-adding ops, e.g., backup, upgrade
6. Secure data processing and storage
7. Support for regulatory law (per legislator, per tenant)

* Platform-specific (database-specific) issues

Page 61: Multi-Tenancy and Virtualization in Cloud Computing

Support for Specific Services and/or Platforms

Database Multi-Tenancy [2/3]


Source: http://msdn.microsoft.com/en-us/library/aa479086.aspx

Page 62: Multi-Tenancy and Virtualization in Cloud Computing

Support for Specific Services and/or Platforms
Database Multi-Tenancy [3/3]

(Figure: schema-mapping techniques for multi-tenant databases: private tables; extension tables; datatype-specific pivot tables; universal table with XML document; universal table; rigid, shared table. A sketch of the extension-tables pattern follows below.)

Source: Bobrowski, www.capcloud.org/TechGate/Multitenancy_Magic.pptx
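To make one point on this spectrum concrete, here is a minimal sqlite3 sketch of the extension-tables pattern; all table names, column names, and data are invented for illustration. Common columns live in a shared base table keyed by tenant, while tenant-specific columns live in a per-tenant extension table joined back by row ID.

    # Extension-tables sketch: shared base table + per-tenant extension table.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        -- Shared base table: rows for all tenants, keyed by tenant_id.
        CREATE TABLE accounts (
            tenant_id INTEGER, row_id INTEGER, name TEXT,
            PRIMARY KEY (tenant_id, row_id)
        );
        -- Extension table holding tenant 17's custom fields only.
        CREATE TABLE accounts_ext_t17 (
            row_id INTEGER PRIMARY KEY, hospital TEXT, beds INTEGER
        );
    """)
    db.execute("INSERT INTO accounts VALUES (17, 1, 'Acme Health')")
    db.execute("INSERT INTO accounts_ext_t17 VALUES (1, 'St. Mary', 120)")

    # A tenant-17 query joins base and extension rows back together.
    row = db.execute("""
        SELECT a.name, e.hospital, e.beds
        FROM accounts a JOIN accounts_ext_t17 e ON a.row_id = e.row_id
        WHERE a.tenant_id = 17
    """).fetchone()
    print(row)   # ('Acme Health', 'St. Mary', 120)

The trade-off is characteristic of the whole spectrum: the shared base table scales with the number of tenants, while per-tenant customization costs an extra table and a join per query.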

Page 63: Multi-Tenancy and Virtualization in Cloud Computing

Agenda

1. Introduction
2. Multi-Tenancy in Practice (The Problem)
3. Architectural Models for Multi-Tenancy in Clouds
4. Shared Nothing: Fairness
5. Shared Hardware: Virtualization
6. Sharing Other Operational Levels
7. Summary

Page 64: Multi-Tenancy and Virtualization in Cloud Computing

Conclusion: Take-Home Message

• Multi-Tenancy = reduced cost of IT
  • 7 architectural models for multi-tenancy
  • Shared Nothing: fairness is a key challenge
  • Shared Hardware: virtualization is a key challenge
  • Other levels: optimizing for the specific application is a key challenge
  • Many trade-offs
• Virtualization
  • Enables multi-tenancy + many other benefits
  • 3 depth models, 3 VMM models
  • A whole new dictionary: hypervisor, paravirtualization, ring deprivileging
  • Main trade-off: performance cost vs. benefits
• Reality check: virtualization is now (2012) very popular

http://www.flickr.com/photos/dimitrisotiropoulos/4204766418/

Page 65: Multi-Tenancy and Virtualization in Cloud Computing

Reading Material

• Workloads
  • James Patton Jones, Bill Nitzberg: Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization. JSSPP 1999: 1-16
  • Alexandru Iosup, Dick H. J. Epema: Grid Computing Workloads. IEEE Internet Computing 15(2): 19-26 (2011)
  • Alexandru Iosup, Mathieu Jan, Omer Ozan Sonmez, Dick H. J. Epema: On the dynamic resource availability in grids. GRID 2007: 26-33
  • D. Feitelson, L. Rudolph, U. Schwiegelshohn, K. Sevcik, P. Wong: Theory and practice in parallel job scheduling. JSSPP 1997: 1-34
• Fairness
  • Alexandru Iosup, Omer Ozan Sonmez, Shanny Anoep, Dick H. J. Epema: The performance of bags-of-tasks in large-scale distributed systems. HPDC 2008: 97-108
  • Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar, Andrew Goldberg: Quincy: fair scheduling for distributed computing clusters. SOSP 2009: 261-276
  • A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, I. Stoica: Dominant Resource Fairness: Fair Allocation of Multiple Resource Types. Usenix NSDI 2011
• Virtualization
  • Gerald J. Popek, Robert P. Goldberg: Formal Requirements for Virtualizable Third Generation Architectures. Communications of the ACM, July 1974
  • Robert P. Goldberg: Survey of Virtual Machine Research. IEEE Computer Magazine, June 1974
  • Mendel Rosenblum, Tal Garfinkel: Virtual Machine Monitors: Current Technology and Future Trends. IEEE Computer Magazine, May 2005
  • Keith Adams, Ole Agesen: A comparison of software and hardware techniques for x86 virtualization. ASPLOS 2006: 2-13
  • Gill Neiger, Amy Santoni, Felix Leung, Dion Rodgers, Rich Uhlig: Intel Virtualization Technology: Hardware Support for Efficient Processor Virtualization. Intel Technology Journal, Vol. 10(3), Aug 2006
  • Muli Ben-Yehuda, Michael D. Day, Zvi Dubitzky, Michael Factor, Nadav Har'El, Abel Gordon, Anthony Liguori, Orit Wasserman, Ben-Ami Yassour: The Turtles Project: Design and Implementation of Nested Virtualization. OSDI 2010: 423-436

