Zephyr: Live Migration in Shared Nothing Databases for Elastic Cloud Platforms
Aaron J. Elmore, Sudipto Das, Divyakant Agrawal, Amr El Abbadi
Distributed Systems Lab, University of California, Santa Barbara
Sudipto Das {[email protected]}
Serve thousands of applications (tenants)
◦ AppEngine, Azure, Force.com
Tenants are (typically)
◦ Small
◦ SLA sensitive
◦ Erratic load patterns
◦ Subject to flash crowds, i.e., the Fark, Digg, Slashdot, Reddit effect (for now)
Support for multitenancy is critical
Our focus: DBMSs serving these platforms
Cloud Application Platforms
Multitenancy…
What the tenant wants…
What the service provider wants…
Cloud Infrastructure is Elastic
Static provisioning for peak is inelastic
[Figure: Resources vs. Time — under Traditional Infrastructures, capacity is provisioned for peak demand, leaving unused resources; with Deployment in the Cloud, capacity tracks demand]
Slide Credits: Berkeley RAD Lab
Elasticity in a Multitenant DB
[Figure: Load Balancer routing requests through the Application/Web/Caching tier to the Database tier]
Live Database Migration
Migrate a tenant’s database in a live system
◦ A critical operation to support elasticity
Different from
◦ Migration between software versions
◦ Migration in case of schema evolution
VM Migration for DB Elasticity
VM migration [Clark et al., NSDI 2005]
One tenant per VM
◦ Pros: allows fine-grained load balancing
◦ Cons: performance overhead; poor consolidation ratio [Curino et al., CIDR 2011]
Multiple tenants in a VM
◦ Pros: good performance
◦ Cons: must migrate all tenants together; coarse-grained load balancing
Problem Formulation
Multiple tenants share the same database process
◦ Shared process multitenancy
◦ Example systems: SQL Azure, ElasTraS, RelationalCloud, and many more
Goal: migrate individual tenants
◦ VM migration cannot be used for fine-grained migration
Target architecture: shared nothing
◦ Shared storage architectures: see our VLDB 2011 paper
How to ensure no downtime?
◦ Need to migrate the persistent database image (tens of MBs to GBs)
How to guarantee correctness during failures?
◦ Nodes can fail during migration
◦ How to ensure transaction atomicity and durability?
◦ How to recover migration state after failure? Nodes recover after a failure
How to guarantee serializability?
◦ Transaction correctness equivalent to normal operation
How to minimize migration cost? …
Why is Live Migration hard?
Downtime
◦ Time the tenant is unavailable
Service interruption
◦ Number of operations failing/transactions aborting
Migration overhead/performance impact
◦ During normal operation, migration, and after migration
Additional data transferred
◦ Data transferred in addition to the DB’s persistent image
Migration Cost Metrics
Migration executed in phases
◦ Starts with transfer of minimal information to the destination (the “wireframe”)
◦ Source and destination concurrently execute transactions in one migration phase
Database pages used as the granule of migration
◦ Pages “pulled” by the destination on demand
Minimal transaction synchronization
◦ A page is uniquely owned by either source or destination
◦ Leverage page-level locking
Logging and handshaking protocols to tolerate failures
How did we do it?
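The phased structure above can be sketched as a small state machine. This is a minimal illustration, not Zephyr's actual implementation; the mode names follow the slides, but the `next_mode` helper and its transition table are assumptions for exposition.

```python
from enum import Enum, auto

class Mode(Enum):
    """Migration modes, in the order Zephyr moves through them."""
    NORMAL_SOURCE = auto()  # before migration: source executes everything
    INIT = auto()           # wireframe (index metadata) copied to destination
    DUAL = auto()           # both nodes execute transactions; pages pulled on demand
    FINISH = auto()         # source pushes remaining pages to the destination
    NORMAL_DEST = auto()    # after migration: destination owns everything

# Allowed transitions; the real protocol makes each one atomic
# via logging and a handshake between source and destination.
_TRANSITIONS = {
    Mode.NORMAL_SOURCE: Mode.INIT,
    Mode.INIT: Mode.DUAL,
    Mode.DUAL: Mode.FINISH,
    Mode.FINISH: Mode.NORMAL_DEST,
}

def next_mode(current: Mode) -> Mode:
    """Advance the migration by one phase, rejecting illegal jumps."""
    if current not in _TRANSITIONS:
        raise ValueError(f"migration already complete in {current}")
    return _TRANSITIONS[current]
```

The point of the linear transition table is that modes cannot be skipped or reversed, which is what lets both nodes agree on the current mode after a failure.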
For this talk
◦ Small tenants, i.e., not sharded across nodes
◦ No replication
◦ No structural changes to indices
Extensions in the paper
◦ Relax these assumptions
Simplifying Assumptions
Design Overview
[Figure: Source holds owned pages P1, P2, P3, …, Pn and active transactions TS1,…,TSk; the Destination holds nothing. Legend: page owned by node vs. page not owned by node]
Init Mode
Freeze the index wireframe and migrate it to the destination
[Figure: Source keeps its owned pages P1,…,Pn and active transactions TS1,…,TSk; the Destination receives the wireframe with un-owned copies of P1,…,Pn]
Dual Mode
Requests for un-owned pages can block
Index wireframes remain frozen
[Figure: Source executes old, still active transactions TSk+1,…,TSl; Destination executes new transactions TD1,…,TDm. When P3 is accessed by TDi, it is pulled from the source and its ownership moves to the destination]
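The on-demand pull in dual mode rests on one invariant: every page has exactly one owner, and ownership moves from source to destination at most once. A minimal sketch of that bookkeeping, with a hypothetical `PageTable` class standing in for Zephyr's index-level metadata:

```python
class OwnershipError(Exception):
    pass

class PageTable:
    """Tracks which node owns each page; a page has exactly one owner."""

    def __init__(self, pages, owner="source"):
        self._owner = {p: owner for p in pages}

    def owner(self, page):
        return self._owner[page]

    def pull(self, page):
        """Destination pulls an un-owned page from the source.

        Ownership transfers exactly once; a second pull, or any attempt
        by the source to take the page back, is an error. This one-way
        transfer is what rules out Tdest -> Tsource conflict dependencies.
        """
        if self._owner[page] != "source":
            raise OwnershipError(f"{page} already migrated")
        self._owner[page] = "destination"
        return page

pages = PageTable(["P1", "P2", "P3"])
pages.pull("P3")  # a destination transaction TDi touched P3
assert pages.owner("P3") == "destination"
assert pages.owner("P1") == "source"
```

In the real system this table is embedded in the (frozen) index wireframe, so locating a page and checking its ownership happen in one traversal.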
Finish Mode
Pages can still be pulled by the destination, if needed
[Figure: Source pushes the remaining pages P1, P2, …, Pn to the Destination, which executes transactions TDm+1,…,TDn; migration of the pulled pages is completed]
Normal Operation
Index wireframe un-frozen
[Figure: Destination owns all pages P1,…,Pn and executes transactions TDn+1,…,TDp]
Once migrated, pages are never pulled back by the source
◦ Transactions at the source accessing migrated pages are aborted
No structural changes to indices during migration
◦ Transactions (at both nodes) that make structural changes to indices abort
Destination “pulls” pages on demand
◦ Transactions at the destination experience higher latency compared to normal operation
Artifacts of this design
Only concern is “dual mode”
◦ Init and Finish: only one node is executing transactions
Local predicate locking of the internal index and exclusive page-level locking between nodes ⇒ no phantoms
Strict 2PL ⇒ transactions are locally serializable
Pages transferred only once
◦ No Tdest → Tsource conflict dependency
Guaranteed serializability
Serializability (proofs in paper)
Transaction recovery
◦ For every database page, transactions at the source are ordered before transactions at the destination
◦ After a failure, conflicting transactions are replayed in the same order
Migration recovery
◦ Atomic transitions between migration modes
◦ Logging and handshake protocols
◦ Every page has exactly one owner
◦ Bookkeeping at the index level
Recovery (proofs in paper)
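The per-page replay ordering above can be made concrete with a small checker. Everything here is illustrative: the log record layout `(node, txn_id, page)` and the `replay_order` helper are assumptions for exposition, not Zephyr's actual log format.

```python
# Hypothetical log records: (node, txn_id, page). During recovery,
# conflicting transactions on the same page must be replayed in the
# original order: everything the source did to a page precedes
# everything the destination did to it (pages move only once).
log = [
    ("source", "TS1", "P1"),
    ("source", "TS2", "P3"),
    ("destination", "TD1", "P3"),
    ("destination", "TD2", "P1"),
]

def replay_order(records):
    """Group log records by page, preserving their original order."""
    by_page = {}
    for node, txn, page in records:
        by_page.setdefault(page, []).append((node, txn))
    return by_page

order = replay_order(log)

# Check the invariant: for every page, no source record follows
# a destination record.
for page, entries in order.items():
    seen_dest = False
    for node, _ in entries:
        if node == "destination":
            seen_dest = True
        assert not (seen_dest and node == "source"), page
```

Because ownership transfer is one-way, this ordering always exists, which is why replaying each page's conflicting transactions in log order reproduces a consistent state.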
In the presence of arbitrary repeated failures, Zephyr ensures:
◦ Updates made to database pages are consistent
◦ A failure does not leave a page without an owner
◦ Both source and destination are in the same migration mode
Guaranteed termination and starvation freedom
Correctness (proofs in paper)
Replicated tenants
Sharded tenants
Allow structural changes to the indices
◦ Using shared lock managers in the dual mode
Extensions (Details in the paper)
Prototyped using H2, an open source OLTP database
◦ Supports standard SQL/JDBC API
◦ Serializable isolation level
◦ Tree indices
◦ Relational data model
Modified the database engine
◦ Added support for freezing indices
◦ Page migration status maintained using the index
◦ Details in the paper…
Tungsten SQL Router migrates JDBC connections during migration
Implementation
Two database nodes, each with a DB instance running
Synthetic benchmark as load generator
◦ Modified YCSB to add transactions
◦ Small read/write transactions
Compared against Stop and Copy (S&C)
Experimental Setup
Experimental Methodology
[Figure: A system controller coordinates the two database nodes and issues the Migrate command]
Workload: 60 sessions, 100 transactions per session
Default transaction parameters: 10 operations per transaction; 80% reads, 15% updates, 5% inserts
Default DB size: 100k rows (~250 MB)
Hardware: 2.4 GHz Intel Core 2 Quad, 8 GB RAM, 7200 RPM SATA HDs with 32 MB cache, Gigabit Ethernet
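The workload parameters above are easy to reproduce with a small generator. This is a sketch of the shape of the benchmark, not the authors' modified YCSB; the `make_transaction` helper and its operation labels are assumptions.

```python
import random

def make_transaction(rng, n_ops=10, mix=(0.80, 0.15, 0.05)):
    """Build one small transaction: n_ops operations drawn from the
    default mix on the slide (80% reads, 15% updates, 5% inserts)."""
    ops = []
    for _ in range(n_ops):
        r = rng.random()
        if r < mix[0]:
            ops.append("read")
        elif r < mix[0] + mix[1]:
            ops.append("update")
        else:
            ops.append("insert")
    return ops

rng = random.Random(42)  # fixed seed for a repeatable workload
# 60 sessions x 100 transactions per session, as in the slide
workload = [[make_transaction(rng) for _ in range(100)] for _ in range(60)]
total_ops = sum(len(t) for session in workload for t in session)
assert total_ops == 60 * 100 * 10
```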
Downtime (tenant unavailability)
◦ S&C: 3–8 seconds (needed to migrate; unavailable for updates)
◦ Zephyr: no downtime; either source or destination is available
Service interruption (failed operations)
◦ S&C: ~100s to 1,000s of failed operations; all transactions with updates are aborted
◦ Zephyr: ~10s to 100s of failed operations; orders of magnitude less interruption
Results Overview
Average increase in transaction latency (compared to the 6,000-transaction workload without migration)
◦ S&C: 10–15% (cold cache at the destination)
◦ Zephyr: 10–20% (pages fetched on demand)
Data transfer
◦ S&C: persistent database image
◦ Zephyr: 2–3% additional data transfer (messaging overhead)
Total time taken to migrate
◦ S&C: 3–8 seconds; unavailable for any writes
◦ Zephyr: 10–18 seconds; no unavailability
Results Overview
Proposed Zephyr, a live database migration technique with no downtime for shared nothing architectures
◦ The first end-to-end solution with safety, correctness, and liveness guarantees
Prototype implementation on a relational OLTP database
Low cost on a variety of workloads
Contributions
Back-up
Either source or destination is serving the tenant
◦ No downtime
Serializable transaction execution
◦ Unique page ownership
◦ Local multi-granularity locking
Safety in the presence of failures
◦ Transactions are atomic and durable
◦ Migration state is recovered from the log
◦ Ensures consistency of the database state
Guarantees
Wireframe copy
◦ Typically orders of magnitude smaller than the data
Operational overhead during migration
◦ Extra data (in addition to database pages) transferred
◦ Transactions aborted during migration
Migration Cost Analysis
Effect of Inserts on Zephyr
Failures due to attempted modifications of the index structure
Average Transaction Latency
Only committed transactions reported
Loss of cache for both migration types
Zephyr additionally incurs remote fetches of pages