Failover or not to failover


Failover, or not Failover, that is the question
Percona Live MySQL Conference and Expo 2013
Massimo Brignoli, SkySQL
Henrik Ingo, Nokia

Please share and reuse this presentation, licensed under the Creative Commons Attribution License.

Agenda

● Why HA is more difficult for databases
● Steps to failover
● Monitoring
● Automating failover
● Sounds great! What could possibly go wrong?
● Amazon Dynamo
● Galera and NDB

Fault tolerance = redundancy

● RAID
● 2 power units per server
● Cluster of servers
● 2 kidneys per person
● Redundancy at all levels:

Software, Hardware, Network, Electricity...

A chain is as strong as the weakest link.

Durability

"Durability is an interesting concept.If I flush a transaction to disk,

it is said to be durable.But if I then take a backup, it is even more durable."

Heikki Tuuri

Why High Availability is More Difficult for Databases

Redundancy of server
AND
Redundancy of data
WHILE
Performing thousands of write operations per second onto the dataset

What failover?

1. Primary server
2. Secondary / Standby server for redundancy
3. In case the Primary fails, the Secondary server must become the new Primary

Steps to failover (theory)

1. Notice failure
2. Move VIP
3. Continue
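A minimal sketch of what those three steps might look like in practice, assuming a crude TCP-level health check and a virtual IP moved with the standard `ip` command; the host name, address, and interface below are placeholders, not part of the original slides:

```python
import socket
import subprocess
import time

VIP = "192.0.2.10/24"   # illustrative virtual IP
IFACE = "eth0"          # interface that should carry the VIP

def is_primary_alive(host="db-primary", port=3306, timeout=2):
    # Crude health check: can we open a TCP connection to MySQL at all?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def take_over_vip():
    # Bring the VIP up on this (standby) host; the failed primary must also
    # drop it, which is exactly the hard part (see the later slides).
    subprocess.run(["ip", "addr", "add", VIP, "dev", IFACE], check=True)

while True:
    if not is_primary_alive():   # 1. Notice failure
        take_over_vip()          # 2. Move VIP
        break                    # 3. Continue (clients follow the VIP)
    time.sleep(5)
```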

Automating failover
Generic Clustering Solutions

● Pacemaker/Corosync

● Linux Heartbeat

● Red Hat Cluster Suite

● Solaris Cluster

● Windows Server Failover Clustering

● etc...

MySQL Specific Solutions

● MMM

● PRM

● MHA

● JDBC connector

Steps to failover (DRBD)


1. Have DRBD
2. Notice failure
3. Shutdown MySQL on primary
4. Unmount disk on primary
5. Mount disk on secondary
6. Start MySQL on secondary
7. Wait for InnoDB recovery
8. Wait for InnoDB recovery
9. Wait for InnoDB recovery
10. Unset VIP on primary
11. Set VIP on secondary
12. Continue
13. Should you add a new secondary?
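As a rough sketch, the part of this sequence that runs on the surviving secondary might look like the following; the DRBD resource name (`r0`), device, mount point, service name, and VIP are all assumptions for illustration, not values from the slides:

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Run on the secondary once the failure has been noticed (roughly steps 5-11).
run(["drbdadm", "primary", "r0"])                   # promote the DRBD resource
run(["mount", "/dev/drbd0", "/var/lib/mysql"])      # mount the replicated volume
run(["systemctl", "start", "mysql"])                # start MySQL; startup includes
                                                    # InnoDB crash recovery (the long wait)
run(["ip", "addr", "add", "192.0.2.10/24", "dev", "eth0"])   # take over the VIP
```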

Steps to failover (MySQL replication)


1. Have replication
2. Notice failure
3. Make slave writable
4. Make master read-only
5. Unset VIP on master
6. Set VIP on slave
7. Continue
8. Should you add a new slave?
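A sketch of steps 3-6 driven from the `mysql` command-line client; the host names are placeholders and credentials are assumed to come from the usual option files:

```python
import subprocess

def mysql(host, sql):
    # Run a statement via the mysql CLI; credentials from ~/.my.cnf are assumed.
    subprocess.run(["mysql", "-h", host, "-e", sql], check=True)

SLAVE, MASTER = "db2.example.com", "db1.example.com"   # hypothetical hosts

mysql(SLAVE, "SET GLOBAL read_only = OFF;")   # 3. make the slave writable
mysql(MASTER, "SET GLOBAL read_only = ON;")   # 4. make the old master read-only
                                              #    ...only possible if it still answers
# 5./6. move the VIP with the same "ip addr" commands as in the DRBD example
```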

What if you have more than 2 servers? (MySQL replication)


● MySQL replication failover with more than 2 servers can be a hassle.
● Which slave should become the new master?
● All slaves must be pointed to the new master.
● They must figure out where to continue replication (binlog position).

● MySQL 5.6 GTID helps.
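With GTIDs enabled, repointing the remaining slaves no longer requires hunting for binlog positions: `MASTER_AUTO_POSITION=1` lets each slave locate its own place in the new master's stream. A sketch, with placeholder host names:

```python
import subprocess

REPOINT = """
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='db2.example.com', MASTER_AUTO_POSITION=1;
START SLAVE;
"""

# Run against every remaining slave after the new master has been chosen.
for slave in ("db3.example.com", "db4.example.com"):   # hypothetical hosts
    subprocess.run(["mysql", "-h", slave, "-e", REPOINT], check=True)
```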

MHA and SkySQL...

● Combination of resource manager + scripts

● Automating failover process:
  ○ New master selection
  ○ Slaves reconfiguration
  ○ VIP management
  ○ Missing binlogs retrieval

Sounds great, what could possibly go wrong?


1. Have replication
  ○ Ok, is it working? What if it's not working?
  ○ Is it replicating in the right direction?
  ○ Does your bash script handle binlog positions correctly?
  ○ Asynchronous?

2. Notice failure
  ○ Polling interval
  ○ Who is polling?
  ○ ...and from where?
  ○ How does the poller handle failures of its own?
  ○ False positives (a mitigation is sketched after this list)
  ○ Is failover the right response to every failure?

3. STONITH
  ○ Shutdown MySQL on Primary? How? It's not responding...
  ○ Unmount disk on Primary? How? It's not responding...
  ○ "You need a STONITH device"! Hehe, nice try...

4. Move VIP
  ○ Unset VIP on Master/Primary? How? It's not responding...
  ○ Set VIP on Secondary/Slave. This will work fine. Unfortunately.

5. Continue
6. Add back new/same Secondary
  ○ Automatically of course. Even if it just failed 15 seconds ago.
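One common way to blunt the false-positive problem from step 2 is to require several consecutive failed probes before declaring the master dead. A toy sketch, reusing the TCP-check idea from the earlier failover-loop example (host name, thresholds, and intervals are assumptions):

```python
import socket
import time

def probe(host="db-primary", port=3306, timeout=2):
    # Single health probe: can we reach MySQL's TCP port at all?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def primary_is_dead(misses_needed=3, interval=5):
    # Only declare failure after several consecutive misses, so a single
    # network hiccup or slow response does not trigger a failover.
    misses = 0
    while misses < misses_needed:
        if probe():
            return False
        misses += 1
        time.sleep(interval)
    return True
```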


Case Github
https://github.com/blog/1261-github-availability-this-week
● MySQL replication, Pacemaker, Corosync, Percona Replication Manager
● PRM health check fails due to high load during schema migration.
● Failover!
● New node has cold caches, so even worse performance.
● Failover! (back)
● Disable PRM
● A slave is found outdated as replication is not happening
● Enable PRM and hope it will fix it
● Pacemaker segfaults, causing cluster partition
● PRM selects the outdated node as master, shuts down others
● All kinds of data inconsistencies
● Restart PRM on all nodes
● ...

Case Github
Lessons learned:

Automated failover is dangerous

Cold cache is dangerous

But... Not automating is also dangerous

Baron Schwartz: 75% of replication failures are human errors
http://www.percona.com/about-us/mysql-white-paper/causes-of-downtime-in-production-mysql-servers

80% of aviation accidents are caused by human errors
http://asasi.org/papers/2004/Shappell%20et%20al_HFACS_ISASI04.pdf

80% of events are caused by human errors, 70% of them due to organizational weaknesses
http://www.hss.doe.gov/sesa/corporatesafety/hpc/fundamentals.html

Are we solving the right problem?

Instead of automating the problem...
Eliminate the problem!

Amazon Dynamo

R + W > N

Voldemort, Cassandra, RIAK, DynamoDB, S3
http://openlife.cc/blogs/2012/september/failover-evil

N=3, R=W=2

R + W > N

Eventual consistency is internal only

R + W > N

Failover?

Single node failure is a non-event!
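A toy check of why R + W > N makes a single node failure a non-event, using the N=3, R=W=2 numbers from the slides (just the quorum arithmetic, not a real Dynamo client):

```python
from itertools import combinations

N, R, W = 3, 2, 2
nodes = set(range(N))

# R + W > N: every read quorum overlaps every write quorum, so a read
# always touches at least one replica that saw the latest write.
assert all(set(r) & set(w)
           for r in combinations(nodes, R)
           for w in combinations(nodes, W))

# With one node down, 2 of 3 nodes remain -- still enough for both a
# write quorum (W=2) and a read quorum (R=2). No failover required.
survivors = nodes - {0}
print("writes still possible:", len(survivors) >= W)
print("reads still possible: ", len(survivors) >= R)
```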

For relational databases?

Synchronous replication is a special case of Dynamo:

W=N & R=1

Or is there a failover after all?

Due to W=N, writers actually notice node failures! Cluster reconfiguration needed.

(Readers are ok.)
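The same arithmetic shows why the W=N, R=1 case is different: after a node failure, reads still find a live replica, but a full write quorum of N no longer exists until the cluster is reconfigured so that the survivors form the new N. A minimal sketch:

```python
N = 3
W, R = N, 1          # synchronous replication as a Dynamo special case

survivors = N - 1    # one node has failed

print("reads ok: ", survivors >= R)    # True:  any surviving node can serve a read
print("writes ok:", survivors >= W)    # False: a write quorum of N is impossible
# Writers therefore notice the failure; the cluster must be reconfigured so
# that the remaining nodes count as the new N before writes can continue.
```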


Example: Galera


Example: MySQL NDB Cluster

What have we learned?

● Failover with DRBD is painful because it is slow.

● Failover with MySQL replication is painful because it's a mess.

● Amazon Dynamo has no failover.
● Galera Cluster has no failover but needs cluster reconfiguration. Same thing...
● MySQL NDB Cluster has failover but you can't see it.