Copyright © 2011, Oracle and/or its affiliates. All rights reserved.

    MySQL with Windows Server 2008 R2 Failover Clustering

    Delivering High Availability with MySQL on Windows

    A MySQL White Paper

    September 2011


Table of Contents

Summary
Value of MySQL on Windows
Approaches to High Availability with MySQL
  MySQL Replication
  MySQL Cluster
Introduction to Windows Server 2008 R2 Failover Clustering
Setting up MySQL with Windows Server 2008 R2 Failover Clustering
  Target Configuration
  Pre-Requisites
Steps to Configure MySQL for Windows Failover Clustering
  Step 1. Configure iSCSI in software (optional)
  Step 2. Ensure Windows Failover Clustering is enabled
  Step 3. Install MySQL as a service on both servers
  Step 4. Migrate MySQL binaries & data to shared storage
  Step 5. Create Windows Failover Cluster
  Step 6. Create Cluster of MySQL Servers within Windows Cluster
  Step 7. (Optional) Add asynchronous replication to an external slave
  Step 8. Test the cluster
  Step 9. MySQL Upgrades
Conclusion
Additional Resources


Summary

Microsoft Windows is consistently ranked as the top development platform for MySQL, and outranks any individual Linux distribution as the leading platform for MySQL deployments, according to surveys of the MySQL user community. For Windows customers, the advantages of using MySQL are clear: low Total Cost of Ownership (TCO), broad platform support and ease of use. With the release of MySQL 5.5, Oracle demonstrated the benefits of focused development activity for the Windows platform with significant enhancements in performance and scalability: MySQL delivered over 5x higher throughput than previous MySQL releases on Windows.1

Following the certification and support of MySQL with Windows Server Failover Clustering (WSFC), organizations can now safely deploy business-critical applications demanding high levels of availability, powered by the MySQL database. This whitepaper discusses how Windows Server Failover Clustering with MySQL provides a solution to reduce downtime and guard against data loss, and then steps the user through the processes necessary to configure, provision and run MySQL on a Windows Server 2008 R2 cluster. By the end of this paper, developers and administrators will be able to deploy business-critical applications with MySQL on Windows Server, with the added reassurance of knowing that the solution has been certified and is fully supported.

Value of MySQL on Windows

The popularity of MySQL on Microsoft Windows can be attributed to the following factors:

Lower Total Cost of Ownership

    Up to 90% savings over Microsoft SQL Server (see the TCO chart below).

Broad platform support

No lock-in to a single platform. MySQL runs on all leading operating system and hardware combinations including Windows, Linux, Mac OS and Solaris.

    Ease of use and administration

    Using the Windows Installer, MySQL can be up and running on a Windows server in less than five minutes. MySQL offers a range of self-management capabilities and graphical tools for development, administration and management of MySQL-based environments.

    Reliability

Proven in some of the largest and most demanding web properties, enterprises and ISVs/OEMs whose businesses depend entirely on their applications and sites being up and available 24x7.

    Performance and scalability

    Windows-specific enhancements released as part of MySQL 5.5 have proven to deliver over 500% higher performance than previous releases of MySQL on Windows.

    1 http://mysql.com/why-mysql/windows/


Integration into the Windows environment

A comprehensive range of drivers for Windows environments including ADO.NET and ODBC drivers, along with integration with the Microsoft Access database and now, Windows Server Failover Clustering for business-critical workloads demanding high availability.

    Read this whitepaper for more detail on each of these advantages: http://mysql.com/why-mysql/white-papers/mysql_on_windows_wwh.php

    Figure 1 MySQL Delivers 90% Lower TCO than Microsoft SQL Server

    Costs were calculated using the following parameters:

- Term: 3 Years
- Users: Unlimited (web)
- Servers: 4
- CPUs/Server: 4
- Hardware: Intel x86
- MySQL: MySQL Enterprise Edition
- Microsoft: SQL Server Enterprise Edition

    Individual component pricing is available from this whitepaper: http://mysql.com/why-mysql/white-papers/mysql_on_windows_wwh.php

    You can model your own savings from the TCO Calculator: http://mysql.com/tcosavings/

Approaches to High Availability with MySQL

Databases are at the center of modern enterprise applications, storing and protecting an organization's most valuable assets and running business-critical applications. Just minutes of downtime can often result in significant amounts of lost revenue and unsatisfied customers. Making database applications highly available is therefore a top priority for all organizations.

MySQL provides a number of options to make a database infrastructure highly available. Selecting the high availability solution that is right for you is largely dependent on how many nines of availability you require and the type of application you are deploying. The solutions for MySQL high availability on the Windows platform cover a broad spectrum of service level requirements, as illustrated in Figure 2.

    By understanding the availability requirements for each application it is possible to map the database to the appropriate high availability architecture. Figure 3 attempts to map common application types to high availability architectures, based on best practices observed from the MySQL user base. Of course, each organization is unique, and so while the mapping below may not be appropriate to every use-case, it does serve as a reference point to begin investigating those HA architectures which can potentially best serve your requirements.

    Figure 2 Mapping High Availability Architectures to Systems Downtime


MySQL with Windows Server Failover Clustering maps to the Clustered Systems category in Figure 3, and is the focus of this whitepaper. MySQL Replication and MySQL Cluster cover either end of the high availability spectrum. All are fully supported by Oracle when deployed with Microsoft Windows Server 2008 R2.2

MySQL Replication

Using MySQL Replication, organizations can cost-effectively deliver a high availability solution. Master/Slave replication enables operations to quickly fail over to another server in the event of a hardware or software problem. In addition, with MySQL replication, organizations can incrementally scale out their infrastructure to accommodate exponentially growing capacity demands. MySQL Replication ships out of the box and is used extensively by some of the world's most highly trafficked web sites including Facebook, YouTube, Google, Yahoo!, flickr and Wikipedia. In MySQL 5.5, new semi-synchronous replication and a replication heartbeat improve the reliability of data replication and the speed of failover for application availability. You can learn more about MySQL Replication in the whitepaper posted at: http://www.mysql.com/why-mysql/white-papers/mysql-wp-replication.php

MySQL replication can be used in combination with Windows Server Failover Clustering to provide an integrated solution for both high availability and scalability. The MySQL master server can be deployed in a redundant Active/Passive pair, with replication slaves attached to the master. In the event of failure of the master, the MySQL service is automatically restarted on the Passive server, and the replication slaves fail over with the service, without operator intervention. In the event that there is a failure in the Windows Server Failover Cluster that cannot be automatically recovered (for example a corruption of the shared filesystem), one of the slaves can be promoted to be the new master.

2 Users must escalate issues related to Windows Server and its associated clustering mechanisms directly to Microsoft.

Figure 3 Mapping Application Types to High Availability Architectures

MySQL Cluster

MySQL Cluster is a write-scalable, shared-nothing, real-time transactional database, combining 99.999% availability with the low TCO of open source. With a distributed, multi-master architecture and no single point of failure, MySQL Cluster is able to scale horizontally on commodity hardware to serve read and write intensive workloads, accessed via SQL and NoSQL interfaces.

MySQL Cluster's real-time design delivers predictable, millisecond response times with the ability to service millions of operations per second. Support for in-memory and disk-based data, automatic data partitioning (sharding) with load balancing and the ability to add nodes to a running cluster with zero downtime allows linear database scalability to handle the most unpredictable web, telecoms and enterprise workloads.

You can learn more about the architecture and capabilities of MySQL Cluster from the whitepaper posted at: http://www.mysql.com/why-mysql/white-papers/mysql_wp_cluster7_architecture.php

Introduction to Windows Server 2008 R2 Failover Clustering

Windows Server Failover Clustering (WSFC) is a feature of the Enterprise and Datacenter editions of Windows Server 2008 R2 that can help ensure that an organization's critical applications and services are available whenever they are needed. Clustering can help build redundancy into an infrastructure and reduce the number of single points of failure. This, in turn, helps reduce downtime, protects against data loss, and increases the return on investment.

A failover cluster is a group of independent computers, or nodes, that are physically connected by a local-area network and programmatically connected by cluster software. The group of nodes is managed as a single system and shares a common namespace. The group usually includes multiple network connections and data storage connected to the nodes via storage area networks (SANs). The failover cluster operates by moving resources between nodes to provide service if system components fail.

Normally, if a server that is running a particular application crashes, the application will be unavailable until the server is fixed. Failover clustering addresses this situation by detecting hardware or software faults and immediately restarting the application3 on another node without requiring administrative intervention, a process known as failover. Users can continue to access the service and may be completely unaware that it is now being provided from a different server.

Figure 4 illustrates the integration of MySQL with Windows Server Failover Clustering to provide a highly available service to connected applications. In this architecture, MySQL is deployed in an Active/Passive configuration. Failures of either MySQL or the underlying server are automatically detected and the MySQL instance is restarted on the Passive node. Applications accessing the database, as well as any MySQL replication slaves, can automatically reconnect to the new MySQL process using the same Virtual IP address once MySQL recovery has completed and it starts accepting connections.

The following sections of the whitepaper will illustrate how to create, configure and test MySQL with Windows Server Failover Clustering.

3 While the application is restarted immediately, there may be a delay until service is restored, for example the crash recovery time for InnoDB.


Setting up MySQL with Windows Server 2008 R2 Failover Clustering

Target Configuration

MySQL with Windows Failover Clustering requires at least 2 servers within the cluster together with some shared storage (for example FCAL SAN or iSCSI disks). For redundancy, 2 LANs should be used for the cluster to avoid a single point of failure; typically one would be reserved for the heartbeats between the cluster nodes. The MySQL binaries and data files are stored in the shared storage, and Windows Failover Clustering ensures that at most one of the cluster nodes will access those files at any point in time.

Clients connect to the MySQL service through a Virtual IP Address (VIP), and so in the event of failover they experience a brief loss of connection but otherwise do not need to be aware that the failover has happened, other than to handle the failure of any in-flight transactions. This typical configuration is illustrated in Figure 4.

    Figure 4 Typical configuration


This white paper will step through how to set up and use a cluster such as that shown in Figure 4. For easy reference, Figure 5 shows how this is mapped onto physical hardware and network addresses for the lab used later in this paper. In this case, iSCSI is used for the shared storage. Note that ideally there would be an extra subnet for the heartbeat connection between ws1 and ws3.

Pre-Requisites

- MySQL 5.5 & InnoDB must be used for the database (note that MyISAM is not crash-safe and so failover may result in a corrupt database)
- Windows Server 2008 R2
- Redundant network connections between nodes and storage
- WSFC cluster validation must pass
- iSCSI or FCAL SAN should be used for the shared storage

    Figure 5 Physical cluster used in this paper


    Steps to Configure MySQL for Windows Failover Clustering

    Step 1. Configure iSCSI in software (optional)

This paper does not attempt to describe how to configure a highly available, secure and performant SAN, but in order to implement the subsequent steps a SAN is required, and so in this step we look at one way of using software to provide iSCSI targets without any iSCSI/SAN hardware (just using the server's internal disk). This is a reasonable option for experimentation but probably not what you'd want to deploy with for a HA application. If you already have shared storage set up then you can skip this step and use that instead.

Before setting up the iSCSI target you need to retrieve the iSCSI Qualified Name (IQN) of the hosts (referred to as iSCSI initiators) that will be connecting to this storage, in this case ws1 and ws3. On each of those 2 hosts, from the start menu run Administrative Tools -> iSCSI Initiator. Click on the Configuration tab and take a note of the Initiator Name as shown in Figure 6.

The iSCSI target will be configured on ws2 using Microsoft iSCSI Software Target v3.3, which is free to use on Windows Server 2008 R2 and can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0. When installing this software, a web page will be opened; simply click the link to install as shown in Figure 7. Once installed, start up the application, select iSCSI Targets and then Action -> Create iSCSI Target. For this paper, the name SAN is given as the iSCSI target name. When asked for the IQN Identifier, the value retrieved from ws1 is used. Once you've completed the steps in the wizard, you still need to allow access from ws3, and so select iSCSI Targets again, right-click on SAN, choose Properties, select the iSCSI Initiators tab and click on Add. You will then be prompted to enter the IQN retrieved from ws3 as shown in Figure 8.

Figure 6 Fetch the IQN for both Cluster nodes

Figure 7 Install iSCSI target software

Figure 8 Add ws3 to allowed initiators


The next step is to create at least two virtual disks within the iSCSI target: one for the quorum file and one for the MySQL binaries and data files. The quorum file is used by Windows Failover Clustering to avoid split-brain behaviour, which could otherwise occur when the 2 clustered hosts lose contact with each other. To create the disk for the quorum file, right-click on SAN and select Create Virtual Disk for iSCSI Target. Step through the wizard until requested for a file location; provide the path to a file-name ending in .vhd (if the file doesn't exist then it will be created). For this example, C:\Users\Administrator\My Documents\quorum.vhd is used for the quorum disk. The quorum disk doesn't need to be large; 1 GByte should be ample. Repeat for the MySQL disk and you should now see both virtual disks as shown in Figure 9.

ws1 can now be connected to these disks by running the iSCSI Initiator tool again from that host. Select the Discovery tab and then Discover Portal, and provide the name ws2 (the host for our iSCSI virtual disks) as shown in Figure 10. Before clicking on OK, select the Advanced button in order to select which of the IP addresses on ws1 should be used for the iSCSI connection (it should be different from the one used for cluster heartbeat and other IP traffic). As shown in Figure 11, select Microsoft iSCSI Initiator as the Local Adapter and then 192.168.5.3 as the Initiator IP. Return to the Targets tab and the iSCSI target on ws2 is now visible; select it, click the Connect button, and then Advanced to again select 192.168.5.3 as the local iSCSI IP address (and 192.168.5.1 / 3260 as the Target portal IP). The target should now show as connected. This process is repeated on ws3, again using 192.168.5.1 as the IP address for ws2 but with 192.168.5.2 as the iSCSI IP address for ws3.

    Figure 9 Two virtual iSCSI disks

    Figure 10 Look for iSCSI disks on ws2

    Figure 11 Select the local network adapter to use for iSCSI
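For repeatable setups, the same discovery and login can also be scripted from an elevated command prompt with the built-in iscsicli utility. This is a minimal sketch rather than the procedure used in this paper, and the target IQN shown is a hypothetical example; substitute the name reported by iscsicli ListTargets:

C:\> rem Register ws2's portal using the dedicated iSCSI subnet
C:\> iscsicli QAddTargetPortal 192.168.5.1
C:\> rem List the targets exposed by ws2 and note the IQN
C:\> iscsicli ListTargets
C:\> rem Log in to the target (example IQN; use the real one from ListTargets)
C:\> iscsicli QLoginTarget iqn.1991-05.com.microsoft:ws2-san-target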


From just ws1, the disks must now be activated and formatted. Navigate in the Server Manager tool to Storage -> Disk Management, right-click on the disk marked Unknown and select Online as shown in Figure 12. Then right-click and select Initialize Disk. Finally, format the disk by right-clicking on the Unallocated disk area and selecting New Simple Volume; accept the defaults and label the volume as Quorum. This is repeated for the second disk, but the volume label is MySQL.

    Step 2. Ensure Windows Failover Clustering is enabled

To confirm that Windows Failover Clustering is installed on ws1 and ws3, open the Features branch in the Server Manager tool and check whether Failover Cluster Manager is present (Figure 13). If Failover Clustering is not installed then it is very simple to add it: select Features within the Server Manager, click on Add Features, select Failover Clustering and then Next.

Figure 12 Bring shared storage on-line

Figure 13 Check for clustering


    Step 3. Install MySQL as a service on both servers

If MySQL is already installed as a service on both ws1 and ws3 then this step can be skipped. This section provides a brief overview of setting up MySQL as a Windows service. If you need more details then consult A Visual Guide to Installing MySQL on Windows (http://www.mysql.com/why-mysql/white-papers/visual_guide_to_installing_mysql_windows.php) or the Installing and Upgrading MySQL chapter of the MySQL manual (http://dev.mysql.com/doc/refman/5.5/en/installing.html). The GPL Windows installer for MySQL can be downloaded from http://dev.mysql.com/downloads/mysql/ and MySQL Enterprise Edition can be found at http://edelivery.oracle.com/

The installation is very straight-forward and selecting the default options is fine. At the end of the installation, ensure that Launch the MySQL Configuration Wizard is selected before pressing Finish. Within the MySQL Configuration Wizard, sticking with the defaults is fine for this exercise. When you reach the configuration step, check Create Windows Service (Figure 14). The installation and configuration must be performed on both ws1 and ws3, if necessary.
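If you would rather register the service from the command line than via the wizard, mysqld's documented --install option can do so directly. A sketch, assuming the default install path and the service name MySQL used in this paper:

C:\> rem Register mysqld as a Windows service reading the named config file
C:\> "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --install MySQL --defaults-file="C:\Program Files\MySQL\MySQL Server 5.5\my.ini"
C:\> rem Start the newly registered service
C:\> net start MySQL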

Step 4. Migrate MySQL binaries & data to shared storage

If the MySQL service is running on either ws1 or ws3 then stop it: open the Task Manager using Ctrl-Shift-Escape, select the Services tab, then right-click on the MySQL service and choose Stop Service as shown in Figure 15.

As the iSCSI disks were enabled on ws1, you can safely access them from that host in order to copy across the MySQL binaries and data files. With the equipment used for this white paper, Table 1 shows the original and new locations for each of these. Note that if C:\ProgramData is not visible then unhide it from an Explorer window: press Alt to expose the Tools menu, select Folder Options, select the View tab and click the radio button for Show hidden files, folders and drives.

Figure 14 Configure MySQL as a Windows Service

Figure 15 Stop MySQL service


Copy From                                    Copy To
C:\Program Files\MySQL\MySQL Server 5.5      F:\MySQL Server 5.5
C:\ProgramData\MySQL\MySQL Server 5.5\data   F:\MySQL Data

Table 1 Migrate MySQL Files

Note that the drive letters may be different in your configuration. Also note that these folders should not be scanned by any virus software, index utilities or automated backup processes, and that they should not be shared with any network users. In order for the MySQL service to start using these locations, the MySQL config file must be updated; by default this will be C:\Program Files\MySQL\MySQL Server 5.5\my.ini. Within that file change the following parameters:

basedir=F:/MySQL Server 5.5
datadir=F:/MySQL Data

Note that if you specified an explicit folder for the InnoDB data files during the MySQL installation and configuration then those should be migrated over too. At the same time, in order to be able to add asynchronous MySQL replication in Step 7, the following parameters are also added to the [mysqld] section within my.ini:

log-bin=clusterdb-bin.log
server-id=1

The same my.ini is used for both ws1 and ws3, including the same server-id.4
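Putting these changes together, the relevant portion of my.ini on both hosts would look roughly as follows; all other settings generated by the configuration wizard are left as-is, and the F: drive letter matches the lab in this paper:

[mysqld]
# Binaries and data files both live on the shared cluster disk
basedir=F:/MySQL Server 5.5
datadir=F:/MySQL Data
# Enable binary logging so an external slave can replicate from the cluster (Step 7)
log-bin=clusterdb-bin.log
server-id=1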

4 This assumes that the same drive letters are used on the 2 servers; if this is not possible then the drive letter will be different in the 2 configuration files.


    Step 5. Create Windows Failover Cluster

From the Server Manager on either ws1 or ws3, navigate to Features -> Failover Cluster Manager and then select Validate a Configuration. When prompted, enter ws1 as one name and then ws3 as the other (Figure 16). In the Testing Options select Run all tests and continue. If the tests report any errors then these should be fixed before continuing. For the configuration used in this paper (Figure 5), a warning is given that the different network connections are on the same subnet and so are likely to be using the same network infrastructure; this represents a single point of failure and so does not deliver a highly available system. In a production system this should be avoided.

Now that the system has been verified, select Create a Cluster and provide the same server names as used in the validation step. In this example, MySQL is provided as the Cluster Name and then the wizard goes on to create the cluster.

During creation of the cluster, the wizard will have attempted to include all available network connections, but any network being used for iSCSI should really be excluded. Navigate to Networks within the new cluster, right-click on each of the networks and select Properties. If the network is one that you want to use for the cluster (in this example the 192.168.2.X subnet) then ensure that Allow cluster network communication on this network is selected. Conversely, for any network being used for iSCSI (in this case the 192.168.5.X subnet) ensure that Do not allow cluster network communication on this network is set (Figure 17). Additionally, if there are multiple networks to be used for the clustering then one can be nominated for just the internal heart-beating by unchecking the Allow clients to connect through this network box.

    Figure 16 Select hosts for the cluster
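The validation and creation steps can also be scripted: Windows Server 2008 R2 ships a FailoverClusters PowerShell module, and assuming the documented cmdlet behaviour, a rough equivalent of the wizard steps would be:

PS C:\> Import-Module FailoverClusters
PS C:\> # Run the full validation suite against both nodes
PS C:\> Test-Cluster -Node ws1,ws3
PS C:\> # Create the cluster with the same name used in the wizard
PS C:\> New-Cluster -Name MySQL -Node ws1,ws3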


Figure 17 Remove any iSCSI networks from cluster

At this stage, the cluster has been created and consists of 2 servers and 2 shared disks (one to act as the quorum and one for applications to use), but there are no applications/services within the cluster. Navigate to the Storage branch to confirm that the disks are there and that the smaller one has been selected as the quorum (Figure 18).


Figure 18 Confirm correct disks added to cluster

Step 6. Create Cluster of MySQL Servers within Windows Cluster

Adding the MySQL service to the new cluster is very straight-forward. Right-click on Services and applications in the Server Manager tree (Figure 19) and select Configure a Service or Application. When requested by the subsequent wizard, select Generic Service from the list and then MySQL from the offered list of services. Our example name was ClusteredMySQL; please choose an appropriate name for your cluster. The wizard will then offer the shared disk that has not already been established as the quorum disk for use with the clustered service; make sure that it is selected. There is no registry data that needs to be replicated for MySQL, so skip over that step.

Once the wizard finishes, it starts up the MySQL service. Click on the ClusteredMySQL service branch (Figure 20) to observe that the service is up and running. You should also make a note of the Virtual IP (VIP) assigned, in this case 192.168.2.18. By default this is created using DHCP, but it can be overridden: right-click it and select Properties to change the value.

    Figure 19 Configure service within the cluster
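The FailoverClusters PowerShell module also offers a cmdlet for Generic Service roles. A sketch, where the disk name "Cluster Disk 2" is an assumption to be replaced with the name of your non-quorum shared disk:

PS C:\> # Cluster the MySQL Windows service as a Generic Service role
PS C:\> Add-ClusterGenericServiceRole -ServiceName MySQL -Name ClusteredMySQL -Storage "Cluster Disk 2"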


Figure 20 Confirm clustered MySQL running and find virtual IP address

Test your connection to the MySQL service using the VIP of the cluster:

C:\> mysql -u root -h 192.168.2.18 -P3306 -pbob

The password (bob) was created in Step 3. By default, Windows Failover Clustering limits failovers to one event every six hours. As we will be testing multiple failovers, this limit should be raised: right-click ClusteredMySQL in the Server Manager tree, select Properties and then the Failover tab, and increase the Maximum failures limit as shown in Figure 21.

    Figure 21 Allow more frequent failovers


    Step 7. (Optional) Add asynchronous replication to an external slave

The cluster that has been set up is running within a single data center. It is possible to use Windows Failover Clustering to span data centers, but that is beyond the scope of this white paper; instead, this section describes how to set up MySQL asynchronous replication to an external database. There are a number of reasons why this might be desirable, including adding geographic redundancy, recovering from local database corruptions or producing a near-real-time copy of the data for complex analytics. The other reason for setting up asynchronous replication is to use the slave as an example client and observe how it behaves during cluster failover.

Setting up replication from clustered MySQL is identical to the non-clustered case, with the exception that when issuing the CHANGE MASTER TO command on the slave, the Virtual IP of the cluster is used rather than the IP address of either of the servers (in this example, 192.168.2.18). The steps involved to set up MySQL replication are described in http://www.clusterdb.com/mysql-cluster/get-mysql-replication-up-and-running-in-5-minutes/ (but note that the master has already been configured and started as part of the cluster).
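As an illustration, the slave-side command might look like the sketch below. The repl_user account matches the one visible in the SHOW SLAVE STATUS output later in this paper, while the password, log file and position are placeholders to be taken from your own replication setup:

slave> CHANGE MASTER TO
    ->   MASTER_HOST='192.168.2.18',  -- the cluster VIP, not ws1 or ws3
    ->   MASTER_PORT=3306,
    ->   MASTER_USER='repl_user',
    ->   MASTER_PASSWORD='password',  -- placeholder
    ->   MASTER_LOG_FILE='clusterdb-bin.000001',  -- from SHOW MASTER STATUS
    ->   MASTER_LOG_POS=107;  -- from SHOW MASTER STATUS
slave> START SLAVE;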

    Step 8. Test the cluster

As described in Step 6, the VIP should be used to connect to the clustered MySQL service:

C:\> mysql -u root -h 192.168.2.18 -P3306 -pbob

From there, create a database and populate some data:

mysql> CREATE DATABASE clusterdb;
mysql> USE clusterdb;
mysql> CREATE TABLE simples (id int not null primary key);
mysql> INSERT INTO simples VALUES (1);
mysql> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
+----+

To check that the MySQL replication is working, the table can be checked on the slave:

192.168.2.1 slave> USE clusterdb;
192.168.2.1 slave> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
+----+

    The MySQL service was initially created on ws1 but it can be forced to migrate to ws3 by right-clicking on the service and selecting Move this service or application to another node as shown in Figure 22.


Figure 22 Manually migrate MySQL service

Once the migration has completed, the service will be shown as running on ws3 (Figure 23).

Figure 23 MySQL service has migrated

As the MySQL data is held in the shared storage (which has also been migrated to ws3), it is still available and can still be accessed through the existing mysql client, which is connected to the VIP:

mysql> select * from simples;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 1
Current database: clusterdb

+----+
| id |
+----+
|  1 |
+----+

Note the error shown above: the mysql client loses the connection to the MySQL service as part of the migration, and so it automatically reconnects and completes the query. Any application using MySQL with Windows Failover Clustering should also expect to have to cope with these glitches in the connection.


To see how the MySQL slave copes, insert a new row into the clustered MySQL and then check the slave. Fail over again to ws1, and add another row. As you can see, the slave will automatically reconnect to the clustered MySQL and resume replication right where it left off:

mysql> INSERT INTO simples VALUES (2);
No connection. Trying to reconnect...
Connection id: 1
Current database: clusterdb

Query OK, 1 row affected (0.13 sec)

192.168.2.1 slave> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
+----+

192.168.2.1 slave> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Reconnecting after a failed master event read
                  Master_Host: 192.168.2.18
                  Master_User: repl_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: clusterdb-bin.000007
          Read_Master_Log_Pos: 107
               Relay_Log_File: clusterdb-slave-relay-bin.000011
                Relay_Log_Pos: 257
        Relay_Master_Log_File: clusterdb-bin.000007
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes

192.168.2.1 slave> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.2.18
                  Master_User: repl_user
                  Master_Port: 3306
                Connect_Retry: 60

192.168.2.1 slave> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
|  2 |
+----+

Clearly, not all migrations are planned; the cluster fails over a service when there are hardware or software failures. For example, if you kill the mysqld.exe process on the active host then the local MySQL service will recreate it, but if you kill it a second time then the cluster will automatically migrate the service to the alternate host.

The effects of hardware failures can be a little more complex to anticipate, as the clustering software needs to guard against network partitioning, a situation where the 2 halves of the cluster become isolated. This is known as the split-brain problem. The default rule is that if 2 out of 3 of the 2 hosts and 1 quorum disk can access each other then they're allowed to continue and provide the service; otherwise the service is halted. As an example, if the service, as well as the quorum and MySQL disks, are owned by ws1 and ws1 loses both of its network connections (one to the iSCSI storage and one to the cluster) then the service and the MySQL disk are automatically failed over to ws3; the service is allowed to continue as both ws3 and the failed-over quorum disk are still available (2 votes out of 3 is a majority). Note that when the network connections are restored, ws1 automatically rejoins the cluster as the new passive node.

The high availability of a shared-everything cluster is highly dependent on the level of availability of the shared storage. If, for example, the Microsoft iSCSI Software Target tool is used to disable the MySQL disk (Figure 24) then the MySQL service is lost until the shared disk is enabled again. If higher levels of availability are required than are provided by Windows clustering then MySQL Cluster is an alternative to consider.

    Step 9. MySQL Upgrades

Describing how to perform MySQL upgrades is outside the scope of this white paper, except to point out the interactions it has with Windows Failover Clustering. More general information on MySQL upgrades can be found at http://dev.mysql.com/doc/refman/5.5/en/upgrading-downgrading.html.

Each host has its own MySQL configuration file (my.ini) that is stored locally. Each of these my.ini files indicates the path to the MySQL binaries to be used for the local MySQL service (basedir) and the data files (datadir). In normal operation the two my.ini files should be kept identical (assuming the drive letters match). The MySQL binaries and data are held in the shared storage. When upgrading MySQL, the following steps can be used if the new release doesn't require the tables and/or indexes to be rebuilt; refer to http://dev.mysql.com/doc/refman/5.5/en/checking-table-incompatibilities.html to check whether this applies to the planned upgrade path:

1. Install the new version of MySQL on host X (on the local storage), where host X is the node in the cluster that currently has access to the MySQL disk.

2. Copy the newly installed MySQL directory to a new location on the shared MySQL disk.

3. Edit the my.ini file on both hosts to set basedir to the new (shared) location for the new MySQL binaries. Leave datadir unchanged (see the sketch after this list).

4. Using the Failover Cluster Manager, move the MySQL service to host Y; when the MySQL service is started there it will be using the new binaries.

If it is necessary to rebuild the tables/indexes then the procedure described in http://dev.mysql.com/doc/refman/5.5/en/rebuilding-tables.html should be interleaved with steps 3 & 4 in order to minimize loss of service.
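As an illustration of step 3, if the new binaries were copied to a folder such as F:\MySQL Server 5.5.x (a hypothetical name for this sketch), the my.ini on each host would change only its basedir line:

[mysqld]
# Point basedir at the new binaries on the shared disk (hypothetical path)
basedir=F:/MySQL Server 5.5.x
# datadir is deliberately left unchanged
datadir=F:/MySQL Data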

    Figure 24 Simulate shared storage failure


Conclusion

More users develop and deploy MySQL on Windows than on any other single platform. Enhancements in MySQL 5.5 increased performance by over 5x compared to previous MySQL releases. With certification for Windows Server Failover Clustering, MySQL can now be deployed to support business-critical workloads demanding high availability, enabling organizations to better meet demanding service levels while also reducing TCO and eliminating single-vendor lock-in.

    Additional Resources

MySQL on Windows: http://www.mysql.com/why-mysql/windows/

MySQL High Availability: http://www.mysql.com/products/enterprise/high_availability.html

Windows Server 2008 R2 Failover Clustering: http://www.microsoft.com/windowsserver2008/en/us/failover-clustering-main.aspx
