Part 1

Providing high availability for applications and services is one of the most critical responsibilities that IT administrators have in today's data centers. Planned or unplanned downtime may cause businesses to lose money, customers, and reputation. Highly available systems demand the implementation of fault-tolerant processes and operations that minimize interruptions by eliminating single points of failure and detecting failures as they happen. This is what failover clustering is all about. Our first section dedicated to Windows Server 2012 R2 failover clustering describes the main components of a failover cluster implementation, the quorum configuration options, and the shared storage preparation.

Main Components of a Failover Cluster

When configuring a Windows Server 2012 R2 failover cluster, it is essential to carefully consider the main components that will make up the cluster configuration. Let's review the most important ones:
• Nodes. These are the member servers of a failover cluster. This collection of servers communicate with each other and run cluster services, resources, and applications associated with a cluster.
• Networks. Refers to the networks that cluster nodes use to communicate with one another, the clients, and the storage. Three different networks can be configured to provide enhanced functionality to the cluster:
• Private network: Dedicated to internal cluster communication. It is used by the nodes to exchange heartbeats and interact with other nodes in the cluster. The failover cluster authenticates all internal communication.
• Public network: This network allows network clients access to cluster applications and services. It is possible to have a mixed public and private network, although it is not recommended as bottleneck and contention issues may strain the network connections.
• Storage network: These are dedicated channels to shared storage. iSCSI storage requires special attention because it uses the same IP protocol and Ethernet devices available to the other networks. However, the storage network should be completely isolated from any other network in the cluster. Configuring redundant connections on all these networks increases cluster resilience.
• Storage. This is the cluster storage system that is typically shared between cluster nodes. The failover cluster storage options on Windows Server 2012 R2 are:
• iSCSI: The iSCSI protocol encapsulates SCSI commands into data packets that are transmitted using Ethernet and IP protocols. Packets are sent over the network using a point-to-point connection. Windows Server 2012 supports implementing iSCSI target software as a feature. Once the iSCSI target is configured, the cluster nodes can connect to the shared storage using the iSCSI initiator software that is also part of the Windows Server 2012 operating system. Keep in mind that, in most production networks with high loads, system administrators will opt for hardware iSCSI host bus adapters (HBAs) over software iSCSI.
• Fibre Channel: Fibre Channel SANs typically have better performance than iSCSI SANs, but are much more expensive. Specialized hardware and cabling are needed, with options for point-to-point, switched, and arbitrated loop topologies.
• Shared serial attached SCSI: Implementing shared serial attached SCSI requires that two cluster nodes be physically close to each other. You may be limited by the number of connections for cluster nodes on the shared storage devices.
• Shared .vhdx: Used for virtual machine guest clustering. A shared virtual hard disk should be located on a cluster shared volume (CSV) or on a Scale-Out File Server cluster. From there, it can be added to the virtual machines participating in a guest cluster by attaching it to their SCSI controllers. .vhd files are not supported.
• Services and applications. These represent the components that the failover cluster protects by providing high availability. Clients access services and applications and expect them to be available when needed. When a node fails, failover moves services and applications to another node to ensure that those clustered services and applications continue to be available to network clients.
Windows Server 2012 R2 Failover Clustering Quorum

Quorum defines the minimum number of nodes that must participate concurrently in the cluster to provide failover protection. Each node casts a vote, and if there are enough votes, the cluster can start or continue running. When there is an even number of nodes, the cluster can be configured to allow an additional witness vote from a disk or a file share. Each node contains an updated copy of the cluster configuration that includes the number of votes that are required for the cluster to function properly. There are four quorum modes in Windows Server 2012:

Node majority: Each node that is online and connected to the network represents a vote. The cluster operates only with a majority, that is, more than half of the votes. Node majority is recommended for clusters with an odd number of servers.

Node and disk majority: Each node that is online and connected to the network represents a vote, but there is also a disk witness that is allowed to vote. The cluster runs successfully only with a majority of the votes. This configuration relies on the nodes being able to communicate with one another in the cluster, and with the disk witness. It is recommended for clusters with an even number of nodes.

Node and file share majority: Each node that is online and connected to the network represents a vote, but there is also a file share that is allowed to vote. As in the previous modes, the cluster operates only with a majority of the votes. This mode works like node and disk majority but, instead of a disk witness, the cluster uses a file share witness.

No majority (disk only): The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage. Only the nodes that are also in communication with that disk can join the cluster. The disk represents a single point of failure, so this is the least desirable option.

On Windows Server 2012, the installation wizard automatically selects the quorum mode during the installation process. Once the failover cluster installation completes, you will have one of these two modes:
• Node majority: if there is an odd number of nodes in the cluster.
• Node and disk majority: if there is an even number of nodes in the cluster.
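The quorum configuration can also be inspected and changed from Windows PowerShell with the FailoverClusters module. A minimal sketch — the cluster name ClusterA and the witness disk name "Cluster Disk 3" are taken from this lab and will differ in your environment:

```powershell
# Display the current quorum configuration (mode and witness resource)
Get-ClusterQuorum -Cluster ClusterA

# Switch to node and disk majority, using "Cluster Disk 3" as the disk witness
Set-ClusterQuorum -Cluster ClusterA -NodeAndDiskMajority "Cluster Disk 3"

# Switch back to node majority (no witness)
Set-ClusterQuorum -Cluster ClusterA -NodeMajority
```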
At any time you can switch to a different mode to accommodate changes in your network and cluster arrangement.

Windows Server 2012 R2 Dynamic Quorum

Windows Server 2012 R2 introduces significant changes to the way cluster quorum functions. When you install a Windows Server 2012 R2 failover cluster, dynamic quorum is selected by default. This process defines the quorum majority based on the number of nodes in the cluster and configures the disk witness vote dynamically as nodes are added to or removed from the cluster. If the cluster has an odd number of votes, the disk witness does not have a vote; with an even number, the disk witness does have a vote. In other words, the cluster automatically decides whether to use the witness vote based on the number of voting nodes that are available in the cluster. Dynamic quorum allows a cluster to recalculate quorum when a node fails in order to keep the cluster running successfully, even when the number of nodes remaining in the cluster drops below 50 percent of the initial configuration. Another benefit of dynamic quorum is that, when you add or evict nodes from the cluster, there is no need to change the quorum settings manually. The previous quorum modes that require manual configuration are still available, in case you feel some nostalgia for the old methodology. Windows Server 2012 R2 also allows you to start cluster nodes that do not have a majority by using the force quorum resiliency feature. This can be used when a cluster breaks into subsets of cluster nodes that are not aware of each other, a situation also known as a split-brain scenario.

Using the Windows Server 2012 R2 iSCSI Target

For shared storage, our demonstration lab uses the iSCSI Target feature on Windows Server 2012 R2. To verify the status of the iSCSI feature, run the following command from Windows PowerShell:
• Get-WindowsFeature FS-iSCSITarget-Server
The above figure shows that the iSCSI Target has not been installed on the server yet. To install the iSCSI target feature, run the following Windows PowerShell command:
• Install-WindowsFeature FS-iSCSITarget-Server
Configuring the iSCSI Targets

After the iSCSI Target feature has been installed, you can go to Server Manager to complete the configuration. Here are the steps:
1. In the Server Manager, in the navigation pane, click File and Storage Services.
2. In the File and Storage Services pane, click iSCSI.
3. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-‐
down list box, click New iSCSI Virtual Disk.
4. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk
location page, under Storage location, click drive E, and then click Next.
5. On the Specify iSCSI virtual disk name page, in the Name text box,
type iLUN0, and then click Next.
6. On the Specify iSCSI virtual disk size page, in the Size text box, type 500; in
the drop-‐down list box, if necessary switch to GB, select Dynamically expanding and then click Next.
7. On the Assign iSCSI target page, click New iSCSI target, and then click Next.
8. On the Specify target name page, in the Name box, type iSAN, and then
click Next.
9. On the Specify access servers page, click Add.
10. In the Select a method to identify the initiator dialog box, click Enter a
value for the selected type, in the Type drop-‐down list box, click IP Address, in the Value text box, type 192.168.1.200, and then click OK.
11. On the Specify access servers page, click Add.
12. In the Select a method to identify the initiator dialog box, click Enter a
value for the selected type; in the Type drop-‐down list box, click IP Address; in the Value text box, type 192.168.1.201, and then click OK.
13. On the Specify access servers page, confirm that you have two IP addresses.
These correspond to the two cluster nodes that will be using their iSCSI initiators to connect to the shared storage. Click Next.
14. On the Enable Authentication page, click Next.
15. On the Confirm selections page, click Create.
16. On the View results page, wait until creation completes, and then
click Close.
17. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-‐
down list box, click New iSCSI Virtual Disk.
18. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk
location page; under Storage location, click drive E, and then click Next.
19. On the Specify iSCSI virtual disk name page, in the Name box, type iLUN1,
and then click Next.
20. On the Specify iSCSI virtual disk size page, in the Size box, type 300; in the
drop-‐down list box, if necessary, switch to GB, select Dynamically expanding, and then click Next.
21. On the Assign iSCSI target page, click iSAN, and then click Next.
22. On the Confirm selection page, click Create.
23. On the View results page, wait until the new iSCSI virtual disk is created, and
then click Close.
By repeating steps 17 through 23, another iSCSI virtual hard disk, 1 GB in size, has been created to be used as the disk witness in the failover cluster. The three drives are shown in the figure below.
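The same provisioning can be scripted with the iSCSI Target cmdlets. A hedged sketch of the wizard steps above — the sizes, target name, and initiator addresses are from this lab, while the .vhdx file names and folder are assumptions:

```powershell
# Create the iSCSI virtual disks on drive E (dynamically expanding .vhdx)
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\iLUN0.vhdx -Size 500GB
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\iLUN1.vhdx -Size 300GB
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\Witness.vhdx -Size 1GB

# Create the iSAN target and restrict access to the two cluster nodes
New-IscsiServerTarget -TargetName iSAN `
    -InitiatorIds "IPAddress:192.168.1.200","IPAddress:192.168.1.201"

# Map the virtual disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\iLUN0.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\iLUN1.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\Witness.vhdx
```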
Closing Remarks
Failover clustering is a critical technology to provide high availability of services and applications. This chapter introduced the Windows Server 2012 R2 failover clustering components and the quorum configuration modes. It also illustrated the implementation of the iSCSI Target feature to provide the shared storage for a failover cluster. Our next section will demonstrate step by step how to connect the servers to the shared storage and how to install and configure Windows Server 2012 R2 failover clustering.
Part 2

Our previous section in this ebook explained the main components of a Windows Server 2012 R2 failover cluster, the quorum configuration options, and the shared storage preparation. This section expands on the requirements to implement failover clustering on Windows Server 2012 R2, describes the step-by-step process to connect the servers to shared storage, and covers the installation of a Windows Server 2012 R2 failover cluster. After the cluster is created, Windows PowerShell is used to demonstrate a generic application role configuration.

Requirements and Recommendations for a Successful Failover Cluster Implementation

A Windows Server 2012 R2 failover cluster can have from two to 64 servers, also known as nodes. Once configured, these computers work together to increase the availability of applications and services. However, the requirements for a failover cluster configuration are more stringent than those of any other Windows Server network service that you may manage. Let's review some of the most important limitations:
• It is recommended to install similar hardware on each node.
• You must run the same edition of Windows Server 2012 or Windows Server 2012 R2. The edition can be Standard or Datacenter, but they cannot be mixed in the same cluster.
• Equally important is to configure the cluster with all nodes as Server Core or Full installation but not both.
• Every node in the cluster should also have similar software updates and service packs.
• You must include matching processor architecture on each cluster node. This means that you cannot mix Intel and AMD processors families on the same cluster.
• When using serial attached SCSI or Fibre Channel storage, the controllers or host bus adapters (HBA) should be identical in all nodes. The controllers should also run the same firmware version.
• If Internet SCSI (iSCSI) is used for storage, each node should have at least one network adapter or host bus adapter committed exclusively to the cluster storage. The network dedicated to iSCSI storage connections should not carry any other network communication traffic. It is recommended to use a minimum of two network adapters per node. Gigabit Ethernet (GigE) or faster is strongly suggested for better performance.
• Each node should have installed identical network adapters that support the same IP protocol version, speed, duplex, and flow control options.
• The network adapters in each node must obtain their IP addresses using the same method: either they are all configured with static IP addresses, or they all obtain dynamic IP addresses from a DHCP server.
• Each server in the cluster must be a member of the same Active Directory domain and use the same DNS server for name resolution.
• The networks and hardware equipment used to connect the servers in the cluster should be redundant, so that the nodes maintain communication with one another after a single link fails, a node crashes, or a network device malfunctions.
• In order to access Microsoft support, all the hardware components in your cluster should bear the "Certified for Windows Server 2012" logo, and the configuration must pass the Validate a Configuration Wizard test. More on this later in this section.
Connecting the Servers to Shared Storage

Our lab for this demonstration uses two physical Windows Server 2012 R2 nodes named ServerA1 and ServerA2. Before installing the failover clustering feature, let's connect the servers to the iSCSI target, which contains the shared storage that was created in the first chapter of this series. Starting with ServerA1, here are the steps:
1. In the Server Manager, click Tools, and then click the iSCSI Initiator. If prompted, click Yes in the Microsoft iSCSI dialog box.
2. In the iSCSI Initiator Properties, click the Discovery tab and then
click Discover Portal.
3. In the Discover Target Portal page, in the IP address or DNS name box, type 192.168.1.100, and then click OK. This is the IP address of the iSCSI Target server.
4. Click the Targets tab, click Refresh, select iqn.1991-05.com.microsoft:dc1-isan-target, and then click Connect.
5. In the Connect to Target box, make sure that Add this connection to the list of Favorite Targets is selected, click OK.
6. In the iSCSI Initiator Properties, verify that the Status is Connected and
click OK.
Steps 1 through 6 must also be executed on ServerA2 so that both servers can have access to the shared storage available from the iSCSI Target Server. Next, let’s configure the volumes using Disk Management on ServerA1.
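The initiator steps above can also be performed with the iSCSI initiator cmdlets. A sketch to be run on each node — the portal address and target IQN are the ones used in this lab:

```powershell
# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the iSCSI Target server as a target portal
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100

# Connect to the discovered target and make the connection persistent
# (equivalent to adding it to the Favorite Targets list)
Get-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:dc1-isan-target" |
    Connect-IscsiTarget -IsPersistent $true
```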
1. In the Server Manager, click Tools, and then click Computer Management.
2. Expand Storage, then click Disk Management and verify that you have three
new disks that need to be configured. These are the iSCSI Target disks.
3. Right-‐click Disk 9, and then click Online.
4. Right-‐click Disk 9, and then click Initialize disk. In the Initialize Disk dialog
box, click OK.
5. Right-‐click the unallocated space next to Disk 9, and then click New Simple
Volume.
6. On the Welcome page, click Next.
7. On the Specify Volume Size page, click Next.
8. On the Assign Drive Letter or Path page, click Next.
9. On the Format Partition page, in the Volume Label box, type CSV. Select the Perform a quick format check box, and then click Next.
10. Click Finish.
Repeat steps 1 through 10 for Disks 10 and 11. For Disk 10 change the label to Data, and for Disk 11 change the label to Witness. If you run your own lab, the disk numbers are likely to be different, but the steps are identical. Once all the steps are completed on ServerA1, you need to go to ServerA2 and, from Disk Management, right-click each disk and bring it online. Both servers should show the disks configured as in the figure below.
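The storage cmdlets can script the same disk preparation. A hedged sketch — disk numbers 9, 10, and 11 match this lab and will likely differ in yours:

```powershell
# Bring each iSCSI disk online, initialize it, and create a formatted volume
foreach ($entry in @(@{Number = 9;  Label = "CSV"},
                     @{Number = 10; Label = "Data"},
                     @{Number = 11; Label = "Witness"})) {
    Set-Disk -Number $entry.Number -IsOffline $false
    Set-Disk -Number $entry.Number -IsReadOnly $false
    Initialize-Disk -Number $entry.Number
    New-Partition -DiskNumber $entry.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel $entry.Label
}
```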
Installing the Windows Server 2012 R2 Failover Clustering Feature

Now that both servers are connected to the shared storage, the next phase is to install the failover clustering feature on ServerA1 and ServerA2 using either Windows PowerShell or Server Manager. The process is exactly the same on both servers, so let's demonstrate it on ServerA1.
1. Using Windows PowerShell verify that the Failover clustering feature is not installed on the server by running the following command:
• Get-WindowsFeature Failover-Clustering | FT -AutoSize
2. To install the failover clustering feature, run this command from PowerShell:
• Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Validating the Servers for Failover Clustering
Once the failover clustering feature is installed on both servers, running the wizard to validate the servers for failover clustering allows you to generate a detailed report indicating possible areas that may need to be fixed before creating the cluster. Let’s run the Validate a Configuration Wizard from ServerA1.
1. In the Server Manager, click Tools, and then click Failover Cluster Manager.
2. In the Actions pane of the Failover Cluster Manager, click Validate Configuration.
3. In the Validate a Configuration Wizard, click Next.
4. In the Select Servers or a Cluster, next to the Enter Name box,
type ServerA1, and then click Add.
5. In the Enter Name box, type ServerA2, and then click Add.
6. Verify that ServerA1 and ServerA2 are shown in the Selected servers box and click Next.
7. Verify that Run all tests (recommended) is selected, and then click Next.
8. On the Confirmation page, click Next.
9. Wait for the validation tests to finish. This may take several minutes. On the Summary page, click View Report. It is recommended that you keep this report for future reference.
10. Verify that all tests are completed without errors. You can click on areas of
the report to find out more details on the configurations that show warnings.
11. On the Summary page, click to remove the checkmark next to Create the
cluster now using the validated nodes, and click Finish.
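The same validation can be run from Windows PowerShell with the Test-Cluster cmdlet, which produces the same HTML report:

```powershell
# Run all validation tests against both nodes; the report path is
# written to the console when the tests complete
Test-Cluster -Node ServerA1, ServerA2
```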
Creating the Failover Cluster

Even though there were some warnings, the servers did pass the validation test, so we can proceed to create our cluster now. The following steps will be executed using Failover Cluster Manager on ServerA1, but either node would be fine to complete this process.
1. In the Failover Cluster Manager, in the center pane, under Management, click Create Cluster.
2. On the Before You Begin page of the Create Cluster Wizard, read the
information and click Next.
3. In the Enter server name box, type ServerA1, ServerA2 and then click Add.
4. Verify the entries, and then click Next.
5. In Access Point for Administering the Cluster, in the Cluster Name box, type ClusterA. Under Address, type 192.168.1.210, and then click Next.
6. In the Confirmation dialog box, verify the information, and then click Next.
7. On the Summary page, confirm that the cluster was successfully created and click Finish to return to the Failover Cluster Manager.
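Alternatively, the entire Create Cluster Wizard can be replaced by a single PowerShell command, using the names and address from this lab:

```powershell
# Create the cluster with both nodes and the administrative access point
New-Cluster -Name ClusterA -Node ServerA1, ServerA2 -StaticAddress 192.168.1.210
```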
After the Create Cluster Wizard is done, you can verify that a computer object with the cluster’s name has been created in Active Directory. See figure below.
Also, a host name is automatically registered in DNS for the new cluster. See figure below.
The failover clustering feature predefines specific roles that can be configured for failover protection, including DFS Namespace Server, DHCP Server, File Server, iSCSI Target Server, WINS Server, Hyper-V Replica Broker, and Virtual Machines. It is possible to cluster applications and services that are not cluster-aware by using the Generic Application or Generic Service role, respectively. The figure below shows the roles representing services and applications that can be configured for high availability.
Either the Failover Cluster Manager or Windows PowerShell can be used to configure these roles. The following code provides an example of applying the Generic Application role using Windows PowerShell:

Add-ClusterGenericApplicationRole -CommandLine notepad.exe `
    -Name notepad -StaticAddress 192.168.1.225
The following command can be used to verify that the generic application is online:

Get-ClusterResource "notepad application" | fl
Failover Cluster Manager also shows that the generic application is up and running. See the figure below.
Failover Clustered File Server Options
Windows Server 2012 R2 supports two different clustered file server implementations: Scale-Out File Server for application data and File Server for general use.

Scale-Out File Server for Application Data

Also known as an active-active cluster, this feature was introduced in Windows Server 2012, and it is the recommended clustered file server option for deploying Hyper-V nodes and Microsoft SQL servers over Server Message Block (SMB). This high-performance solution allows you to store server application data on file shares that are concurrently available online on all nodes. Because the aggregated bandwidth from all the nodes becomes the maximum cluster bandwidth, the performance boost can be very significant, and you can increase the total bandwidth by bringing additional nodes into the cluster. These scale-out file shares require SMB 3.0 or higher, and they are not available in any version of Windows Server previous to Windows Server 2012.

File Server for General Use

This is the traditional failover clustering solution that has been available on previous versions of Windows Server, in which only one node is available at a time in an active-passive configuration. It supports some important features that cannot be implemented on Scale-Out File Servers, such as data deduplication, DFS Replication, Dynamic Access Control, Work Folders, NFS shares, BranchCache, and File Server Resource Manager file screening and quota management.

Closing Remarks

Installing the Windows Server 2012 R2 failover clustering feature has some strict hardware and software requirements. This chapter demonstrated how to connect the cluster nodes to shared storage, how to create a cluster, and how to configure a generic application role using Windows PowerShell. There is more to do now that the cluster is up and running, as we can configure additional services and applications for failover protection. After all, that is the whole idea of setting up the cluster.
Our next and final chapter in this series will walk through the configuration of a highly available file server. And saving the best for last, you will see the implementation of cluster shared volumes (CSV) and how they are used on a Hyper-‐V cluster to provide failover protection in a virtualized environment. Live migration will be tested to validate the functionality of the Hyper-‐V cluster.
Part 3

The previous section in this book covered the steps to connect cluster nodes to shared storage, the installation of the Windows Server 2012 R2 failover clustering feature, and the configuration of a cluster role using Windows PowerShell. This chapter demonstrates the process of deploying and configuring a highly available file server, the implementation of cluster shared volumes (CSV), and how to manage a Hyper-V cluster to provide failover protection to virtual machines.

Deploying and Configuring a Highly Available File Server

Our lab uses a cluster named ClusterA.abc.com, which consists of two nodes identified as ServerA1 and ServerA2. You must install the File Server role service on every cluster node before a highly available file server can be configured on the cluster. For our lab, both ServerA1 and ServerA2 have the File Server role service already installed. There are also three disks that have been added to the cluster; one of the disks is the witness quorum and the other two are used for data storage. To deploy the clustered file server, let's complete the following steps:
1. In the Failover Cluster Manager, expand ClusterA.abc.com. Expand Storage, and click Disks. Make sure that Cluster Disk 1, Cluster Disk 2, and Cluster Disk 3 are present and online.
2. Right-‐click Roles, and then select Configure Role.
3. On the Before You Begin page, click Next.
4. On the Select Role page, select File Server, and then click Next.
5. On the File Server Type page, click File Server for general use, and then
click Next.
6. On the Client Access Point page, in the Name box, type GeneralFS; in the Address box, type 192.168.1.215; and then click Next. GeneralFS will be created in Active Directory as a computer object that can be seen in the Computers container with Active Directory Users and Computers or the Active Directory Administrative Center. Also, the same name will be registered on the DNS server with its corresponding IP address.
7. On the Select Storage page, select the Cluster Disk 2 check box, and then click Next.
8. On the Confirmation page notice the network name and the name for the
organizational unit (OU) where the cluster account will be created, then click Next.
9. On the Summary page, click Finish.
10. Under ClusterA.abc.com, click Roles to confirm that the GeneralFS file server role is up and running. Note that ServerA1 is the GeneralFS role's Owner Node. However, it is important to test failover protection in order to verify that ServerA2 can also hold the ownership of GeneralFS in case ServerA1 becomes unavailable.
11. To test failover protection, right-‐click on GeneralFS, and then click on Move
– Select Node.
12. On the Move Clustered Role box, select ServerA2 and click OK.
13. Verify that the role failed over to ServerA2, which is now the owner
of GeneralFS.
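For reference, the clustered file server deployment and the failover test above can both be scripted with the FailoverClusters cmdlets. A hedged sketch using the names from this lab:

```powershell
# Create the clustered file server role (File Server for general use)
Add-ClusterFileServerRole -Name GeneralFS -Storage "Cluster Disk 2" `
    -StaticAddress 192.168.1.215

# Test failover by moving the role to the other node
Move-ClusterGroup -Name GeneralFS -Node ServerA2

# Confirm the new owner node of the role
Get-ClusterGroup -Name GeneralFS
```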
Add a Shared Folder to a Highly Available File Server

Now that the clustered file server has been created, it's time to add shared folders to further assess the functionality of this highly available solution.
1. In Failover Cluster Manager, expand ClusterA.abc.com, and then click Roles.
2. Right-‐click GeneralFS, and then select Add File Share.
3. In the New Share Wizard, on the Select the profile for this share page,
click SMB Share – Quick, and then click Next.
4. On the Select the server and the path for this share page, under Server,
make sure that GeneralFS is selected and click Next.
5. On the Specify share name page, in the Share name box, type Reports, and
then click Next.
6. On the Configure share settings page, review the available settings. Note that the Enable BranchCache on the file share option is not available because the BranchCache for Network Files role service is not installed on the server. Click Next.
7. On the Specify permissions to control access page, click Next.
8. On the Confirm selections page, verify the settings assigned to the file share
and click Create.
9. On the View results page, confirm that the share was successfully created and click Close.
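The same share can be created with the SMB cmdlets, scoped to the clustered client access point. A sketch — the folder path is an assumption; use the path on the clustered disk that the wizard selected in your lab:

```powershell
# Create the Reports share scoped to the GeneralFS client access point
New-SmbShare -Name Reports -Path "E:\Shares\Reports" -ScopeName GeneralFS
```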
Failover and Failback

Failover hands over the authority and responsibility of providing access to resources from one node to another in a cluster. This may happen when a system administrator consciously relocates resources to another node to realign loads or for maintenance purposes. Unexpected, unplanned downtime could also affect a node due to hardware failure or a network breakdown. Furthermore, a service failure on an active node can initiate failover to another node. The failover process takes all the resources in the instance offline in an order that is defined by the instance's dependency levels: dependent resources are taken offline first, followed by the resources on which they rely. Let's say that a service depends on a cluster disk resource: the cluster service takes the service offline first, allowing the service to write changes to the disk before the disk is taken offline.
After all the resources are offline, the Cluster service attempts to transfer the clustered role to the node that is listed next on the clustered role's list of preferred owners. (See the screen shot for step 3 in the Configure Failover and Failback Settings lab below.) Once the cluster service moves the cluster role to another node, it attempts to bring all the resources online in the reverse order from that in which they were taken offline. In our cluster disk and service example, the disk comes online first and then the service. That way, the service will not try to write to a cluster disk that is not available yet. Let's review the failover and failback settings in the next phase of our lab.

Configure Failover and Failback Settings
1. In the Failover Cluster Manager, click Roles, right-‐click GeneralFS, and then clickProperties.
2. Click the Failover tab to configure the number of times that the cluster service should attempt to restart or fail over a service or application in a given time period, and specify values under Failover. By default, a maximum of one failure is allowed in a 6-hour period.
3. Click the General tab. Select both ServerA1 and ServerA2 as preferred
owners. Notice that you can move the nodes up or down to indicate your level of preference.
4. On the Failover tab, click Allow failback. Click Failback between, and set the values to 17 and 7 hours to allow failback to occur between 5:00 PM and 7:00 AM, then click OK. Keep in mind that you must configure at least one preferred owner if you want failback to take place.
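These dialog settings map to properties of the cluster group, so they can also be inspected or changed from PowerShell. A hedged sketch — the failback window values are expressed in hours, matching the 17-to-7 window above:

```powershell
# Set the preferred owners for the GeneralFS role
Set-ClusterOwnerNode -Group GeneralFS -Owners ServerA1, ServerA2

# Allow failback only between 5:00 PM and 7:00 AM
$group = Get-ClusterGroup -Name GeneralFS
$group.AutoFailbackType    = 1    # 0 = prevent failback, 1 = allow failback
$group.FailbackWindowStart = 17
$group.FailbackWindowEnd   = 7
```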
Validate the Deployment of the Highly Available File Server

To validate the clustered configuration, let's access the file share to create data and then make the node that owns the clustered file server role unavailable.
1. From a client computer in the network, open File Explorer, and in the Address bar, type \\GeneralFS and press Enter.
2. Verify that you can access the Reports folder.
3. Create a text document inside the Reports folder.
4. On ServerA1, open the Failover Cluster Manager. Expand ClusterA.abc.com and then click Roles. Note that the current owner of GeneralFS is ServerA2.
5. Click Nodes, right-click ServerA2, click More Actions, and then click Stop Cluster Service.
6. Click Roles to confirm that GeneralFS failed over to ServerA1.
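The same failover test can be driven from PowerShell; this is a sketch assuming the node and role names used in this lab:

```powershell
# Stop the cluster service on ServerA2 to force a failover
Stop-ClusterNode -Name "ServerA2"

# Confirm that GeneralFS is now owned by ServerA1
Get-ClusterGroup -Name "GeneralFS" | Select-Object Name, OwnerNode, State

# When the test is done, bring ServerA2 back into the cluster
Start-ClusterNode -Name "ServerA2"
```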
7. Switch to the network client computer and verify that you can still access \\GeneralFS\ and the Reports folder data.
Cluster Shared Volume (CSV)

In a traditional Windows failover cluster implementation, multiple nodes cannot access a LUN or a volume on the shared storage simultaneously. CSV enables
multiple nodes to share a single LUN at the same time. Each node gains exclusive access to individual files on the LUN instead of to the whole LUN. CSV runs as a distributed file access solution that allows multiple nodes in the cluster to access the same file system on a disk simultaneously. Only NTFS is supported on Windows Server 2012, but Windows Server 2012 R2 added support for the Resilient File System (ReFS). CSVs can only be configured within a failover cluster, after the disks from the shared storage have been added to the cluster. The following steps show how to create a Cluster Shared Volume.
1. From the Failover Cluster Manager, select Disks, right-click Cluster Disk 1, and select Add to Cluster Shared Volumes.
2. Verify that Cluster Disk 1 is now assigned to Cluster Shared Volume.
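The same two steps can be performed from PowerShell; a sketch assuming the disk name from this lab:

```powershell
# Add the clustered disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Verify the assignment; the volume mounts under C:\ClusterStorage on every node
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode
```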
Once the CSV has been created, it can be used to store the highly available virtual machines that will be hosted on the Hyper-V cluster.

Hyper-V Clustering
In the Hyper-V clustering scenario covered here, the cluster nodes are the physical Hyper-V hosts themselves; this is known as host clustering. By contrast, a cluster whose nodes are virtual machines is referred to as guest clustering. Implementing host clustering for Hyper-V allows you to configure virtual machines as highly available resources. In this case, the failover protection is set at the host-server level. As a consequence, the guest operating systems and applications that run within the virtual machines do not have to be cluster-aware. Nevertheless, the virtual machines are still highly available.

Configuring a Highly Available Virtual Machine

For our lab, the Hyper-V role has already been installed on ServerA1 and ServerA2. For details on installing and configuring the Hyper-V role, see this article.
1. In the Failover Cluster Manager console, right-click Roles, click Virtual Machines, and then select New Virtual Machine.
2. Select ServerA1 as the cluster node, and click OK.
3. In the New Virtual Machine Wizard, click Next.
4. On the Specify Name and Location page, type Score1 for the Name, click
Store the virtual machine in a different location, and then click Browse.
5. Browse to and select C:\ClusterStorage\Volume1\ and then click Select
Folder.
6. On the Specify Generation page, click Next.
7. On the Assign Memory page, type 2048, make sure that Use Dynamic Memory for this virtual machine is checked, and then click Next. For details on Hyper-V memory management, see this article.
8. On the Configure Networking page, click External, and then click Next.
9. On the Connect Virtual Hard Disk page, leave the default settings and click
Next.
10. On the Installation Options page, select Install an operating system from a bootable CD/DVD-ROM, click Image file (.iso), click Browse, and select a Windows Server 2012 R2 ISO file. Click Next.
11. On the Completing the New Virtual Machine Wizard page, click Finish.
12. On the Summary page, confirm that high availability was successfully
configured for the role. Click Finish.
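The wizard steps above can also be scripted. The following is a hedged PowerShell sketch, run on ServerA1; the virtual switch name, disk size, and ISO path are illustrative placeholders you would replace with your own values:

```powershell
# Create the virtual machine on the Cluster Shared Volume
New-VM -Name "Score1" `
       -MemoryStartupBytes 2048MB `
       -Path "C:\ClusterStorage\Volume1" `
       -NewVHDPath "C:\ClusterStorage\Volume1\Score1\Score1.vhdx" `
       -NewVHDSizeBytes 60GB `
       -SwitchName "External"

# Enable dynamic memory and attach the installation media (ISO path is a placeholder)
Set-VMMemory -VMName "Score1" -DynamicMemoryEnabled $true
Set-VMDvdDrive -VMName "Score1" -Path "D:\WindowsServer2012R2.iso"

# Make the virtual machine a highly available clustered role
Add-ClusterVirtualMachineRole -VMName "Score1"
```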
13. In the Failover Cluster Manager console, click Roles, right-click Score1, and click Start.
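From PowerShell, a sketch of the equivalent, assuming the clustered role carries the virtual machine's name:

```powershell
# Start the clustered virtual machine
Start-VM -Name "Score1"

# Confirm that the clustered role is online
Get-ClusterGroup -Name "Score1" | Select-Object Name, OwnerNode, State
```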
14. In the Failover Cluster Manager console, click Roles, right-click Score1, and click Connect to complete the guest operating system installation.
15. Once the installation completes, verify that you can access the Score1 virtual
machine.
Perform a Live Migration for the Virtual Machine
1. From a client computer in the network, send a continuous ping to Score1 by typing the following from a command prompt:
ping -t Score1
2. In the Failover Cluster Manager, expand ClusterA.abc.com, and click Roles. Then right-click Score1, select Move, select Live Migration, and then click Select Node.
3. Click ServerA2 and click OK.
4. Return to the client computer to monitor the pings to Score1. In our lab, only one packet was lost, and the ping continued as the Score1 virtual machine was live-migrated to ServerA2.
5. In the Failover Cluster Manager, click Roles to confirm that ServerA2 now owns the Score1 virtual machine.
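The live migration can also be initiated from PowerShell; a sketch using the names from this lab:

```powershell
# Live-migrate the Score1 virtual machine to ServerA2
Move-ClusterVirtualMachineRole -Name "Score1" -Node "ServerA2" -MigrationType Live

# Confirm the new owner
Get-ClusterGroup -Name "Score1" | Select-Object Name, OwnerNode, State
```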
Closing Remarks

High availability is one of the top priorities in many data centers and IT departments. Windows Server 2012 R2 provides a robust clustering solution that can be used with many applications and services. File servers and Hyper-V servers are among the most common implementations of Windows failover clustering. This book provides a hands-on approach to the deployment and configuration of highly available file servers and host clustering with Hyper-V. In both scenarios, the aim is the same: eliminate single points of failure and detect failures as they happen.