
VMware Zimbra Collaboration Server Cluster Installation -

Multi-Node Configuration

Network Edition 7.1

March 2011

Legal Notices

Copyright ©2005-2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.

VMware and Zimbra are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

VMware, Inc.

3401 Hillview Avenue, Palo Alto, California 94304 USA

www.zimbra.com


ZCS Cluster Installation- Multi-Node Configuration Guide

VMware Zimbra Collaboration Server (ZCS) 7.1 cluster integration requires planning the cluster design and precisely executing the configuration. The ZCS cluster software automates the setup on the nodes: scripts in the ZCS software installer configure the servers for cluster integration. Your cluster management software manages the ZCS cluster.

Topics in this chapter include:

Installation Prerequisites

Flow of Cluster Installation

Preparing the SAN

Order of Installing ZCS Servers

Downloading the ZCS Software

Installing and Configuring Active Nodes

Installing ZCS Cluster on Standby Nodes

Preparing Red Hat Cluster Suite for ZCS

Modify Zimbra LDAP and MTA Servers for Logger Service

Start the Red Hat Cluster Suite Daemons

Testing the Cluster Set up

Configuring ZCS in VCS

View Zimbra Cluster Status

This guide explains how to install a ZCS multi-server setup in a multi-node cluster environment.

Before you install your cluster environment, also read the following guides:

Red Hat Installation Modifications for ZCS, which describes how to prepare the Red Hat Enterprise Linux operating system for ZCS

Network Edition of ZCS 7.1 Multi-Server Installation Guide, which describes how to install ZCS and determine the system requirements

To get the latest copy of the documentation, go to http://www.zimbra.com/support/documentation.html.

For cluster integration to provide high availability, ZCS integrates with either of the following:

Red Hat® Cluster Suite running on Red Hat Enterprise Linux® AS or ES

• Release 4, Update 5

• Release 5, Update 3

Veritas™ Cluster Server by Symantec (VCS) version 5.0 with maintenance pack 1 or later


The ZCS cluster installation scripts do not configure ZCS cluster management. You manually configure VCS to manage the ZCS servers. See Configuring ZCS in VCS on page 29 for information about configuring ZCS.

Note: This guide does not explain how to use the cluster management software. Before setting up the ZCS cluster environment, you should know the concepts and terminology of the software you are using to manage high availability.

Installation Prerequisites

All servers must meet the requirements described in the Systems Requirements section of the ZCS Multi-Server Installation Guide, in addition to the requirements described here.

ZCS Server Requirements

For ZCS clustering, the server operating system must be Red Hat Enterprise Linux AS/ES

Release 4, Update 5

Release 5, Update 3

The operating system must be configured as described in the Red Hat Installation Modifications for ZCS guide.

Cluster Managers For Clustering

One of the following cluster management tools must be installed on every Zimbra mailbox server before you start to install the ZCS cluster.

Red Hat Enterprise Linux Cluster Suite for the Red Hat Enterprise Linux software you are running

If you are using Red Hat Cluster Manager, go to the Red Hat Cluster Suite website for specific system requirements for cluster configurations using Red Hat Cluster Suite. If you are not familiar with the Red Hat Cluster Suite, read the documentation to understand how each of the components works to provide high availability.

Note: Red Hat Cluster Suite consists of Red Hat Cluster Manager and Linux Virtual Server Cluster. For ZCS, only Red Hat Cluster Manager is used. In this guide, Red Hat Cluster Suite refers only to Cluster Manager.

In many cases, you may not need to use Red Hat’s graphical Cluster Configuration Tool to configure the ZCS cluster. If you do, refer to the Red Hat Cluster Suite documentation for detailed configuration and management instructions.


Veritas Cluster Server by Symantec (VCS) version 5.0 with maintenance pack 1 or later

If you are using Veritas Cluster Server, go to the Symantec website for specific system requirements for cluster configurations. If you are not familiar with Veritas Cluster Server, read the Veritas Cluster Server User’s Guide.

Hardware for the Cluster Environment

The following hardware is required:

SAN (Storage Area Network - FibreChannel or iSCSI) with partitions to store the data for each of the ZCS servers. The size of the shared storage device depends on your expected site capacity. See Preparing the SAN on page 12.

If using Red Hat Cluster Manager, a network power control switch to connect cluster nodes is required. The power control switch is used as the fence device for I/O fencing during a failover.

Flow of Cluster Installation

The ZCS cluster install process starts with the active nodes. On each active node, the process:

• Installs the necessary files, defines users and groups, and creates the mount points for the clustered service

• Installs ZCS

• Mounts the SAN volume(s)

If using Red Hat Cluster Manager, the following is also done:

• Runs the cluster postinstall.pl program

• Runs the cluster configurator script to prepare the Red Hat Cluster Suite

• Copies the cluster configuration file to the standby node

• Starts Red Hat Cluster Suite daemons

If using Veritas Cluster Server, after the SAN volume(s) are mounted, use the VCS configuration interface to configure ZCS. See Configuring ZCS in VCS on page 29.

On the Standby nodes, this process does the following:

• Installs the necessary files, defines users and groups, and creates the mount points for the clustered service

• Installs ZCS software

• If using Red Hat Cluster Manager, starts Red Hat Cluster Suite daemons on the standby nodes


Cluster Screens Scenario

The screenshots in this guide show the configuration of a cluster environment with two active nodes, one standby node, and two cluster services, plus separate LDAP and MTA servers that are not under the control of the cluster manager. The domain name is example.com.

The following Zimbra servers are configured:

One Zimbra LDAP server, ldap.example.com

One Zimbra MTA server, mta.example.com

Three Zimbra mailbox nodes. Two mailbox nodes are active servers. One mailbox node is the standby server.

• Active mailbox node 1, node1.example.com

• Active mailbox node 2, node2.example.com

• Standby mailbox node, node3.example.com

Two cluster services, one for each of the active nodes

• Cluster Service A, clusterA.example.com

• Cluster Service B, clusterB.example.com

Eighteen volumes are configured on the SAN for this example cluster, nine for each of the two services.

Preparing the SAN

You can place all service data on a single volume or place the service data on multiple volumes. Configure the SAN device and create the partitions for the volumes.

If you choose to configure the SAN as one volume with subdirectories, all service data goes under a single SAN volume.

If you choose to partition the SAN into multiple volumes, the SAN device is partitioned to provide the multiple volumes for each Zimbra mailbox server in the cluster. The directory hierarchy for these volumes is located under /opt/zimbra-cluster/mountpoints/. For example, /opt/zimbra-cluster/mountpoints/clusterA.example.com.


An example of the types of volumes that can be created for each service follows:

• conf — Volume for the service-specific configuration files

• log — Volume for the local logs for the Zimbra mailbox server

• redolog — Volume for the redo logs for the Zimbra mailbox server

• db/data — Volume for the MySQL data files for the data store

• store — Volume for the message files

• index — Volume for the search index files

• backup — Volume for the backup files

• logger/db/data — Volume for the MySQL data files for the logger service’s MySQL instance

Order of Installing ZCS Servers

Install and configure ZCS servers in the following order:

1. Zimbra LDAP server.

2. Active and standby mailbox nodes as the Zimbra mailbox servers in the cluster.

3. MTA servers. The MTA server is installed last because you must configure one of the active cluster services’ hostnames as the MTA auth host.

Downloading the ZCS Software

For the latest Zimbra software, go to www.zimbra.com/downloads/. Download and save the ZCS Network Edition file to the computers from which you will install the software. The ZCS software includes all packages necessary to install ZCS, including the zimbra-cluster package.

Install the LDAP Server

See Chapter 4, Multiple-Server Installation, Installing Zimbra LDAP Master Server section in the 7.1 Multi-Server Installation Guide for instructions about how to install the Zimbra LDAP server.

Installing and Configuring Active Nodes

Log in as root to the first active node.

1. Bring up the service IP address on the active node.

[root@node1 ~]# ip addr add xx.xx.xxx.xx dev eth0


2. Unpack the ZCS .tgz file.

tar xzvf <zcsname.tgz>

3. Change directories to the unpacked file and type the following command to begin the cluster install.

./install.sh --cluster active

4. The nodes require Zimbra and Postfix users and groups. Type the Zimbra group ID and Zimbra user ID to be used. The same user and group IDs must be configured on every node. Change the defaults if those IDs are not available on all nodes in the cluster. If the IDs are not the same, some nodes will not be able to access files on the SAN that are owned by these users and groups.

a. Type the Zimbra group ID (GID) to be used. The default is 500.

b. Type the Postfix group ID. The default is 501.

c. Type the Postdrop group ID. The default is 502.

d. Type the Zimbra user ID (UID) to be used. The default is 500.

e. Type the Postfix user ID. The default is 501.

The root directory for the mount points is created.
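Because mismatched IDs break SAN file access, it can help to compare the IDs reported by each node before continuing. A minimal sketch: the passwd-style lines below are hypothetical samples, and in practice you would capture one per node (for example with `ssh root@node2.example.com getent passwd zimbra`):

```shell
# uid_of: pull the numeric UID (third field) out of a passwd-style line.
uid_of() { printf '%s\n' "$1" | cut -d: -f3; }

# Sample lines; replace with the real output fetched from each node.
node1_zimbra='zimbra:x:500:500::/opt/zimbra:/bin/bash'
node2_zimbra='zimbra:x:500:500::/opt/zimbra:/bin/bash'

if [ "$(uid_of "$node1_zimbra")" = "$(uid_of "$node2_zimbra")" ]; then
  echo "zimbra UID consistent"
else
  echo "zimbra UID mismatch" >&2
fi
```

Repeat the same comparison for the postfix user and the zimbra, postfix, and postdrop groups.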


5. On each active node, create mount points for the cluster services. Enter one service name per prompt.

• Type the first cluster service hostname and press Enter. For example, clusterA.example.com. If you are installing on one volume, you create only one mount point; if you are installing on multiple volumes, you create a mount point for each volume.

• Type additional cluster service names until all services are configured. In this scenario, add clusterB.example.com.

• Type done when all mount points are created.

Each Zimbra cluster node needs zimbra and postfix users and groups. The same user and group IDs must be used on all nodes. If not, some nodes will not be able to access files on SAN owned by these users/groups.

Enter zimbra group ID: [500]
... groupadd -g 500 zimbra
groupadd: group zimbra exists

Enter postfix group ID: [501]
... groupadd -g 501 postfix
groupadd: group postfix exists

Enter postdrop group ID: [502]
... groupadd -g 502 postdrop
groupadd: group postdrop exists

Enter zimbra user ID: [500]
... useradd -u 500 -g zimbra -G postfix,tty -d /opt/zimbra -s /bin/bash zimbra
useradd: user zimbra exists
... chown root:root /opt/zimbra

Enter postfix user ID: [501]
... useradd -u 501 -g postfix -d /opt/zimbra/postfix -s /bin/bash postfix
useradd: user postfix exists
... chown root:root /opt/zimbra/postfix
chown: cannot access `/opt/zimbra/postfix': No such file or directory

Creating root directory for mount points
... mkdir -p /opt/zimbra-cluster/mountpoints


6. Enter the cluster service hostname that will be active on this node. This is the same as the public hostname and the name associated with the IP address that the cluster migrates from node to node; it is not the same as the node hostname.

7. If you created multiple SAN volumes, mount the cluster services entered in Step 5 and create the directories that will have separate SAN partitions. The first volume, /opt/zimbra-cluster/mountpoints/<clusterservicename.com>, is required.

The volumes must be mounted before proceeding.

The directories that can be created are as follows:

• conf

• log

• redolog

• db/data

• store

• index

• backup

• logger/db/data

To create directories, type the following:

• mkdir -p /opt/zimbra-cluster/mountpoints/<clusterservicename.com>/<dirname>
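The directory list above can be created in one pass. A sketch, shown against a scratch directory so it can be tried without touching /opt; substitute the real mountpoint root on the node:

```shell
# make_dirs: create the per-service data directories listed above
# under the given base (the guide uses /opt/zimbra-cluster/mountpoints).
make_dirs() {
  base=$1; svc=$2
  for d in conf log redolog db/data store index backup logger/db/data; do
    mkdir -p "$base/$svc/$d"
  done
}

scratch=$(mktemp -d)               # scratch directory for a dry run
make_dirs "$scratch" clusterA.example.com
ls "$scratch/clusterA.example.com"
```

On the node itself you would call `make_dirs /opt/zimbra-cluster/mountpoints clusterA.example.com` as root after the service volume is mounted.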

On every mailbox server node you need to create mount points for all cluster services. Enter one service name per prompt.

Enter cluster service name ("done") to finish: clusterA.example.com
... mkdir -p /opt/zimbra-cluster/mountpoints/clusterA.example.com

Enter cluster service name ("done") to finish: clusterB.example.com
... mkdir -p /opt/zimbra-cluster/mountpoints/clusterB.example.com

Enter cluster service name ("done") to finish: done

Mount points were created for the following cluster services:

clusterA.example.com
clusterB.example.com

Enter the active cluster service name for this node: [node1.example.com] clusterA.example.com


• mount LABEL=<clusterservicename.com>-<dirname> /opt/zimbra-cluster/mountpoints/<clusterservicename.com>/<dirname>

Note: You can mount one volume for all services or you can mount separate volumes. To mount one volume for a service, as root type:

mount -v LABEL=<labelname> /opt/zimbra-cluster/mountpoints/<clusterservicename.com>

Next Steps

Before you rerun install.sh --cluster active to install the ZCS software on the active node, delete the /opt/zimbra symlink that was created by the initial pass of the installer. Type rm -rf /opt/zimbra.

Installing the ZCS Software

Refer to the ZCS 7.1 Multi-Server Installation Guide, Network Edition, Chapter 4 Multiple-Server Installation for ZCS installation instructions.

Important: If you install the Logger package, it must be installed on each mailbox node but only enabled on the first active node.

For each active node in the cluster, install ZCS and note these configuration points. These changes are made during the installation from the Main menu, Common configuration submenu.

• When the Zimbra software is installed, the installation detects the hostname configured for the server and automatically inserts this name as the default hostname for various values. The server hostname must be changed to the cluster service name configured in Step 5 in the Installing and Configuring Active Nodes section.

• The LDAP server name and LDAP password are required. To find the LDAP password, after the LDAP server is installed, on the LDAP server, type su - zimbra, then type zmlocalconfig -s ldap_root_password.

Make the following changes to the Installing Zimbra Mailbox Server installation instructions in the installation guide:

Please mount all the SAN volumes and rerun install.sh --cluster active.

[root@node1 zcs-NETWORK-7.0.1_1703.RHEL4_64.20111105125148]# mount -v LABEL=cluster01 /opt/zimbra-cluster/mountpoints/clusterA.example.com
cd /opt/zimbra-cluster/mountpoints/clusterA.example.com
mkdir -p conf log store db/data redolog index backup logger/db/data
mount LABEL=clusterA.example.com-conf conf
mount LABEL=clusterA.example.com-log log
mount LABEL=clusterA.example.com-store store
mount LABEL=clusterA.example.com-db-data db/data
mount LABEL=clusterA.example.com-redolog redolog
mount LABEL=clusterA.example.com-index index
mount LABEL=clusterA.example.com-backup backup
mount LABEL=clusterA.example.com-logger-db-data logger/db/data
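The per-service mount commands follow a regular pattern: the label is the service name plus the directory name, with slashes turned into dashes. They can therefore be generated rather than typed. A sketch that only prints the commands, so they can be reviewed before running them as root:

```shell
# gen_mounts: print the mount command for each data volume of one
# service, using the label convention shown in the transcript above.
gen_mounts() {
  svc=$1
  for d in conf log store db/data redolog index backup logger/db/data; do
    label="$svc-$(printf '%s' "$d" | tr / -)"
    echo "mount LABEL=$label /opt/zimbra-cluster/mountpoints/$svc/$d"
  done
}

gen_mounts clusterA.example.com
```

Piping the output through `sh` (as root, with the filesystems labeled accordingly) would perform the mounts; adjust the directory list if you chose fewer separate volumes.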


1. Instead of step 1 bullet three, to start the ZCS installation, type:

./install.sh --cluster active -l /<directory>/ZCSLicense.xml

For the first active node only, to install your Zimbra license during the installation process, type -l /<directory>/ZCSLicense.xml. If you do not install it now you are asked to install the license when you configure the Zimbra store.

2. When you select the Zimbra packages to install, include the Zimbra-Cluster package.

3. When the DNS error about resolving the MX record displays, enter Yes to change the domain name. Modify the domain name to the cluster service hostname (not the active node name).

4. When the Main menu displays, in the Common configuration menu, change the server hostname to the cluster service name configured in Step 6; change the LDAP hostname to point at the LDAP master, not the cluster service name; and change the LDAP root password.

5. On the first active node only, in the zimbra-store menu, set the Admin Password for the administrator account. On the other active nodes, disable Create Admin User: type 2 and then type N. You only need to create one admin account.

6. In the zimbra-store menu, configure the SMTP host to the same mta-server host name on all cluster nodes.

7. If you change the Web server mode settings in the zimbra-store menu, the Web mode must be identical on all cluster nodes.

When Configuration Complete - press return to exit displays, the cluster install on the active node is complete.

Continue to install all active nodes.

Installing ZCS Cluster on Standby Nodes

For standby nodes, you define the same group IDs and user IDs and identify the cluster service names. The ZCS software is installed but not configured on a standby node.

1. Unpack the ZCS .tgz file.

tar xzvf <zcsname.tgz>

2. Change directories to the unpacked file and type the following command to begin the cluster install.

./install.sh --cluster standby

3. Each Zimbra cluster node needs Zimbra and Postfix users and groups. Type the Zimbra group ID (GID) and Zimbra user ID (UID) to be used. The same user and group IDs must be used on all nodes.


a. Type the Zimbra group ID (GID) to be used. The default is 500.

b. Type the Postfix group ID. The default is 501.

c. Type the Postdrop group ID. The default is 502.

d. Type the Zimbra user ID (UID) to be used. The default is 500.

e. Type the Postfix user ID. The default is 501.

The root directory for the mount points is created.

4. To create mount points for the cluster services on the standby node, type the active node cluster service hostname. Press Enter. Mount point(s) are created for the cluster. These are the same service names as on the active host.

5. Continue to add each standby cluster service. The same cluster service names must be entered as on the active nodes.

6. Type done, to finish the mount point configuration.

7. The ZCS install automatically starts to install the ZCS software packages. Select the same Zimbra packages as installed on the active host. No modifications are necessary when installing the packages.

The Zimbra processes are stopped, various cluster-specific adjustments are made to the ZCS installation, and unnecessary data files are deleted.

After the software installation is complete, you are asked to enter the active cluster service name for this standby node. This creates the symlink to /opt/zimbra.

Preparing Red Hat Cluster Suite for ZCS

If you are using the Red Hat Cluster Suite, after the active and standby nodes have ZCS and the cluster software installed, the ZCS Cluster Configurator script is run on one of the active nodes to prepare Red Hat Cluster Suite to run ZCS. The cluster configurator script is run on only one active mailbox node.

The cluster configurator asks a series of questions to gather information about the cluster and generate the cluster configuration file, /etc/cluster/cluster.conf. This is the main configuration file of Red Hat Cluster Suite.
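For orientation, a cluster.conf for this chapter's example scenario has roughly the following shape. This is an illustrative, abbreviated fragment assembled from the configurator summary shown later in this chapter, not generated output; element and attribute names follow the Red Hat Cluster Suite configuration schema, and your generated file will differ in detail:

```xml
<cluster name="mycluster" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com">
      <fence><method name="1"><device name="fence-device" port="1"/></method></fence>
    </clusternode>
    <!-- node2 and node3 entries are analogous, with ports 2 and 3 -->
  </clusternodes>
  <fencedevices>
    <fencedevice name="fence-device" agent="fence_apc"
                 ipaddr="apc.example.com" login="apc" passwd="apc"/>
  </fencedevices>
  <rm>
    <service name="clusterA.example.com" recovery="relocate">
      <ip address="10.10.141.200"/>
      <fs name="clusterA.example.com-vol" device="LABEL=clusterA.example.com-vol"
          mountpoint="/opt/zimbra-cluster/mountpoints/clusterA.example.com"/>
    </service>
  </rm>
</cluster>
```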

Software Installation complete!

Operations logged to /tmp/install.log.730

Generating ssh keys for cluster standby node...done.
Installing ssh key into authorized_keys...done.

Enter the active cluster service name for this node: [clusterA.example.com] <cluster_service_name>

[root@node2 zcs-NETWORK-5.0.2_1703.RHEL4_64.20071105125148]#


The cluster configurator installs the generated configuration file on each cluster node as /etc/cluster/cluster.conf.

The ZCS Cluster Configurator configures the following:

Fence Device. This is the network power switch. The active node is plugged into the fence device. The cluster uses the fence device for I/O fencing during failover.

Cluster Nodes. The active node is added as a member to the cluster and the fence device setting is configured for the active node.

Managed Resources. The preferred node for each service and the list of volumes to be mounted from the SAN are configured.

Note: The ZCS cluster configurator should generate a correct configuration file for most installations, but some cases are more complicated. For instance, if you are using multiple fence devices or a highly customized SAN setup, the configurator script will not work. In those cases, use the configurator to generate an initial cluster.conf, then run the graphical Red Hat Cluster Configuration Tool to make the necessary changes. Using the ZCS cluster configurator script first is recommended because the script automates the steps for the basic configuration. After using the Red Hat Cluster Configuration Tool, you must manually copy the final cluster.conf file to each cluster host.
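The final manual copy can be scripted. A sketch using the node names from this guide's example scenario; the echo makes it a dry run that only prints the commands, so remove the echo to perform the copies:

```shell
# Print (dry run) the scp commands that push a hand-edited
# cluster.conf to every node in the example cluster.
push_conf() {
  for n in node1.example.com node2.example.com node3.example.com; do
    echo scp /etc/cluster/cluster.conf "root@$n:/etc/cluster/cluster.conf"
  done
}

push_conf
```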

ZCS Cluster Configurator steps

1. When ZCS installation is complete on the active and standby nodes, on one active node, change directories to the directory where the ZCS .tgz file was unpacked and start the configure-cluster.pl script. Type:

bin/configure-cluster.pl

Press Enter to continue.

The configurator checks to verify that the server installation is correct.


2. Each Zimbra cluster on the network must have a unique name to avoid interfering with another Red Hat Cluster Suite cluster. The maximum number of characters for this name is 16.

Enter a name to identify this cluster. Press Enter.
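The 16-character limit is easy to trip over with fully qualified names, so a quick check before running the configurator can save a retry. A sketch, not part of the configurator itself:

```shell
# valid_cluster_name: true when the name is non-empty and at most
# 16 characters, the Red Hat Cluster Suite limit noted above.
valid_cluster_name() {
  name=$1
  [ -n "$name" ] && [ "${#name}" -le 16 ]
}

valid_cluster_name mycluster && echo "mycluster: ok"
```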

3. Select the network power switch type that is used as the fence device. For ZCS configuration, you must select either APC or WTI as the network power switch device, even if this is not the device you are using. After the cluster configuration is complete, you can change the generated configuration file using the Red Hat Cluster Manager Console system-config-cluster GUI tool.

a. Enter the number that corresponds to the fence device vendor:

• 1 for APC

• 2 for WTI

b. Enter the fence device hostname/IP address, login, and password.

4. Enter the fully-qualified hostname for each of the nodes in the cluster and the plug number associated with the node’s power cord. When all the nodes are identified, type Done.

[root@node1 bin]# bin/configure-cluster.pl

Zimbra Collaboration Suite Cluster Configurator

This script will guide you through creating an initial configuration file for Red Hat Cluster Suite. A series of questions will be asked to collect the necessary information. At the end, the configuration data will be saved to a file and the file will be copied to all cluster nodes, as /etc/cluster/cluster.conf on each node.

Press Enter to continue
---------------------------------------------------------

Each Zimbra cluster on the network must have a unique name.
Enter the cluster name: mycluster

Choose fence device vendor:
 1) APC
 2) WTI
Choose from above (1-2): 1
Enter fence device hostname/IP address: apc.example.com
Enter fence device login [apc]: xx
Enter fence device password: apc


5. For each service, choose a preferred node to run on. A list of services is displayed; select a service and press Enter.

6. Choose the preferred node on which to run the selected service and press Enter.

7. A Zimbra cluster service must mount service-specific data volumes. Choose the volume setup type.

Note: You can place all service data on a single volume or choose to place data on multiple volumes. Single-volume deployments are recommended only for testing environments. The multiple-volume option supports a customized configuration of one to nine volume sets.

For each cluster node you must provide its fully-qualified hostname and the plug number on the fence device.

Enter node hostname ("done" if no more): node1.example.com
Enter fence device plug number for node1.example.com: 1

Enter node hostname ("done" if no more): node2.example.com
Enter fence device plug number for node2.example.com: 2

Enter node hostname ("done" if no more): node3.example.com
Enter fence device plug number for node3.example.com: 3

Enter node hostname ("done" if no more): done

For each service you need to choose a preferred node to run on, and enter the list of volumes to be mounted from the SAN.

Choose a service:
 1) clusterA.example.com
 2) clusterB.example.com
 3) Done
Choose from above (1-3): 1

Choose preferred node on which to run service clusterA.example.com:
 1) node1.example.com
 2) node2.example.com
 3) node3.example.com
Choose from above (1-3): 1


8. A prompt asks whether the service has a separate volume for each directory. If you created the directory described, enter Y.

9. You are prompted for the device name or LABEL for each of the mount points you chose to use. In a multi-volume environment this is at least one volume and can involve up to nine prompts, based on which items you chose to mount separately. Enter the cluster mount labels defined for the active nodes.

A Zimbra cluster service must mount service-specific data volumes. Two choices are provided in this configuration process. All service data can be placed on a single volume, or multiple volumes can be used for different types of data files. In the multiple-volumes case nine volumes are used per service.

Choose volume setup type:
 1) single volume
 2) multiple volumes
Choose from above (1-2): 2


ZCS directory:
Volume clusterA.example.com-vol: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb5

Do you have a separate volume for conf on clusterA.example.com? (Y/N) y
Conf directory:
Volume clusterA.example.com-conf: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/conf
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb6

Do you have a separate volume for logs on clusterA.example.com? (Y/N) y
Log directory:
Volume clusterA.example.com-log: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/log
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb7

Do you have a separate volume for mailbox redologs on clusterA.example.com? (Y/N) y


10. When Choose a service... displays, select Done. The configuration is complete.

11. Press Enter to view a summary of the cluster configuration.

After viewing the summary, save the configuration to a file. You can either accept the default name or rename the configuration file.

Redolog directory:
Volume clusterA.example.com-redolog: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/redolog
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb8

Do you have a separate volume for MySQL data on clusterA.example.com? (Y/N) y
MySQL data directory:
Volume clusterA.example.com-db-data: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/db/data
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb9

Do you have a separate volume for mailbox message store on clusterA.example.com? (Y/N) y
Message store directory:
Volume clusterA.example.com-store: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/store
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb10

Do you have a separate volume for search indices on clusterA.example.com? (Y/N) y
Search index directory:
Volume clusterA.example.com-index: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/index
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb11

Do you have a separate volume for backups on clusterA.example.com? (Y/N) y
Backup directory:
Volume clusterA.example.com-backup: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/backup
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb12

Do you have a separate volume for logger MySQL data on clusterA.example.com? (Y/N) y
Logger MySQL data directory:
Volume clusterA.example.com-logger-db-data: mount point = /opt/zimbra-cluster/mountpoints/clusterA.example.com/logger/db/data
Enter device name (e.g. /dev/sda5, LABEL=mylabel): /dev/sdb13

Choose a service:
 1) clusterA.example.com
 2) clusterB.example.com
 3) Done
Choose from above (1-3): 3


12. The configuration file must be copied to all cluster nodes. The ZCS configurator script can copy the file, or you can do it manually. If you want the script to copy the file to the standby node, enter Y. Enter the root password if prompted.

When asked, press Enter to continue.

Finished collecting information.
Press Enter to view summary of the configuration.

Configuration Summary
---------------------

Cluster Name: mycluster

Fence Device:
  name: fence-device
  agent: fence_apc
  ipaddr: apc.example.com
  login: apc
  passwd: apc

Nodes:
  node1.example.com - fence port 1
  node2.example.com - fence port 2
  node3.example.com - fence port 3

Services:
  clusterA.example.com
  ipaddr: 10.10.141.200
  preferred node: node1.example.com
  volumes:
    clusterA.example.com-vol
      mountpoint: /opt/zimbra-cluster/mountpoints/clusterA.example.com
      device: LABEL=mycluster
-----------------------------------------------------------

About to save configuration file.
Enter filename [/tmp/cluster.conf.29843]:

------------------------------------------------------------
cluster.conf to all nodes.

Press Enter to continue.


Important: Use the Red Hat Cluster Configuration Tool if you want to further customize the cluster configuration after the configuration file is generated and copied to all cluster nodes. If you customize the configuration file, you must then manually copy the updated cluster.conf to all nodes.
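If you edit the file by hand, the manual copy step can be scripted. A minimal sketch, assuming the guide's example node names and root SSH access; the loop only prints the scp commands so you can review them before running:

```shell
# Print the scp commands that would push an edited cluster.conf from this
# node to the remaining cluster nodes (node names are this guide's examples).
for node in node2.example.com node3.example.com; do
  echo scp /etc/cluster/cluster.conf root@"$node":/etc/cluster/cluster.conf
done
```

Remove the `echo` to actually execute the copies.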

Install the MTA Server

See the Multi-Server Installation Guide for instructions about how to install the Zimbra MTA server.

Modify Zimbra LDAP and MTA Servers for Logger Service

You must modify the syslog setup on the Zimbra LDAP and Zimbra MTA servers so that their log messages reach the logger service.

1. On the LDAP server, as root, run /opt/zimbra/bin/zmsyslogsetup.

2. On the MTA server, as root, run /opt/zimbra/bin/zmsyslogsetup.
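Both steps run the same script, so they can be driven from one host. A sketch, assuming hypothetical host names and root SSH access; it only prints the commands so you can review them before running:

```shell
# Print the zmsyslogsetup invocation for each non-mailbox Zimbra server.
# The host names here are illustrative placeholders, not from this guide.
for host in ldap1.example.com mta1.example.com; do
  echo ssh root@"$host" /opt/zimbra/bin/zmsyslogsetup
done
```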

Cluster configuration saved in /tmp/cluster.conf.29843
This file must be copied to all cluster nodes now. This script can
do it for you using scp, or you can do it manually.
Copy to all cluster nodes using scp? (Y/N) y

Copying /tmp/cluster.conf.29843 to node1.example.com:/etc/cluster/cluster.conf...
scp /tmp/cluster.conf.29843 root@node1.example.com:/etc/cluster/cluster.conf
root@node1.example.com's password:
cluster.conf.29843    100% 1560    1.5KB/s    00:00

Copying /tmp/cluster.conf.29843 to node2.example.com:/etc/cluster/cluster.conf...
scp /tmp/cluster.conf.29843 root@node2.example.com:/etc/cluster/cluster.conf
root@node2.example.com's password:
cluster.conf.29843    100% 1560    1.5KB/s    00:00

Copying /tmp/cluster.conf.29843 to node3.example.com:/etc/cluster/cluster.conf...
scp /tmp/cluster.conf.29843 root@node3.example.com:/etc/cluster/cluster.conf
root@node3.example.com's password:
cluster.conf.29843    100% 5439    5.3KB/s    00:00

Configuration generated and pushed to all cluster nodes.

If necessary, use system-config-cluster GUI tool to further customize the cluster configuration.


Start the Red Hat Cluster Suite Daemons

After the cluster configuration file is copied to every node, you can start the Red Hat Cluster Suite daemons.

Important: Before you start the cluster services, stop Zimbra on all active nodes, kill any orphaned Zimbra processes, and unmount all SAN volumes.

To start the cluster daemons correctly, you must be logged on to each node before proceeding, and to see any errors you should have two sessions open per node; in this three-node example, that is six sessions. Enter a command on one node, then enter the same command on the next, and so forth. You must enter each command on all nodes before proceeding to the next command.

Log on to each node as root.

Run tail -f /var/log/messages, on each node to watch for any errors.

Open another session for each node.

To start the Red Hat Cluster Service on a member, type the following commands in this order. Remember to enter the command on all nodes before proceeding to the next command.

1. service ccsd start. This is the cluster configuration system daemon that synchronizes configuration between cluster nodes.

2. service cman start. This is the cluster heartbeat daemon. The command may not complete on all nodes immediately. It returns when all nodes have established heartbeat with one another.

3. service fenced start. This is the cluster I/O fencing system that allows cluster nodes to reboot a failed node during failover.

4. service rgmanager start. This manages cluster services and resources.

The service rgmanager start command returns immediately, but initializing the cluster and bringing up the ZCS application for the defined cluster services may take some time.
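The four start commands, repeated node by node, can be expressed as a nested loop in which the outer loop preserves the required order: every node finishes one daemon before any node starts the next. A sketch that only prints the commands, assuming the guide's example node names and root SSH access:

```shell
# Print the RHCS daemon start sequence: all nodes start ccsd, then cman,
# then fenced, then rgmanager. Node names are this guide's examples.
NODES="node1.example.com node2.example.com node3.example.com"
for daemon in ccsd cman fenced rgmanager; do
  for node in $NODES; do
    echo ssh root@"$node" service "$daemon" start
  done
done
```

In practice you would run each command interactively in the per-node sessions described above, so you can watch /var/log/messages for errors before moving on.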

After all commands have been issued on all nodes, run the clustat command (clustat -il) on one node to verify that all cluster services have been started.

Continue to run the clustat command until it reports that all nodes have joined the cluster and all services have started.

Because nodes may not join the cluster in sequence, some services may initially start on nodes other than their configured preferred nodes. This is expected; the services are eventually restarted on their configured preferred nodes.

When clustat shows all services are running on the preferred nodes, the cluster configuration is complete.


What to do if a cluster service does not relocate to its preferred node

If a service does not relocate to its preferred node after several minutes, you can issue Red Hat Cluster Suite utility commands to correct the situation manually.

Note: Services failing to start on their preferred nodes is usually an issue that occurs only the first time the cluster is started.

For each cluster service that is not running on the correct preferred node, run clusvcadm -d <cluster service name>, as root on one of the cluster nodes.

This disables the service by stopping all associated Zimbra processes, releasing the service IP address, and unmounting the service’s SAN volumes.

To enable a disabled service, run clusvcadm -e <service name> -m <node name>. This command can be run on any cluster node. It instructs the specified node to mount the SAN volumes of the service, bring up the service IP address, and start the Zimbra processes.
For example, to disable the clusterA.example.com service and then re-enable it on node1.example.com:

[root@node1 ~]# clusvcadm -d clusterA.example.com

[root@node1 ~]# clusvcadm -e clusterA.example.com -m node1.example.com

Testing the Cluster Setup

To perform a quick test to see if failover works:

1. Log in to the remote power switch and turn off an active mailbox node.

2. To watch the standby node take over the failed service, run clustat on one of the other nodes.

3. Run tail -f /var/log/messages. You will see the cluster become aware of the failed node, I/O fence it, and bring up the failed service on a standby node.

Configuring ZCS in VCS

Each ZCS instance is represented in VCS as a service group with one public IP address, one or more file systems, and an application agent. Use the VCS configuration interface to configure ZCS as you would any other clustered application, with the following caveat:

Two ZCS instances cannot run on the same host simultaneously. Because VCS allows this by default, additional configuration is needed to prevent it, using the VCS concepts of limits and prerequisites.



Set a prerequisite called ZCSInstance to 1 on each ZCS service group, and set a limit called ZCSInstance to 1 on each host. Running a ZCS instance then costs "1" against each host's capacity of "1", so no host can run two or more ZCS instances.
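As a rough sketch of how this might look in a VCS main.cf: Limits and Prerequisites are standard VCS attribute names, but the system and group names below are hypothetical, so adapt them to your configuration.

```
system node1 (
    Limits = { ZCSInstance = 1 }
    )

group clusterA_sg (
    SystemList = { node1 = 0, node2 = 1 }
    Prerequisites = { ZCSInstance = 1 }
    )
```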

An application agent script must be defined for each ZCS instance; its name should be the ZCS instance name. Set the StartProgram property to:

/opt/zimbra-cluster/bin/zmcluctl start cluster.example.com

Set the StopProgram property to:

/opt/zimbra-cluster/bin/zmcluctl stop cluster.example.com

Set the MonitorProgram property to:

/opt/zimbra-cluster/bin/zmcluctl status cluster.example.com

Note: "cluster.example.com" should be replaced with the actual ZCS instance name.
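The three property values differ only in the action argument, so they can be generated mechanically. A sketch that prints the value for each property, using a hypothetical instance name:

```shell
# Print the StartProgram, StopProgram, and MonitorProgram values for a
# given ZCS instance. INSTANCE is a placeholder; use your instance name.
INSTANCE=clusterA.example.com
for action in start stop status; do
  echo "/opt/zimbra-cluster/bin/zmcluctl $action $INSTANCE"
done
```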

Define dependencies in the service group to ensure that all file systems for the ZCS instance are mounted in the correct order and the public IP address is brought up before the application is started.

View Zimbra Cluster Status

Go to the Zimbra administration console to check the status of the Zimbra cluster. The Server Status page shows the cluster server, the node, the services running on the cluster server, and the time the cluster was last checked. Standby nodes are displayed as standby. If a service is not running, it is shown as disabled. Managing and maintaining the Zimbra cluster is done through the Red Hat Cluster Manager.
