8/19/2019 Red Hat Enterprise Linux 7 - High Availability Add-On Reference
Red Hat Enterprise Linux 7
High Availability Add-On Reference
Reference Document for the High Availability Add-On for Red Hat
Enterprise Linux 7
Legal Notice
Copyright © 2015 Red Hat, Inc. and others.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
Java ® is a registered trademark of Oracle and/or its affiliates.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All o ther trademarks are the property of their respective owners.
Abstract
Red Hat High Availability Add-On Reference provides reference information about installing, configuring, and managing the Red Hat High Availability Add-On for Red Hat Enterprise Linux 7.
http://creativecommons.org/licenses/by-sa/3.0/
Table of Contents
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
1.1. New and Changed Features
1.2. Installing Pacemaker configuration tools
1.3. Configuring the iptables Firewall to Allow Cluster Components
1.4. The Cluster and Pacemaker Configuration Files
Chapter 2. The pcs Command Line Interface
2.1. The pcs Commands
2.2. pcs Usage Help Display
2.3. Viewing the Raw Cluster Configuration
2.4. Saving a Configuration Change to a File
2.5. Displaying Status
2.6. Displaying the Full Cluster Configuration
2.7. Displaying The Current pcs Version
2.8. Backing Up and Restoring a Cluster Configuration
Chapter 3. Cluster Creation and Administration
3.1. Cluster Creation
3.2. Managing Cluster Nodes
3.3. Setting User Permissions
3.4. Removing the Cluster Configuration
3.5. Displaying Cluster Status
Chapter 4. Fencing: Configuring STONITH
4.1. Available STONITH (Fencing) Agents
4.2. General Properties of Fencing Devices
4.3. Displaying Device-Specific Fencing Options
4.4. Creating a Fencing Device
4.5. Configuring Storage-Based Fence Devices with unfencing
4.6. Displaying Fencing Devices
4.7. Modifying and Deleting Fencing Devices
4.8. Managing Nodes with Fence Devices
4.9. Additional Fencing Configuration Options
4.10. Configuring Fencing Levels
4.11. Configuring Fencing for Redundant Power Supplies
Chapter 5. Configuring Cluster Resources
5.1. Resource Creation
5.2. Resource Properties
5.3. Resource-Specific Parameters
5.4. Resource Meta Options
5.5. Resource Groups
5.6. Resource Operations
5.7. Displaying Configured Resources
5.8. Modifying Resource Parameters
5.9. Multiple Monitoring Operations
5.10. Enabling and Disabling Cluster Resources
5.11. Cluster Resources Cleanup
Chapter 6. Resource Constraints
6.1. Location Constraints
6.2. Order Constraints
6.3. Colocation of Resources
6.4. Displaying Constraints
Chapter 7. Managing Cluster Resources
7.1. Manually Moving Resources Around the Cluster
7.2. Moving Resources Due to Failure
7.3. Moving Resources Due to Connectivity Changes
7.4. Enabling, Disabling, and Banning Cluster Resources
7.5. Disabling a Monitor Operation
7.6. Managed Resources
Chapter 8. Advanced Resource types
8.1. Resource Clones
8.2. Multi-State Resources: Resources That Have Multiple Modes
8.3. Event Notification with Monitoring Resources
8.4. The pacemaker_remote Service
Chapter 9. Pacemaker Rules
9.1. Node Attribute Expressions
9.2. Time/Date Based Expressions
9.3. Date Specifications
9.4. Durations
9.5. Configuring Rules with pcs
9.6. Sample Time Based Expressions
9.7. Using Rules to Determine Resource Location
Chapter 10. Pacemaker Cluster Properties
10.1. Summary of Cluster Properties and Options
10.2. Setting and Removing Cluster Properties
10.3. Querying Cluster Property Settings
Chapter 11. The pcsd Web UI
11.1. pcsd Web UI Setup
11.2. Managing Clusters with the pcsd Web UI
11.3. Cluster Nodes
11.4. Fence Devices
11.5. Cluster Resources
11.6. Cluster Properties
Appendix A. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
A.1. Cluster Creation with rgmanager and with Pacemaker
A.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7
Appendix B. Revision History
Index
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
This document provides descriptions of the options and features that the Red Hat High Availability Add-On using Pacemaker supports. For a step-by-step basic configuration example, refer to Red Hat High Availability Add-On Administration.
You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI interface.
1.1. New and Changed Features
This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7.
1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes.
The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 5.11, “Cluster Resources Cleanup”.
You can specify a lifetime parameter for the pcs resource move command, as documented in Section 7.1, “Manually Moving Resources Around the Cluster”.
As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 3.3, “Setting User Permissions”.
Section 6.2.3, “Ordered Resource Sets” and Section 6.3, “Colocation of Resources” have been extensively updated and clarified.
Section 5.1, “Resource Creation” documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically.
Section 3.1.6, “Configuring Quorum Options” documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum.
Section 5.1, “Resource Creation” documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering.
As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 2.8, “Backing Up and Restoring a Cluster Configuration”.
Small clarifications have been made throughout this document.
1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2
Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes.
You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 7.1.2, “Moving a Resource to its Preferred Node”.
Section 8.3, “Event Notification with Monitoring Resources” has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications.
When configuring fencing for redundant power supplies, you are now only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 4.11, “Configuring Fencing for Redundant Power Supplies”.
This document now provides a procedure for adding a node to an existing cluster in Section 3.2.3, “Adding Cluster Nodes”.
The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 6.1, “Location Constraint Options”.
Small clarifications and corrections have been made throughout this document.
1.2. Installing Pacemaker configuration tools
You can use the following yum install command to install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.

# yum install pcs fence-agents-all

Alternately, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command.

# yum install pcs fence-agents-model

The following command displays a listing of the available fence agents.

# rpm -q -a | grep fence
fence-agents-rhevm-4.0.2-3.el7.x86_64
fence-agents-ilo-mp-4.0.2-3.el7.x86_64
fence-agents-ipmilan-4.0.2-3.el7.x86_64
...

The lvm2-cluster and gfs2-utils packages are part of the Resilient Storage channel. You can install them, as needed, with the following command.

# yum install lvm2-cluster gfs2-utils
Warning
After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors.
1.3. Configuring the iptables Firewall to Allow Cluster Components
The Red Hat High Availability Add-On requires that the following ports be enabled for incoming traffic:
For TCP: Ports 2224, 3121, 21064
For UDP: Port 5405
For DLM (if using the DLM lock manager with clvm/GFS2): Port 21064
You can enable these ports by means of the firewalld daemon by executing the following commands.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
1.4. The Cluster and Pacemaker Configuration Files
The configuration files for the Red Hat High Availability Add-On are corosync.conf and cib.xml. Do not edit these files directly; use the pcs or pcsd interface instead.
The corosync.conf file provides the cluster parameters used by corosync, the cluster manager that Pacemaker is built on.
The cib.xml file is an XML file that represents both the cluster's configuration and the current state of all resources in the cluster. This file is used by Pacemaker's Cluster Information Base (CIB). The contents of the CIB are automatically kept in sync across the entire cluster.
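Because cib.xml is ordinary XML, its overall shape can be illustrated with standard XML tooling. The fragment below is a heavily simplified, hypothetical skeleton shown only to make the configuration/status split concrete; the real CIB schema is much richer and the file is managed by Pacemaker, never edited by hand.

```python
# Hypothetical, heavily simplified CIB-like skeleton -- for illustration
# only; the real cib.xml is managed by Pacemaker and validated against
# Pacemaker's own schema.
import xml.etree.ElementTree as ET

cib_xml = """
<cib>
  <configuration>
    <nodes/>
    <resources/>
    <constraints/>
  </configuration>
  <status/>
</cib>
"""

root = ET.fromstring(cib_xml)
# The top level pairs the cluster's configuration with its current
# state, mirroring the description above.
print([child.tag for child in root])  # → ['configuration', 'status']
```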
Chapter 2. The pcs Command Line Interface
The pcs command line interface controls and configures corosync and Pacemaker by providing an
interface to the corosync.conf and cib.xml files.
The general format of the pcs command is as follows.
pcs [-f file] [-h] [commands]...
2.1. The pcs Commands
The pcs commands are as follows.
cluster
Configure cluster options and nodes. For information on the pcs cluster command, see
Chapter 3, Cluster Creation and Administration.
resource
Create and manage cluster resources. For information on the pcs resource command, see
Chapter 5, Configuring Cluster Resources, Chapter 7, Managing Cluster Resources, and Chapter 8,
Advanced Resource types.
stonith
Configure fence devices for use with Pacemaker. For information on the pcs stonith command,
see Chapter 4, Fencing: Configuring STONITH.
constraint
Manage resource constraints. For information on the pcs constraint command, see Chapter 6,
Resource Constraints.
property
Set Pacemaker properties. For information on setting properties with the pcs property
command, see Chapter 10, Pacemaker Cluster Properties.
status
View current cluster and resource status. For information on the pcs status command, see Section 2.5, “Displaying Status”.
config
Display complete cluster configuration in user-readable form. For information on the pcs config
command, see Section 2.6, “Displaying the Full Cluster Configuration”.
2.2. pcs Usage Help Display
You can use the -h option of pcs to display the parameters of a pcs command and a description of those parameters. For example, the following command displays the parameters of the pcs resource command. Only a portion of the output is shown.

# pcs resource -h
Usage: pcs resource [commands]...
Manage pacemaker resources
Commands:
    show [resource id] [--all]
        Show all currently configured resources or if a resource is specified
        show the options for the configured resource. If --all is specified
        resource options will be displayed
    start
        Start resource specified by resource_id
...
2.3. Viewing the Raw Cluster Configuration
Although you should not edit the cluster configuration file directly, you can view the raw cluster configuration with the pcs cluster cib command.
You can save the raw cluster configuration to a specified file with the pcs cluster cib filename command as described in Section 2.4, “Saving a Configuration Change to a File”.
2.4. Saving a Configuration Change to a File
When using the pcs command, you can use the -f option to save a configuration change to a file without affecting the active CIB.
If you have previously configured a cluster and there is already an active CIB, you use the following command to save the raw xml to a file.

pcs cluster cib filename

For example, the following command saves the raw xml from the CIB into a file named testfile.

pcs cluster cib testfile

The following command creates a resource in the file testfile1 but does not add that resource to the currently running cluster configuration.

# pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s

You can push the current content of testfile to the CIB with the following command.

pcs cluster cib-push filename
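The -f workflow above follows a general edit-offline-then-publish pattern. Purely as an analogy (the sketch below uses plain file operations, not pcs itself, and the file names and contents are invented): the live configuration stays untouched until the staged copy is pushed back.

```python
# Analogy only, not pcs: take a copy of the live configuration, stage a
# change against the copy, then publish the copy back. The live file is
# unchanged until the final "push" step.
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
live = os.path.join(workdir, "cib.xml")        # stands in for the active CIB
testfile = os.path.join(workdir, "testfile")   # stands in for `pcs cluster cib testfile`

with open(live, "w") as f:
    f.write("<resources/>")

shutil.copy(live, testfile)                    # save the raw config to a file
with open(testfile, "w") as f:                 # stage a change offline (`pcs -f testfile ...`)
    f.write("<resources><primitive id='VirtualIP'/></resources>")

print(open(live).read())      # → <resources/>  (live config still untouched)
shutil.copy(testfile, live)   # stands in for `pcs cluster cib-push testfile`
print(open(live).read())      # → <resources><primitive id='VirtualIP'/></resources>
```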
2.5. Displaying Status
You can display the status of the cluster and the cluster resources with the following command.

pcs status commands
If you do not specify a commands parameter, this command displays all information about the cluster and the resources. You can display the status of only particular cluster components by specifying resources, groups, cluster, nodes, or pcsd.
2.6. Displaying the Full Cluster Configuration
Use the following command to display the full current cluster configuration.
pcs config
2.7. Displaying The Current pcs Version
The following command displays the current version of pcs that is running.
pcs --version
2.8. Backing Up and Restoring a Cluster Configuration
As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball with the following command. If you do not specify a file name, the standard output will be used.

pcs config backup filename

Use the following command to restore the cluster configuration files on all nodes from the backup. If you do not specify a file name, the standard input will be used. Specifying the --local option restores only the files on the current node.

pcs config restore [--local] [filename]
Chapter 3. Cluster Creation and Administration
This chapter describes how to perform basic cluster administration with Pacemaker, including creating the cluster, managing the cluster components, and displaying cluster status.
3.1. Cluster Creation
To create a running cluster, perform the following steps:
1. Start the pcsd daemon on each node in the cluster.
2. Authenticate the nodes that will constitute the cluster.
3. Configure and sync the cluster nodes.
4. Start cluster services on the cluster nodes.
The following sections describe the commands that you use to perform these steps.
3.1.1. Starting the pcsd daemon
The following commands start the pcsd service and enable pcsd at system start. These commands should be run on each node in the cluster.

# systemctl start pcsd.service
# systemctl enable pcsd.service
3.1.2. Authenticating the Cluster Nodes
The following command authenticates pcs to the pcs daemon on the nodes in the cluster.
The username for the pcs administrator must be hacluster on every node. It is recommended that the password for user hacluster be the same on each node.
If you do not specify username or password, the system will prompt you for those parameters for each node when you execute the command.
If you do not specify any nodes, this command will authenticate pcs on the nodes that are specified with a pcs cluster setup command, if you have previously executed that command.

pcs cluster auth [node] [...] [-u username] [-p password]

For example, the following command authenticates user hacluster on z1.example.com for both of the nodes in the cluster that consist of z1.example.com and z2.example.com. This command prompts for the password for user hacluster on the cluster nodes.

[root@z1 ~]# pcs cluster auth z1.example.com z2.example.com
Username: hacluster
Password:
z1.example.com: Authorized
z2.example.com: Authorized

Authorization tokens are stored in the file ~/.pcs/tokens (or /var/lib/pcsd/tokens).
3.1.3. Configuring and Starting the Cluster Nodes
The following command configures the cluster configuration file and syncs the configuration to the specified nodes.
If you specify the --start option, the command will also start the cluster services on the specified nodes. If necessary, you can also start the cluster services with a separate pcs cluster start command.
When you create a cluster with the pcs cluster setup --start command or when you start cluster services with the pcs cluster start command, there may be a slight delay before the cluster is up and running. Before performing any subsequent actions on the cluster and its configuration, it is recommended that you use the pcs cluster status command to be sure that the cluster is up and running.
If you specify the --local option, the command will perform changes on the local node only.

pcs cluster setup [--start] [--local] --name cluster_name node1 [node2] [...]

The following command starts cluster services on the specified node or nodes.
If you specify the --all option, the command starts cluster services on all nodes.
If you do not specify any nodes, cluster services are started on the local node only.

pcs cluster start [--all] [node] [...]
3.1.4. Configuring Timeout Values for a Cluster
When you create a cluster with the pcs cluster setup command, timeout values for the cluster are set to default values that should be suitable for most cluster configurations. If your system requires different timeout values, however, you can modify these values with the pcs cluster setup options summarized in Table 3.1, “Timeout Options”.
Table 3.1. Timeout Options
--token timeout
    Sets time in milliseconds until a token loss is declared after not receiving a token (default 1000 ms)
--join timeout
    Sets time in milliseconds to wait for join messages (default 50 ms)
--consensus timeout
    Sets time in milliseconds to wait for consensus to be achieved before starting a new round of membership configuration (default 1200 ms)
--miss_count_const count
    Sets the maximum number of times on receipt of a token a message is checked for retransmission before a retransmission occurs (default 5 messages)
--fail_recv_const failures
    Specifies how many rotations of the token without receiving any messages when messages should be received may occur before a new configuration is formed (default 2500 failures)
For example, the following command creates the cluster new_cluster and sets the token timeout value to 10000 ms (10 seconds) and the join timeout value to 100 ms.

# pcs cluster setup --name new_cluster nodeA nodeB --token 10000 --join 100
3.1.5. Configuring Redundant Ring Protocol (RRP)
When you create a cluster with the pcs cluster setup command, you can configure a cluster with Redundant Ring Protocol by specifying both interfaces for each node. When using the default udpu transport, when you specify the cluster nodes you specify the ring 0 address followed by a ',', then the ring 1 address.
For example, the following command configures a cluster named my_rrp_cluster with two nodes, node A and node B. Node A has two interfaces, nodeA-0 and nodeA-1. Node B has two interfaces, nodeB-0 and nodeB-1. To configure these nodes as a cluster using RRP, execute the following command.

# pcs cluster setup --name my_rrp_cluster nodeA-0,nodeA-1 nodeB-0,nodeB-1

For information on configuring RRP in a cluster that uses udp transport, see the help screen for the pcs cluster setup command.
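The ring 0 / ring 1 syntax above is simply a comma-separated address pair per node. As a sketch of that format (the helper below is invented for illustration and is not part of pcs):

```python
# Invented helper, for illustration only: split the "ring0addr,ring1addr"
# node specification used with the default udpu transport when RRP is
# configured.
def split_ring_addresses(node_spec):
    ring0, sep, ring1 = node_spec.partition(",")
    # A node given without a comma has only a ring 0 address.
    return (ring0, ring1) if sep else (ring0, None)

print(split_ring_addresses("nodeA-0,nodeA-1"))  # → ('nodeA-0', 'nodeA-1')
print(split_ring_addresses("nodeC"))            # → ('nodeC', None)
```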
3.1.6. Configuring Quorum Options
A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service to avoid split-brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information on the configuration and operation of the votequorum service, see the votequorum(5) man page.
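The majority rule can be made concrete with a small sketch. This is illustrative only, not how votequorum is implemented: with one vote per node, a partition may proceed only if it holds a strict majority of the total votes.

```python
# Illustrative only: with one vote per node, a partition is quorate only
# when it holds a strict majority of the cluster's total votes.
def has_quorum(partition_votes, total_votes):
    return partition_votes > total_votes // 2

# In a 5-node cluster, a 3-node partition is quorate and a 2-node
# partition is not. In a 4-node cluster, an even 2/2 split leaves
# neither side quorate, which is why tie-breaker options exist.
print(has_quorum(3, 5))  # → True
print(has_quorum(2, 5))  # → False
print(has_quorum(2, 4))  # → False
```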
In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the following command to prevent the cluster from waiting for all nodes when establishing quorum.

Note
This command should be used with extreme caution. Before issuing this command, it is imperative that you ensure that nodes that are not currently in the cluster are switched off.

# pcs cluster quorum unblock

There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 3.2, “Quorum Options” summarizes these options.
Table 3.2. Quorum Options
--wait_for_all
    When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time.
--auto_tie_breaker
    When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate.
--last_man_standing
    When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all and you must specify last_man_standing_window when you enable this option.
--last_man_standing_window
    The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes.
For further information about configuring and using these options, see the votequorum(5) man page.
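The auto_tie_breaker behavior described in Table 3.2 can be sketched as follows. This is illustrative only: in an exact 50/50 split, the partition in contact with the lowest node ID (the default when auto_tie_breaker_node is not set) remains quorate.

```python
# Illustrative only: model the default auto_tie_breaker decision in an
# exact 50/50 split -- the partition containing the lowest node ID
# remains quorate; every other partition becomes inquorate.
def quorate_partition(partitions):
    # partitions: list of sets of node IDs after an even split
    return min(partitions, key=min)

# A four-node cluster split into {1, 3} and {2, 4}: the side holding
# node 1 stays quorate.
print(quorate_partition([{1, 3}, {2, 4}]))  # → {1, 3}
```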
3.2. Managing Cluster Nodes
The following sections describe the commands you use to manage cluster nodes, including commands to start and stop cluster services and to add and remove cluster nodes.
3.2.1. Stopping Cluster Services
The following command stops cluster services on the specified node or nodes. As with the pcs cluster start command, the --all option stops cluster services on all nodes and if you do not specify any nodes, cluster services are stopped on the local node only.

pcs cluster stop [--all] [node] [...]

You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command.

pcs cluster kill
3.2.2. Enabling and Disabling Cluster Services
Use the following command to configure the cluster services to run on startup on the specified node or nodes.
If you specify the --all option, the command enables cluster services on all nodes.
If you do not specify any nodes, cluster services are enabled on the local node only.

pcs cluster enable [--all] [node] [...]

Use the following command to configure the cluster services not to run on startup on the specified node or nodes.
If you specify the --all option, the command disables cluster services on all nodes.
If you do not specify any nodes, cluster services are disabled on the local node only.

pcs cluster disable [--all] [node] [...]
3.2.3. Adding Cluster Nodes
Use the follo wing p rocedure to add a new node to an existing cluster. In this example, the existing
cluster nodes are clusternode-01.example.com, clusternode-02.example.com, and
clusternode-03.example.com. The new node is newnode.example.com .
On the new node to add to the cluster, perform the following tasks.
1. Install the cluster packag es;
[root@newnode ~]# yum install -y pcs fence-agen ts-all
2. If you are runn ing the firewalld daemon, execute the follo wing co mmands to enable the
ports tha t are required by the Red Hat High Availabili ty Add-On.
# firewall-cmd --p ermanent --ad d-service=hig h-availability
# firewall-cmd -- add- service=hig h-availability
3. Set a password for the user ID hacluster . It is recommended that you use the same
password for each node in the cluster.
[root@newnode ~]# passwd hacluster
Changing password for user hacluster.
New password :
Retype new password :
passwd: all authentication tokens updated successfully.
4. Execute the following command s to start the pcsd service and to enab le pcsd at system
start.
# systemctl st art p csd.service
# systemctl enab le pcsd .service
On a node in the existing c luster, perform the following tasks.
1. Authenticate user hacluster on the new cluster node.
[root@clusternode-01 ~]# pcs cluster au th newnode.example.com
Username: hacluster
Password:
newnode.example.com: Authorized
2. Add the new nod e to the existing clu ster. This command a lso syncs the clus ter configu ration
file corosync.conf to all n odes in the cluster, including the new node you are addin g.
[root@clusternode-01 ~]# pcs cluster no de add newnode.example.com
On the new node to add to the cluster, perform the following tasks.
1. Authenticate user hacluster on the new node for all nodes in the cluster.
[root@newnode ~]# pcs cluster auth
Username: hacluster
Password:
clusternode-01.example.com: Authorized
clusternode-02.example.com: Authorized
clusternode-03.example.com: Authorized
newnode.example.com: Already authorized
2. Start and enable cluster services on the new node.
[root@newnode ~]# pcs cluster start
Starting Cluster...
[root@newnode ~]# pcs cluster enable
3. Ensure that you configure a fencing device for the new cluster node. For information on configuring fencing devices, see Chapter 4, Fencing: Configuring STONITH.
3.2.4. Removing Cluster Nodes
The following command shuts down the specified node and removes it from the cluster configuration file, corosync.conf, on all of the other nodes in the cluster. For information on removing all information about the cluster from the cluster nodes entirely, thereby destroying the cluster permanently, refer to Section 3.4, “Removing the Cluster Configuration”.
pcs cluster node remove node
3.2.5. Standby Mode
The following command puts the specified node into standby mode. The specified node is no longer able to host resources. Any resources currently active on the node will be moved to another node. If you specify the --all option, this command puts all nodes into standby mode.
You can use this command when updating a resource's packages. You can also use this command when testing a configuration, to simulate recovery without actually shutting down a node.
pcs cluster standby node | --all
The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all option, this command removes all nodes from standby mode.
pcs cluster unstandby node | --all
Note that when you execute the pcs cluster standby command, this adds constraints to the resources to prevent them from running on the indicated node. When you execute the pcs cluster unstandby command, this removes the constraints. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, refer to Chapter 6, Resource Constraints.
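As an illustration of the package-update workflow mentioned above, a typical maintenance sequence on one node might be wrapped in standby and unstandby as follows. This is a sketch to be run against a live cluster: the node name reuses the example from earlier in this chapter, and resource-package is a placeholder, not a real package name.

```shell
# Move resources off the node before maintenance.
pcs cluster standby clusternode-01.example.com

# Update the resource's packages while the node hosts nothing.
# "resource-package" is a placeholder for the package you maintain.
yum update -y resource-package

# Allow the node to host resources again. Whether resources move
# back depends on your constraints and resource-stickiness settings.
pcs cluster unstandby clusternode-01.example.com
```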
3.3. Setting User Permissions
By default, the root user and any user who is a member of the group haclient has full read/write access to the cluster configuration. As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs).
Setting permissions for local users is a two-step process:
1. Execute the pcs acl role create... command to create a role which defines the permissions for that role.
2. Assign the role you created to a user with the pcs acl user create command.
The following example procedure provides read-only access for a cluster configuration to a local
user named rouser.
1. This procedure requires that the user rouser exists on the local system and that the user rouser is a member of the group haclient.
# adduser rouser
# usermod -a -G haclient rouser
2. Enable Pacemaker ACLs with the enable-acl cluster property.
# pcs property set enable-acl=true --force
3. Create a role named read-only with read-only permissions for the cib.
# pcs acl role create read-only description="Read access to cluster" read xpath /cib
4. Create the user rouser in the pcs ACL system and assign that user the read-only role.
# pcs acl user create rouser read-only
5. View the current ACLs.
# pcs acl
User: rouser
 Roles: read-only
Role: read-only
 Description: Read access to cluster
 Permission: read xpath /cib (read-only-read)
The following example procedure provides write access for a cluster configuration to a local user named wuser.
1. This procedure requires that the user wuser exists on the local system and that the user wuser is a member of the group haclient.
# adduser wuser
# usermod -a -G haclient wuser
2. Enable Pacemaker ACLs with the enable-acl cluster property.
# pcs property set enable-acl=true --force
3. Create a role named write-access with write permissions for the cib.
# pcs acl role create write-access description="Full access" write xpath /cib
4. Create the user wuser in the pcs ACL system and assign that user the write-access role.
# pcs acl user create wuser write-access
5. View the current ACLs.
# pcs acl
User: rouser
 Roles: read-only
User: wuser
 Roles: write-access
Role: read-only
 Description: Read access to cluster
 Permission: read xpath /cib (read-only-read)
Role: write-access
 Description: Full Access
 Permission: write xpath /cib (write-access-write)
For further information about cluster ACLs, see the help screen for the pcs acl command.
3.4. Removing the Cluster Configuration
To remove all cluster configuration files and stop all cluster services, thus permanently destroying a cluster, use the following command.
Warning
This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster.
pcs cluster destroy
3.5. Displaying Cluster Status
The following command displays the current status of the cluster and the cluster resources.
pcs status
You can display a subset of information about the current status of the cluster with the following commands.
The following command displays the status of the cluster, but not the cluster resources.
pcs cluster status
The following command displays the status of the cluster resources.
pcs status resources
Chapter 4. Fencing: Configuring STONITH
STONITH is an acronym for Shoot-The-Other-Node-In-The-Head, and it protects your data from being corrupted by rogue nodes or concurrent access.
Just because a node is unresponsive does not mean it is not accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH, so you can be certain that the node is truly offline before allowing the data to be accessed from another node.
STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere.
4.1. Available STONITH (Fencing) Agents
Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match the filter.
pcs stonith list [filter]
4.2. General Properties of Fencing Devices
Note
To disable a fencing device/resource, you can set the target-role as you would for a normal resource.
Note
To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource.
Table 4.1, “General Properties of Fencing Devices” describes the general properties you can set for fencing devices. Refer to Section 4.3, “Displaying Device-Specific Fencing Options” for information on fencing properties you can set for specific fencing devices.
Note
For information on more advanced fencing configuration properties, refer to Section 4.9, “Additional Fencing Configuration Options”.
Table 4.1. General Properties of Fencing Devices
priority (integer, default 0): The priority of the stonith resource. Devices are tried in order of highest priority to lowest.
pcmk_host_map (string): A mapping of host names to port numbers for devices that do not support host names. For example: node1:1;node2:2,3 tells the cluster to use port 1 for node1 and ports 2 and 3 for node2.
pcmk_host_list (string): A list of machines controlled by this device (optional unless pcmk_host_check=static-list).
pcmk_host_check (string, default dynamic-list): How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none (assume every device can fence every machine).
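The pcmk_host_map syntax described above (host entries separated by semicolons, a host and its ports separated by a colon, multiple ports separated by commas) can be generated mechanically. The following is a small illustrative POSIX shell sketch, not part of pcs; the host names and port numbers are hypothetical examples.

```shell
# Build a pcmk_host_map value of the form "host:ports;host:ports"
# from per-host port variables. Hosts and ports are example values.
hosts="node1 node2"
ports_node1="1"
ports_node2="2,3"

map=""
for h in $hosts; do
    # Indirectly read this host's port list from ports_<host>.
    eval p=\$ports_$h
    # Append, inserting a semicolon only between entries.
    map="${map:+$map;}$h:$p"
done

echo "$map"    # node1:1;node2:2,3
```

The resulting string can then be passed as pcmk_host_map="$map" when creating the fencing device.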
4.3. Displaying Device-Specific Fencing Options
Use the following command to view the options for the specified STONITH agent.
pcs stonith describe stonith_agent
For example, the following command displays the options for the fence agent for APC over telnet/SSH.
# pcs stonith describe fence_apc
Stonith options for: fence_apc
  ipaddr (required): IP Address or Hostname
  login (required): Login Name
  passwd: Login password or passphrase
  passwd_script: Script to retrieve password
  cmd_prompt: Force command prompt
  secure: SSH connection
  port (required): Physical plug number or name of virtual machine
  identity_file: Identity file for ssh
  switch: Physical switch number on device
  inet4_only: Forces agent to use IPv4 addresses only
  inet6_only: Forces agent to use IPv6 addresses only
  ipport: TCP port to use for connection with device
  action (required): Fencing Action
  verbose: Verbose mode
  debug: Write debug information to given file
  version: Display version information and exit
  help: Display help and exit
  separator: Separator for CSV created by operation list
  power_timeout: Test X seconds for status change after ON/OFF
  shell_timeout: Wait X seconds for cmd prompt after issuing command
  login_timeout: Wait X seconds for cmd prompt after login
  power_wait: Wait X seconds after issuing ON/OFF
  delay: Wait X seconds before fencing is started
  retry_on: Count of attempts to retry power on
4.4. Creating a Fencing Device
The following command creates a stonith device.
pcs stonith create stonith_id stonith_device_type [stonith_device_options]
# pcs stonith create MyStonith fence_virt pcmk_host_list=f1 op monitor interval=30s
If you use a single fence device for several nodes, using a different port for each node, you do not need to create a device separately for each node. Instead you can use the pcmk_host_map option to define which port goes to which node. For example, the following command creates a single fencing device called myapc-west-13 that uses an APC power switch called west-apc and uses port 15 for node west-13.
# pcs stonith create myapc-west-13 fence_apc pcmk_host_list="west-13" ipaddr="west-apc" login="apc" passwd="apc" port="15"
The following example, however, uses the APC power switch named west-apc to fence nodes west-13 using port 15, west-14 using port 17, west-15 using port 18, and west-16 using port 19.
# pcs stonith create myapc fence_apc pcmk_host_list="west-13,west-14,west-15,west-16" pcmk_host_map="west-13:15;west-14:17;west-15:18;west-16:19" ipaddr="west-apc" login="apc" passwd="apc"
4.5. Configuring Storage-Based Fence Devices with unfencing
When creating a SAN/storage fence device (that is, one that uses a non-power based fencing agent), you must set the meta option provides=unfencing when creating the stonith device. This ensures that a fenced node is unfenced before the node is rebooted and the cluster services are started on the node.
Setting the provides=unfencing meta option is not necessary when configuring a power-based fence device, since the device itself is providing power to the node in order for it to boot (and attempt to rejoin the cluster). The act of booting in this case implies that unfencing occurred.
The following command configures a stonith device named my-scsi-shooter that uses the fence_scsi fence agent, enabling unfencing for the device.
pcs stonith create my-scsi-shooter fence_scsi devices=/dev/sda meta provides=unfencing
4.6. Displaying Fencing Devices
The following command shows all currently configured fencing devices. If a stonith_id is specified, the command shows the options for that configured stonith device only. If the --full option is specified, all configured stonith options are displayed.
pcs stonith show [stonith_id] [--full]
4.7. Modifying and Deleting Fencing Devices
Use the following command to modify or add options to a currently configured fencing device.
pcs stonith update stonith_id [stonith_device_options]
Use the following command to remove a fencing device from the current configuration.
pcs stonith delete stonith_id
4.8. Managing Nodes with Fence Devices
You can fence a node manually with the following command. If you specify --off, this will use the off API call to stonith, which will turn the node off instead of rebooting it.
pcs stonith fence node [--off]
You can confirm whether a specified node is currently powered off with the following command.
Note
If the node you specify is still running the cluster software or services normally controlled by the cluster, data corruption/cluster failure will occur.
pcs stonith confirm node
4.9. Additional Fencing Configuration Options
Table 4.2, “Advanced Properties of Fencing Devices” summarizes additional properties you can set for fencing devices. Note that these properties are for advanced use only.
Table 4.2. Advanced Properties of Fencing Devices
pcmk_host_argument (string, default port): An alternate parameter to supply instead of port. Some devices do not support the standard port parameter or may provide additional ones. Use this to specify an alternate, device-specific parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters.
pcmk_reboot_action (string, default reboot): An alternate command to run instead of reboot. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command that implements the reboot action.
pcmk_reboot_timeout (time, default 60s): Specify an alternate timeout to use for reboot actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific timeout for reboot actions.
pcmk_reboot_retries (integer, default 2): The maximum number of times to retry the reboot command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker retries reboot actions before giving up.
pcmk_off_action (string, default off): An alternate command to run instead of off. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command that implements the off action.
pcmk_off_timeout (time, default 60s): Specify an alternate timeout to use for off actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific timeout for off actions.
pcmk_off_retries (integer, default 2): The maximum number of times to retry the off command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker retries off actions before giving up.
pcmk_list_action (string, default list): An alternate command to run instead of list. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command that implements the list action.
pcmk_list_timeout (time, default 60s): Specify an alternate timeout to use for list actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific timeout for list actions.
pcmk_list_retries (integer, default 2): The maximum number of times to retry the list command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker retries list actions before giving up.
pcmk_monitor_action (string, default monitor): An alternate command to run instead of monitor. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command that implements the monitor action.
pcmk_monitor_timeout (time, default 60s): Specify an alternate timeout to use for monitor actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific timeout for monitor actions.
pcmk_monitor_retries (integer, default 2): The maximum number of times to retry the monitor command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker retries monitor actions before giving up.
pcmk_status_action (string, default status): An alternate command to run instead of status. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command that implements the status action.
pcmk_status_timeout (time, default 60s): Specify an alternate timeout to use for status actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific timeout for status actions.
pcmk_status_retries (integer, default 2): The maximum number of times to retry the status command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker retries status actions before giving up.
4.10. Configuring Fencing Levels
Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing-topology section in the configuration.
Each level is attempted in ascending numeric order, starting at 1.
If a device fails, processing terminates for the current level. No further devices in that level are exercised, and the next level is attempted instead.
If all devices are successfully fenced, then that level has succeeded and no other levels are tried.
The operation is finished when a level has passed (success), or all levels have been attempted (failed).
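The level-processing rules above can be sketched as a short shell simulation. This is purely illustrative, not pcs behavior: the device names and the result_* variables are stand-ins for real fencing outcomes.

```shell
# Simulate fencing-topology processing: levels are tried in
# ascending order; a level succeeds only if every device in it
# succeeds. Device names and results are stand-in values.
level_1_devices="my_ilo"
level_2_devices="my_apc"
result_my_ilo=0   # assume the level-1 device fails
result_my_apc=1   # the level-2 device succeeds

for lvl in 1 2; do
    eval devices=\$level_${lvl}_devices
    level_ok=1
    for d in $devices; do
        eval ok=\$result_$d
        # One device failure fails the whole level; try the next level.
        [ "$ok" = 1 ] || { level_ok=0; break; }
    done
    if [ "$level_ok" = 1 ]; then
        echo "fenced at level $lvl"   # prints: fenced at level 2
        break
    fi
done
```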
Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level.
pcs stonith level add level node devices
The following command lists all of the fencing levels that are currently configured.
pcs stonith level
In the following example, there are two fence devices configured for node rh7-2: an ilo fence device called my_ilo and an apc fence device called my_apc. These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc. This example also shows the output of the pcs stonith level command after the levels are configured.
# pcs stonith level add 1 rh7-2 my_ilo
# pcs stonith level add 2 rh7-2 my_apc
# pcs stonith level
 Node: rh7-2
  Level 1 - my_ilo
  Level 2 - my_apc
The following command removes the fence level for the specified node and devices. If no nodes or devices are specified, then the fence level you specify is removed from all nodes.
pcs stonith level remove level [node_id] [stonith_id] ... [stonith_id]
The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared.
pcs stonith level clear [node|stonith_id(s)]
If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example.
# pcs stonith level clear dev_a,dev_b
The following command verifies that all fence devices and nodes specified in fence levels exist.
pcs stonith level verify
4.11. Configuring Fencing for Redundant Power Supplies
When configuring fencing for redundant power supplies, the cluster must ensure that when attempting to reboot a host, both power supplies are turned off before either power supply is turned back on.
If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them.
Prior to Red Hat Enterprise Linux 7.2, you needed to explicitly configure different versions of the devices which used either the 'on' or 'off' actions. Since Red Hat Enterprise Linux 7.2, it is only required to define each device once and to specify that both are required to fence the node, as in the following example.
# pcs stonith create apc1 fence_apc_snmp ipaddr=apc1.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map="node1.example.com:1,node2.example.com:2"
# pcs stonith create apc2 fence_apc_snmp ipaddr=apc2.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map="node1.example.com:1,node2.example.com:2"
# pcs stonith level add 1 node1.example.com apc1,apc2
# pcs stonith level add 1 node2.example.com apc1,apc2
Chapter 5. Configuring Cluster Resources
This chapter provides information on configuring resources in a cluster.
5.1. Resource Creation
Use the following command to create a cluster resource.
pcs resource create resource_id standard:provider:type|type [resource options]
[op operation_action operation_options [operation_action operation_options]...]
[meta meta_options...] [--clone clone_options |
--master master_options | --group group_name
[--before resource_id | --after resource_id]] [--disabled]
When you specify the --group option, the resource is added to the resource group named. If the group does not exist, this creates the group and adds this resource to the group. For information on resource groups, refer to Section 5.5, “Resource Groups”.
The --before and --after options specify the position of the added resource relative to a resource that already exists in a resource group.
Specifying the --disabled option indicates that the resource is not started automatically.
The following command creates a resource with the name VirtualIP of standard ocf, provider heartbeat, and type IPaddr2. The floating address of this resource is 192.168.0.120, and the system will check whether the resource is running every 30 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
Alternately, you can omit the standard and provider fields and use the following command. This will default to a standard of ocf and a provider of heartbeat.
# pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
Use the following command to delete a configured resource.
pcs resource delete resource_id
For example, the following command deletes an existing resource with a resource ID of VirtualIP.
# pcs resource delete VirtualIP
For information on the resource_id, standard, provider, and type fields of the pcs resource create command, refer to Section 5.2, “Resource Properties”.
For information on defining resource parameters for individual resources, refer to Section 5.3, “Resource-Specific Parameters”.
For information on defining resource meta options, which are used by the cluster to decide how a resource should behave, refer to Section 5.4, “Resource Meta Options”.
For information on defining the operations to perform on a resource, refer to Section 5.6, “Resource Operations”.
Specifying the --clone option creates a clone resource. Specifying the --master option creates a master/slave resource. For information on resource clones and resources with multiple modes, refer to Chapter 8, Advanced Resource Types.
5.2. Resource Properties
The properties that you define for a resource tell the cluster which script to use for the resource, where to find that script, and what standards it conforms to. Table 5.1, “Resource Properties” describes these properties.
Table 5.1. Resource Properties
resource_id: Your name for the resource.
standard: The standard the script conforms to. Allowed values: ocf, service, upstart, systemd, lsb, stonith.
type: The name of the Resource Agent you wish to use, for example IPaddr or Filesystem.
provider: The OCF spec allows multiple vendors to supply the same Resource Agent. Most of the agents shipped by Red Hat use heartbeat as the provider.
Table 5.2, “Commands to Display Resource Properties” summarizes the commands that display the available resource properties.
Table 5.2. Commands to Display Resource Properties
pcs resource list: Displays a list of all available resources.
pcs resource standards: Displays a list of available resource agent standards.
pcs resource providers: Displays a list of available resource agent providers.
pcs resource list string: Displays a list of available resources filtered by the specified string. You can use this command to display resources filtered by the name of a standard, a provider, or a type.
5.3. Resource-Specific Parameters
For any individual resource, you can use the following command to display the parameters you can set for that resource.
# pcs resource describe standard:provider:type|type
For example, the following command displays the parameters you can set for a resource of type LVM.
# pcs resource describe LVM
Resource options for: LVM
  volgrpname (required): The name of volume group.
  exclusive: If set, the volume group will be activated exclusively.
  partial_activation: If set, the volume group will be activated even only partial of the physical volumes available. It helps to set to true, when you are using mirroring logical volumes.
5.4. Resource Meta Options
In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. Table 5.3, “Resource Meta Options” describes these options.
Table 5.3. Resource Meta Options
priority (default 0): If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active.
target-role (default Started): What state should the cluster attempt to keep this resource in? Allowed values:
* Stopped - Force the resource to be stopped
* Started - Allow the resource to be started (in the case of multistate resources, they will not be promoted to master)
* Master - Allow the resource to be started and, if appropriate, promoted
is-managed (default true): Is the cluster allowed to start and stop the resource? Allowed values: true, false
resource-stickiness (default 0): Value to indicate how much the resource prefers to stay where it is.
requires (default Calculated): Indicates under what conditions the resource can be started. Defaults to fencing except under the conditions noted below. Possible values:
* nothing - The cluster can always start the resource.
* quorum - The cluster can only start this resource if a majority of the configured nodes are active. This is the default value if stonith-enabled is false or the resource's standard is stonith.
* fencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off.
* unfencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off, and only on nodes that have been unfenced. This is the default value if the provides=unfencing stonith meta option has been set for a fencing device. For information on the provides=unfencing stonith meta option, see Section 4.5, “Configuring Storage-Based Fence Devices with unfencing”.
migration-threshold (default INFINITY (disabled)): How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. For information on configuring the migration-threshold option, refer to Section 7.2, “Moving Resources Due to Failure”.
failure-timeout (default 0 (disabled)): Used in conjunction with the migration-threshold option, indicates how many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. For information on configuring the failure-timeout option, refer to Section 7.2, “Moving Resources Due to Failure”.
multiple-active (default stop_start): What should the cluster do if it ever finds the resource active on more than one node. Allowed values:
* block - mark the resource as unmanaged
* stop_only - stop all active instances and leave them that way
* stop_start - stop all active instances and start the resource in one location only
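The defaulting rules for the requires option described above can be sketched as a small shell helper. This illustrates the documented conditions only; it is not pcs code, and the precedence between the rules is an assumption of this sketch.

```shell
# Compute the documented default for the "requires" meta option.
# Arguments: stonith-enabled (true/false), the resource standard
# (e.g. ocf or stonith), and the fence device's provides option
# (unfencing or empty). Rule precedence here is an assumption.
default_requires() {
    stonith_enabled=$1
    standard=$2
    provides=$3
    if [ "$stonith_enabled" = false ] || [ "$standard" = stonith ]; then
        echo quorum
    elif [ "$provides" = unfencing ]; then
        echo unfencing
    else
        echo fencing
    fi
}

default_requires true ocf ""          # prints: fencing
default_requires false ocf ""         # prints: quorum
default_requires true ocf unfencing   # prints: unfencing
```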
To change the default value of a resource option, use the following command.
pcs resource defaults options
For example, the following command resets the default value of resource-stickiness to 100.
# pcs resource defaults resource-stickiness=100
Omitting the options parameter from the pcs resource defaults command displays a list of currently configured default values for resource options. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100.
# pcs resource defaults
resource-stickiness:100
Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option.

pcs resource create resource_id standard:provider:type|type [resource options] [meta meta_options...]

For example, the following command creates a resource with a resource-stickiness value of 50.

# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50
You can also set the value of a resource meta option for an existing resource, group, cloned resource, or master resource with the following command.

pcs resource meta resource_id | group_id | clone_id | master_id meta_options

In the following example, there is an existing resource named dummy_resource. This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds.

# pcs resource meta dummy_resource failure-timeout=20s

After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set.
# pcs resource show dummy_resource
Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy)
  Meta Attrs: failure-timeout=20s
  Operations: start interval=0s timeout=20 (dummy_resource-start-timeout-20)
              stop interval=0s timeout=20 (dummy_resource-stop-timeout-20)
              monitor interval=10 timeout=20 (dummy_resource-monitor-interval-10)
For information on resource clone meta options, see Section 8.1, “Resource Clones”. For information on resource master meta options, see Section 8.2, “Multi-State Resources: Resources That Have Multiple Modes”.
5.5. Resource Groups
One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups.

You create a resource group with the following command, specifying the resources to include in the group. If the group does not exist, this command creates the group. If the group exists, this command adds additional resources to the group. The resources will start in the order you specify them with this command, and will stop in the reverse order of their starting order.
pcs resource group add group_name resource_id [resource_id] ... [resource_id] [--before resource_id | --after resource_id]
You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group.
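For instance, a minimal sketch of positioning with --before (the group and resource names here are hypothetical, and the command assumes the group shortcut already contains the Email resource):

```shell
# Add the existing resource IPaddr to the group shortcut,
# placing it before the Email resource already in the group,
# so IPaddr starts first and stops last.
pcs resource group add shortcut IPaddr --before Email
```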
You can also add a new resource to an existing group when you create the resource, using the following command. The resource you create is added to the group named group_name.

pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action operation_options] --group group_name

You remove a resource from a group with the following command. If there are no resources in the group, this command removes the group itself.

pcs resource group remove group_name resource_id ...
The following command lists all currently configured resource groups.
pcs resource group list
The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email.

# pcs resource group add shortcut IPaddr Email

There is no limit to the number of resources a group can contain. The fundamental properties of a group are as follows.

Resources are started in the order in which you specify them (in this example, IPaddr first, then Email).

Resources are stopped in the reverse order in which you specify them (Email first, then IPaddr).

If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run.

If IPaddr cannot run anywhere, neither can Email.

If Email cannot run anywhere, however, this does not affect IPaddr in any way.

Obviously, as the group grows bigger, the reduced configuration effort of creating resource groups can become significant.
5.5.1. Group Options
A resource group inherits the following options from the resources that it contains: priority, target-role, is-managed. For information on resource options, refer to Table 5.3, “Resource Meta Options”.
5.5.2. Group Stickiness
Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default resource-stickiness is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500.
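The additive arithmetic above can be sketched as a quick calculation (the numbers are the hypothetical ones from the text: a default stickiness of 100 and five active members out of seven):

```shell
# Each active member contributes the default resource-stickiness
# value, so the group's preference for its current location is
# the per-resource stickiness times the number of active members.
stickiness=100
active_members=5
group_score=$((stickiness * active_members))
echo "$group_score"   # prints 500
```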
5.6. Resource Operat ions
To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds.
Table 5.4, “Properties of an Operation” summarizes the properties of a resource monitoring
operation.
Table 5.4. Properties of an Operation
Field    Description

id    Unique name for the action. The system assigns this when you configure an operation.

name    The action to perform. Common values: monitor, start, stop

interval    How frequently (in seconds) to perform the operation. Default value: 0, meaning never.

timeout    How long to wait before declaring the action has failed. If you find that your system includes a resource that takes a long time to start or stop or perform a non-recurring monitor action at startup, and requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or the value of timeout in "op defaults".

on-fail    The action to take if this action ever fails. Allowed values:

* ignore - Pretend the resource did not fail

* block - Do not perform any further operations on the resource

* stop - Stop the resource and do not start it elsewhere

* restart - Stop the resource and start it again (possibly on a different node)

* fence - STONITH the node on which the resource failed

* standby - Move all resources away from the node on which the resource failed

The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart.

enabled    If false, the operation is treated as if it does not exist. Allowed values: true, false
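As an illustrative sketch of the on-fail property above, the following command adds a stop operation that fences the node if the stop fails (the resource name my_resource and the timeout value are hypothetical):

```shell
# If stopping the resource fails, fence the node it is running on
# rather than risk the resource remaining active in two places.
pcs resource op add my_resource stop interval=0s timeout=60s on-fail=fence
```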
You can configure monitoring operations when you create a resource, using the following command.
pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action
operation_options [operation_type operation_options]...]
For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2. A monitoring operation will be performed every 30 seconds.

# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s
Alternately, you can add a monitoring operation to an existing resource with the following command.
pcs resource op add resource_id operation_action [operation_properties]
Use the following command to delete a configured resource operation.
pcs resource op remove resource_id operation_name operation_properties
Note
You must specify the exact operation properties to properly remove an existing operation.
To change the values of a monitoring option, you remove the existing operation, then add the new operation. For example, you can create a VirtualIP with the following command.

# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2

By default, this command creates these operations.

Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s)
            stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s)
            monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)
To change the stop timeout operation, execute the following commands.

# pcs resource op remove VirtualIP stop interval=0s timeout=20s
# pcs resource op add VirtualIP stop interval=0s timeout=40s
# pcs resource show VirtualIP
Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2
  Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s)
              monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)
              stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)
To set global default values for monitoring operations, use the following command.
pcs resource op defaults [options]
For example, the following command sets a global default of a timeout value of 240s for all monitoring operations.

# pcs resource op defaults timeout=240s

To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command.

For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240s.

# pcs resource op defaults
timeout: 240s
5.7. Displaying Configured Resources
To display a list of all configured resources, use the following command.
pcs resource show
For example, if your system is configured with a resource named VirtualIP and a resource named
WebSite, the pcs resource show command yields the following output.
# pcs resource show
 VirtualIP	(ocf::heartbeat:IPaddr2):	Started
 WebSite	(ocf::heartbeat:apache):	Started
To display a list of all configured resources and the parameters configured for those resources, use the --full option of the pcs resource show command, as in the following example.

# pcs resource show --full
 Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat)
  Attributes: ip=192.168.0.120 cidr_netmask=24
  Operations: monitor interval=30s
 Resource: WebSite (type=apache class=ocf provider=heartbeat)
  Attributes: statusurl=http://localhost/server-status configfile=/etc/httpd/conf/httpd.conf
  Operations: monitor interval=1min
To display the configured parameters for a resource, use the following command.
pcs resource show resource_id
For example, the following command displays the currently configured parameters for resource VirtualIP.

# pcs resource show VirtualIP
 Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat)
  Attributes: ip=192.168.0.120 cidr_netmask=24
  Operations: monitor interval=30s
5.8. Modifying Resource Parameters

To modify the parameters of a configured resource, use the following command.
pcs resource update resource_id [resource_options]
The following sequence of commands shows the initial values of the configured parameters for resource VirtualIP, the command to change the value of the ip parameter, and the values following the update command.

# pcs resource show VirtualIP
 Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat)
  Attributes: ip=192.168.0.120 cidr_netmask=24
  Operations: monitor interval=30s
# pcs resource update VirtualIP ip=192.169.0.120
# pcs resource show VirtualIP
 Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat)
  Attributes: ip=192.169.0.120 cidr_netmask=24
  Operations: monitor interval=30s
5.9. Multiple Monitoring Operations
You can configure a single resource with as many monitor operations as a resource agent supports. In this way you can do a superficial health check every minute and progressively more intense ones at higher intervals.
Note
When configuring multiple monitor operations, you must ensure that no two operations are performed at the same interval.
To configure additional monitoring operations for a resource that supports more in-depth checks at different levels, you add an OCF_CHECK_LEVEL=n option.

For example, if you configure the following IPaddr2 resource, by default this creates a monitoring operation with an interval of 10 seconds and a timeout value of 20 seconds.

# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2

If the Virtual IP supports a different check with a depth of 10, the following command causes Pacemaker to perform the more advanced monitoring check every 60 seconds in addition to the normal Virtual IP check every 10 seconds. (As noted, you should not configure the additional monitoring operation with a 10-second interval as well.)

# pcs resource op add VirtualIP monitor interval=60s OCF_CHECK_LEVEL=10
5.10. Enabling and Disabling Cluster Resources
The following command enables the resource specified by resource_id.
pcs resource enable resource_id
The following command disables the resource specified by resource_id.

pcs resource disable resource_id
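As an illustrative sketch, you might disable a resource before maintenance and re-enable it afterward; the resource name WebSite follows the earlier examples, and the maintenance workflow itself is an assumption, not from the source:

```shell
# Take the resource out of service for maintenance...
pcs resource disable WebSite

# ...perform the maintenance, then return it to service.
pcs resource enable WebSite
```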
5.11. Cluster Resources Cleanup
If a resource has failed, a failure message appears when you display the cluster status. If you resolve the problem with that resource, you can clear the failure status with the pcs resource cleanup command. This command resets the resource status and failcount, telling the cluster to forget the operation history of a resource and re-detect its current state.
The following command cleans up the resource specified by resource_id.

pcs resource cleanup resource_id

If you do not specify a resource_id, this command resets the resource status and failcount for all resources.
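For instance, a hedged sketch of clearing a recorded failure on the WebSite resource from the earlier examples (the failure scenario is hypothetical):

```shell
# After fixing the underlying problem (for example, a bad Apache
# configuration), clear the recorded failure so the cluster forgets
# the operation history and re-detects the resource's current state.
pcs resource cleanup WebSite
```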
Chapter 6. Resource Constraints
You can determine the behavior of a resource in a cluster by configuring constraints for that resource. You can configure the following categories of constraints:

location constraints — A location constraint determines which nodes a resource can run on. Location constraints are described in Section 6.1, “Location Constraints”.

order constraints — An order constraint determines the order in which the resources run. Order constraints are described in Section 6.2, “Order Constraints”.

colocation constraints