Date post: 11-Mar-2018

VERITAS Cluster Suite on RHEL 5

By Amit Kumar


Veritas Cluster

Cluster Information

Veritas Cluster 5.0 can have up to 32 nodes.

LLT (Low-Latency Transport)

Veritas uses a high-performance, low-latency protocol for cluster communications. LLT runs directly on top of the data link provider interface (DLPI) layer via Ethernet and has several major functions:

sending and receiving heartbeats
monitoring and transporting network traffic over multiple network links to every active system within the cluster
load-balancing traffic over multiple links
maintaining the state of communication
providing a nonroutable transport mechanism for cluster communications

Group membership services/Atomic Broadcast (GAB)

GAB provides the following:

Group Membership Services - GAB maintains the overall cluster membership through its Group Membership Services function. Heartbeats are used to determine whether a system is an active member, joining, or leaving the cluster. GAB determines each system's position within the cluster.

Atomic Broadcast - Cluster configuration and status information is distributed dynamically to all systems within the cluster using GAB's Atomic Broadcast feature. Atomic Broadcast ensures that all active systems receive all messages, for every resource and service group in the cluster. Atomic means that all systems receive the update; if one fails, the change is rolled back on all systems.

High Availability Daemon (HAD)

HAD tracks all changes within the cluster configuration and resource status by communicating with GAB. Think of HAD as the manager of the resource agents. A companion daemon called hashadow monitors HAD, and if HAD fails, hashadow attempts to restart it. Likewise, if the hashadow daemon dies, HAD will restart it. HAD maintains the cluster state information. HAD uses the main.cf file to build the cluster information in memory and is also responsible for updating the configuration in memory.

VCS architecture

So putting the above altogether we get:

Agents monitor resources on each system and provide status to HAD on the local system
HAD on each system sends status information to GAB
GAB broadcasts configuration information to all cluster members
LLT transports all cluster communications to all cluster nodes
HAD on each node takes corrective action, such as failover, when necessary


Service Groups

There are three types of service groups:

Failover - The service group runs on one system at any one time.
Parallel - The service group can run simultaneously on more than one system at any time.
Hybrid - A hybrid service group is a combination of a failover service group and a parallel service group, used in VCS 4.0 replicated data clusters, which are based on Veritas Volume Replicator.

When a service group appears to be suspended while being brought online you can flush the service group to enable corrective action. Flushing a service group stops VCS from attempting to bring resources online or take them offline and clears any internal wait states.

Resources

Resources are objects related to hardware and software. VCS controls these resources through these actions:

Bringing a resource online (starting)
Taking a resource offline (stopping)
Monitoring a resource (probing)

When you link a parent resource to a child resource, the dependency becomes a component of the service group configuration. You can view the dependencies at the bottom of the main.cf file.
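As an illustration, the dependency entries at the bottom of main.cf take the form below. This is a sketch only; the resource names appDG, appVOL and appMOUNT are example names (they appear later in this document's build walkthrough), not output captured from a live configuration:

```
// resource dependency tree (as written at the bottom of main.cf)
appVOL requires appDG
appMOUNT requires appVOL
```

Reading these lines: appMOUNT is the parent of appVOL, and appVOL is the parent of appDG, so VCS brings the disk group online first, then the volume, then the mount.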

Proxy Resource

A proxy resource allows multiple service groups to monitor the same network interface. This reduces the network traffic that would result from having multiple NIC resources in different service groups monitoring the same interface.

Phantom Resource

The phantom resource is used to report the actual status of a service group that consists of only persistent resources. A service group shows an online status only when all of its nonpersistent resources are online. Therefore, if a service group has only persistent resources (e.g. a network interface), VCS considers the group offline even if the persistent resources are running properly. By adding a phantom resource, the status of the service group is shown as online.

scsi-initiator-id

All nodes within the cluster must have a unique scsi-initiator-id. To set the scsi-initiator-id, follow the steps below:

1. At the OBP set the scsi-initiator-id to 6

OK> setenv scsi-initiator-id 6
OK> printenv scsi-initiator-id

2. When the server has booted create and enter the following in /kernel/drv/glm.conf


name="glm" parent="/pci@1f,4000" unit-address="5" scsi-initiator-id=6;

3. To check that the scsi-initiator-id has been set use the following command

  # prtconf -v          # search through the listing finding scsi-initiator-id ( Solaris )

Installation

Before you install VCS make sure you have the following prepared:

Cluster name
Unique ID number
Hostnames of the servers
Device names of the network interfaces for the private networks
Root access
Ability to perform a remote shell from all systems (the .rhosts file requires updating)
VCS software

To install VCS, follow the steps below. Remember that both hosts must be able to SSH into each other as root without being prompted for a password:

1. Start the VCS installation by entering:

# ./installVCS

2. Enter the cluster name and the unique ID number:

Cluster name: cluster1
Unique ID:    1

3. Enter the system names that require clustering:

System names: station40 station50

4. The software will now check each server's remote access and then install the software on each server.

5. A list will appear detailing all the NICs available. Select the FIRST and then the SECOND private network links:

First Link:  hme0
Second Link: qfe0

6. Answer Yes to the next questions (the servers are identical).

7. The LLT and GAB files will be copied and a success message will appear.


Veritas Cluster Cheat sheet

LLT and GAB Commands | Port Membership | Daemons | Log Files | Dynamic Configuration | Users | Resources | Resource Agents | Service Groups | Clusters | Cluster Status | System Operations | Service Group Operations | Resource Operations | Agent Operations | Starting and Stopping

LLT and GAB

VCS uses two components, LLT and GAB, to share data over the private networks among systems. These components provide the performance and reliability required by VCS.

LLT

LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network connections. The system administrator configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs in layer 2 of the network stack.

GAB

GAB (Group Membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk communications such as those required by the VCS heartbeat utility. The system administrator configures the GAB driver by creating a configuration file (gabtab).

LLT and GAB files

/etc/llthosts The file is a database, containing one entry per system, that links the LLT system ID with the host's name. The file is identical on each server in the cluster.

/etc/llttab The file contains information that is derived during installation and is used by the utility lltconfig.

/etc/gabtab The file contains the information needed to configure the GAB driver. This file is used by the gabconfig utility.

/etc/VRTSvcs/conf/config/main.cf

The VCS configuration file. The file contains the information that defines the cluster and its systems.
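As an illustration, minimal versions of the LLT and GAB files for a two-node cluster might look like the following. This is a sketch: the node names reuse station40/station50 from this document, but the interface names (eth2/eth3) and cluster ID are assumed examples:

```
# /etc/llthosts - LLT node ID to hostname map (identical on every node)
0 station40
1 station50

# /etc/llttab - local node name, cluster ID, and private links
set-node station40
set-cluster 1
link eth2 eth2 - ether - -
link eth3 eth3 - ether - -

# /etc/gabtab - seed GAB once 2 nodes are present
/sbin/gabconfig -c -n2
```

Each node has its own llttab (set-node differs per node), while llthosts and gabtab are the same cluster-wide.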

Gabtab Entries

/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -S 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -S 1124
/sbin/gabconfig -c -n2


gabdiskconf
-i   Initialises the disk region
-s   Start Block
-S   Signature

gabdiskhb (heartbeat disks)
-a   Add a gab disk heartbeat resource
-s   Start Block
-p   Port
-S   Signature

gabconfig
-c   Configure the driver for use
-n   Number of systems in the cluster

LLT and GAB Commands

Verifying that links are active for LLT lltstat -n

verbose output of the lltstat command lltstat -nvv | more

open ports for LLT lltstat -p

display the values of LLT configuration directives

lltstat -c

lists information about each configured LLT link

lltstat -l

List all MAC addresses in the cluster lltconfig -a list

stop the LLT running lltconfig -U

start the LLT lltconfig -c

verify that GAB is operating

gabconfig -a

Note: port a indicates that GAB is communicating, port h indicates that VCS is started
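The note above can be turned into a quick health check. The sketch below parses `gabconfig -a` output for ports a and h; because we may not be on a cluster node, the output is embedded here as a sample (the gen/membership values are assumed examples), where on a real node you would capture `gabconfig -a` instead:

```shell
# Sample `gabconfig -a` output (values assumed for illustration)
sample='GAB Port Memberships
===============================================================
Port a gen 469401 membership 01
Port h gen 469404 membership 01'

# Port a present => GAB is communicating; port h present => VCS (had) started.
for port in a h; do
  if printf '%s\n' "$sample" | grep -q "^Port $port .* membership"; then
    echo "port $port: membership present"
  else
    echo "port $port: membership MISSING"
  fi
done
# prints:
# port a: membership present
# port h: membership present
```

If port h is missing while port a is present, GAB is up but HAD has not started (or not been seeded on enough nodes).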

stop GAB running gabconfig -U

start the GAB gabconfig -c -n <number of nodes>

override the seed values in the gabtab file

gabconfig -c -x

GAB Port Membership

List membership gabconfig -a

Unregister port f /opt/VRTS/bin/fsclustadm cfsdeinit

Port Function

a   GAB driver
b   I/O fencing (designed to guarantee data integrity)
d   ODM (Oracle Disk Manager)
f   CFS (Cluster File System)
h   VCS (VERITAS Cluster Server: high availability daemon)
o   VCSMM driver (kernel module needed for Oracle and VCS interface)
q   QuickLog daemon
v   CVM (Cluster Volume Manager)
w   vxconfigd (module for CVM)


Cluster daemons

High Availability Daemon had

Companion Daemon hashadow

Resource Agent daemon <resource>Agent

Web Console cluster management daemon CmdServer

Cluster Log Files

Log Directory /var/VRTSvcs/log

primary log file (engine log file) /var/VRTSvcs/log/engine_A.log

Starting and Stopping the cluster

"-stale" instructs the engine to treat the local config as stale
"-force" instructs the engine to treat a stale config as a valid one

hastart [-stale|-force]

Bring the cluster into running mode from a stale state using the configuration file from a particular server

hasys -force <server_name>

stop the cluster on the local server but leave the application/s running, do not failover the application/s

hastop -local

stop cluster on local server but evacuate (failover) the application/s to another node within the cluster

hastop -local -evacuate

stop the cluster on all nodes but leave the application/s running

hastop -all -force

Cluster Status

display cluster summary hastatus -summary

continually monitor cluster hastatus

verify the cluster is operating hasys -display

Cluster Details

information about a cluster haclus -display

value for a specific cluster attribute haclus -value <attribute>

modify a cluster attribute haclus -modify <attribute name> <new>

Enable LinkMonitoring haclus -enable LinkMonitoring

Disable LinkMonitoring haclus -disable LinkMonitoring

Users


add a user hauser -add <username>

modify a user hauser -update <username>

delete a user hauser -delete <username>

display all users hauser -display

System Operations

add a system to the cluster hasys -add <sys>

delete a system from the cluster hasys -delete <sys>

Modify a system's attributes hasys -modify <sys> <modify options>

list a system state hasys -state

Force a system to start hasys -force

Display a system's attributes hasys -display [-sys]

List all the systems in the cluster hasys -list

Change the load attribute of a system hasys -load <system> <value>

Display the value of a system's nodeid (/etc/llthosts)

hasys -nodeid
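As a sketch of what `hasys -nodeid` resolves, the snippet below looks a hostname up in /etc/llthosts. The file contents are embedded here as an assumed two-node example (matching the llthosts format described earlier), since we may not be on a cluster node:

```shell
# Assumed /etc/llthosts contents: "<nodeid> <hostname>", one line per node
llthosts='0 station40
1 station50'

host=station50
nodeid=$(printf '%s\n' "$llthosts" | awk -v h="$host" '$2 == h {print $1}')
echo "$host has LLT node ID $nodeid"   # -> station50 has LLT node ID 1
```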

Freeze a system (No offlining system, No groups onlining)

hasys -freeze [-persistent][-evacuate]

Note: main.cf must be in write mode

Unfreeze a system (re-enable groups and resources to come back online)

hasys -unfreeze [-persistent]

Note: main.cf must be in write mode

Dynamic Configuration  

The VCS configuration must be in read/write mode in order to make changes. While the configuration is in read/write mode it is considered stale, and a .stale file is created in $VCS_CONF/conf/config. When the configuration is put back into read-only mode, the .stale file is removed.

Change configuration to read/write mode

haconf -makerw

Change configuration to read-only mode haconf -dump -makero

Check which mode the cluster is running in

haclus -display |grep -i 'readonly'

0 = write mode
1 = read-only mode
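A small sketch of interpreting that value. The `haclus -display` line is embedded here as an assumed sample; on a real node you would pipe the actual command output through the same awk filter:

```shell
# Assumed sample line from `haclus -display | grep -i readonly`
sample='ReadOnly       0'

val=$(printf '%s\n' "$sample" | awk '/ReadOnly/ {print $2}')
if [ "$val" -eq 0 ]; then
  echo "configuration is in read/write mode"
else
  echo "configuration is in read-only mode"
fi
# prints: configuration is in read/write mode
```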

Check the configuration file

hacf -verify /etc/VRTSvcs/conf/config

Note: you can point to any directory as long as it has main.cf and types.cf

convert a main.cf file into cluster commands

hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp

convert a command file into a main.cf file hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config

Service Groups


add a service group

haconf -makerw
hagrp -add groupw
hagrp -modify groupw SystemList station40 1 station50 2
hagrp -autoenable groupw -sys station40
haconf -dump -makero

delete a service group

haconf -makerw
hagrp -delete groupw
haconf -dump -makero

change a service group

haconf -makerw
hagrp -modify groupw SystemList station40 1 station50 2 sun3 3
haconf -dump -makero

Note: use the "hagrp -display <group>" to list attributes

list the service groups hagrp -list

list the groups dependencies hagrp -dep <group>

list the parameters of a group hagrp -display <group>

display a service group's resources hagrp -resources <group>

display the current state of the service group hagrp -state <group>

clear a faulted non-persistent resource in a specific grp

hagrp -clear <group> [-sys] <host> <sys>

Change the system list in a cluster

# remove the host
hagrp -modify grp_zlnrssd SystemList -delete <hostname>

# add the new host (don't forget to state its position)
hagrp -modify grp_zlnrssd SystemList -add <hostname> 1

# update the autostart list
hagrp -modify grp_zlnrssd AutoStartList <host> <host>
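The position number in SystemList matters: entries are "<system> <priority>" pairs, and the lower the priority value, the more preferred the system. The sketch below sorts an assumed sample list (three hypothetical nodes) to show the resulting failover order:

```shell
# Assumed SystemList pairs: "<system> <priority>", lower value = preferred
systemlist='station40 1
station50 2
station60 3'

# Sort numerically on the priority column to get the failover order
printf '%s\n' "$systemlist" | sort -k2 -n | awk '{print NR". "$1}'
# prints:
# 1. station40
# 2. station50
# 3. station60
```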

Service Group Operations

Start a service group and bring its resources online

hagrp -online <group> -sys <sys>

Stop a service group and take its resources offline

hagrp -offline <group> -sys <sys>

Switch a service group from one system to another hagrp -switch <group> -to <sys>

Enable all the resources in a group hagrp -enableresources <group>

Disable all the resources in a group hagrp -disableresources <group>

Freeze a service group (disable onlining and offlining)

hagrp -freeze <group> [-persistent]

note: use the following to check "hagrp -display <group> | grep TFrozen"

Unfreeze a service group (enable onlining and offlining)

hagrp -unfreeze <group> [-persistent]

note: use the following to check "hagrp -display <group> | grep TFrozen"

Enable a service group. Only enabled groups can be brought online

haconf -makerw
hagrp -enable <group> [-sys]


haconf -dump -makero

Note: to check, run "hagrp -display | grep Enabled"

Disable a service group. Disabled groups cannot be brought online

haconf -makerw
hagrp -disable <group> [-sys]
haconf -dump -makero

Note: to check, run "hagrp -display | grep Enabled"

Flush a service group and enable corrective action.

hagrp -flush <group> -sys <system>

Resources

add a resource

haconf -makerw
hares -add appDG DiskGroup groupw
hares -modify appDG Enabled 1
hares -modify appDG DiskGroup appdg
hares -modify appDG StartVolumes 0
haconf -dump -makero

delete a resource

haconf -makerw
hares -delete <resource>
haconf -dump -makero

change a resource

haconf -makerw
hares -modify appDG Enabled 1
haconf -dump -makero

Note: list parameters "hares -display <resource>"

change a resource attribute to be global hares -global <resource> <attribute> <value>

change a resource attribute to be local hares -local <resource> <attribute> <value>

list the parameters of a resource hares -display <resource>

list the resources hares -list  

list the resource dependencies hares -dep

Resource Operations

Online a resource hares -online <resource> [-sys]

Offline a resource hares -offline <resource> [-sys]

display the state of a resource (offline, online, etc.)

hares -state

display the parameters of a resource hares -display <resource>

Offline a resource and propagate the command to its children

hares -offprop <resource> -sys <sys>

Cause a resource agent to immediately monitor the resource

hares -probe <resource> -sys <sys>


Clearing a resource (automatically initiates the onlining)

hares -clear <resource> [-sys]

Resource Types

Add a resource type hatype -add <type>

Remove a resource type hatype -delete <type>

List all resource types hatype -list

Display a resource type hatype -display <type>

List the resources of a particular resource type hatype -resources <type>

Display the value of a particular resource type's attribute hatype -value <type> <attr>

Resource Agents

add an agent pkgadd -d . <agent package>

remove an agent pkgrm <agent package>

change an agent n/a

list all ha agents haagent -list  

Display an agent's run-time information, i.e. has it started, is it running?

haagent -display <agent_name>  

Display agents faults haagent -display |grep Faults

Resource Agent Operations

Start an agent haagent -start <agent_name> [-sys]

Stop an agent haagent -stop <agent_name> [-sys]

Veritas Cluster: Check List { Before building the cluster, please go through the attached VCS check list }


Build a new VERITAS Cluster Suite on RHEL 5 boxes.

Prerequisites:-

RHEL 5.6+ OS
Software: VRTS_SF_HA_Solutions_5.1_SP1_RHEL.tar.gz
2 or more nodes
Yum server configured { Install Apache }
Shared storage { Configure Linux SCSI-Target / StarWind or from VMAX storage }

Requirements:- NIC/IP detailed requirements:-

Server 1: 2 NIC bonding { local IP } + 2 NIC { LLT heartbeat without IP }
Server 2: 2 NIC bonding { local IP } + 2 NIC { LLT heartbeat without IP }
Virtual IP: 1 virtual IP for HTTP ( cluster IP )

Note: the IP and NIC requirements depend upon your environment; the above is the minimum required to configure VCS.

How to Configure VCS Apache Cluster

Step 1: First, install RHEL 5.6 on all the nodes with custom packages.

Step 2: Configure network bonding. Create the bond interface file for the public network and save the file as:

# vim /etc/sysconfig/network-scripts/ifcfg-bond0


DEVICE=bond0
IPADDR=192.168.5.20 [This will be the actual network IP address]
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
USERCTL=no
BOOTPROTO=static
ONBOOT=yes

After creating the bond0 file, modify the eth0 and eth1 files respectively.

# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Note:- Make sure you remove the HW address / IP address / gateway information from eth0 and eth1 and add the two important lines (MASTER=bond0, SLAVE=yes) to each file.

# vim /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none


Load the bond driver/module:

# vim /etc/modprobe.conf

alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Test the configuration:

# modprobe bonding

# service network restart

Check with the command below whether bonding is actually working:

# cat /proc/net/bonding/bond0

Set up SSH passwordless authentication between all nodes:

# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub station40

Step 3: Install VCS on the nodes mentioned below

Station40.example.com: 192.168.5.40
Station50.example.com: 192.168.5.50
Station60.example.com: 192.168.5.60


Copy the installer VRTS_SF_HA tar-ball into /root

# cd /root
# tar -zxvf VRTS_SF_HA_Solutions_5.1_SP1_RHEL.tar.gz
# cd /dvd1-redhatlinux/rhel5_x86_64/
# ./installer

For Cluster:


After Installation is completed


Configure same for station50.example.com


Hence Installation is finished on all the nodes.


Set the PATH variable on both nodes:

# vim .bash_profile
  export PATH=$PATH:/sbin:/usr/sbin:/opt/VRTSvcs/bin:/etc/vx/bin:/usr/lib/vxvm/bin
# export PATH
# exit
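The same change can be applied and verified in one shot. A minimal sketch, assuming the standard VCS/VxVM install locations shown above:

```shell
# Append the VCS and VxVM bin directories to PATH (standard install locations)
export PATH=$PATH:/sbin:/usr/sbin:/opt/VRTSvcs/bin:/etc/vx/bin:/usr/lib/vxvm/bin

# Confirm the VCS bin directory is now on the PATH
case ":$PATH:" in
  *:/opt/VRTSvcs/bin:*) echo "VCS bin directory is on PATH" ;;
  *)                    echo "VCS bin directory missing"    ;;
esac
# prints: VCS bin directory is on PATH
```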

Log in again and verify cluster-related information on any node:

# lltconfig

LLT is running

# lltstat -nvv | less

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 469401 membership 01
Port h gen 469404 membership 01

# hastatus -sum


Create a Service Group

hagrp -add groupw
hagrp -modify groupw SystemList station40 1 station50 2
hagrp -autoenable groupw -sys station40

Create disk group, volume and filesystem resources

We have to create a disk group resource; this will ensure that the disk group has been imported before we start any volumes.

hares -add appDG DiskGroup groupw
hares -modify appDG Enabled 1
hares -modify appDG DiskGroup appdg
hares -modify appDG StartVolumes 0

Once the disk group resource has been created we can create the volume resource.

hares -add appVOL Volume groupw
hares -modify appVOL Enabled 1
hares -modify appVOL Volume app01
hares -modify appVOL DiskGroup appdg

Now that the volume resource has been created we can create the filesystem mount resource.

hares -add appMOUNT Mount groupw
hares -modify appMOUNT Enabled 1
hares -modify appMOUNT MountPoint /apps
hares -modify appMOUNT BlockDevice /dev/vx/dsk/appdg/app01
hares -modify appMOUNT FSType vxfs

To ensure that all resources are started in order, we create dependencies between them.

hares -list
haconf -makerw
hares -link appVOL appDG
hares -link appMOUNT appVOL
hares -dep appVOL
haconf -dump -makero
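After linking, listing the dependencies would report parent/child pairs along these lines. This is illustrative output only (not captured from a live cluster), using the appDG/appVOL/appMOUNT resources created above:

```
#Group    Parent     Child
groupw    appVOL     appDG
groupw    appMOUNT   appVOL
```

The parent can only come online once its child is online, which gives the start order disk group, then volume, then mount.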

Create an application resource

Once the filesystem resource has been created we can add an application resource; this will start, stop and monitor the application.

hares -add sambaAPP Application groupw
hares -modify sambaAPP Enabled 1
hares -modify sambaAPP User root
hares -modify sambaAPP StartProgram "/etc/init.d/samba start"
hares -modify sambaAPP StopProgram "/etc/init.d/samba stop"
hares -modify sambaAPP CleanProgram "/etc/init.d/samba clean"
hares -modify sambaAPP PidFiles "/usr/local/samba/var/locks/smbd.pid" "/usr/local/samba/var/locks/nmbd.pid"
hares -modify sambaAPP MonitorProcesses "smbd -D" "nmbd -D"

Create a single virtual IP resource

Create a single NIC resource:

hares -add appNIC NIC groupw
hares -modify appNIC Enabled 1
hares -modify appNIC Device qfe0


Create the single application IP resource:

hares -add appIP IP groupw
hares -modify appIP Enabled 1
hares -modify appIP Device qfe0
hares -modify appIP Address 192.168.0.3
hares -modify appIP NetMask 255.255.255.0
hares -modify appIP IfconfigTwice 1

Create a multi virtual IP resource

Create a multi NIC resource:

hares -add appMultiNICA MultiNICA groupw
hares -local appMultiNICA Device
hares -modify appMultiNICA Enabled 1
hares -modify appMultiNICA Device qfe0 192.168.0.3 qfe1 192.168.0.3 -sys station40 station50
hares -modify appIPMultiNIC NetMask 255.255.255.0
hares -modify appIPMultiNIC ArpDelay 5
hares -modify appIPMultiNIC IfconfigTwice 1

Create the multi IP address resource; this will monitor the virtual IP addresses.

hares -add appIPMultiNIC IPMultiNIC groupw
hares -modify appIPMultiNIC Enabled 1
hares -modify appIPMultiNIC Address 192.168.0.3
hares -modify appIPMultiNIC NetMask 255.255.255.0
hares -modify appIPMultiNIC MultiNICResName appMultiNICA
hares -modify appIPMultiNIC IfconfigTwice 1

Clear resource fault

# hastatus -sum

-- SYSTEM STATE
-- System       State      Frozen
A  station40    RUNNING    0
A  station50    RUNNING    0

-- GROUP STATE
-- Group    System     Probed  AutoDisabled  State
B  groupw   station40  Y       N             OFFLINE
B  groupw   station50  Y       N             STARTING|PARTIAL

-- RESOURCES ONLINING
-- Group   Type   Resource    System     IState
E  groupw  Mount  app02MOUNT  station50  W_ONLINE

# hares -clear app02MOUNT

Flush a group

# hastatus -sum

-- SYSTEM STATE
-- System       State      Frozen
A  station40    RUNNING    0
A  station50    RUNNING    0

-- GROUP STATE
-- Group    System     Probed  AutoDisabled  State
B  groupw   station40  Y       N             STOPPING|PARTIAL
B  groupw   station50  Y       N             OFFLINE|FAULTED


-- RESOURCES FAILED
-- Group    Type   Resource    System
C  groupw   Mount  app02MOUNT  station50

-- RESOURCES ONLINING
-- Group    Type   Resource    System     IState
E  groupw   Mount  app02MOUNT  station40  W_ONLINE_REVERSE_PROPAGATE

-- RESOURCES OFFLINING
-- Group    Type       Resource  System     IState
F  groupw   DiskGroup  appDG     station40  W_OFFLINE_PROPAGATE

# hagrp -flush groupw -sys station40

 

Thanking You!!!
Amit Kumar
B.B.A, Red Hat Certified Security Specialist (RHCSS)
Email:- [email protected]
09545593332, 08800711919
______________________________
Take Care, Be Happy And Enjoy!!!!!!!

