    Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireWire
    http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_2.html (1/4/2006)

    Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireWire (Continued)
    For development and testing only; production deployments will not be supported!

    11. Configure the Linux Servers for Oracle
    Perform the following configuration procedures on all nodes in the cluster!

    Several of the commands within this section will need to be performed on every node within the cluster every time the machine is booted. This section provides very detailed information about setting shared memory, semaphores, and file handle limits. Instructions for placing them in a startup file (/etc/sysctl.conf) are included in Section 14 ("All Startup Commands for Each RAC Node").

    Overview

    This section focuses on configuring both Linux servers: getting each one prepared for the Oracle RAC 10g installation. This includes verifying enough swap space, setting shared memory and semaphores, and finally setting the maximum number of file handles for the OS.

    Throughout this section you will notice that there are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all commands in the /etc/sysctl.conf file.

    Swap Space Considerations

    Installing Oracle10g Release 2 requires a minimum of 512MB of memory. (Note: An inadequate amount of swap during the installation will cause the Oracle Universal Installer to either "hang" or "die".)

    To check the amount of memory / swap you have allocated, type:

    # cat /proc/meminfo | grep MemTotal
    MemTotal:     1034352 kB

    If you have less than 512MB of memory (between your RAM and swap), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.
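The check above only reports physical RAM; it is the total of RAM and swap that must clear the 512MB bar. The two `Total` lines from /proc/meminfo can simply be summed. A minimal sketch (run here against a hard-coded sample so the numbers are reproducible; on a live system feed it /proc/meminfo itself):

```shell
# Sum MemTotal and SwapTotal (both reported in kB).
# The sample text below stands in for /proc/meminfo.
sample='MemTotal:     1034352 kB
SwapTotal:    2048248 kB'
total_kb=$(printf '%s\n' "$sample" | awk '/^(Mem|Swap)Total:/ {sum += $2} END {print sum}')
echo "${total_kb} kB available between RAM and swap"
```

On a real node, replace the `printf` pipeline with `awk '/^(Mem|Swap)Total:/ {sum += $2} END {print sum}' /proc/meminfo`.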

    As root, make a file that will act as additional swap space, let's say about 300MB:

    # dd if=/dev/zero of=tempswap bs=1k count=300000

    Now we should change the file permissions:

    # chmod 600 tempswap

    Finally we format the "partition" as swap and add it to the swap space:

    # mke2fs tempswap
    # mkswap tempswap
    # swapon tempswap

    Setting Shared Memory

    Shared memory allows processes to access common structures and data by placing them in a shared memory segment. It is the fastest form of inter-process communication (IPC) available, mainly because no kernel involvement occurs when data is being passed between the processes; data does not need to be copied between processes.

    Oracle makes use of shared memory for its Shared Global Area (SGA), which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance because it is responsible for holding the database buffer cache, shared SQL, access paths, and so much more.

    To determine all shared memory limits, use the following:

    # ipcs -lm

    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 32768
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1

    Setting SHMMAX

    The SHMMAX parameter defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is comprised of shared memory, and an incorrect SHMMAX setting could limit the size of the SGA. When setting SHMMAX, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate SHMMAX setting could result in the following:

    ORA-27123: unable to attach to shared memory segment

    You can determine the value of SHMMAX by performing the following:

  • 8/12/2019 Build_Your_Own_Oracle_Rac_10G_.pdf

    2/20

    Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireW... http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_2.html...

    0 1/4/2006 8

    # cat /proc/sys/kernel/shmmax
    33554432

    The default value for SHMMAX is 32MB. This size is often too small to configure the Oracle SGA. I generally set the SHMMAX parameter to 2GB as follows:

    You can alter the default setting for SHMMAX without rebooting the machine by making the change directly to the /proc filesystem (/proc/sys/kernel/shmmax) with the following command:

    # sysctl -w kernel.shmmax=2147483648

    You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:

    # echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

    Setting SHMMNI

    We now look at the SHMMNI parameter. This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096.

    You can determine the value of SHMMNI by performing the following:

    # cat /proc/sys/kernel/shmmni
    4096

    The default setting for SHMMNI should be adequate for your Oracle RAC 10g Release 2 installation.

    Setting SHMALL

    Finally, we look at the SHMALL shared memory kernel parameter. This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. In short, the value of this parameter should always be at least:

    ceil(SHMMAX / PAGE_SIZE)

    The default size of SHMALL is 2097152 and can be queried using the following command:

    # cat /proc/sys/kernel/shmall
    2097152

    The default setting for SHMALL should be adequate for our Oracle RAC 10g Release 2 installation.

    (Note: The page size in Red Hat Linux on the i386 platform is 4,096 bytes. You can, however, use bigpages, which supports the configuration of larger memory page sizes.)
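Given the formula above, the minimum SHMALL for the 2GB SHMMAX used in this article can be worked out with shell arithmetic. The page size is hard-coded here to the i386 value of 4096; on a live system you could substitute `getconf PAGE_SIZE`:

```shell
SHMMAX=2147483648          # 2GB, as set earlier in this section
PAGE_SIZE=4096             # i386 page size; use `getconf PAGE_SIZE` on a live box
# integer ceiling of SHMMAX / PAGE_SIZE
SHMALL_MIN=$(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "$SHMALL_MIN"         # 524288 pages -- well under the 2097152 default
```

Since 524288 is well below the 2097152 default, SHMALL does not need to be raised for this configuration.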

    Setting Semaphores

    Now that you have configured your shared memory settings, it is time to configure your semaphores. The best way to describe a "semaphore" is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in UNIX System V, where each one is a counting semaphore. When an application requests semaphores, it does so using "sets".

    To determine all semaphore limits, use the following:

    # ipcs -ls

    ------ Semaphore Limits --------
    max number of arrays = 128
    max semaphores per array = 250
    max semaphores system wide = 32000
    max ops per semop call = 32
    semaphore max value = 32767

    You can also use the following command:

    # cat /proc/sys/kernel/sem
    250     32000   32      128

    Setting SEMMSL

    The SEMMSL kernel parameter is used to control the maximum number of semaphores per semaphore set.

    Oracle recommends setting SEMMSL to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system, plus 10. Also, Oracle recommends setting SEMMSL to a value of no less than 100.

    Setting SEMMNI

    The SEMMNI kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system. Oracle recommends setting SEMMNI to a value of no less than 100.

    Setting SEMMNS

    The SEMMNS kernel parameter is used to control the maximum number of semaphores (not semaphore sets) in the entire Linux system.

  • 8/12/2019 Build_Your_Own_Oracle_Rac_10G_.pdf

    3/20

    Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireW... http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_2.html...

    0 1/4/2006 8

    Oracle recommends setting SEMMNS to the sum of the PROCESSES instance parameter setting for each database on the system, adding the largest PROCESSES setting twice, and then finally adding 10 for each Oracle database on the system.

    Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:

    SEMMNS  -or-  (SEMMSL * SEMMNI)
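With the default SEMMSL and SEMMNI and the SEMMNS value shown in the `ipcs -ls` output earlier, that "lesser of" rule can be checked with a few lines of shell arithmetic:

```shell
SEMMSL=250; SEMMNS=32000; SEMMNI=128
product=$(( SEMMSL * SEMMNI ))            # 250 * 128 = 32000
# the effective system-wide semaphore limit is the lesser of the two
if [ "$SEMMNS" -lt "$product" ]; then
    limit=$SEMMNS
else
    limit=$product
fi
echo "$limit"    # 32000 -- here the two bounds happen to coincide
```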

    Setting SEMOPM

    The SEMOPM kernel parameter is used to control the number of semaphore operations that can be performed per semop system call.

    The semop system call (function) provides the ability to perform operations on multiple semaphores with one call. Because a semaphore set can have a maximum of SEMMSL semaphores per set, it is recommended to set SEMOPM equal to SEMMSL.

    Oracle recommends setting the SEMOPM to a value of no less than 100.

    Setting Semaphore Kernel Parameters

    Finally, we see how to set all semaphore parameters. In the following, the only parameter I care about changing (raising) is SEMOPM. All other default settings should be sufficient for our example installation.

    You can alter the default setting for all semaphore settings without rebooting the machine by making the change directly to the /proc filesystem (/proc/sys/kernel/sem) with the following command:

    # sysctl -w kernel.sem="250 32000 100 128"

    You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:

    # echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf

    Setting File Handles

    When configuring our Red Hat Linux server, it is critical to ensure that the maximum number of file handles is sufficiently large. The setting for file handles denotes the number of open files that you can have on the Linux system.

    Use the following command to determine the maximum number of file handles for the entire system:

    # cat /proc/sys/fs/file-max
    102563

    Oracle recommends that the file handles for the entire system be set to at least 65536 .

    You can alter the default setting for the maximum number of file handles without rebooting the machine by making the change directly to the /proc filesystem (/proc/sys/fs/file-max) using the following:

    # sysctl -w fs.file-max=65536

    You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:

    # echo "fs.file-max=65536" >> /etc/sysctl.conf

    You can query the current usage of file handles by using the following:

    # cat /proc/sys/fs/file-nr
    825     0       65536

    The file-nr file displays three parameters: total allocated file handles, currently used file handles, and maximum file handles that can be allocated.
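The three fields can be pulled apart with the shell's positional parameters; a small sketch using the sample line shown above (on a live system you would read the line from /proc/sys/fs/file-nr instead):

```shell
# sample file-nr line; live version: line=$(cat /proc/sys/fs/file-nr)
line='825 0 65536'
set -- $line     # word-split into $1 $2 $3
echo "allocated=$1 in-use=$2 max=$3"
```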

    (Note: If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for 2.4.20 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

    # ulimit
    unlimited)

    12. Configure the hangcheck-timer Kernel Module
    Perform the following configuration procedures on all nodes in the cluster!

    Oracle9i Release 1 (9.0.1) and Oracle9i Release 2 (9.2.0.1) used a userspace watchdog daemon called watchdogd to monitor the health of the cluster and to restart a RAC node in case of a failure. Starting with Oracle9i Release 2 (9.2.0.2) (and still available in Oracle 10g Release 2), the watchdog daemon has been deprecated in favor of a Linux kernel module named hangcheck-timer, which addresses availability and reliability problems much better. The hangcheck-timer module is loaded into the Linux kernel and checks if the system hangs. It sets a timer and checks the timer after a certain amount of time. There is a configurable threshold to hangcheck that, if exceeded, will reboot the machine. Although the hangcheck-timer module is not required for Oracle Clusterware (Cluster Manager) operation, it is highly recommended by Oracle.
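With the hangcheck_tick and hangcheck_margin values used later in this guide (30 and 180 seconds, set in /etc/modprobe.conf in Section 14), the module resets a node roughly once a hang exceeds the sum of the two:

```shell
hangcheck_tick=30      # seconds between scheduler checks (value used in this guide)
hangcheck_margin=180   # tolerated scheduling delay, in seconds
# a node is rebooted roughly when a hang exceeds tick + margin
echo "reset after ~$(( hangcheck_tick + hangcheck_margin )) seconds"
```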

    The hangcheck-timer.ko Module

    The hangcheck-timer module uses a kernel-based timer that periodically checks the system task scheduler to catch delays in order to determine the health of the system.


    13. Configure RAC Nodes for Remote Access
    Perform the following configuration procedures on all nodes in the cluster!

    You should be able to run r* commands like rsh, rcp, and rlogin on the Linux server you will be running the Oracle installer from, against all other Linux servers in the cluster, without a password. The rsh daemon validates users using the /etc/hosts.equiv file or the .rhosts file found in the user's (oracle's) home directory. (The use of rcp and rsh is not required for normal RAC operation. However, rcp and rsh should be enabled for RAC and patchset installation.)

    Oracle added support in Oracle RAC 10g Release 1 for using the Secure Shell (SSH) tool suite for setting up user equivalence. This article, however, uses the older method of rcp for copying the Oracle software to the other nodes in the cluster. When using the SSH tool suite, the scp (as opposed to the rcp) command would be used to copy the software in a very secure manner.

    First, let's make sure that we have the rsh RPMs installed on each node in the RAC cluster:

    # rpm -q rsh rsh-server
    rsh-0.17-25.3
    rsh-server-0.17-25.3

    From the above, we can see that we have both rsh and rsh-server installed. Were rsh not installed, we would run the following command from the CD where the RPM is located:

    # su -
    # rpm -ivh rsh-0.17-25.3.i386.rpm rsh-server-0.17-25.3.i386.rpm

    To enable the "rsh" and "rlogin" services, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded. Do that by running the following commands on all nodes in the cluster:

    # su -
    # chkconfig rsh on
    # chkconfig rlogin on
    # service xinetd reload
    Reloading configuration: [ OK ]

    To allow the "oracle" UNIX user account to be trusted among the RAC nodes, create the /etc/hosts.equiv file on all nodes in the cluster:

    # su -
    # touch /etc/hosts.equiv
    # chmod 600 /etc/hosts.equiv
    # chown root.root /etc/hosts.equiv

    Now add all RAC nodes to the /etc/hosts.equiv file, similar to the following example, on all nodes in the cluster:

    # cat /etc/hosts.equiv
    +linux1 oracle
    +linux2 oracle
    +int-linux1 oracle
    +int-linux2 oracle

    Note: In the above example, the second field permits only the oracle user account to run rsh commands on the specified nodes. For security reasons, the /etc/hosts.equiv file should be owned by root and the permissions should be set to 600. In fact, some systems will only honor the content of this file if the owner is root and the permissions are set to 600.

    Before attempting to test your rsh command, ensure that you are using the correct version of rsh. By default, Red Hat Linux puts /usr/kerberos/sbin at the head of the $PATH variable. This will cause the Kerberos version of rsh to be executed.

    I will typically rename the Kerberos version of rsh so that the normal rsh command is used. Use the following:

    # su -

    # which rsh
    /usr/kerberos/bin/rsh

    # mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.original
    # mv /usr/kerberos/bin/rcp /usr/kerberos/bin/rcp.original
    # mv /usr/kerberos/bin/rlogin /usr/kerberos/bin/rlogin.original

    # which rsh
    /usr/bin/rsh

    You should now test your connections and run the rsh command from the node that will be performing the Oracle Clusterware and 10g RAC installation. I will be using the node linux1 to perform all installs, so this is where I will run the following commands from:

    # su - oracle

    $ rsh linux1 ls -l /etc/hosts.equiv
    -rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv

    $ rsh int-linux1 ls -l /etc/hosts.equiv
    -rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv

    $ rsh linux2 ls -l /etc/hosts.equiv
    -rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv


    $ rsh int-linux2 ls -l /etc/hosts.equiv
    -rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv

    14. All Startup Commands for Each RAC Node
    Verify that the following startup commands are included on all nodes in the cluster!

    Up to this point, you have read in great detail about the parameters and resources that need to be configured on all nodes for the Oracle10g RAC configuration. This section will let you "take a deep breath" and recap those parameters, commands, and entries (from previous sections of this document) that need to happen on each node when the machine is booted.

    For each of the startup files below, the entries shown should be included in each startup file.

    /etc/modprobe.conf

    (All parameters and values to be used by kernel modules.)

    alias eth0 b44
    alias eth1 tulip
    alias snd-card-0 snd-intel8x0
    options snd-card-0 index=0
    alias usb-controller ehci-hcd
    alias usb-controller1 uhci-hcd
    options sbp2 exclusive_login=0
    alias scsi_hostadapter sbp2
    options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

    /etc/sysctl.conf

    (We wanted to adjust the default and maximum send buffer size as well as the default and maximum receive buffer size for the interconnect. This file also contains those parameters responsible for configuring shared memory, semaphores, and file handles for use by the Oracle instance.)

    # Kernel sysctl configuration file for Red Hat Linux
    #
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.

    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0

    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1

    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0

    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1

    # Default setting in bytes of the socket receive buffer
    net.core.rmem_default=262144

    # Default setting in bytes of the socket send buffer
    net.core.wmem_default=262144

    # Maximum socket receive buffer size which may be set by using
    # the SO_RCVBUF socket option
    net.core.rmem_max=262144

    # Maximum socket send buffer size which may be set by using
    # the SO_SNDBUF socket option
    net.core.wmem_max=262144

    # +---------------------------------------------------------+
    # | SHARED MEMORY                                           |
    # +---------------------------------------------------------+
    kernel.shmmax=2147483648

    # +---------------------------------------------------------+
    # | SEMAPHORES                                              |
    # | ----------                                              |
    # |                                                         |
    # | SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value     |
    # |                                                         |
    # +---------------------------------------------------------+
    kernel.sem=250 32000 100 128


    # +---------------------------------------------------------+
    # | FILE HANDLES                                            |
    # +---------------------------------------------------------+
    fs.file-max=65536

    /etc/hosts

    (All machine/IP entries for nodes in our RAC cluster.)

    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1        localhost.localdomain localhost

    # Public Network - (eth0)
    192.168.1.100    linux1
    192.168.1.101    linux2

    # Private Interconnect - (eth1)
    192.168.2.100    int-linux1
    192.168.2.101    int-linux2

    # Public Virtual IP (VIP) addresses for - (eth0)
    192.168.1.200    vip-linux1
    192.168.1.201    vip-linux2

    192.168.1.106    melody
    192.168.1.102    alex
    192.168.1.105    bartman

    /etc/hosts.equiv

    (Allow logins to each node as the oracle user account without the need for a password.)

    +linux1 oracle
    +linux2 oracle
    +int-linux1 oracle
    +int-linux2 oracle

    /etc/rc.local

    (Loading the hangcheck-timer kernel module.)

    #!/bin/sh
    #
    # This script will be executed *after* all the other init scripts.
    # You can put your own initialization stuff in here if you don't
    # want to do the full Sys V style init stuff.

    touch /var/lock/subsys/local

    # +---------------------------------------------------------+
    # | HANGCHECK TIMER                                         |
    # | (I do not believe this is required, but doesn't hurt)   |
    # +---------------------------------------------------------+

    /sbin/modprobe hangcheck-timer

    15. Check RPM Packages for Oracle 10g Release 2
    Perform the following checks on all nodes in the cluster!

    When installing the Linux O/S (CentOS Enterprise Linux or RHEL4), you should verify that all required RPMs are installed. If you followed the instructions I used for installing Linux, you would have installed everything, in which case you will have all the required RPM packages. However, if you performed another installation type (i.e. Advanced Server), you may have some packages missing and will need to install them. All of the required RPMs are on the Linux CDs/ISOs.

    Check Required RPMs

    The following packages (or higher versions) must be installed:

    make-3.80-5
    glibc-2.3.4-2.9
    glibc-devel-2.3.4-2.9
    glibc-headers-2.3.4-2.9
    glibc-kernheaders-2.4-9.1.87
    cpp-3.4.3-22.1
    compat-db-4.1.25-9
    compat-gcc-32-3.2.3-47.3
    compat-gcc-32-c++-3.2.3-47.3
    compat-libstdc++-33-3.2.3-47.3


    compat-libstdc++-296-2.96-132.7.2
    openmotif-2.2.3-9.RHEL4.1
    setarch-1.6-1

    To query package information (gcc and glibc-devel, for example), use the "rpm -q <package_name> [, <package_name>]" command as follows:

    # rpm -q gcc glibc-devel
    gcc-3.4.3-22.1
    glibc-devel-2.3.4-2.9

    If you need to install any of the above packages, use "rpm -Uvh". For example, to install the gcc 3.4.3-22.1 package, use:

    # rpm -Uvh gcc-3.4.3-22.1.i386.rpm
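Rather than querying packages one at a time, the whole required list can be checked in a loop. The sketch below fakes the installed set with a hard-coded list so it runs anywhere; on a real node you would replace the `case` test with `rpm -q "$pkg" >/dev/null 2>&1`:

```shell
# Report which required packages are absent from an "installed" list.
# Both lists here are illustrative stand-ins, not a real rpm database.
required="make glibc glibc-devel glibc-headers cpp compat-db"
installed="make glibc cpp compat-db"
for pkg in $required; do
    case " $installed " in
        *" $pkg "*) ;;                        # present -- nothing to do
        *)          echo "MISSING: $pkg" ;;   # would need an rpm -Uvh
    esac
done
```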

    Reboot the System

    If you made any changes to the O/S, reboot all nodes in the cluster before attempting to install any of the Oracle components!!!

    # init 6

    16. Install & Configure OCFS2
    Most of the configuration procedures in this section should be performed on all nodes in the cluster! Creating the OCFS2 filesystem, however, should be executed on only one node in the cluster.

    It is now time to install OCFS2. OCFS2 is a cluster filesystem that allows all nodes in a cluster to concurrently access a device via the standard filesystem interface. This allows for easy management of applications that need to run across a cluster.

    OCFS Release 1 was released in 2002 to enable Oracle RAC users to run the clustered database without having to deal with RAW devices. The filesystem was designed to store database-related files, such as data files, control files, redo logs, archive logs, etc. OCFS2, in contrast, has been designed as a general-purpose cluster filesystem. With it, one can store not only database-related files on a shared disk, but also Oracle binaries and configuration files (a shared Oracle Home), making management of RAC even easier.

    In this guide, you will be using OCFS2 to store the two files that are required to be shared by the Oracle Clusterware software. (Along with these two files, you will also be using this space to store the shared SPFILE for all Oracle RAC instances.)

    See the OCFS2 project page (http://oss.oracle.com/projects/ocfs2/) for more information on OCFS2 (including Installation Notes) for Linux.

    Download OCFS2

    First, download the OCFS2 distribution. The OCFS2 distribution comprises two sets of RPMs: the kernel module and the tools. The kernel module is available for download from http://oss.oracle.com/projects/ocfs2/files/ and the tools from http://oss.oracle.com/projects/ocfs2-tools/files/.

    Download the appropriate RPMs, starting with the key OCFS2 kernel module (the driver). From the three available kernel modules (below), download the one that matches the distribution, platform, kernel version, and kernel flavor (smp, hugemem, psmp, etc.).

    ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1.i686.rpm - (for single processor)

    or

    ocfs2-2.6.9-11.0.0.10.3.ELsmp-1.0.4-1.i686.rpm - (for multiple processors)

    or

    ocfs2-2.6.9-11.0.0.10.3.ELhugemem-1.0.4-1.i686.rpm - (for hugemem)

    For the tools, simply match the platform and distribution. You should download both the OCFS2 tools and the OCFS2 console applications.

    ocfs2-tools-1.0.2-1.i386.rpm - (OCFS2 tools)
    ocfs2console-1.0.2-1.i386.rpm - (OCFS2 console)

    The OCFS2 Console is optional but highly recommended. The ocfs2console application requires e2fsprogs, glib2 2.2.3 or later, vte 0.11.10 or later, pygtk2 (EL4) or python-gtk (SLES9) 1.99.16 or later, python 2.3 or later, and ocfs2-tools.

    If you were curious as to which OCFS2 driver release you need, use the OCFS2 release that matches your kernel version. To determine your kernel release:

    $ uname -a
    Linux linux1 2.6.9-11.0.0.10.3.EL #1 Tue Jul 5 12:20:09 PDT 2005 i686 i686 i386 GNU/Linux
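The kernel flavor suffix in that release string decides which of the three driver RPMs above applies. A small sketch of that match (the release string is hard-coded here for illustration; on a real node use `release=$(uname -r)`):

```shell
release="2.6.9-11.0.0.10.3.ELsmp"   # sample; use $(uname -r) on a live system
case "$release" in
    *hugemem) flavor="hugemem" ;;   # -> ocfs2-...ELhugemem-...i686.rpm
    *smp)     flavor="smp" ;;       # -> ocfs2-...ELsmp-...i686.rpm
    *)        flavor="up" ;;        # uniprocessor -> ocfs2-...EL-...i686.rpm
esac
echo "$flavor"
```

Note that the `*hugemem` arm must come before `*smp`, since any more specific suffix should win.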

    Install OCFS2

    I will be installing the OCFS2 files onto two single-processor machines. The installation process is simply a matter of running the following command on all nodes in the cluster as the root user account:

    $ su -
    # rpm -Uvh ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1.i686.rpm \
          ocfs2console-1.0.2-1.i386.rpm \
          ocfs2-tools-1.0.2-1.i386.rpm
    Preparing...                ########################################### [100%]


       1:ocfs2-tools             ########################################### [ 33%]
       2:ocfs2-2.6.9-11.0.0.10.3 ########################################### [ 67%]
       3:ocfs2console            ########################################### [100%]

    Disable SELinux (RHEL4 U2 Only)

    RHEL4 U2 users (CentOS 4.2 is based on RHEL4 U2) are advised that OCFS2 currently does not work with SELinux enabled. If you are using RHEL4 U2 (which includes you, since you are using CentOS 4.2 here), you will need to disable SELinux (using the tool system-config-securitylevel) to get the O2CB service to execute.

    To disable SELinux, run the "Security Level Configuration" GUI utility:

    # /usr/bin/system-config-securitylevel &

    This will bring up the following screen:

    Figure 6: Security Level Configuration Opening Screen

    Now, click the SELinux tab and uncheck the "Enabled" checkbox. After clicking [OK], you will be presented with a warning dialog. Simply acknowledge this warning by clicking "Yes". Your screen should now look like the following after disabling the SELinux option:


    Figure 7: SELinux Disabled

    After making this change on both nodes in the cluster, each node will need to be rebooted to implement the change:

    # init 6

    Configure OCFS2

    The next step is to generate and configure the /etc/ocfs2/cluster.conf file on each node in the cluster. The easiest way to accomplish this is to run the GUI tool ocfs2console. In this section, we will not only create and configure the /etc/ocfs2/cluster.conf file using ocfs2console, but will also create and start the cluster stack O2CB. When the /etc/ocfs2/cluster.conf file is not present (as will be the case in our example), the ocfs2console tool will create this file along with a new cluster stack service (O2CB) with a default cluster name of ocfs2. This will need to be done on all nodes in the cluster as the root user account:

    $ su -
    # ocfs2console &

    This will bring up the GUI as shown below:


    Figure 8: ocfs2console GUI

    Using the ocfs2console GUI tool, perform the following steps:

    1. Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack (Figure 9) and bring up the "Node Configuration" dialog.

    2. On the "Node Configuration" dialog, click the [Add] button. This will bring up the "Add Node" dialog. In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.1.100 for the first node and linux2 / 192.168.1.101 for the second node. Click [Apply] on the "Node Configuration" dialog - all nodes should now be "Active" as shown in Figure 10. Click [Close] on the "Node Configuration" dialog.

    3. After verifying all values are correct, exit the application using [File] -> [Quit]. This needs to be performed on all nodes in the cluster.

    Figure 9: Starting the OCFS2 Cluster Stack

    The following dialog shows the OCFS2 settings I used for the nodes linux1 and linux2:


    Figure 10: Configuring Nodes for OCFS2

    After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following. This process needs to be completed on all nodes in the cluster, and the OCFS2 configuration file should be exactly the same on all of them:

    node:
        ip_port = 7777
        ip_address = 192.168.1.100
        number = 0
        name = linux1
        cluster = ocfs2

    node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 1
        name = linux2
        cluster = ocfs2

    cluster:
        node_count = 2
        name = ocfs2

    O2CB Cluster Service

Before we can do anything with OCFS2, like formatting or mounting the file system, we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result of the configuration process performed above). The stack includes the following services:

NM: Node Manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners, and status
CONFIGFS: User space driven configuration file system mounted at /config
DLMFS: User space interface to the kernel space DLM

All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). Here is a short listing of some of the more useful commands and options for the o2cb system service.

/etc/init.d/o2cb status

Module "configfs": Not loaded
Filesystem "configfs": Not mounted
Module "ocfs2_nodemanager": Not loaded
Module "ocfs2_dlm": Not loaded
Module "ocfs2_dlmfs": Not loaded
Filesystem "ocfs2_dlmfs": Not mounted

Note that in this example, none of the services are loaded. I did an "unload" right before executing the "status" option. If you were to check the status of the o2cb service immediately after configuring OCFS2 using the ocfs2console utility, they would all be loaded.

/etc/init.d/o2cb load

Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK

Loads all OCFS modules.

    /etc/init.d/o2cb online ocfs2

Starting cluster ocfs2: OK

    The above command will online the cluster we created, ocfs2.

    /etc/init.d/o2cb offline ocfs2

Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

    The above command will offline the cluster we created, ocfs2.

    /etc/init.d/o2cb unload

Cleaning heartbeat on ocfs2: OK
Stopping cluster ocfs2: OK

    The above command will unload all OCFS modules.

    Configure O2CB to Start on Boot

You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services will start on each boot. All the tasks within this section will need to be performed on both nodes in the cluster.

Note: At the time of writing this guide, OCFS2 contains a bug wherein the driver does not get loaded on each boot even after configuring the on-boot properties to do so. After attempting to configure the on-boot properties to start on each boot according to the official OCFS2 documentation, you will still get the following error on each boot:

...
Mounting other filesystems:
mount.ocfs2: Unable to access cluster service
Cannot initialize cluster
mount.ocfs2: Unable to access cluster service
Cannot initialize cluster [FAILED]
...

Red Hat changed the way the service is registered between chkconfig-1.3.11.2-1 and chkconfig-1.3.13.2-1. The O2CB script used to work with the former.

    Before attempting to configure the on-boot properties:

REMOVE the following lines in /etc/init.d/o2cb:

### BEGIN INIT INFO
# Provides: o2cb
# Required-Start:
# Should-Start:
# Required-Stop:
# Default-Start: 2 3 5
# Default-Stop:
# Description: Load O2CB cluster services at system boot.
### END INIT INFO

    Re-register the o2cb service.

# chkconfig --del o2cb
# chkconfig --add o2cb
# chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

# ll /etc/rc3.d/*o2cb*
lrwxrwxrwx 1 root root 14 Sep 29 11:56 /etc/rc3.d/S24o2cb -> ../init.d/o2cb

    The service should be S24o2cb in the default runlevel.

    After resolving this bug, you can continue to set the on-boot properties as follows:

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver. The following questions will determine whether the driver is loaded on boot. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK

    Format the OCFS2 Filesystem

You can now start to make use of the partitions created in the section Create Partitions on the Shared FireWire Storage Device. (Well, at least the first partition!)

If the O2CB cluster is offline, start it. The format operation needs the cluster to be online, as it needs to ensure that the volume is not mounted on some node in the cluster.

Earlier in this document, we created the directory /u02/oradata/orcl under the section Create Mount Point for OCFS / Clusterware. This section contains the commands to create and mount the file system to be used for the Cluster Manager - /u02/oradata/orcl.

Create the OCFS2 Filesystem

Unlike the other tasks in this section, creating the OCFS2 filesystem should only be executed on one node in the RAC cluster. You will be executing all commands in this section from linux1 only.

Note that it is possible to create and mount the OCFS2 file system using either the GUI tool ocfs2console or the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] - [Format].

See the instructions below on how to create the OCFS2 file system using the command-line tool mkfs.ocfs2.

To create the filesystem, use the Oracle executable mkfs.ocfs2. For the purpose of this example, I run the following command only from linux1 as the root user account:

$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradatafiles /dev/sda1

mkfs.ocfs2 1.0.2
Filesystem label=oradatafiles
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=1011675136 (30873 clusters) (246984 blocks)
1 cluster groups (tail covers 30873 clusters, rest cover 30873 clusters)
Journal size=16777216
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing lost+found: done
mkfs.ocfs2 successful

    Mount the OCFS2 Filesystem

Now that the file system is created, you can mount it. Let's first do it using the command line, then I'll show how to include it in /etc/fstab to have it mount on each boot. Mounting the filesystem will need to be performed on all nodes in the Oracle RAC cluster as the root user account.

First, here is how to manually mount the OCFS2 file system from the command line. Remember, this needs to be performed as the root user account:

$ su -
# mount -t ocfs2 -o datavolume /dev/sda1 /u02/oradata/orcl

If the mount was successful, you will simply get your prompt back. You should, however, run the following checks to ensure the file system is mounted correctly.

    Let's use the mount command to ensure that the new filesystem is really mounted. This should be performed on all nodes in the RAC cluster:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /u02/oradata/orcl type ocfs2 (rw,_netdev,datavolume)

Note: You are using the datavolume option to mount the new filesystem here. Oracle database users must mount any volume that will contain the Voting Disk file, Cluster Registry (OCR), Data files, Redo logs, Archive logs, and Control files with the datavolume mount option so as to ensure that the Oracle processes open the files with the o_direct flag.

    Any other type of volume, including an Oracle home (not used in this guide), should not be mounted with this mount option.

    The volume will mount after a short delay, usually around five seconds. It does so to let the heartbeat thread stabilize. In a future release, Oracle plansto add support for a global heartbeat, which will make most mounts instantaneous.

    Configure OCFS to Mount Automatically at Startup

Let's review what you've done so far. You downloaded and installed OCFS2, which will be used to store the files needed by the Cluster Manager. After going through the install, you loaded the OCFS2 module into the kernel and then formatted the clustered filesystem. Finally, you mounted the newly created filesystem. This section walks through the steps responsible for mounting the new OCFS2 file system each time the machine(s) are booted.

Start by adding the following line to the /etc/fstab file on all nodes in the RAC cluster:

    /dev/sda1 /u02/oradata/orcl ocfs2 _netdev,datavolume 0 0

Notice the _netdev option for mounting this filesystem. The _netdev mount option is a must for OCFS2 volumes; it indicates that the volume is to be mounted after the network is started and dismounted before the network is shut down.
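Before rebooting, it is easy to double-check that the fstab entry carries both required options. This sketch is my addition (the entry string is embedded so the check runs anywhere; for a live check, pull the line out of /etc/fstab with grep):

```shell
# Sketch: confirm an OCFS2 fstab entry carries both required mount options.
line="/dev/sda1 /u02/oradata/orcl ocfs2 _netdev,datavolume 0 0"
opts=$(echo "$line" | awk '{print $4}')   # 4th field = mount options
for required in _netdev datavolume; do
    case ",$opts," in
        *",$required,"*) echo "$required: present" ;;
        *)               echo "$required: MISSING" ;;
    esac
done
```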

Now, let's make sure that the ocfs2.ko kernel module is being loaded and that the file system will be mounted during the boot process.

If you have been following along with the examples in this article, the actions to load the kernel module and mount the OCFS2 file system should already be enabled. However, you should still check those options by running the following on all nodes in the RAC cluster as the root user account:

$ su -
# chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

The flags for runlevels 2 through 5 should be set to "on".
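The runlevel check can be scripted as well. This sketch is hypothetical and not from the original article; the expected chkconfig output is embedded so it is runnable anywhere, and it flags any of runlevels 2 through 5 that is not "on":

```shell
# Sketch: parse a "chkconfig --list o2cb" line and flag any runlevel 2-5
# that is not set to "on".
line="o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off"
bad=0
for lvl in 2 3 4 5; do
    case "$line" in
        *"${lvl}:on"*) : ;;                         # runlevel enabled, nothing to do
        *) echo "runlevel $lvl is not on"; bad=1 ;;
    esac
done
[ "$bad" -eq 0 ] && echo "o2cb enabled for runlevels 2-5"
```

For a live check, replace the embedded string with `line=$(chkconfig --list o2cb)`.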

    Check Permissions on New OCFS2 Filesystem

Use the ls command to check ownership. The permissions should be set to 0775 with owner "oracle" and group "dba". If this is not the case for all nodes in the cluster (which was the case for me), then it is very possible that the "oracle" UID (175 in this example) and/or the "dba" GID (115 in this example) are not the same across all nodes.

Let's first check the permissions:

# ls -ld /u02/oradata/orcl
drwxr-xr-x 3 root root 4096 Sep 29 12:11 /u02/oradata/orcl

As you can see from the listing above, the oracle user account (and the dba group) will not be able to write to this directory. Let's fix that:

# chown oracle.dba /u02/oradata/orcl
# chmod 775 /u02/oradata/orcl

Let's now go back and re-check that the permissions are correct for each node in the cluster:

# ls -ld /u02/oradata/orcl
drwxrwxr-x 3 oracle dba 4096 Sep 29 12:11 /u02/oradata/orcl
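The same mode check can be scripted rather than eyeballed. The sketch below is my addition and runs against a temporary directory instead of the real /u02/oradata/orcl mount, so it is safe to try anywhere; point `dir` at the real path for an actual check:

```shell
# Sketch: verify a directory carries mode 0775 (as required above).
# Uses a temp directory so it can run without the real mount point.
dir=$(mktemp -d)
chmod 775 "$dir"
mode=$(stat -c '%a' "$dir")    # GNU coreutils stat, octal mode
if [ "$mode" = "775" ]; then
    echo "mode OK (775)"
else
    echo "mode wrong: $mode"
fi
rmdir "$dir"
```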

Adjust the O2CB Heartbeat Threshold

This is a very important section when configuring OCFS2 for use by Oracle Clusterware's two shared files on our FireWire drive. During testing, I was able to install and configure OCFS2, format the new volume, and finally install Oracle Clusterware (with its two required shared files, the voting disk and OCR file, located on the new OCFS2 volume). I was able to install Oracle Clusterware and see the shared drive; however, during my evaluation I was receiving many lock-ups and hangs after about 15 minutes when the Clusterware software was running on both nodes. It always varied which node would hang (either linux1 or linux2 in my example). It also didn't matter whether there was a high I/O load or none at all for it to crash (hang).

Keep in mind that the configuration you are creating is a rather low-end setup with slow disk access to the FireWire drive. This is by no means a high-end setup, and it is susceptible to bogus timeouts.

After looking through the trace files for OCFS2, it was apparent that access to the voting disk was too slow (exceeding the O2CB heartbeat threshold) and causing the Oracle Clusterware software (and the node) to crash.

The solution I used was to simply increase the O2CB heartbeat threshold from its default setting of 7 to 301 (and in some cases as high as 900).

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /u02/oradata/orcl type ocfs2 (rw,_netdev,datavolume)

    You should also verify that the O2CB heartbeat threshold is set correctly (to our new value of 301):

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
301
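For reference, the OCFS2 documentation of this era defines the node-death timeout as (threshold - 1) * 2 seconds, so the default of 7 corresponds to roughly 12 seconds while 301 allows about 10 minutes. A quick arithmetic check (my addition):

```shell
# Heartbeat threshold to timeout, per the OCFS2 formula:
#   timeout_seconds = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2
threshold=301
timeout=$(( (threshold - 1) * 2 ))
echo "threshold=$threshold -> node death timeout=${timeout}s"
```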

    How to Determine OCFS2 Version

    To determine which version of OCFS2 is running, use:

# cat /proc/fs/ocfs2/version
OCFS2 1.0.4 Fri Aug 26 12:31:58 PDT 2005 (build 0a22e88ab648dc8d2a1f9d7796ad101c)

17. Install & Configure Automatic Storage Management (ASMLib 2.0)

Most of the installation and configuration procedures should be performed on all nodes. Creating the ASM disks, however, will only need to be performed on a single node within the cluster.

    In this section, you will configure ASM to be used as the filesystem / volume manager for all Oracle physical database files (data, online redo logs,control files, archived redo logs) and a Flash Recovery Area.

The ASM feature was introduced in Oracle Database 10g Release 1 and is used to relieve the DBA of having to manage individual files and drives. ASM is built into the Oracle kernel and provides the DBA with a way to manage thousands of disk drives 24x7 for both single and clustered instances of Oracle. All the files and directories to be used for Oracle will be contained in a disk group. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns.

I start this section by first discussing the ASMLib 2.0 libraries and its associated driver for Linux, plus other methods for configuring ASM with Linux. Next, I will provide instructions for downloading the ASM drivers (ASMLib Release 2.0) specific to your Linux kernel. Last, you will install and configure the ASMLib 2.0 drivers while finishing off the section with a demonstration of how to create the ASM disks.

    If you would like to learn more about the ASMLib, visit www.oracle.com/technology/tech/linux/asmlib/install.html.

Methods for Configuring ASM with Linux (For Reference Only)

When I first started this guide, I wanted to focus on using ASM for all database files. I was curious to see how well ASM works with this test RAC configuration with regard to load balancing and fault tolerance.

    There are two different methods to configure ASM on Linux:

ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. Raw devices are not required with this method as ASMLib works with block devices.

ASM with Standard Linux I/O: This method creates all Oracle database files on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create raw devices for all disk partitions used by ASM.

    We will examine the "ASM with ASMLib I/O" method here.

Before discussing the installation and configuration details of ASMLib, however, I thought it would be interesting to talk briefly about the second method, "ASM with Standard Linux I/O." If you were to use this method (which is a perfectly valid solution, just not the method we will be implementing here), you should be aware that Linux does not use raw devices by default. Every Linux raw device you want to use must be bound to the corresponding block device using the raw driver. For example, if you wanted to use the partitions we've created (/dev/sda2, /dev/sda3, and /dev/sda4), you would need to perform the following tasks:

1. Edit the file /etc/sysconfig/rawdevices as follows:

   # raw device bindings
   # format:  <rawdev> <major> <minor>
   #          <rawdev> <blockdev>
   # example: /dev/raw/raw1 /dev/sda1
   #          /dev/raw/raw2 8 5
   /dev/raw/raw2 /dev/sda2
   /dev/raw/raw3 /dev/sda3
   /dev/raw/raw4 /dev/sda4

   The raw device bindings will be created on each reboot.

2. You would then want to change ownership of all raw devices to the "oracle" user account:

   # chown oracle:dba /dev/raw/raw2; chmod 660 /dev/raw/raw2
   # chown oracle:dba /dev/raw/raw3; chmod 660 /dev/raw/raw3
   # chown oracle:dba /dev/raw/raw4; chmod 660 /dev/raw/raw4

3. The last step is to reboot the server to bind the devices or simply restart the rawdevices service:

   # service rawdevices restart

    As I mentioned earlier, the above example was just to demonstrate that there is more than one method for using ASM with Linux. Now let's move on tothe method that will be used for this article, "ASM with ASMLib I/O."
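The bindings in step 1 above follow a simple pattern, so they can be generated mechanically from a partition list. This sketch is my addition; it only prints the lines rather than writing /etc/sysconfig/rawdevices, so it is safe to run anywhere:

```shell
# Sketch: emit raw device bindings for the three ASM partitions in this guide.
n=2
for part in /dev/sda2 /dev/sda3 /dev/sda4; do
    echo "/dev/raw/raw${n} ${part}"
    n=$((n + 1))
done
```

Redirect the output (appending to /etc/sysconfig/rawdevices as root) to apply it for real.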

    Download the ASMLib 2.0 Packages

First download the ASMLib 2.0 libraries (from OTN) and the driver (from my web site). Like OCFS, you need to download the version for the Linux kernel and number of processors on the machine. You are using kernel 2.6.9-11.0.0.10.3.EL #1 on single-processor machines:

# uname -a
Linux linux1 2.6.9-11.0.0.10.3.EL #1 Tue Jul 5 12:20:09 PDT 2005 i686 i686 i386 GNU/Linux

Oracle ASMLib Downloads for Red Hat Enterprise Linux 4 AS

oracleasm-2.6.9-11.0.0.10.3.EL-2.0.0-1.i686.rpm - (Driver for "up" kernels)
  -OR-
oracleasm-2.6.9-11.0.0.10.3.ELsmp-2.0.0-1.i686.rpm - (Driver for "smp" kernels)
oracleasmlib-2.0.0-1.i386.rpm - (Userspace library)
oracleasm-support-2.0.0-1.i386.rpm - (Driver support files)
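Picking the right driver rpm ("up" vs "smp") can be derived from the kernel string. A small sketch, my addition; the kernel value is hard-coded here so the example is self-contained, but in practice you would use `uname -r`:

```shell
# Sketch: select the matching oracleasm driver rpm for the running kernel.
kernel="2.6.9-11.0.0.10.3.EL"        # normally: kernel=$(uname -r)
rpm="oracleasm-${kernel}-2.0.0-1.i686.rpm"
case "$kernel" in
    *smp) echo "$rpm (smp driver)" ;;
    *)    echo "$rpm (up driver)" ;;
esac
```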

    Install ASMLib 2.0 Packages

This installation needs to be performed on all nodes as the root user account:

$ su -
# rpm -Uvh oracleasm-2.6.9-11.0.0.10.3.EL-2.0.0-1.i686.rpm \
      oracleasmlib-2.0.0-1.i386.rpm \
      oracleasm-support-2.0.0-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-11.0.0.########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Configure and Load the ASMLib 2.0 Packages

Now that you have downloaded and installed the ASMLib packages for Linux, you need to configure and load the ASM kernel module. This task needs to be run on all nodes as root:

$ su -
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

    Create ASM Disks for Oracle

In Section 10, you created three Linux partitions to be used for storing Oracle database files like online redo logs, database files, control files, archived redo log files, and a flash recovery area.

    Here is a list of those partitions we created for use by ASM:

    Oracle ASM Partitions Created

File System Type   Partition   Size    Mount Point   File Types

ASM                /dev/sda2   50GB    ORCL:VOL1     Oracle Database Files

ASM                /dev/sda3   50GB    ORCL:VOL2     Oracle Database Files

ASM                /dev/sda4   100GB   ORCL:VOL3     Flash Recovery Area

    Total 200GB

The last task in this section is to create the ASM disks. Creating the ASM disks only needs to be done on one node as the root user account. I will be running these commands on linux1. On the other nodes, you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on all nodes to verify that all ASM disks were created and available.

$ su -
# /etc/init.d/oracleasm createdisk VOL1 /dev/sda2
Marking disk "/dev/sda2" as an ASM disk [ OK ]

# /etc/init.d/oracleasm createdisk VOL2 /dev/sda3
Marking disk "/dev/sda3" as an ASM disk [ OK ]

# /etc/init.d/oracleasm createdisk VOL3 /dev/sda4
Marking disk "/dev/sda4" as an ASM disk [ OK ]
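Since the three createdisk calls follow an obvious pattern, they can also be written as a loop. In this sketch (my addition) the command is echoed rather than executed, so it runs without the oracleasm service installed; drop the echo and run it as root to do the real work:

```shell
# Sketch: the three ASM disk creations as a loop (volumes/devices from this guide).
i=1
for dev in /dev/sda2 /dev/sda3 /dev/sda4; do
    echo "/etc/init.d/oracleasm createdisk VOL${i} ${dev}"
    i=$((i + 1))
done
```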

    Note: If you are repeating this guide using the same hardware (actually, the same shared drive), you may get a failure when attempting to create the ASM disks. If you do receive a failure, try listing all ASM disks using:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

As you can see, the results show that I have three volumes already defined. If you have the three volumes already defined from a previous run, go ahead and remove them using the following commands and then create them again using the above (oracleasm createdisk) commands:

# /etc/init.d/oracleasm deletedisk VOL1
Removing ASM disk "VOL1" [ OK ]
# /etc/init.d/oracleasm deletedisk VOL2
Removing ASM disk "VOL2" [ OK ]
# /etc/init.d/oracleasm deletedisk VOL3
Removing ASM disk "VOL3" [ OK ]

    On all other nodes in the cluster, you must perform a scandisk to recognize the new volumes:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

You can now test that the ASM disks were successfully created by using the following command on all nodes as the root user account:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

18. Download Oracle 10g RAC Software

The following download procedures only need to be performed on one node in the cluster!

The next logical step is to install Oracle Clusterware Release 2 (10.2.0.1.0), Oracle Database 10g Release 2 (10.2.0.1.0), and finally the Oracle Database 10g Companion CD Release 2 (10.2.0.1.0) for Linux x86 software. However, you must first download and extract the required Oracle software packages from OTN.

You will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the cluster, namely linux1. You will perform all installs from this machine. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration we set up in Section 13.

Log in to one of the nodes in the Linux RAC cluster as the oracle user account. In this example, you will be downloading the required Oracle software to linux1 and saving it to /u01/app/oracle/orainstall.

Downloading and Extracting the Software

First, download the Oracle Clusterware Release 2 (10.2.0.1.0), Oracle Database 10g Release 2 (10.2.0.1.0), and Oracle Database 10g Companion CD Release 2 (10.2.0.1.0) software for Linux x86. All downloads are available from the same page.

As the oracle user account, extract the three packages you downloaded to a temporary directory. In this example, we will use /u01/app/oracle/orainstall.

    Extract the Oracle Clusterware package as follows:

# su - oracle
$ cd ~oracle/orainstall
$ unzip 10201_clusterware_linux32.zip

    Then extract the Oracle Database Software:

$ cd ~oracle/orainstall
$ unzip 10201_database_linux32.zip

    Finally, extract the Oracle Companion CD Software:

$ cd ~oracle/orainstall
$ unzip 10201_companion_linux32.zip
