Setting up ACFS

Post on 11-Jan-2016


Setting up ACFS in 11gR2.3 (an implementation)

Mathijs Bruggink.

General

Purpose:
- Central/shared logging
- Shared location for exports (dumps)

Setting up standards:
- One ACFS disk group per database (<Database>_ACFS01).
- ACFS disk group set up with NORMAL redundancy.
- Set up a volume; volume name: <Database>_ACFS.
- In ASMCA: create the ACFS as a Database Home File System. Cluster file system: /opt/oracle/<DATABASE>.
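If you prefer the command line over ASMCA, the volume can also be created with asmcmd and formatted with mkfs. This is a sketch only: the disk group name, size, and device path below are examples, not values from this cluster; check the real device name with volinfo.

```shell
# As the Grid Infrastructure owner - names and size are examples, adjust to your site:
asmcmd volcreate -G DBNAME_ACFS01 -s 10G DBNAME_ACFS   # create the ASM volume
asmcmd volinfo   -G DBNAME_ACFS01 DBNAME_ACFS          # note the "Volume Device" line

# As root, format the volume device reported above (example device name):
/sbin/mkfs -t acfs /dev/asm/dbname_acfs-378
```

Going through ASMCA as described above has the advantage that it also generates the root script that registers the mount as a clusterware resource.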

In Grid Infra:
- Create a dependency between the database and the ACFS file system.
- Create a dependency between the listener and the ACFS file system.

Otherwise your database and listener will NOT start automatically via the cluster at a cluster or node reboot.

REQUIREMENTS

Diskgroup

Volume

(Screenshot: shows the volume name entered, the volume device on Linux, and the ASM disk group selected.)

ASM Cluster File System

Note: even though the ACFS is used for logging only, I only got it to work when selecting "Database Home File System", because that way it was registered properly as a resource in the clusterware.


Run the script as the root user on only ONE node. The prompt will return with info similar to the below:

cd /opt/oracle/cfgtoollogs/asmca/scripts
./ACFS_script.sh

ACFS file system is running on Node64r, Node65r, Node66r
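To verify the result on each node, a few standard checks can be run; the device name below is an example, not taken from this cluster.

```shell
# As root: list ACFS file systems known to the ACFS mount registry
/sbin/acfsutil registry

# Show the ACFS file systems currently mounted on this node
mount -t acfs

# Ask the clusterware for the state of the file-system resource (example device name)
srvctl status filesystem -d /dev/asm/dbname_acfs-378
```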

diagnostic_dest on the ACFS: add a dependency similar to this:

Removed the current DB resource in Grid Infra:
srvctl remove database -d DBNAME

Registered the database again:
srvctl add database -d DBNAME -o /opt/oracle/product/11203_ee_64/db -c RAC -m test.nl -p +DATA01/dbname/spfiledbname.ora -t IMMEDIATE -a "DBNAME_ACFS01, DBNAME_DATA01, DBNAME_FRA01" -j "/opt/oracle/DBNAME"


• Modify the ACFS resource:
crsctl modify resource ora.dbname_ACFS01.dbname_ACFS1.ACFS -attr "START_DEPENDENCIES=hard(ora.DBNAME_ACFS01.dg) pullup(ora.DBNAME_ACFS01.dg) pullup:always(ora.asm)"

• Modify the listener:
crsctl modify resource ora.LISTENER_DBNAME.lsnr -attr "START_DEPENDENCIES='hard(type:ora.cluster_vip_net1.type,ora.dbname_ACFS01.dbname_ACFS1.ACFS) pullup(type:ora.cluster_vip_net1.type,ora.dbname_ACFS01.dbname_ACFS1.ACFS)'"

!!!! Note: there is a single quote just after the = and another one just before the last double quote.
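To check that the attributes landed as intended, you can print the resource profiles and inspect START_DEPENDENCIES; a sketch using the resource names from the commands above:

```shell
# Print the full resource profile and filter for the dependency attribute
crsctl stat res ora.dbname_ACFS01.dbname_ACFS1.ACFS -p | grep START_DEPENDENCIES
crsctl stat res ora.LISTENER_DBNAME.lsnr -p | grep START_DEPENDENCIES
```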

Removing the ACFS file system:
!!!! Tip: you need to dismount it on each node of the cluster as the root user:

/bin/umount -t acfs -a
!!!! Note: if you cannot do this because the device is busy, as root use: lsof | grep <SID> (most likely you still have running listener(s) writing into the ACFS, database activity (audit files etc.), or a session of your own in that mount point which you should leave.)

!!!! Tip: as root, for example:
/opt/crs/product/112_ee_64/crs/bin/srvctl remove filesystem -d /dev/asm/ACFS_dbname-378

Unregister the mount point:

Delete the ACFS:

Disable / delete the volume:

1. Remove the ACFS (unregister and delete).
2. Disable and delete the volume.
3. Drop the disk group (which will drop the disks too).
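The three steps above can also be done from the command line; a hedged sketch, assuming the example disk group, volume, and device names used earlier in this document:

```shell
# 1. Unregister the ACFS mount (as root; device name is an example)
/sbin/acfsutil registry -d /dev/asm/dbname_acfs-378

# 2. Disable and delete the volume (as the Grid Infrastructure owner)
asmcmd voldisable -G DBNAME_ACFS01 DBNAME_ACFS
asmcmd voldelete  -G DBNAME_ACFS01 DBNAME_ACFS

# 3. Drop the disk group - this drops its disks as well
sqlplus / as sysasm <<'EOF'
drop diskgroup DBNAME_ACFS01 including contents;
EOF
```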