Best Practices: Implementing 10g RAC & 10g ASM with
E-Business Suite 11i
Vamsi Mudumba, Senior Manager of Solution Architecture, Product Strategy, Applications Technology Group
Agenda
11i Architecture Overview
11i and RAC Reference Architectures
11i and RAC Architecture Best Practices
Load Balancing and Failover with 11i RAC
Installation and Configuration Best Practices
Parallel Concurrent Processing (PCP) Setup
ASM Overview & Setup
Migrating the 11i 10g RAC Database to ASM
Known Issues and Customer Pain Points
11i Architecture Overview
Application Portal
GUI Services
Reporting
Business Process Mgmt
Mobile Services
Concurrent Processing
Integration
Application & System Mgmt
Oracle9i/10g Database
Data Storage (OLTP-OLAP)
Data Intensive Logic
Desktop Layer
Application Server Layer
Database Layer
Presentation
User Interaction
MS Internet Explorer
Netscape Navigator
Oracle9i Application Server
Technical Platform: Oracle E-Business Suite 11i10
Oracle Applications Manager
Enterprise Portal & Personal Home Page
RAC Overview
Positioning of Oracle Solutions for High Availability
[Figure: unscheduled outages (system failures, data failure & disaster, human error) mapped to Oracle solutions (Real Application Clusters, Data Guard)]
11i and RAC Reference Architectures
Oracle Applications Architecture and RAC: Single Node/tier Configuration
Oracle Database
High Speed Interconnect: FULL Cache Fusion
Read and write data transferred between SGAs
Client Connections
• 11i Applications (6i Forms/Reports, iAS 1.0.2.2.2)
• Oracle10g RAC (both nodes)
Oracle Applications Architecture and RAC: Multi Node/Tier Configuration
Oracle Database
High Speed Interconnect
Apps & Middle-Tier
Client / Hardware Load Balancer (Local Director)
(11i Applications, 6i Forms/Reports, iAS 1.0.2.2.2)
Database Tier: Oracle10g RAC
Extranet (DMZ)
Intranet
Internet
Application Servers
Oracle9i Application Server
Network Load Balancing
Real Application Clusters (RAC)
or Fail-Over Cluster
Server Cluster
Oracle9i Database
Storage
USERS
Example: Available and Scalable System
Extranet (DMZ)
Intranet
Internet
"With Oracle Real Application Clusters (RAC)"
Oracle9i Application Server
Oracle9i Database
Storage
Oracle10g Data Guard "Physical Standby"
USERS
Example: Available and Scalable System
Architecture Best Practices
Architecture Best Practices
Optimum number of nodes
– There is no single optimum number. We have customers running from 2 to 6 nodes in production
Interconnect
– Custom protocols and network hardware are supported, e.g. RSM, UDP, LLT
– Best practice: UDP (TCP for Windows) over Gigabit Ethernet
Is PCP a requirement?
– Yes. PCP is a requirement for RAC customers
Architecture Best Practices
Storage
– Customers use both NAS and SAN
Other architectural recommendations
– A shared application file system is recommended for RAC customers
– The concurrent tier should be separate from the database tier
– Increase the cache value of a sequence if the sequence becomes a bottleneck
– Convert to RAC before converting to ASM
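The sequence-cache recommendation above translates into a simple ALTER SEQUENCE. A minimal shell sketch; the sequence name and cache size are illustrative placeholders, not values prescribed by this deck:

```shell
# Emit the ALTER SEQUENCE statement for a hot sequence.
# APPS.SOME_DOC_SEQ_S and the cache size 1000 are illustrative placeholders.
gen_seq_cache_sql() {
  printf 'ALTER SEQUENCE %s CACHE %s;\n' "$1" "$2"
}

# On a live system, pipe the output into sqlplus, e.g.:
#   gen_seq_cache_sql APPS.SOME_DOC_SEQ_S 1000 | sqlplus -s apps
gen_seq_cache_sql APPS.SOME_DOC_SEQ_S 1000
```

A larger cache reduces cross-instance contention on SYS.SEQ$ at the cost of wider gaps in sequence values after an instance failure.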
Load Balancing & Failover with 11i RAC
Load Balancing Best Practices
Server-side load balancing for Forms and Self-Service
– Out-of-the-box configuration with autoconfig
Batch programs are partitioned by their manager
– Current limitation in transaction managers due to dbms_pipe
– Being rearchitected with AQs in the near future for 11i
Failover: System Availability After Node Failure
So what really happens after a node failure?
The failed node's users receive an error
Surviving users remain online after a brief delay
New and failed users reconnect to surviving nodes
11i and 10g RAC Install Best Practices
Install Checklist
Requirements in the environment before you start
– Have shared storage
– Have a cluster file system (if not using ASM)
– Set up hardware clustering (if not using CSS)
Installation Roadmap
Installation roadmap
– Step 1: Install 10g CRS
– Step 2: Install database software and upgrade
– Step 3: Migrate to RAC
– Step 4: Run autoconfig to configure apps on RAC
– Step 5: Configure PCP and transaction managers
– Step 6: Migrate to ASM
Step 1: Install 10g CRS
Install 10g CRS
Install 10g DB & Upgrade
Migrate to RAC
Configure Apps on RAC
Configure PCP
Migrate to ASM
Pre-Requisites for 10g CRS (10.1.0.3) Installation
10.1.0.3 CRS pre-requisite checks
– Each node should have at least 2 network adapters
– The interface name associated with the network adapters for each network should be the same on all nodes
– Public network: adapters should support TCP/IP
– Private network: must support UDP (Gigabit Ethernet recommended for the interface)
– Verify IP address requirements
– Verify kernel parameter settings (Reference: Doc# B10766_07)
– Configure the Oracle user's environment
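The kernel-parameter verification lends itself to a small script. A sketch only: the parameter names and minimum values below are placeholders (take the authoritative list for your platform from Doc# B10766_07):

```shell
# Compare a current kernel parameter value against a required minimum.
# Parameter names and minimums here are placeholders; use the real list
# from the platform-specific installation guide.
check_kernel_param() {
  name="$1"; current="$2"; required="$3"
  if [ "$current" -ge "$required" ]; then
    echo "OK   $name=$current (>= $required)"
  else
    echo "FAIL $name=$current (< $required)"
  fi
}

# On a live system the current value would come from e.g.:
#   current=$(sysctl -n kernel.shmmax)
check_kernel_param kernel.shmmax 2147483648 536870912
check_kernel_param fs.file-max 65536 65536
```

Run it on every node; the interface-name and IP checks can be scripted the same way.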
Install 10g CRS (10.1.0.3)
Create staging areas for Oracle Cluster Ready Services 10.1.0.3 and Oracle database software 10.1.0.3.
Start runInstaller from the Oracle Cluster Ready Services 10.1.0.3 stage area.
Install CRS Software (10.1.0.3)
At the end of the installation, the installer will prompt for executing root.sh on both nodes. Log in as root and execute root.sh from the ORACLE_HOME specified for the CRS install; this also starts all the CRS services on both cluster nodes. Then execute "<CRS ORACLE_HOME>/bin/olsnodes". If this returns all the cluster node names, your CRS installation is successful.
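The final olsnodes check can be automated by comparing what the clusterware reports against the nodes you expect. A sketch (node names are illustrative):

```shell
# Verify that olsnodes returned every expected cluster node.
# host1/host2 are illustrative node names.
verify_cluster_nodes() {
  expected="$1"   # space-separated expected node names
  actual="$2"     # output of olsnodes, one name per line
  for node in $expected; do
    if ! printf '%s\n' "$actual" | grep -qx "$node"; then
      echo "MISSING: $node"
      return 1
    fi
  done
  echo "CRS installation looks good: all nodes present"
}

# On a live cluster:
#   verify_cluster_nodes "host1 host2" "$("$CRS_HOME"/bin/olsnodes)"
verify_cluster_nodes "host1 host2" "host1
host2"
```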
Administration of CRS Services
Shut down the CRS services using $CRS_HOME/crs/admin/init.crs stop
Start the CRS services using $CRS_HOME/crs/admin/init.crs start
To add a resource to CRS, use $CRS_HOME/bin/crs_register
To check the status of CRS resources, use $CRS_HOME/bin/crs_stat -t
Step 2: Install Database Software
Install Database Software (10.1.0.3)
Start runInstaller from the 10.1.0.3 stage area and install 10.1.0.3
Verify the cluster nodes
Execute root.sh to start VIPCA (VIP Configuration Assistant)
After VIPCA finishes, the Net Configuration Assistant (netca) is executed
– Define listeners
Upgrade CRS & Database software to 10.1.0.4
Download the Oracle Database 10.1.0.4 patch 4163362.
Set the ORACLE_HOME variable to the 10.1.0.3 CRS_HOME installed earlier in Step 1.
Shut down all the CRS services on all the nodes in the cluster.
Run "runInstaller" from the 10.1.0.4 stage area and upgrade the CRS software to 10.1.0.4.
Set the ORACLE_HOME variable to the 10.1.0.3 database home installed in Step 2.
Run "runInstaller" from the 10.1.0.4 stage area and upgrade the database software to 10.1.0.4.
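The upgrade sequence above can be captured in a wrapper script. This is a sketch only: the home and staging paths are assumed examples, and DRY_RUN=1 (the default here) prints each step instead of executing it:

```shell
# Sketch of the 10.1.0.4 upgrade flow: CRS home first, then the DB home.
# Paths are assumed examples; the patch 4163362 stage location is a placeholder.
DRY_RUN=${DRY_RUN:-1}
CRS_HOME=${CRS_HOME:-/u01/app/oracle/product/10.1.0/crs}
DB_HOME=${DB_HOME:-/u01/app/oracle/product/10.1.0/db}
STAGE=${STAGE:-/stage/10104}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# Stop CRS on every node, then upgrade the CRS home.
run "$CRS_HOME/crs/admin/init.crs" stop
export ORACLE_HOME="$CRS_HOME"
run "$STAGE/runInstaller"

# Then upgrade the database software home.
export ORACLE_HOME="$DB_HOME"
run "$STAGE/runInstaller"
```

The ordering matters: the clusterware stack is upgraded before the database software that depends on it.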
Upgrade Oracle 9i(9.2.0.5) to 10g (10.1.0.4)
Verify the /etc/oratab file for correct entries for the 9i database and the 9i Oracle home.
Set the ORACLE_HOME environment variable to the 10g database Oracle home installed in the previous step.
Run dbua from $ORACLE_HOME/bin
Upgrading database (10.1.0.4)
After the upgrade process is done, run the $ORACLE_HOME/nls/data/old/cr9idata.pl script. This will create the $ORACLE_HOME/nls/data/9idata directory. If you unchecked the "Compile invalid objects" option, run $ORACLE_HOME/rdbms/admin/utlrp.sql to compile all invalid objects.
Enable Autoconfig on database tier (10g ORACLE_HOME)
Copy the appsutil, appsoui, and oui22 directories from the 9i ORACLE_HOME to the 10g ORACLE_HOME
Enable Autoconfig on database tier (10g ORACLE_HOME)
Modify the listener.ora and tnsnames.ora as per the samples below

Sample listener.ora:

<SID> =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC<SID>))
    (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = <db_port>)))

SID_LIST_<SID> =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = <10g oracle home path>)
      (SID_NAME = <SID>))
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = <10g oracle home path>)
      (PROGRAM = extproc)))
Enable Autoconfig on database tier (10g ORACLE_HOME)

STARTUP_WAIT_TIME_<SID> = 0
CONNECT_TIMEOUT_<SID> = 10
TRACE_LEVEL_<SID> = OFF
LOG_DIRECTORY_<SID> = <10g oracle home path>/network/admin
LOG_FILE_<SID> = <SID>
TRACE_DIRECTORY_<SID> = <10g oracle home path>/network/admin
TRACE_FILE_<SID> = <SID>
ADMIN_RESTRICTIONS_<SID> = OFF
Sample TNSNAMES.ora file
<SID> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = host2)(PORT = <db_port>))
    (CONNECT_DATA =
      (SERVICE_NAME = <database name>)
      (INSTANCE_NAME = <SID>)))

LISTENER_<SID> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = host2)(PORT = <db_port>)))
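Once the database runs as RAC, the application tiers connect through load-balanced aliases rather than a single-host entry. A hypothetical two-node example (host names, port, and service name are placeholders; autoconfig generates the real *_BALANCE aliases):

```
PROD_BALANCE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = tcp)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = tcp)(HOST = host2)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)))
```

LOAD_BALANCE distributes new connections across both listeners; FAILOVER lets a connection attempt fall through to the surviving address if one node is down.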
Step 3: Convert 10g to RAC
Convert 10g Database to RAC
Convert to RAC using the adclone utilities:
• perl adpreclone.pl database
• perl adcfgclone.pl database
Execute the adcfgclone.pl script
Step 4: Establishing RAC on the Apps Tier
Establish Applications Environment for RAC on Apps Tier
Log on to the application tier node
Source the applications environment
Execute autoconfig using $AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>
Repeat these steps on all application tier nodes
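Repeating the autoconfig run on every application tier node can be scripted. A sketch: the node names are illustrative, the context file placeholder is kept from the slide, and DRY_RUN=1 (the default here) previews the commands instead of running them:

```shell
# Run autoconfig on each application tier node in turn.
# Node names are illustrative; <context_file> is each node's own context file.
DRY_RUN=${DRY_RUN:-1}
APP_NODES=${APP_NODES:-"appnode1 appnode2"}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

for node in $APP_NODES; do
  # Source the apps environment on the remote node, then execute adconfig.sh
  # with that node's context file.
  run ssh "$node" '. $APPL_TOP/APPSORA.env && $AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>'
done
```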
Setting up Load Balancing
Run the Context Editor through the Oracle Applications Manager interface to set the values of "Tools OH TWO_TASK", "iAS OH TWO_TASK" and "Apps JDBC Connect Alias".
To load balance the Forms-based applications database connections, set the value of "Tools OH TWO_TASK" to point to the <database_name>_806_balance alias generated in the tnsnames.ora file
– (e.g. servicename_806_BALANCE)
Setting up Load Balancing
To load balance the self-service applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file
Execute autoconfig using $AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>
Restart the applications processes using the latest scripts generated by the autoconfig execution
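Before repointing TWO_TASK, it is worth confirming that autoconfig actually generated the balance alias. A small sketch (the alias name and sample file are illustrative):

```shell
# Confirm a TNS alias exists before pointing TWO_TASK at it.
# PROD_806_BALANCE and the sample file are illustrative placeholders.
has_tns_alias() {
  grep -qi "^[[:space:]]*$1[[:space:]]*=" "$2"
}

# Example against a throwaway file; a real check would read $TNS_ADMIN/tnsnames.ora.
cat > /tmp/tnsnames.sample <<'EOF'
PROD_806_BALANCE=(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)))
EOF
if has_tns_alias PROD_806_BALANCE /tmp/tnsnames.sample; then
  echo "alias present; safe to set Tools OH TWO_TASK"
fi
```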
Step 5: PCP Setup
Parallel Concurrent Processing Setup: Prerequisites
To set up PCP in a RAC environment, the application must be configured to use GSM (Generic Service Management); refer to MetaLink Note 210122.1. Ensure that all patches related to parallel concurrent processing are applied to the environment; see the Appendix of MetaLink Note 312731.1 for more information.
Parallel Concurrent Processing Setup
Edit the applications context file using the context editor: set APPLDCP=ON, set the "Concurrent Manager TWO_TASK" value to the instance alias, and set APPLFSTT to a semicolon-separated list of TWO_TASK values.
Execute autoconfig using $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh on all concurrent nodes.
Source the application environment using $APPL_TOP/APPSORA.env
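Building the semicolon-separated APPLFSTT failover list from the instance aliases can be scripted. A sketch (PROD1/PROD2 are illustrative instance aliases; use the aliases from your own tnsnames.ora):

```shell
# Build a semicolon-separated APPLFSTT value from instance TWO_TASK aliases.
# PROD1 and PROD2 are illustrative placeholders.
build_applfstt() {
  out=""
  for alias in "$@"; do
    if [ -z "$out" ]; then out="$alias"; else out="$out;$alias"; fi
  done
  printf '%s\n' "$out"
}

APPLFSTT=$(build_applfstt PROD1 PROD2)
echo "APPLFSTT=$APPLFSTT"
```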
Parallel Concurrent Processing Setup
Check the configuration files tnsnames.ora and listener.ora located under the 8.0.6 home at $ORACLE_HOME/network/admin/<context>. Ensure that the FNDSM and FNDFS entries carry the information for all the other concurrent nodes. Restart the application listener processes on each application node.
Parallel Concurrent Processing Setup: Define Nodes
Using the System Administrator responsibility, navigate to the Install > Nodes screen and ensure that each node in the cluster is registered. Navigate to the Concurrent > Manager > Define screen and set up the primary and secondary node names for all the concurrent managers according to the desired configuration for each node's workload. The Internal Concurrent Manager should be defined on the primary PCP node only. Verify that the Internal Monitor for each node is defined with the correct primary and secondary node specification and work shift details.
PCP: Setup Transaction Managers
Define a transaction manager for each node on which a RAC instance is running. Rename these transaction managers with the node names (e.g. INV Manager_host2). Define the primary node as the node for which the transaction manager has been defined. Assign the standard work shift and processes for these transaction managers. Repeat the above steps for all concurrent managers of type transaction manager.
PCP: Setup Transaction Managers
Update instance_number in fnd_concurrent_queues to the instance number of the Internal Concurrent Manager's primary PCP node using the following statement:

UPDATE fnd_concurrent_queues
SET instance_number = <instance_number running on the primary PCP node>
WHERE concurrent_queue_name = 'FNDICM';
PCP: Setup Transaction Managers
Set the $APPLCSF environment variable on all the CP nodes to point to a log directory on a shared file system. Stop and restart the concurrent managers. Verify the concurrent manager startup by navigating to the Concurrent > Manager > Administer screen. Verify the actual and target processes and the node names for each concurrent manager as distributed across the nodes.
Concurrent Processing: Current Limitations
Limitation due to dbms_pipe
– 1:1 mapping of RAC nodes to CP nodes
– Issues with transaction manager programs when a CP node is not available (when Forms is load balanced)
Support of service name during startup
FNDSM and apps listener failures don't trigger PCP failover
Need control of specific child processes to run on a particular node: requires support for the 10g listener format (using VIP)
Automatic Storage Management (ASM) - Overview
Portable, high performance cluster file system
Data is spread across disks for optimal performance and resource utilization
Integrated mirroring across disks
Removes the need for a third-party volume manager and file system
Automatic Storage Management
The Operational Stack
[Diagram] Traditional stack: Tables, Tablespace, Files, File System, Logical Volumes, Disks. ASM stack: Tables, Tablespace, Disk Group. Both sit over networked storage (SAN, NAS, DAS).
The Operational Stack: Adding a Disk

TRADITIONAL:
1. Add disk to OS
2. Create volume(s) with volume manager
3. Create file system over volume
4. Figure out data to move to new disk
5. Move data to new files
6. Rename files in database
7. Re-tune I/O

ASM:
1. Add disk to OS
2. Issue the Add Disk command
ASM Architecture: Non-RAC
Pool of Storage
ASM Instance
Non-RAC Database
Server
Oracle DB Instance
Disk Group
ASM Architecture – RAC
Clustered Pool of Storage
ASM Instance ASM Instance
RAC Database
Clustered Servers
Oracle DB Instance
Oracle DB Instance
Disk Group
ASM Disk Groups
A pool of disks managed as a logical unit
Partitions total disk space into uniform-sized megabyte units
ASM spreads each file evenly across all disks in a disk group
Coarse or fine grain striping based on file type
Disk Group
ASM Failure Groups
Failure Group 1 Failure Group 2
Controller 1 Controller 2
A failure group is a set of disks sharing a common resource whose failure needs to be tolerated
– Redundant copies of an extent are stored in separate failure groups
Failure groups are assigned by DBAs or automatically by ASM
Disk Group
ASM Best Practices
Manageability
o Create 3 separate, independent storage areas for software, active database files, and recovery-related files.
Disk Group
o Decide on the disk group redundancy before creating the disk group
o Mount all the required disk groups at once
o Create two disk groups (database area, flash recovery area)
o Create disk groups using a large number of similar disks
– same size characteristics
– same performance characteristics
ASM Best Practices
ASM Disks
o Make sure disks span multiple backend disk adapters
o Implement multiple access paths to the storage array using two or more HBAs
o Deploy multi-pathing software over these multiple HBAs to provide I/O load-balancing and failover capabilities
ASM instance
o Start the ASM instance before starting the database instance
o Always shut down the database instances associated with an ASM instance first
o Only one ASM instance per node
ASM Migration Steps
ASM Migration Roadmap
– Step 1: Create ASM instances on all RAC nodes
– Step 2: Create ASM disk groups and mount them on all ASM instances
– Step 3: Migrate the database using RMAN
– Note: the database has already been converted to RAC
Create ASM Instances on All the Database Nodes in the Cluster
Install the 10g software in a separate ORACLE_HOME
Create a new init.ora with the following parameters:

*.instance_type='asm'
*.asm_diskgroups='+dg1'
*.large_pool_size=12m
*.asm_diskstring='/dev/rdsk/dsk*'
*.asm_power_limit=11
*.remote_login_passwordfile='shared'
*.user_dump_dest='/u01/app/admin/+ASM/udump'
*.background_dump_dest='/u01/app/admin/+ASM/bdump'
*.core_dump_dest='/u01/app/admin/+ASM/cdump'
Repeat this process on all the database nodes
Create ASM Diskgroups
Create an ASM disk group using the "CREATE DISKGROUP" command or the OEM console
Select the ASM disk group redundancy from:
– External (no ASM mirroring; redundancy provided by the storage array)
– Normal (2-way mirroring, fine or coarse striping)
– High (3-way mirroring, fine or coarse striping)
Mount the disk group on all ASM instances in the cluster. For automatic mounting of disk groups, add the disk group name to the init.ora of the ASM instances
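As an illustration of the "CREATE DISKGROUP" command (disk paths, disk group name, and the failure-group layout here are hypothetical), a normal-redundancy group split across two controllers might look like:

```
CREATE DISKGROUP dg1 NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/rdsk/dsk1', '/dev/rdsk/dsk2'
  FAILGROUP fg2 DISK '/dev/rdsk/dsk3', '/dev/rdsk/dsk4';
```

With normal redundancy, ASM keeps the two mirror copies of each extent in different failure groups, so losing one controller's disks leaves a full copy of the data.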
Create ASM Diskgroups
Create ASM Diskgroups / Add Disks
Migrate Database to ASM (Using RMAN)
Shut down all instances of the database that you want to migrate to ASM.
Perform the following pre-migration activities:
– Log on to any of the database nodes using the operating system user
– Start the instance and determine the "dbid" using rman TARGET /
– Note the value of this "dbid"
– Obtain the filenames of the control files, datafiles, and online redo logs for your database
– Generate RMAN commands to undo the ASM migration, in case a rollback is needed
– Shut down the instance
Migrate Database to ASM (Using RMAN)
Disable block change tracking for the database from the SQL prompt:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Modify the initialization parameter file of the target database: set DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n (where n=1-4) to refer to the desired ASM disk groups.
Connect to the target database using "rman TARGET /" and start the database in "nomount" mode:
RMAN> STARTUP NOMOUNT
Migrate Database to ASM (Using RMAN)
Restore the control file into its new location from the location specified in the old SPFILE or PFILE:
RMAN> RESTORE CONTROLFILE FROM 'filename_of_old_control_file';
Note: if you have multiple control files, restore them one by one into the new control files.
Mount the database:
RMAN> ALTER DATABASE MOUNT;
Copy the database into the ASM disk group using the following command:
Migrate Database to ASM (Using RMAN)
RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';
Switch all datafiles to the new ASM disk group:
RMAN> SWITCH DATABASE TO COPY;
Open the database:
RMAN> ALTER DATABASE OPEN;
Add temp files using:
RMAN> SQL "ALTER TABLESPACE tablespace_name ADD TEMPFILE";
Migrate the redo logs using the "Migrating Online Redo Logs to ASM Storage" script.
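Collected in one place, the RMAN migration sequence from the preceding slides looks like this (the disk group name is illustrative; keep the control file placeholder until you substitute your own path):

```
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM 'filename_of_old_control_file';
RMAN> ALTER DATABASE MOUNT;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DG1';
RMAN> SWITCH DATABASE TO COPY;
RMAN> ALTER DATABASE OPEN;
RMAN> SQL "ALTER TABLESPACE TEMP ADD TEMPFILE";
```

BACKUP AS COPY writes image copies of every datafile into the disk group, and SWITCH DATABASE TO COPY repoints the control file at those copies, so the database opens directly from ASM.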
Migrate Database to ASM (Using RMAN)
Repeat the following steps to start up the other instances of this database in the cluster.
Login to each database node using database tier operating system account.
Modify the initialization parameter file of the target database
o Change control_files to point to the new location and name of the control file (e.g. "+DG1/cntrl1")
o Set DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n (where n=1-4) to refer to the ASM disk groups used in the earlier migration
Migrate Database to ASM (Using RMAN)
Start up the instance on each database node in the cluster.
Migrate the redo logs corresponding to this instance using "Migrating Online Redo Logs to ASM Storage" script.
Useful ASM V$ Views
V$ASM_DISKGROUP
– State of all disk groups found on the disks in asm_diskstring
V$ASM_DISK
– View of all disks in asm_diskstring: ASM name, OS device path, space usage, performance statistics, redundancy
V$ASM_FILE
– One row for every ASM file in every disk group
V$ASM_OPERATION
– Information about every long-running operation in the ASM instance
ASM Debugging Techniques
All instances
– alert.log
– Background .trc files
– Console log (/var/log/messages)
Problem instances
– .trc file that encountered error(s)
Any installation errors or messages
ASM instance init.ora parameters
– show parameter ASM_
ASM Debugging Techniques
If there is a problem mounting a disk group or adding/dropping a disk:
– select * from V$ASM_DISKGROUP
– select GROUP_NUMBER, DISK_NUMBER, FAILGROUP, NAME, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, LIBRARY, PATH from V$ASM_DISK
Space problems:
– V$ASM_ queries from the prior slide
– ALTER DISKGROUP ... REBALANCE (check V$ASM_OPERATION for completion)
– ALTER DISKGROUP ... CHECK ALL
ASM Debugging Techniques: CRS Stack
CSS daemon logs in $ORA_CRS_HOME/css/log
– ocssdN.log
– ocssdN.blg
CSS daemon init output files in $ORA_CRS_HOME/init
CSS daemon startup files (if any), e.g. /etc/init.d/init.cssd
Oracle Cluster Repository (OCR)
– output of 'ocrdump'
– Provide output from all nodes to rule out OCR configuration errors
Known Issues & Pain Points
Known Issues
ASM migration
– adcfgclone process limitations
– Need to migrate to RAC before migrating to ASM
Cloning
– No complete automated solution available for cloning RAC to non-RAC
Raw devices
– No out-of-the-box support for installing on raw devices
– Raw installation involves migration and is a consulting effort
Known Issues
Installation
– Autoconfig execution required on all database nodes
– No autoconfig support for ASM configuration
References
Configuring Oracle E-Business Suite Release 11i with 10g RAC and 10g ASM: MetaLink Note 312731.1
Oracle Applications and Database - FAQ: MetaLink Note 285267.1
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration: MetaLink Note 279956.1
E-Business Suite 11i on RAC: Configuring Database Load Balancing & Failover: MetaLink Note 294652.1