
Jul 04 2013: What is Oracle RAC Node Eviction

One of the most common and complex issues is node eviction. A node is evicted from the cluster after it kills itself because it is no longer able to service the applications. This generally happens during communication failures between the instances, when an instance is not able to send heartbeat information to the control file, and for various other reasons.

Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the cluster if some critical problem is detected. A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung or severely degraded machine, or a hung ocssd.bin process. The purpose of this node eviction is to maintain the overall health of the cluster by removing bad members.

During failures, to avoid data corruption, the failing instance evicts itself from the cluster group. The node eviction process is reported as Oracle error ORA-29740 in the alert log and LMON trace files.
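When investigating an eviction, a quick first pass is to search the database alert log and the CSS daemon log for the markers mentioned above. A minimal sketch, assuming an 11.2 layout; <SID> and <hostname> are placeholders for your instance name and node name, and the log locations may differ in your environment:

$ grep -i "ORA-29740" alert_<SID>.log
$ grep -i "eviction" $GRID_HOME/log/<hostname>/cssd/ocssd.log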

Consolidated AWR Report for RAC (11gR2)

awrgrpt.sql: This is a cluster-wide AWR report, so you can see information from all the nodes in the same section, as well as aggregated statistics from all the instances at the same time (totals, averages, and standard deviations).

awrgdrpt.sql: This is a cluster-wide stats diff report (like awrddrpt.sql in 11gR1), comparing the statistics between two different snapshot intervals across all nodes in the cluster.
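Both scripts ship under $ORACLE_HOME/rdbms/admin. A minimal sketch of invoking the global report from SQL*Plus (the script itself prompts for the begin and end snapshot IDs):

$ sqlplus / as sysdba
SQL> @?/rdbms/admin/awrgrpt.sql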

Mar 28 2013: Basic RMAN Commands

1. All Backup Sets, Backup Pieces, and Proxy Copies

To list all backup sets, backup pieces, and proxy copies:

RMAN> list backup;

2. Expired Backup List

We can also specify the EXPIRED keyword to identify those backups that were not found during a crosscheck:

RMAN> list expired backup;

3. List Backup by File

You can list copies of datafiles, control files, and archived logs. Specify the desired objects with the listObjList or recordSpec clause. If you do not specify an object, then RMAN displays copies of all database files and archived redo logs. By default, RMAN lists backups in verbose mode, which means that it provides extensive, multiline information.

RMAN> list backup by file;

4. List all Archived Log Files

You can list all archived log files as follows:

RMAN> list archivelog all;

5. Summary Lists

By default the LIST output is highly detailed, but you can also specify that RMAN display the output in summarized form. Specify the desired objects with the listObjList or recordSpec clause. If you do not specify an object, then LIST BACKUP displays all backups.

RMAN> list backup summary;

Dec 09 2012: Difference between Raw Device and Block Device

A raw device reads/writes a stream of zero or more bytes and can be opened using direct I/O. Raw devices can be faster for certain applications, like databases, because they do not contain a file system and, for the same reason, do not use the buffer cache. You do not mount a raw device.

A block device reads/writes bytes in fixed-size blocks, as in disk sectors. The block device is cached: I/O to the device is read into memory, referred to as the buffer cache, in large blocks. Block devices are used to mount filesystems. Each disk has a block device interface through which the system makes the device byte-addressable, so you can write a single byte in the middle of the disk.

262012: ORA-00354: corrupt redo log block header

Issue: Normal users could not connect to the database. It reported ORA-00257: archiver error. Connect internal only, until freed. Whenever you try to archive the redo log, it returns the following messages:

ORA-16038: log %s sequence# %s cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log %s thread %s: %s

Cause:

In the alert log you will see the ORA-16038, ORA-00354, and ORA-00312 errors in series. They are produced because the database failed to archive an online redo log due to corruption in the online redo file.

Solution:

Get your database running again by clearing the unarchived redo log. First run SQL> select * from v$log; to see which log is not archived. It may be the corrupted redo log. Clear it without archiving it:

SQL> alter database clear unarchived logfile '<logfilename>';

This removes the corruption by erasing the contents of the cleared online redo log file.

Try to switch the log and confirm it is working fine.
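The switch can be forced from SQL*Plus:

SQL> alter system switch logfile;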

If not then you may have to recreate a redo log for that group only.

Make a complete backup of the database.

252012: How to Stop and Start the Cluster, Including ASM and DB

Let us say we have two nodes.

Node 1: operation to stop the CRS

# crsctl stop crs

# crsctl status resource -t

Node 2: Operation to stop the CRS

# crsctl stop crs

# crsctl status resource -t

Check the following logs on both nodes to confirm that all services stopped without any problem:

1. $GRID_HOME/log/<hostname>/gipcd/gipcd.log

2. $GRID_HOME/log/<hostname>/agent/ohasd/orarootagent_root/orarootagent_root.log

3. $GRID_HOME/log/<hostname>/alert<hostname>.log

4. $GRID_HOME/log/<hostname>/ohasd/ohasd.log

When starting the services, make sure you start CRS on Node 1 first when running 11g Release 2 (11.2.0.2). If you start the second node first, you may get errors and be unable to start ASM, with LMON in a failed status, which leads to a few more errors.

On deeper analysis you may find that the private IP has been assigned a 169.x.x.x link-local address, which leads to the following:

- Snip from ASM Log

Private Interface eth1:1 configured from GPnP for use as a private interconnect.

[name=eth1:1, type=1, ip=169.254.85.248

This looks like a bug (Oracle Note 1374360.1 and Bug 12425730).

Create an SR with Oracle and apply the patch accordingly.

The workaround is to make sure you stop and start the nodes in the same sequence in which you stopped them.

032012: Detecting Who Is Causing Excessive Redo Generation

Solution I: AWR Report

The first solution that comes to mind is to go through the AWR repository via the DBA_HIST_SESSMETRIC_HISTORY view. DBA_HIST_SESSMETRIC_HISTORY displays the history of several important session metrics, and we hope to get a clue to our problem by analyzing the metrics of past sessions. Unfortunately, sometimes this view returns no rows.

Solution II: Enabling Oracle Session History Metrics in DBCONSOLE

This solution suggests enabling Oracle Session History Metrics in DBCONSOLE. You can set a small threshold value for Redo Writes per second.

Solution III: Calculating the Metric for Redo Generated

If you are not using Enterprise Manager, then you will have to calculate the metric information manually. You can calculate the metric for redo generated per second with the formula DeltaRedoSize / Seconds, where DeltaRedoSize is the difference in select value from v$sysstat where name = 'redo size' between the end and start of the sample period, and Seconds is the number of seconds in the sample period.
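A minimal sketch of that calculation as an anonymous PL/SQL block; the 60-second sample window is arbitrary, and DBMS_LOCK.SLEEP requires execute privilege on DBMS_LOCK:

set serveroutput on
DECLARE
  v_start NUMBER;
  v_end   NUMBER;
BEGIN
  SELECT value INTO v_start FROM v$sysstat WHERE name = 'redo size';
  DBMS_LOCK.sleep(60);  -- sample window in seconds
  SELECT value INTO v_end FROM v$sysstat WHERE name = 'redo size';
  DBMS_OUTPUT.put_line('Redo per second: ' || ROUND((v_end - v_start) / 60) || ' bytes');
END;
/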

Solution IV: Querying the Oracle V$SESS_IO View

When undo is generated in any transaction then it will automatically generate redo as well. This solution examines the amount of undo generated in order to find the sessions that are generating lots of redo.

The Oracle V$SESS_IO view lists I/O statistics for each user session. The BLOCK_CHANGES column shows the number of blocks changed by the session; a high value means the session is generating lots of redo. Run the query below multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session. Use this solution to check for programs that generate lots of redo across more than one transaction.

SQL> SELECT S1.SID, S1.SERIAL#, S1.USERNAME, S1.PROGRAM, I1.BLOCK_CHANGES FROM V$SESSION S1, V$SESS_IO I1 WHERE S1.SID = I1.SID ORDER BY 5 DESC, 1, 2, 3, 4;

Solution V: Querying the Oracle V$TRANSACTION View

Oracle V$TRANSACTION is a dynamic performance view that lists the active transactions in the system. This view can be used to track undo by session. The USED_UBLK column of this view shows the number of undo blocks used, and the USED_UREC column shows the number of undo records used by the transaction.

The query below can help you find the particular transactions that are generating redo. Run it multiple times and analyze the delta between each occurrence of USED_UBLK and USED_UREC; large deltas indicate high redo generation by the session.

SQL> SELECT S1.SID, S1.SERIAL#, S1.USERNAME, S1.PROGRAM, T1.USED_UBLK, T1.USED_UREC FROM V$SESSION S1, V$TRANSACTION T1 WHERE S1.TADDR = T1.ADDR ORDER BY 5 DESC, 6 DESC, 1, 2, 3, 4;

Solution VI: Tracking Undo Generated by All Sessions

The following statement displays a record for every session that has generated undo. It shows both how many undo blocks and how many undo records each session produced.

SELECT S1.SID, S1.USERNAME, R1.NAME, T1.START_TIME, T1.USED_UBLK, T1.USED_UREC FROM V$SESSION S1, V$TRANSACTION T1, V$ROLLNAME R1 WHERE T1.ADDR = S1.TADDR AND R1.USN = T1.XIDUSN;

Solution VII: Collecting Statistics from V$SESSTAT into AWR

The Oracle V$SESSTAT view records statistical data about each session that accesses it. You will have to query the V$STATNAME view to find the name of the statistic associated with each statistic number. In this solution we collect statistics from the V$SESSTAT view into our private AWR views.
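A minimal sketch of the V$SESSTAT/V$STATNAME join for the 'redo size' statistic, listing sessions by redo generated since logon:

SELECT s.sid, s.username, s.program, ss.value AS redo_size
FROM v$session s, v$sesstat ss, v$statname sn
WHERE ss.sid = s.sid
  AND ss.statistic# = sn.statistic#
  AND sn.name = 'redo size'
ORDER BY ss.value DESC;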

032012: Which Sessions Are Generating More Redo in Oracle

SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 DESC;

SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5, 6 DESC;

Recover Database from ORA-00333: redo log read error

In development environments it is a very common scenario to have multiple databases on a single machine using VMware (i.e., each VM contains one database). Often those machines do not have a consistent power backup, so we face power failures or VM hang-ups and are forced to restart the machine while the databases are still up and running. After restarting the machine, we have mostly gotten the following error:

ORA-00333: redo log read error block *Number* count *Number*. Here are the steps to overcome the error.

SQL> startup

ORACLE instance started.

Total System Global Area ***** bytes

Fixed Size ***** bytes

Variable Size ***** bytes

Database Buffers ***** bytes

Redo Buffers ***** bytes

Database mounted.

ORA-00333: redo log read error block *Number* count *Number*

Step 1: As the DB is in mount mode, we can query v$log and v$logfile to identify the status of each log file group and its members.

SQL> select l.status, member from v$logfile inner join v$log l using (group#);

STATUS MEMBER

------------- --------------------------------------

CURRENT /oracle/fast_recovery_area/redo01.log

INACTIVE /oracle/fast_recovery_area/redo02.log

INACTIVE /oracle/fast_recovery_area/redo03.log

Step 2: Recover the database using the backup controlfile.

SQL> recover database using backup controlfile;
ORA-00279: change generated at needed for thread 1

ORA-00289: suggestion : /oracle/fast_recovery_area/archivelog/o1_mf_1_634_%u_.arc

ORA-00280: change for thread 1 is in sequence #

Specify log: {=suggested | filename | AUTO | CANCEL}

Step 3: Give the 'CURRENT' log file member, along with its location, as input. If it does not work, give the other log file members along with their locations at the input prompt. In our case we give /oracle/fast_recovery_area/redo01.log.

Log applied.

Media recovery complete.

Step 4: Open the database with resetlogs.

SQL> alter database open resetlogs;
Database altered.

282011: RMAN Show Commands

The SHOW command is used to display the values of current RMAN configuration settings.

RMAN> show all; # Shows all parameters.

RMAN> show archivelog backup copies; # Shows the number of archivelog backup copies.

RMAN> show archivelog deletion policy; # Shows the archivelog deletion policy.

RMAN> show auxname; # Shows the auxiliary database information.

RMAN> show backup optimization; # Shows whether optimization is on or off.

RMAN> show auxiliary channel; # Shows how the normal channel and auxiliary channel are configured.

RMAN> show controlfile autobackup; # Shows whether autobackup is on or off.

RMAN> show controlfile autobackup format; # Shows the format of the control file autobackup.

RMAN> show datafile backup copies; # Shows the number of datafile backup copies being kept.

RMAN> show default device type; # Shows the default device type, disk or tape.

RMAN> show encryption algorithm; # Shows the encryption algorithm currently in use.

RMAN> show encryption for database; # Shows the encryption for the database.

RMAN> show encryption for tablespace; # Shows the encryption for the tablespace.

RMAN> show exclude; # Shows the tablespaces excluded from the backup.

RMAN> show maxsetsize; # Shows the maximum size for backup sets. The default value is unlimited.

RMAN> show retention policy; # Shows the policy for datafile and control file backups and copies that RMAN marks as obsolete.

RMAN> show snapshot controlfile name; # Shows the snapshot control filename.

Note: You can see any non-default RMAN configured settings in the V$RMAN_CONFIGURATION database view.

222011: How to Find Out the Master Node of a RAC

Option 1: # ocrconfig -showbackup
The node that stores the OCR backups is the master node.

Option 2: $ grep -i "master node" ocssd.log | tail -1
[CSSD]CLSS-3001: local node number 1, master node number 1
The above grep shows that the master node in the cluster is node number 1.

Option 3: $ grep master rac3_diag_4217.trc
"I'm the master node"

Option 4: Query V$GES_RESOURCE to identify the master node.

How do I identify the OCR file location?
Do a simple search for ocr.loc:
/var/opt/oracle/ocr.loc or /etc/ocr.loc, or run # ocrcheck

How to Delete All Archive Logs in ASM

The best option is to use RMAN with nocatalog and remove the old archive logs if they are no longer required:

$ rman target / nocatalog

RMAN> delete archivelog all completed before 'sysdate-3';
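If some logs may already have been removed outside RMAN, it is common to crosscheck first so that missing pieces are marked expired before deleting; a minimal sketch:

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;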

How to Recover from a DROP or TRUNCATE of a Table Using RMAN

There are three options available:

1. Restore and recover the primary database to a point in time before the drop. This is an extreme measure for one table as the entire database goes back in time.

2. Restore and recover the tablespace to a point in time before the drop. This is a better option, but again, it takes the entire tablespace back in time.

3. Restore and recover a subset of the database as a DUMMY database to export the table data and import it into the primary database. This is the best option as only the dropped table goes back in time to before the drop.

So option 3 is best.

Steps for Option 3

1. To recover from a dropped or truncated table, a dummy database (a copy of the primary) will be restored and recovered to a point in time so the table can be exported.

2. Once the table export is complete, the table can be imported into the primary database. This dummy database can be a subset of the primary database. However, the dummy database must include the SYSTEM, UNDO (or ROLLBACK), and the tablespace(s) where the dropped/truncated table resides.

The simplest method to create this dummy database is to use the RMAN DUPLICATE command.

RMAN Duplicate Command

CONNECT TARGET SYS/oracle@trgt
CONNECT AUXILIARY SYS/oracle@dupdb

DUPLICATE TARGET DATABASE TO dupdb NOFILENAMECHECK UNTIL TIME 'SYSDATE-7';

This assumes the following:

The target database trgt and duplicate database dupdb are on different hosts but have exactly the same directory structure.

You want to name the duplicate database files the same as the target files.

You are not using a recovery catalog.

You are using automatic channels for disk and sbt, which are already configured.

You want to recover the duplicate database to one week ago in order to view the data in prod1 as it appeared at that time (and you have the required backups and logs to recover the duplicate to that point in time).

Difference between Locks and Latches

Locks are used to protect data or resources from simultaneous use by multiple sessions, which might set them into an inconsistent state. Locks are an external mechanism: a user can also set locks on objects by using various Oracle statements.

Latches serve the same purpose but work at an internal level. Latches are used to protect and control access to internal data structures, such as various SGA buffers. They are handled and maintained by Oracle; we cannot access or set them. This is the main difference.

Flashback Database Disabled Automatically

Issue: Initially Flashback Database was enabled, but we noticed Flashback had been disabled automatically some time ago.

Reason: It could be because the flashback area is 100% full.

Once the flashback area becomes 100% full, Oracle will log in the alert log that Flashback will be disabled, and it will automatically turn off Flashback without user intervention.

ASM Limitations

ASM has the following size limits:
63 disk groups in a storage system
10,000 ASM disks in a storage system
1 million files for each disk group
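Current disk group usage against these limits can be checked from the ASM instance; a minimal sketch:

SELECT name, total_mb, free_mb, state FROM v$asm_diskgroup;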

How Can I Check if Anything Is Rolling Back?

It depends on how you killed the process.

If you issued ALTER SYSTEM KILL SESSION, you should be able to look at the USED_UBLK column in v$transaction to get an estimate of the rollback being done.

If you killed the server process at the OS level and PMON is recovering the transaction, you can look at the V$FAST_START_TRANSACTIONS view to get the estimate.
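A minimal sketch of both checks; column names are per the 10g+ data dictionary, where UNDOBLOCKSDONE/UNDOBLOCKSTOTAL show recovery progress:

SELECT s.sid, t.used_ublk, t.used_urec
FROM v$transaction t, v$session s
WHERE t.addr = s.taddr;

SELECT usn, state, undoblocksdone, undoblockstotal
FROM v$fast_start_transactions;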

RMAN Restore Preview

The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific restore operation. The output generated by the command is in the same format as the LIST command. In addition, the PREVIEW SUMMARY command can be used to produce a summary report with the same format as the LIST SUMMARY command. The following examples show how these commands are used:

# Spool output to a log file
SPOOL LOG TO 'c:\oracle\rmancmd\restorepreview.lst';

# Show what files will be used to restore the SYSTEM tablespace's datafile
RESTORE DATAFILE 2 PREVIEW;

# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE users PREVIEW;

# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;

# Close the log file
SPOOL LOG OFF;

How to Create an AWR Report Manually

Step 1: Create a snapshot manually
SQL> exec DBMS_WORKLOAD_REPOSITORY.create_snapshot();

Step 2: Create the AWR report
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> @awrrpt.sql
..Enter value for begin_snap: 1405
..Enter value for end_snap: 1406
..Enter value for report_name: awrrpt_1_10405_10406.html

http://www.bestremotedba.com/topics/database-administration/page/7/

ADDM (Automatic Database Diagnostic Monitor) in Oracle Database 10g

The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem.

An ADDM analysis is performed every time an AWR snapshot is taken and the results are saved in the database provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL.

The ADDM analysis includes:

CPU bottlenecks
Undersized memory structures
I/O capacity issues
High-load SQL statements
High-load PL/SQL execution and compilation, as well as high-load Java usage
RAC-specific issues
Sub-optimal use of Oracle by the application
Database configuration issues
Concurrency issues
Hot objects and top SQL for various problem areas

ADDM analysis results are represented as a set of FINDINGs.

Example ADDM Report

FINDING 1: 31% impact (7798 seconds)
SQL statements were not shared due to the usage of literals. This resulted in additional hard parses which were consuming significant database time.

RECOMMENDATION 1: Application Analysis, 31% benefit (7798 seconds)

ACTION: Investigate application logic for possible use of bind variables instead of literals. Alternatively, you may set the parameter cursor_sharing to FORCE.

RATIONALE: SQL statements with PLAN_HASH_VALUE 3106087033 were found to be using literals. Look in V$SQL for examples of such SQL statements.

In this example, the finding points to a particular root cause, the usage of literals in SQL statements, which is estimated to have an impact of about 31% of total DB time in the analysis period.

In addition to problem diagnostics, ADDM recommends possible solutions. When appropriate, ADDM recommends multiple solutions for the DBA to choose from. ADDM considers a variety of changes to a system while generating its recommendations.

Recommendations include:

Hardware changes
Database configuration
Schema changes
Application changes
Using other advisors

ADDM Settings

Automatic database diagnostic monitoring is enabled by default and is controlled by the STATISTICS_LEVEL initialization parameter. The STATISTICS_LEVEL parameter should be set to TYPICAL or ALL to enable automatic database diagnostic monitoring; the default setting is TYPICAL. Setting STATISTICS_LEVEL to BASIC disables many Oracle features, including ADDM, and is strongly discouraged.

ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED. The value of DBIO_EXPECTED is the average time it takes to read a single database block, in microseconds. Oracle uses a default value of 10 milliseconds.

Set the value using:

EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);

Diagnosing Database Performance Issues with ADDM

To diagnose database performance issues, ADDM analysis can be performed across any two AWR snapshots, as long as the following requirements are met:

Both the snapshots did not encounter any errors during creation and both have not yet been purged.

There were no shutdown and startup actions between the two snapshots.

Using Enterprise Manager

The obvious place to start viewing ADDM reports is Enterprise Manager. The Performance Analysis section on the Home page is a list of the top five findings from the last ADDM analysis task.

Specific reports can be produced by clicking on the Advisor Central link, then the ADDM link. The resulting page allows you to select a start and end snapshot, create an ADDM task and display the resulting report by clicking on a few links.

Executing the addmrpt.sql Script

The addmrpt.sql script can be used to create an ADDM report from SQL*Plus. The script is called as follows:

@/u01/app/oracle/product/10.1.0/db_1/rdbms/admin/addmrpt.sql

It then lists all available snapshots and prompts you to enter the start and end snapshot along with the report name.

Using the DBMS_ADVISOR Package

The DBMS_ADVISOR package can be used to create and execute any advisor tasks, including ADDM tasks. The following example shows how it is used to create, execute and display a typical ADDM report:

BEGIN
  -- Create an ADDM task.
  DBMS_ADVISOR.create_task (
    advisor_name => 'ADDM',
    task_name    => '970_1032_AWR_SNAPSHOT',
    task_desc    => 'Advisor for snapshots 970 to 1032.');

  -- Set the start and end snapshots.
  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'START_SNAPSHOT',
    value     => 970);

  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'END_SNAPSHOT',
    value     => 1032);

  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => '970_1032_AWR_SNAPSHOT');
END;
/

-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('970_1032_AWR_SNAPSHOT') AS report
FROM   dual;
SET PAGESIZE 24

The value for the SET LONG command should be adjusted to allow the whole report to be displayed.

The relevant AWR snapshots can be identified using the DBA_HIST_SNAPSHOT view.
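A minimal sketch of that lookup:

SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER BY snap_id;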

ADDM Views

DBA_ADVISOR_TASKS

This view provides basic information about existing tasks, such as the task Id, task name, and when created.

DBA_ADVISOR_LOG

This view contains the current task information, such as status, progress, error messages, and execution times.

DBA_ADVISOR_RECOMMENDATIONS

This view displays the results of completed diagnostic tasks with recommendations for the problems identified in each run. The recommendations should be looked at in the order of the RANK column, as this relays the magnitude of the problem for the recommendation. The BENEFIT column gives the benefit to the system you can expect after the recommendation is carried out.

DBA_ADVISOR_FINDINGS

This view displays all the findings and symptoms that the diagnostic monitor encountered, along with the specific recommendations.

Bigfile Tablespaces in Oracle 10g

Bigfile tablespaces are tablespaces with a single large datafile.

In contrast, normal (smallfile) tablespaces can have several datafiles, but each is limited in size.

The system default is to create a smallfile tablespace. The SYSTEM and SYSAUX tablespaces are always created using the system default type.

Bigfile tablespaces must be locally managed with automatic segment-space management. Exceptions to this rule include temporary tablespaces and locally managed undo tablespaces, which are allowed to have manual segment-space management.

Advantages of using Bigfile Tablespaces:

By allowing tablespaces to have a single large datafile, the total capacity of the database is increased. An Oracle database can have a maximum of 64,000 datafiles, which limits its total capacity. A bigfile tablespace can be up to eight exabytes (eight million terabytes) in size, significantly increasing the storage capacity of an Oracle database.

Using fewer larger datafiles allows the DB_FILES and MAXDATAFILES parameters to be reduced, saving SGA and controlfile space.

It simplifies large database tablespace management by reducing the number of datafiles needed.

The ALTER TABLESPACE syntax has been updated to allow operations at the tablespace level, rather than datafile level.
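For example, with a bigfile tablespace you can resize at the tablespace level rather than resizing an individual datafile; the tablespace name here is a placeholder:

SQL> ALTER TABLESPACE bigtbs RESIZE 80G;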

Considerations: Bigfile tablespaces can be used with:

ASM (Automatic Storage Management)
a logical volume manager supporting striping/RAID

Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications for parallel execution and RMAN backup parallelization.

Avoid using bigfile tablespaces if there could possibly be no free space available on a disk group, and the only way to extend a tablespace is to add a new datafile on a different disk group.

Syntax to create a Bigfile Tablespace:

SQL> CREATE BIGFILE TABLESPACE <tablespace_name> DATAFILE '/u01/oradata/datafilename.dbf' SIZE 50G;

Views: The following views contain a BIGFILE column that identifies a tablespace as a bigfile tablespace:

DBA_TABLESPACES
USER_TABLESPACES
V$TABLESPACE

Major RAC Wait Events

In a RAC environment the buffer cache is global across all instances in the cluster, and hence the processing differs. The most common wait events related to this are gc cr request and gc buffer busy.

GC CR request: the time it takes to retrieve the data from the remote cache

Reason: RAC traffic using a slow connection, or inefficient queries (poorly tuned queries increase the number of data blocks requested by an Oracle session; more blocks requested typically means a block will more often need to be read from a remote instance via the interconnect).

GC BUFFER BUSY: It is the time the remote instance locally spends accessing the requested data block.
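A minimal sketch for checking how much time the cluster is spending on these waits across instances:

SELECT inst_id, event, total_waits, time_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER BY time_waited DESC;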

SRVCTL

srvctl start instance -d db_name -i inst_name_list [-o start_options]
srvctl stop instance -d name -i inst_name_list [-o stop_options]
srvctl stop instance -d orcl -i orcl3,orcl4 -o immediate
srvctl start database -d name [-o start_options]
srvctl stop database -d name [-o stop_options]
srvctl start database -d orcl -o mount

Apr 09 2010: Oracle Clusterware Tools

OIFCFG: allocates and deallocates network interfaces
OCRCONFIG: command-line tool for managing the Oracle Cluster Registry
OCRDUMP: dumps the contents of the Oracle Cluster Registry
CVU: Cluster Verification Utility, used to verify the status of cluster components and CRS resources

Apr 07 2010: OCR

Oracle Clusterware manages CRS resources based on the configuration information of CRS resources stored in the OCR (Oracle Cluster Registry).

Apr 07 2010: CRS Resources

Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, an application process, etc.

FAN

Fast Application Notification (FAN) relates to events concerning instances, services, and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.

FAN UP and DOWN events: FAN UP and FAN DOWN events can be applied to instances, services, and nodes.

Use of FAN events in case of a cluster configuration change: During cluster configuration changes, the Oracle RAC high-availability framework publishes a FAN event immediately when a state change occurs in the cluster, so applications can receive FAN events and react immediately. This prevents applications from polling the database and detecting a problem only after such a state change.

How do we verify that the RAC instances are running?

Issue the following query from any one node, connecting through SQL*Plus:

$ sqlplus sys/sys as sysdba
SQL> select * from V$ACTIVE_INSTANCES;

The query gives the instance number under the INST_NUMBER column and hostname:instancename under the INST_NAME column.

Mar 26 2010: VIP (Virtual IP Address) in RAC

VIP is mainly used for fast connection failover.

Until 9i, RAC failover used the physical IP address of another server. When a connection request came from a client and the first server's listener had failed, RAC redirected the connection request to the second available server using its physical IP address. This redirection to the second physical IP address was possible only after a timeout error from the first physical IP address, so the connection had to wait for the TCP connection timeout.

From RAC 10g we can use the VIP to avoid the connection-timeout wait, because ONS (Oracle Notification Service) maintains communication between the nodes and listeners. Once ONS finds any listener or node down, it notifies the other nodes and listeners. While a new connection is trying to reach the failed node or listener, the virtual IP of the failed node is automatically diverted to a surviving node. This process does not wait for a TCP/IP timeout event, so new connections are established faster even when a listener or node has failed.

A virtual IP address or VIP is an alternate IP address that the client connections use instead of the standard public IP address. To configure VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.

If a node fails, then the node's VIP address fails over to another node, on which the VIP address can accept TCP connections but cannot accept Oracle connections.

Situations under which VIP address failover happens:

VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.

Significance of VIP address failover:

When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid "connection refused" error; they do not have to wait for TCP connection timeout messages.

Mar 24 2010: ORA-13516: AWR Operation failed: INTERVAL Setting is ZERO

The above error message occurs because the snapshot INTERVAL setting is zero. You can change the snapshot setting by using the following command, which sets the interval to 60 minutes:

SQL> exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60);
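The current settings can be confirmed afterwards; a minimal sketch:

SQL> select snap_interval, retention from dba_hist_wr_control;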

Mar 24 2010: How to Create an AWR Snapshot Manually

SQL> exec DBMS_WORKLOAD_REPOSITORY.create_snapshot();

Mar 24 2010: Views and Usage Related to AWR and ASH

* V$ACTIVE_SESSION_HISTORY: Displays the active session history (ASH), sampled every second.
* V$METRIC: Displays metric information.
* V$METRICNAME: Displays the metrics associated with each metric group.
* V$METRIC_HISTORY: Displays historical metrics.
* V$METRICGROUP: Displays all metric groups.
* DBA_HIST_ACTIVE_SESS_HISTORY: Displays the history contents of the active session history.
* DBA_HIST_BASELINE: Displays baseline information.
* DBA_HIST_DATABASE_INSTANCE: Displays database environment information.
* DBA_HIST_SNAPSHOT: Displays snapshot information.
* DBA_HIST_SQL_PLAN: Displays SQL execution plans.
* DBA_HIST_WR_CONTROL: Displays AWR settings.

How to Change the Session and Process Values

1. Back up the spfile:
$ cp -p spfile.ora spfile.ora.bak

2. Check the current session and process values:
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> select name, value from v$parameter where name = 'sessions';
SQL> select name, value from v$parameter where name = 'processes';

3. Change the process and session values:
SQL> alter system set processes=100 scope=spfile;
SQL> alter system set sessions=100 scope=spfile;

4. Restart the database:
SQL> shutdown immediate;
SQL> startup;

5. Check the session and process values again:
SQL> select name, value from v$parameter where name = 'sessions';
SQL> select name, value from v$parameter where name = 'processes';

How do I change archivelog to noarchivelog mode in a cluster environment?

Changing the Archiving Mode in Real Application Clusters

After configuring your Real Application Clusters environment for RMAN, you can alter the archiving mode if needed. For example, if your Real Application Clusters database uses NOARCHIVELOG mode, then follow these steps to change the archiving mode to ARCHIVELOG mode:

1. Shut down all instances.
2. Reset the CLUSTER_DATABASE parameter to false on one instance. If you are using the server parameter file, then make a SID-specific entry for this.
3. Add settings in the parameter file for the LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_FORMAT, and LOG_ARCHIVE_START parameters. You can multiplex the destination to use up to ten locations. The LOG_ARCHIVE_FORMAT parameter should contain the %t parameter to include the thread number in the archived log file name. You must configure an archiving scheme before setting these parameter values.
4. Start the instance on which you have set CLUSTER_DATABASE to false.
5. Run the following statement in SQL*Plus:

SQL> ALTER DATABASE ARCHIVELOG;

6. Shut down the instance.
7. Change the value of the CLUSTER_DATABASE parameter to true.
8. Restart your instances.
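A minimal sketch of those steps for a database named orcl with instance orcl1; the database name, instance name, archive destination, and format are placeholders:

SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
SQL> alter system set log_archive_dest_1='LOCATION=/u01/arch' scope=spfile;
SQL> alter system set log_archive_format='arch_%t_%s_%r.arc' scope=spfile;
$ srvctl stop database -d orcl
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate;
$ srvctl start database -d orcl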

You can also change the archiving mode from ARCHIVELOG to NOARCHIVELOG. To disable archiving, follow the preceding steps with the following changes:

1. Delete the archiving settings that you created in step 3.
2. Specify the NOARCHIVELOG keyword in step 5:

ALTER DATABASE NOARCHIVELOG;

RMAN Notes

1.1. Where should the catalog be created?

The recovery catalog to be used by RMAN should be created in a separate database, other than the target database. The reason is that the target database will be shut down while datafiles are restored.

1.2. How do I create a catalog for RMAN?

First create the user rman:

CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE tools
QUOTA UNLIMITED ON tools;

GRANT connect, resource, recovery_catalog_owner TO rman;
exit

Then create the recovery catalog:

$ rman catalog=rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit

Then register the database:

oracle@debian:~$ rman target=/ catalog=rman/rman@newdb

Recovery Manager: Release 10.1.0.2.0 Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Note: If you run rman catalog=rman/rman without connecting to a target and try to register the database, it will not work.

Note: We have two databases here. One is newdb, which is used solely for the catalog, and the other is TEST, which is the database on which we want to perform all RMAN operations.

1.3. How many times does Oracle ask before dropping a catalog?

The default is two times: once for the actual command, and once for confirmation.
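A minimal sketch of the interaction (the confirmation prompt wording may vary by version):

RMAN> drop catalog;
recovery catalog owner is RMAN
enter DROP CATALOG command again in order to confirm catalog removal
RMAN> drop catalog;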

1.4. How to view the current defaults for the database.

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default

1.5. Backup the database.

RMAN> run {
configure retention policy to recovery window of 2 days;
backup database plus archivelog;
delete noprompt obsolete;
}

Starting backup at 04-JUL-05
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=256 devtype=DISK
channel ORA_DISK_1: starting archive log backupset

1.6. How to resolve the ORA-19804 error

Basically this error occurs because the flash recovery area is full. One way to solve it is to increase the space available for the flash recovery area:

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=5G;

The size can be specified in K, M, or G.

RMAN> backup database;
.
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JUL-05
channel ORA_DISK_1: finished piece 1 at 04-JUL-05
piece handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1_mf_ncsnf_TAG20050704T205840_1dmy15cr_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-JUL-05

Oracle Flashback

After taking a backup, resync the catalog.

Restoring the whole database.

run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}

1.7. What are the various reports available with RMAN?

RMAN> list backup;
RMAN> list archivelog all;

1.8. What does backup incremental level=0 database do?

An incremental level 0 backup is a full backup of the database:

RMAN> backup incremental level=0 database;

You can also use backup full database; the contents are the same as a level 0 backup, but a full backup cannot serve as the parent of subsequent incremental backups.
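Once a level 0 base exists, subsequent incrementals capture only the blocks changed since the base; a minimal sketch:

RMAN> backup incremental level=1 database;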

1.9. What is the difference between DELETE INPUT and DELETE ALL in a backup command?

Generally speaking, the LOG_ARCHIVE_DEST_n parameters point to multiple disk locations where we archive the files. When a command is issued through RMAN to back up archivelogs, it uses one of those locations to read the data. When we specify DELETE INPUT, only the copies in the location that was backed up get deleted; if we specify DELETE ALL, the copies in all LOG_ARCHIVE_DEST_n locations get deleted.

DELETE ALL applies only to archived logs, e.g.:

RMAN> delete expired archivelog all;

Chapter 2. Recovery

Recovery involves placing the datafiles in the appropriate state for the type of recovery you are performing. If recovering all datafiles, mount the database; if recovering a single tablespace or datafile, you can keep the database open and take the tablespace or datafile offline. Perform the required recovery and put them back online.

Put the commands in an RMAN script (.rcv) file such as myrman.rcv:

run {
# shutdown immediate; # use abort if this fails
startup mount;
# SET UNTIL TIME 'Nov 15 2001 09:00:00';
# SET UNTIL SCN 1000; # alternatively, you can specify SCN
SET UNTIL SEQUENCE 9923; # alternatively, you can specify log sequence number
restore database;
recover database;
alter database open resetlogs;
}

Run the myrman.rcv file as: rman target / @myrman.rcv

After successful restore & recovery immediately backup your database, because the database is in a new incarnation.

The ALTER DATABASE OPEN RESETLOGS; command creates a new incarnation of the database, with a new stream of log sequence numbers starting at sequence 1.

Before running RESETLOGS, it is good practice to open the database in read-only mode and examine the data contents.
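A minimal sketch of that check:

SQL> alter database open read only;
-- examine the data, then go back to mount and open for real
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database open resetlogs;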

2.1. Simulating media failure

2.1.1. How to simulate media failure and recover a tablespace in the database?
2.1.2. What is the difference between alter database recover and the SQL*Plus recover command?

2.1.1. How to simulate media failure and recover a tablespace in the database?

First, create the table in the required tablespace:

CREATE TABLE mytest ( id number(10));

Then run insert into mytest values(100); a couple of times but do not commit the results.

Take the tablespace offline; this is possible only if the database is in archivelog mode.

Now commit the transaction by issuing COMMIT.

Now try to bring the tablespace online; at this point you will get the error that datafile 4 needs media recovery.

Issue the following command to recover the tablespace. Note that the database itself can remain open:

SQL> recover tablespace users;
media recovery completed.

Now bring the tablespace online:

SQL> alter tablespace users online;

2.1.2. What is the difference between alter database recover and the SQL*Plus recover command?

ALTER DATABASE RECOVER is useful when you, as a user, want to control the recovery. The SQL*Plus RECOVER command is useful when we prefer automated recovery.

Chapter 3. Duplicate Database with Control File

What are the steps required to duplicate a database with a control file?

Copy initSID.ora to the new initXXX.ora file, i.e.:

cp $ORACLE_HOME/dbs/inittest.ora $ORACLE_HOME/dbs/initDUP.ora

Edit the parameters that are specific to location and instance:-

db_name = dup
instance_name = dup
control_files = change the location to point to dup
background_dump_dest = change the location to point to dup/bdump
core_dump_dest = change the location to point to dup/cdump
user_dump_dest = change the location to point to dup/udump
log_archive_dest_1 = dup/archive
db_file_name_convert = (test, dup)
log_file_name_convert = (test, dup)
remote_login_passwordfile = exclusive

Actual settings:

*.background_dump_dest='/u02/app/oracle/admin/DUP/bdump'
*.compatible='10.1.0.2.0'
*.control_files='/u02/app/oracle/oradata/DUP/control01.ctl','/u02/app/oracle/oradata/DUP/control02.ctl','/u02/app/oracle/oradata/DUP/control03.ctl'
*.core_dump_dest='/u02/app/oracle/admin/DUP/cdump'
*.db_block_size=8192
*.db_cache_size=25165824
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='DUP'
*.db_recovery_file_dest='/u02/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DUPXDB)'
*.java_pool_size=50331648
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_archive_dest_1='LOCATION=/u02/app/oracle/oradata/payroll MANDATORY'
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=250
*.shared_pool_size=99614720
*.sort_area_size=65536
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u02/app/oracle/admin/DUP/udump'
*.remote_login_passwordfile='exclusive'
*.db_file_name_convert=('test','dup')
*.log_file_name_convert=('test','dup')

Make the directories for the dump destinations:

oracle@debian:/u02/app/oracle/admin/DUP$ mkdir bdump
oracle@debian:/u02/app/oracle/admin/DUP$ mkdir cdump
oracle@debian:/u02/app/oracle/admin/DUP$ mkdir udump

Make a directory to hold control files, datafiles and such:

oracle@debian:/u02/app/oracle/oradata/PRD$ cd ..
oracle@debian:/u02/app/oracle/oradata$ mkdir DUP

Ensure that the ORACLE_SID is pointing to the right database. Make an Oracle password file so that other users can connect too:

$ export ORACLE_SID=DUP
$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=easypass
$ sqlplus / as sysdba
SQL> startup nomount;

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/dbs$ sqlplus / as sysdba;

SQL*Plus: Release 10.1.0.2.0 Production on Wed Aug 24 21:05:26 2005

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 188743680 bytes
Fixed Size 778036 bytes
Variable Size 162537676 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
SQL>

Check Net8 connectivity with sqlplus sys/easypass@dup; if that goes through successfully, then exit. The idea is to check for SQL*Net connectivity.

If you get ORA-12154: TNS:could not resolve the connect identifier specified, then more work needs to be done. Add a tnsnames.ora entry such as:

DUP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dup)
    )
  )

$ tnsping dup
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = dup)))
OK (0 msec)

Even with this, if you are getting ORA-12528: TNS:listener: all appropriate instances are blocking new connections, then connect to the auxiliary (the database to be duplicated) as / and to the target database (the source) with user/pass@test, and start duplicating the database:

$ export ORACLE_SID=DUP
$ rman target sys/easypass@test auxiliary /
RMAN> run {
allocate auxiliary channel ch1 type disk;
duplicate target database to dup;
}

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target sys/kernel@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied

The workaround is to create a user with SYSDBA privileges and connect through that user's ID:

$ export ORACLE_SID=test
SQL> grant sysdba to mhg;

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target mhg/mhg@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

oracle@debian:~$ rman target mhg/mhg@test auxiliary / @run.rcv

Recovery Manager: Release 10.1.0.2.0 Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

RMAN> run {
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }

using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK

Starting Duplicate Db at 24-AUG-05
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/24/2005 21:13:09
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename /u02/app/oracle/oradata/test/users01.dbf conflicts with a file used by the target database

This error occurs primarily because the files of the test database are already present; we have to use db_file_name_convert and log_file_name_convert to overcome it.

This is the final run output:

oracle@debian:~$ rman target mhg/mhg@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

RMAN> @run.rcv

RMAN> run {
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }
using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK

Starting Duplicate Db at 24-AUG-05

contents of Memory Script:
{
set until scn 2150046;
set newname for datafile 1 to "/u02/app/oracle/oradata/DUP/system2.dbf";
set newname for datafile 2 to "/u02/app/oracle/oradata/DUP/undotbs01.dbf";
set newname for datafile 3 to "/u02/app/oracle/oradata/DUP/sysaux01.dbf";
set newname for datafile 4 to "/u02/app/oracle/oradata/DUP/users01.dbf";
restore
check readonly
clone database;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 24-AUG-05

channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u02/app/oracle/oradata/DUP/system2.dbf
restoring datafile 00002 to /u02/app/oracle/oradata/DUP/undotbs01.dbf
..
datafile copy filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf recid=2 stamp=567206656

cataloged datafile copy
datafile copy filename=/u02/app/oracle/oradata/DUP/users01.dbf recid=3 stamp=567206656

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/users01.dbf

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 24-AUG-05

RMAN> **end-of-file**

This ends a successful duplication of a database using the target control file instead of a recovery catalog.

Chapter 4. Using RMAN to Check Logical and Physical Block Corruption

To generate block corruption you can use the dd Unix utility (caution: it will corrupt your block(s)):

$ dd if=/dev/zero of=/u02/oradata/myrac/anyfile.dbf bs=8192 conv=notrunc seek=10 count=1

seek=10 writes at block 10; count=1 writes only that one block. Now you can run dbv to verify that the blocks are actually corrupt, and then recover the datafile by using Oracle's BLOCKRECOVER command.

$ export ORACLE_SID=test
$ rman target /
run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}

To validate specific datafile(s):

run {
allocate channel d1 type disk;
backup check logical validate datafile 1,2;
release channel d1;
}

During this command every block is read into memory and then rewritten to another portion of memory; during this memory-to-memory write every block is checked for corruption. RMAN's BACKUP command with the VALIDATE and CHECK LOGICAL clauses allows you to quickly validate for both physical and logical corruption.

Chapter 5. Checking for Datafile Corruption

Traditionally, a corrupted block required dropping an object. The message identifies the block in error by file number and block number, and the cure has always been to run a query such as:

SELECT owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = <file#>
AND <block#> BETWEEN block_id AND block_id + blocks - 1;

where <file#> and <block#> are the numbers from the error message. This query indicates which object contains the corrupted block. Then, depending on the object type, recovery is either straightforward (for indexes and temporary segments), messy (for tables), or very messy (for active rollback segments and parts of the data dictionary).

In Oracle 9i Enterprise Edition, however, a new Recovery Manager (RMAN) command, BLOCKRECOVER, can repair the block in place without dropping and recreating the object involved. After logging into RMAN and connecting to the target database, type:

BLOCKRECOVER DATAFILE <file#> BLOCK <block#>;

A new view, V$DATABASE_BLOCK_CORRUPTION, gets updated during RMAN backups, and a block must be listed as corrupt for a BLOCKRECOVER to be performed. To recover all blocks that have been marked corrupt, the following RMAN sequence can be used:

BACKUP VALIDATE DATABASE;
BLOCKRECOVER CORRUPTION LIST;

This approach is efficient if only a few blocks need recovery. For large-scale corruption, it's more efficient to restore a prior image of the datafile and recover the entire datafile, as before. As with any new feature, test it carefully before using it on a production database.

run {
allocate channel ch1 type disk;
blockrecover datafile <file#> block <block#>;
}

1. What are the steps to start the database from a text control file?
1.1. What are the steps required to start a database from a text-based control file?
1.2. Give a complete scenario of backup, delete and restore.
1.3. How do I backup archive logs?
1.4. How do I do an incremental backup after a base backup?
1.5. What is the ORA-002004 error?
1.6. What information is required for RMAN TAR?
1.7. How to turn the debug feature on in RMAN?

1.1. What are the steps required to start a database from a text-based control file?

ALTER DATABASE BACKUP CONTROLFILE TO TRACE; gives you a text-based version of your control file (written to a trace file), while ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/cf.bak' REUSE; writes a binary copy to a file name on the OS. The REUSE clause tells Oracle to overwrite the control file backup if it is already present in the specified location.

Start the database in nomount mode. If you have 3 control file entries in the pfile/spfile, you will get 3 new control files.

Now run the control file script to create your control files:

recover database using backup controlfile until cancel

1.2. Give a complete scenario of backup, delete and restore.

Given that you want to take a base-level backup and then simulate complete failure by removing the controlfile, datafile, redo logs, and archive logs, these are the steps to be followed.

First take a base-level backup of the database:

RMAN> backup incremental level=0 database;

Simulate media failure by removing the control file and data file:

$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> exit;
$ rm control* system*

When we don't have a control file the problem becomes quite complex, the reason being that the RMAN backup information is stored in the control file. So when we don't have the control file we won't have the information about backups. The first step should be towards restoring the control file. Fortunately we can do a listing of our flash recovery area and guess which backup piece has the information about our control file. In my box the following is the listing of the flash recovery area:

/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp
/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp

Now I am assuming that the piece with "ncnn0" in its name has the control file information in it.

We have to use a nifty PL/SQL program to recover our control file; once that is done successfully we can go on our merry way using RMAN to recover the rest of the database.

DECLARE
  v_devtype   VARCHAR2(100);
  v_done      BOOLEAN;
  v_maxPieces NUMBER;
  TYPE t_pieceName IS TABLE OF VARCHAR2(255) INDEX BY BINARY_INTEGER;
  v_pieceName t_pieceName;
BEGIN
  -- Define the backup pieces (names from the RMAN log file)
  v_pieceName(1) := '/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp';
  v_maxPieces := 1;
  -- Allocate a channel (use type=>NULL for DISK, type=>'sbt_tape' for TAPE)
  v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type => NULL, ident => 'd1');
  -- Restore the first control file
  DBMS_BACKUP_RESTORE.restoreSetDataFile;
  -- CFNAME must be the exact path and filename of a controlfile that was backed up
  DBMS_BACKUP_RESTORE.restoreControlFileTo(cfname => '/u02/app/oracle/oradata/test/control01.ctl');
  dbms_output.put_line('Start restoring ' || v_maxPieces || ' pieces.');
  FOR i IN 1 .. v_maxPieces LOOP
    dbms_output.put_line('Restoring from piece ' || v_pieceName(i));
    DBMS_BACKUP_RESTORE.restoreBackupPiece(handle => v_pieceName(i), done => v_done, params => NULL);
    EXIT WHEN v_done;
  END LOOP;
  -- Deallocate the channel
  DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_BACKUP_RESTORE.deviceDeAllocate;
    RAISE;
END;
/

PL/SQL procedure successfully completed. I had 3 control files; the above program restores only one control file, so I will do an operating system copy to restore the rest:

$ cp control01.ctl control02.ctl
$ cp control01.ctl control03.ctl

After the control file is restored, launch RMAN and list all the backup information:

$ rman target /
RMAN> sql 'alter database mount';
RMAN> list backup;

BS Key  Type  LV  Size  Device Type  Elapsed Time  Completion Time
------- ----- --- ----- ------------ ------------- ---------------
21      Incr  0   2G    DISK         00:02:39      30-JUL-05
  BP Key: 21  Status: AVAILABLE  Compressed: NO  Tag: TAG20050730T130722
  Piece Name: /u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp
  List of Datafiles in backup set 21
  File LV Type Ckp SCN  Ckp Time   Name
  ---- -- ---- -------- ---------- ----
  1    0  Incr 1723296  30-JUL-05  /u02/app/oracle/oradata/test/system2.dbf
  2    0  Incr 1723296  30-JUL-05  /u02/app/oracle/oradata/test/undotbs01.dbf
  3    0  Incr 1723296  30-JUL-05  /u02/app/oracle/oradata/test/sysaux01.dbf
  4    0  Incr 1723296  30-JUL-05  /u02/app/oracle/oradata/test/users01.dbf

The above output indicates that the backup set key is 21, along with its tag and completion time.

Connect to RMAN and restore and recover the database:

$ rman target /
RMAN> restore database;
RMAN> recover database;
RMAN> exit

$ sqlplus / as sysdba
SQL> startup;

Now the database should have been recovered to the SCN that was current at the time we encountered the complete media failure. (Because the control file was restored from backup, the database may need to be opened with ALTER DATABASE OPEN RESETLOGS.)

1.3. How do I back up archive logs?

In order to back up archived logs we have to do the following:

run {
backup (archivelog all delete input);
}

If you want to delete archived logs while ignoring those that are inaccessible, use ARCHIVELOG ALL SKIP INACCESSIBLE DELETE INPUT, as shown below.
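For example, a run block that skips archived logs already missing at the OS level (a minimal sketch):

run {
backup (archivelog all skip inaccessible delete input);
}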

1.4. How do I do an incremental backup after a base backup?

RMAN> backup incremental level=1 database plus archivelog delete all input;

This takes an incremental backup of the database, backs up the archived logs, and deletes the archived logs from disk once they have been backed up.

1.5. What is the ORA-00204 error?

A disk I/O failure was detected on reading the control file.

Basically, check whether the control file is available, the permissions are right on the control file, and the spfile/init.ora points to the right location. If all checks were done and you are still getting the error, then overlay the corrupted copy with one of the multiplexed control files. Let us say you have three control files control01.ctl, control02.ctl and control03.ctl, and you are getting errors on control03.ctl; then just cp control01.ctl over to control03.ctl and you should be all set. In order to issue ALTER DATABASE BACKUP CONTROLFILE TO TRACE; the database should be mounted, and in our case it is not mounted, so the only other options available are to restore the control file from backup or cp the multiplexed control file over the bad one.
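As a sketch of the overlay (paths hypothetical; the database must be shut down first):

$ cp /u02/app/oracle/oradata/test/control01.ctl /u02/app/oracle/oradata/test/control03.ctl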

1.6. What Information is Required for RMAN TAR?

Hardware Configuration:
* The name of the node that hosts the database
* The make and model of the production machine
* The version and patch of the operating system
* The disk capacity of the host
* The number of disks and disk controllers
* The disk capacity and free space
* The media management vendor (if you use a third-party media manager)
* The type and number of media management devices

Software Configuration:
* The name of the database instance (SID)
* The database identifier (DBID)
* The version and patch release of the Oracle database server
* The version and patch release of the networking software
* The method (RMAN or user-managed) and frequency of database backups
* The method of restore and recovery (RMAN or user-managed)
* The datafile mount points

You should keep this information in both electronic and hardcopy form. For example, if you save this information only in a text file on the network or in an email message, then if the entire system goes down you will not have this data available.
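Much of the software-configuration information can be collected from the database itself; a minimal sketch:

SQL> SELECT dbid, name FROM v$database;
SQL> SELECT banner FROM v$version;
SQL> SELECT file_name FROM dba_data_files;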

1.7. How to turn the debug feature on in RMAN?

run {
allocate channel c1 type disk;
debug on;
}

RMAN> list backup of database;

You will see output similar to:

DBGMISC: ENTERED krmkdftr [18:35:11.291]

DBGSQL: EXEC SQL AT TARGET begin dbms_rcvman . translateDataFile (:fno ) ; end ; [18:35:11.291]
DBGSQL: sqlcode=0 [18:35:11.300]
DBGSQL: :b1 = 1
DBGMISC: ENTERED krmkgdf [18:35:11.301]
DBGMISC: ENTERED krmkgbh [18:35:11.315]
DBGMISC: EXITED krmkgbh with status Not required no flags [18:35:11.315] elapsed time [00:00:00:00.000]
DBGMISC: EXITED krmkgdf [18:35:11.315] elapsed time [00:00:00:00.014]
DBGMISC: EXITED krmkdftr [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DF [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DFILE [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status backup [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: krmknmtr: the parse tree after name translation is: [18:35:11.315]
DBGMISC: EXITED krmknmtr with status list [18:35:11.316] elapsed time [00:00:00:00.078]
DBGMISC: krmkdps: this_reset_scn=1573357 [18:35:11.316]
DBGMISC: krmkdps: this_reset_time=19-AUG-06 [18:35:11.316]
DBGMISC: krmkdps: untilSCN= [18:35:11.317]

You can always turn debug off by issuing

RMAN> debug off;

To check if flashback is enabled or not:

SQL> select flashback_on from v$database;
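If it is not enabled, flashback can be switched on from the MOUNT state (a sketch; this assumes archivelog mode and a flash recovery area are already configured, and on older releases the database must be mounted rather than open):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;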

How to rename/move a data file in Oracle

Method 1 (easy method)
1) Shut down the database.
2) Copy the dbf files to where you want them.
3) startup mount
4) ALTER DATABASE RENAME FILE '<original location>' TO '<target location>'; for each file, including SYSTEM.
5) The same method also works for moving the redo log files.
6) alter database open
7) Once everything has been verified, you can delete the dbf files from their original location.

Method 2

Moving the datafiles of a database: the datafiles reside under /home/oracle/OraHome1/databases/ora9 and have to go to /home/oracle/databases/ora9.

SQL> select tablespace_name, substr(file_name,1,70) from dba_data_files;

TABLESPACE_NAME  SUBSTR(FILE_NAME,1,70)
---------------  -----------------------------------------------
SYSTEM           /home/oracle/OraHome1/databases/ora9/system.dbf
UNDO             /home/oracle/OraHome1/databases/ora9/undo.dbf
DATA             /home/oracle/OraHome1/databases/ora9/data.dbf

SQL> select member from v$logfile;

MEMBER
----------------------------------------------
/home/oracle/OraHome1/databases/ora9/redo1.ora
/home/oracle/OraHome1/databases/ora9/redo2.ora
/home/oracle/OraHome1/databases/ora9/redo3.ora

SQL> select name from v$controlfile;

NAME
----------------------------------------------
/home/oracle/OraHome1/databases/ora9/ctl_1.ora
/home/oracle/OraHome1/databases/ora9/ctl_2.ora
/home/oracle/OraHome1/databases/ora9/ctl_3.ora

Now that the files to be moved are known, the database can be shut down:

SQL> shutdown

The files can be copied to their destination:

$ cp /home/oracle/OraHome1/databases/ora9/system.dbf /home/oracle/databases/ora9/system.dbf
$ cp /home/oracle/OraHome1/databases/ora9/undo.dbf /home/oracle/databases/ora9/undo.dbf
$ cp /home/oracle/OraHome1/databases/ora9/data.dbf /home/oracle/databases/ora9/data.dbf
$ cp /home/oracle/OraHome1/databases/ora9/redo1.ora /home/oracle/databases/ora9/redo1.ora
$ cp /home/oracle/OraHome1/databases/ora9/redo2.ora /home/oracle/databases/ora9/redo2.ora
$ cp /home/oracle/OraHome1/databases/ora9/redo3.ora /home/oracle/databases/ora9/redo3.ora
$ cp /home/oracle/OraHome1/databases/ora9/ctl_1.ora /home/oracle/databases/ora9/ctl_1.ora
$ cp /home/oracle/OraHome1/databases/ora9/ctl_2.ora /home/oracle/databases/ora9/ctl_2.ora
$ cp /home/oracle/OraHome1/databases/ora9/ctl_3.ora /home/oracle/databases/ora9/ctl_3.ora

The init.ora file is also copied because it references the control files. I name the copied file just init.ora because it is not in a standard place anymore and it will have to be named explicitly anyway when the database is started up.

$cp /home/oracle/OraHome1/dbs/initORA9.ora /home/oracle/databases/ora9/init.ora

The new location for the control files must be written into the (copied) init.ora file, /home/oracle/databases/ora9/init.ora:

control_files = ('/home/oracle/databases/ora9/ctl_1.ora',
                 '/home/oracle/databases/ora9/ctl_2.ora',
                 '/home/oracle/databases/ora9/ctl_3.ora')

$ sqlplus / as sysdba

SQL> startup exclusive mount pfile=/home/oracle/databases/ora9/init.ora

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/system.dbf' to '/home/oracle/databases/ora9/system.dbf';

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/undo.dbf' to '/home/oracle/databases/ora9/undo.dbf';

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/data.dbf' to '/home/oracle/databases/ora9/data.dbf';

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo1.ora' to '/home/oracle/databases/ora9/redo1.ora';

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo2.ora' to '/home/oracle/databases/ora9/redo2.ora';

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo3.ora' to '/home/oracle/databases/ora9/redo3.ora';

SQL> shutdown

SQL> startup pfile=/home/oracle/databases/ora9/init.ora

How to Increase Size of Redo Log

1. Add new log file groups with the new size:

ALTER DATABASE ADD LOGFILE GROUP <n> ('<file>') SIZE <size>;

2. Issue ALTER SYSTEM SWITCH LOGFILE until one of the new log file groups is in state CURRENT.

3. Now you can drop the old log file groups (a group can only be dropped once its status is INACTIVE):

ALTER DATABASE DROP LOGFILE GROUP <n>;
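A worked sketch of the whole sequence (the group numbers, file name and size below are hypothetical):

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 200M;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> SELECT group#, status FROM v$log;
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;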

Row Chaining and Row Migration

Concepts: The data for a row in a table may be too large to fit into a single data block. There are two circumstances in which this can occur: row chaining and row migration.

Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

Migration: Occurs when a row that originally fit into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row: the rowid of a migrated row does not change.

When a row is chained or migrated, performance associated with this row decreases because Oracle must scan more than one data block to retrieve the information for that row.

o INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform additional processing.

o SELECTs that use an index to select migrated or chained rows must perform additional I/Os.

Detection: Migrated and chained rows in a table or cluster can be identified by using the ANALYZE command with the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and places this information into a specified output table. To create the table that holds the chained rows, execute the script UTLCHAIN.SQL.

SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;

You can also detect migrated and chained rows by checking the 'table fetch continued row' statistic in the v$sysstat view.

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

NAME                       VALUE
-------------------------- -----
table fetch continued row    308

Although migration and chaining are two different things, internally they are represented by Oracle as one. When detecting migration and chaining of rows you should analyze carefully what you are dealing with.

Resolving:

o In most cases chaining is unavoidable, especially when it involves tables with large columns such as LONGs, LOBs, etc. When you have a lot of chained rows in different tables and the average row length of these tables is not that large, then you might consider rebuilding the database with a larger block size.

e.g.: You have a database with a 2K block size. Different tables have multiple large varchar columns with an average row length of more than 2K. This means that you will have a lot of chained rows because your block size is too small. Rebuilding the database with a larger block size can give you a significant performance benefit.

o Migration is caused by PCTFREE being set too low; there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates. You need to increase PCTFREE to avoid migrated rows. If you leave more free space available in the block for updates, then the row will have more room to grow.

SQL script to eliminate row migration:

-- Get the name of the table with migrated rows:
ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '

-- Clean up from last execution
set echo off
DROP TABLE migrated_rows;
DROP TABLE chained_rows;

-- Create the CHAINED_ROWS table
@?/rdbms/admin/utlchain.sql
set echo on
spool fix_mig

-- List the chained and migrated rows
ANALYZE TABLE &table_name LIST CHAINED ROWS;

-- Copy the chained/migrated rows to another table
create table migrated_rows as
SELECT orig.*
FROM &table_name orig, chained_rows cr
WHERE orig.rowid = cr.head_rowid
AND cr.table_name = upper('&table_name');

-- Delete the chained/migrated rows from the original table
DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);

-- Copy the chained/migrated rows back into the original table
INSERT INTO &table_name SELECT * FROM migrated_rows;

spool off

Tips

1. Analyze the table and check the chained count for that particular table (here, 8671 chained rows):

analyze table tbl_tmp_transaction_details compute statistics

select table_name, chain_cnt, pct_free, pct_used from dba_tables where table_name = 'TBL_TMP_TRANSACTION_DETAILS';

2. Increase the pctfree size to 30

alter table tbl_tmp_transaction_details pctfree 30

3. Regenerate the report (chained rows appear only when rows get updated):

tbl_report_generation_status

begin dbms_job.run(190); end;

4. Analyze the table again and check the chained count for that particular table (now 0 chained rows):

analyze table tbl_tmp_transaction_details compute statistics

select table_name, chain_cnt, pct_free, pct_used from dba_tables where table_name = 'TBL_TMP_TRANSACTION_DETAILS';

Note: If we want to follow the procedure of deleting the chained rows from the original table and inserting them again, then we need the CHAINED_ROWS table. To create it, run utlchain.sql from $ORACLE_HOME/rdbms/admin.

Find out the chained rows.

analyze table tbl_tmp_transaction_details list chained rows;

The above command will move the chained rows into the CHAINED_ROWS table. Based on the rowids in CHAINED_ROWS we can move those records to a temp table, delete the chained rows from the original table, and then insert them back into the original table.

select * from tbl_tmp_transaction_details where rowid = 'AAAG8DAAGAAAGOKABD';

Oracle Database Architectural Overview (In Depth)

The architecture of Oracle is configured in such a way as to ensure that client requests for data retrieval and modification are satisfied efficiently while maintaining database integrity. The architecture also ensures that, should parts of the system become unavailable, mechanisms of the architecture can be used to recover from such failure and, once again, bring the database to a consistent state, ensuring database integrity. Furthermore, the architecture of Oracle needs to provide this capability to many clients at the same time, so performance is a consideration when the architecture is configured.

Oracle Instance - The instance is a combination of a memory structure shared by all clients accessing the data, and a number of background processes that perform actions for the instance as a whole.

The shared memory structure is called the SGA, which stands for System Global Area or Shared Global Area, depending on who you ask. Either term is equally acceptable and the acronym SGA is the most common way to refer to this memory structure.

Oracle also includes a number of background processes that are started when the instance is started. These include the database writer (DBW0), system monitor (SMON), process monitor (PMON), log writer (LGWR), and checkpoint process (CKPT). Depending on the configuration of your instance and your requirements, others may also be started. An example of this is the archiver process (ARC0), which will be started if automatic archiving of log files is turned on.
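You can see which background processes are actually running in an instance by querying V$BGPROCESS; a minimal sketch:

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';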

Oracle Database - The database consists of three types of files. Datafiles, of which there can be many depending on the requirements of the database, are used to store the data that users query and modify. The control file is a set of one or more files that keeps information about the status of the database and the data and log files that make it up. The redo log files are used to store a chronological record of changes to the data in the datafiles.
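Each of the three file types can be listed from a mounted or open database; a minimal sketch:

SQL> SELECT name FROM v$datafile;
SQL> SELECT name FROM v$controlfile;
SQL> SELECT member FROM v$logfile;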

User Process - The user process is any application, either on the same computer as the database or on another computer across a network, that can be used to query the database. For example, one of the standard Oracle query tools, SQL*Plus, is a user process. Another example of a user process is Microsoft Excel or an Oracle Financials application. Any application that makes use of the database to query and modify data in the database is considered a user process. The user process does not have to come from Oracle Corporation; it only needs to make use of the Oracle database.

Server Process - The server process is a process launched when a user process makes a connection to the instance. The server process resides on the same computer as the instance and database and performs all of the work that the user process requests. As you will find out in more detail later in this chapter, the server process receives requests from the user process in the form of SQL commands, checks their syntax, executes the statements and returns the data to the user process. In a typical Oracle configuration, each user process will have a corresponding server process on the Oracle server to perform all the work on its behalf.

Oracle Instance - As shown earlier in Figure 1-2, the Oracle instance is made up of a shared memory structure (the SGA), which is composed of a number of distinct memory areas. The other part of the instance is the set of background processes, both required and optional, that perform work on the database.

The instance is always associated with one, and only one, database. This means that when an instance is started, the DB_NAME parameter in the Oracle parameter (INIT.ORA) file specifies which database the instance will be connected to, while the INSTANCE_NAME parameter (which defaults to the value of the DB_NAME parameter) specifies the name of the instance. The configuration of the instance is always performed through parameters specified in the INIT.ORA file and one environment variable, ORACLE_SID, which is used to determine which instance to start and perform configuration operations on when on the same server as the database.

One of the main objectives of the instance is to ensure that connections by multiple users to access database data are handled as efficiently as possible. One way it accomplishes this is by holding information from the datafiles in one of its shared memory structures, the database buffer cache, to allow multiple users reading the same data to retrieve that data from memory instead of disk, since access to memory is about a thousand times quicker than access to a disk file.

Another reason that the instance is important is that, when multiple users access Oracle data, allowing more than one to make changes to the same data can cause data corruption and cause the integrity of the data to become suspect. The instance facilitates locking and the ability for several users to access data at the same time.

Note: It is important to remember that a user process, when attempting to access data in the database, does not connect to the database but to the instance. When specifying what to connect to from a user process, you always specify the name of the instance and not the name of the database. The instance, in this way, is the gatekeeper to the database. It provides the interface to the database without allowing a user to actually touch the various files that make up the database.

System Global Area (SGA) - The SGA is a shared memory structure that is accessed by all processes in order to perform database activity, such as reading and writing data, logging changes to the log files, and keeping track of frequently executed code and data dictionary objects. The SGA is allocated memory from the operating system on which the instance is started, but the memory that is allocated to it is managed by various Oracle processes. The SGA is composed of several specific memory structures, as shown earlier in Figure 1-2.

These include:

Shared Pool - The Shared Pool is an area of SGA memory whose size is specified by the INIT.ORA parameter SHARED_POOL_SIZE. The default value for SHARED_POOL_SIZE is 3,000,000 bytes (just under 3MB) in versions of Oracle prior to 8.1.7, and 8,000KB (just under 8MB) in Oracle 8.1.7. The size of the shared pool remains constant while the instance is running and can only be changed by shutting down and restarting the instance after modifying the value in the INIT.ORA file.
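The current setting can be checked from SQL*Plus; a minimal sketch:

SQL> SHOW PARAMETER shared_pool_size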

The shared pool is divided into two main areas of memory: the data dictionary cache (also called the dictionary cache or row cache) and the library cache. The data dictionary cache is used to store a cached copy of information on frequently accessed data dictionary objects. The information cached includes the name of the object, permissions granted on the object, dependency information, and so on. The data dictionary cache also includes information on the files that make up the database and which tablespaces they belong to, as well as other important information.

When a server process needs to determine what the name 'Students' refers to, it queries the data dictionary cache for that information and, if the information cannot be found, it reads the information from the datafile where the data dictionary is located and then places it in the cache for others to read. The information in the cache is stored using a least-recently-used (LRU) algorithm. This means that information that is frequently requested remains in the cache, while information that is only occasionally required is brought into the cache and flushed out if space is required to bring other information in.

You cannot manually size the data dictionary cache; Oracle does this dynamically and automatically. If more memory is required to cache data dictionary information, which may be the case in a database with many objects, the cache is made larger to accommodate the requests. If the memory is needed by the library cache component of the shared pool, some memory may be freed up and removed from the data dictionary cache.

The other major component of the shared pool is the library cache. The library cache is used to store frequently executed SQL statements and PL/SQL program units such as stored procedures and packages. Storing the parsed statement along with the execution plan for the commands sent to the server in memory allows other users executing the same statement (that is, identical in every way including case of statement text, spaces and punctuation) to reduce the work required by not having to re-parse the code. This improves performance and allows the code to appear to run quicker.

The library cache is broken up into a series of memory structures called shared SQL areas that store three elements of every command sent to the server: the text of the SQL statement or anonymous PL/SQL block itself, the parse tree or compiled version of the statement, and the execution plan for the statement that outlines the steps to be followed to perform the actions for the SQL statement or PL/SQL block.

Each shared SQL area is assigned a unique value that is based upon the hash calculated by Oracle from the text of the statement, the letters and case thereof in it, spacing, and other factors. Identical statements always hash out to the same value whereas different statements, even returning the same result, hash out to different values. For example, the following two statements use two shared SQL areas because their case is different, though they both return the same information:

SELECT * FROM DBA_USERS;
select * from dba_users;

One of the goals for ensuring good performance of the applications accessing data in the database is to share SQL areas by ensuring that statements returning the same result are identical, thereby allowing each subsequent execution of the same SQL statement to use the execution plan created the first time the command was run. The preceding two statements are considered inefficient because they would need to allocate two shared SQL areas to return the same results. This would consume more memory in the shared pool (once for each statement), as well as cause the server process to build the execution plan each time. Like the data dictionary cache, the library cache also works on an LRU algorithm that ensures that statements frequently executed by users remain in the cache, while those executed infrequently or just once are aged out when space is required. Also like the data dictionary cache, you cannot specifically size the library cache; Oracle sizes it automatically based upon the requirements of the users and statements sent to the server, as well as the memory allocated to the shared pool with the SHARED_POOL_SIZE parameter.

Database Buffer Cache - The database buffer cache is used to store the most recently used blocks from the datafiles in memory. Because Oracle does not allow a server process to read data from the database directly before returning it to the user process, the server process always checks to see if a block it needs to read is in the database buffer cache and, if so, retrieves it from the cache and returns the rows required to the user. If the block the server process needs to read is not in the database buffer cache, it reads the block from the datafile and places it in the cache.

The database buffer cache also uses an LRU algorithm to determine which blocks should be kept in memory and which can be flushed out. The type of access can also have an impact on how long a block from the datafile is kept in the cache. In the situation where a block is placed in the cache as a result of an index lookup, the block is placed higher in the list of blocks to be kept in the cache than if it were retrieved as a result of a full table scan, where every block of the table being queried is read. Both placing the datafile blocks in the database buffer cache in the first place, and ranking how long they are kept there, are designed to ensure that frequently accessed blocks remain in the cache.

The database buffer cache is sized by a couple of Oracle initialization parameters. The INIT.ORA parameter DB_BLOCK_SIZE determines the size, in bytes, of each block in the database buffer cache and each block in the datafiles. The value for this parameter is determined when the database is created and cannot be changed. Essentially, each block in the database buffer cache is exactly the same size, in bytes, as each database block in the datafiles. This makes it easy to bring datafile blocks into the database buffer cache: they are the same size. The default for DB_BLOCK_SIZE is 2,048 bytes (2KB), which is too small in almost all cases.

The other INIT.ORA parameter that is used to determine the size of the database buffer cache is DB_BLOCK_BUFFERS. This parameter defaults to 50, which is also its minimum value. The total amount of memory that will be used for the database buffer cache is DB_BLOCK_BUFFERS * DB_BLOCK_SIZE. For example, setting DB_BLOCK_BUFFERS to 2,000 when DB_BLOCK_SIZE is 8,192 allocates 2,000 * 8,192 bytes, or roughly 16MB, of RAM for the database buffer cache. When sizing the database buffer cache, you need to consider the amount of physical memory available on the server and what the database is being used for. If users of the database are going to make use of full table scans, you may be able to have a smaller database buffer cache than if they frequently accessed the data with index lookups. The right number is always a balance: sufficiently high that physical reads of the datafiles are minimized, but not so high that memory problems take place at the operating system level.
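The parameter and the memory actually allocated can be cross-checked from SQL*Plus; a minimal sketch (the 'Database Buffers' row of V$SGA is DB_BLOCK_BUFFERS * DB_BLOCK_SIZE):

SQL> SHOW PARAMETER db_block_buffers
SQL> SELECT * FROM v$sga;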

Redo Log Buffer - Before a change to a row, or the addition or removal of a row from a table in the database, is recorded to the datafiles, it is recorded to the redo log files and, even before that, to the redo log buffer. The redo log buffer is essentially a temporary storage area for information to be written to the redo log files.

When a server process needs to insert a row into a table, change a row, or delete a row, it first records the change in the redo log buffer so that a chronological record of changes made to the database can be kept. The information, and the size of the information, that will be written to the redo log buffer and then to the redo log files depends upon the type of operation being performed. On an INSERT, the entire row is written to the redo log buffer because none of the data already exists in the database, and all of it will be needed in case of recovery. When an UPDATE takes place, only the changed column values are written to the redo log buffer, not the entire row. If a DELETE is taking place, then only the ROWID (the unique internal identifier for the row) is written to the redo log buffer, along with the operation being performed. The whole point of the redo log buffer is to hold this information until it can be written to the redo log file. The redo log buffer is sized by the INIT.ORA parameter LOG_BUFFER. The default size depends on the operating system that is being used but is typically four times the largest operating system block size supported by the host operating system. It is generally recommended that the LOG_BUFFER parameter be set to 64KB in most environments since transactions are generally short.

