Post on 13-Jan-2016
Slide 1
PL/SQL Tuning Case Study
Sydney Oracle Meetup
Arun Birla,
Principal Oracle DBA
Orasoft Consulting Pty Ltd
Slide 2
Agenda
• Background
• Methodology
• Tools
• Key PL/SQL Recommendations
Slide 3
Background
My Background
• Certified Oracle 10g DBA (OCP)
• Sixteen years' experience as an Oracle DBA
• Worked at Oracle Corporation, Australia for four years
Case Study Background
• July 2009
• Utilities company
• Technical Lead of a virtual Application Performance team consisting of Business and IT people
• A PL/SQL batch process generating data for printing bills was failing to meet its SLA
• Presenting recommendations related to PL/SQL tuning only
Slide 4
Methodology
• Collected 10046 SQL trace data
• Analysed the trace data using Trace Analyzer
• Analysed the PL/SQL code for tuning opportunities
• Developed a test case and improved its performance
• Made recommendations to the developers
• Future – explore using ASH data, thus eliminating the need to pre-engineer the application to collect SQL trace data
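As an illustration of the first step, a 10046 trace with wait events and bind values (level 12) can be enabled for the current session with standard Oracle syntax; the tracefile identifier below is just an example, not from the case study:

```sql
-- Tag the trace file so it is easy to find in the trace directory
ALTER SESSION SET tracefile_identifier = 'bill_batch';  -- identifier is illustrative

-- Level 12 = level 4 (bind values) + level 8 (wait events)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the batch process under test ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

Another session can be traced without touching its code via DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id, serial_num, waits => TRUE, binds => TRUE), which avoids pre-engineering the application for the current run.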
Slide 5
Why is elapsed time not the best KPI for measuring improvement in a test case?
Slide 6
Trace Analyzer vs TKPROF
Trace Analyzer is miles ahead of TKPROF:
• The HTML report is more readable than the text output from TKPROF
• Quite an extensive report, providing all performance-related info in one place, with no need to run SQL queries. E.g., for each SQL it shows:
  • Wait event times
  • The explain plan, including index columns and the predicate causing usage of the index
  • All tables, all indexes on them, and their optimiser stats
  • The SQL hierarchy, e.g. triggers
  • A segment I/O wait summary
  • Actual values of bind variables for the costly executions
• Provides hot blocks
• Provides a detailed response time summary and wait event information
Disadvantages:
• Needs to be installed in the database – a new TRCANLZR schema
• Time consuming to generate the report
Trace Analyzer Report – quick 2-minute demo
Slide 7
Key PL/SQL Recommendations
• Reduce commit frequency
• Use BULK COLLECT for fetching data
• Use bulk update/insert/delete for updating/inserting/deleting data
• Make updates more efficient
• Reduce work
Slide 8
Overheads of committing too often
• On commit, LGWR synchronously writes the redo log entries in the log buffer to the redo log files, and processing has to wait ('log file sync' wait event)
Committing too often results in:
• increased redo size, as the redo entries may not fill a block completely, and more cumulative 'log file sync' wait time
• more undo, as the undo block may not be full when the transaction is committed
• more work on the internal transaction table for the undo tablespace, which records the unique system change number (SCN) of each transaction
• Oracle releasing locks on rows and tables, and therefore unnecessary latching and contention for shared resources when relocking
Slide 9
Reduce Commit frequency
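A minimal sketch of the recommendation – commit once per batch of rows rather than once per row. The table and column names here are hypothetical, not from the case study:

```sql
DECLARE
  v_done NUMBER := 0;
BEGIN
  FOR r IN (SELECT account_id FROM bill_accounts) LOOP  -- hypothetical table
    UPDATE bill_accounts
       SET bill_generated = 'Y'
     WHERE account_id = r.account_id;
    -- A COMMIT here would trigger a 'log file sync' wait on every row
    v_done := v_done + 1;
    IF MOD(v_done, 10000) = 0 THEN
      COMMIT;  -- commit every N rows, balancing redo overhead against restartability
    END IF;
  END LOOP;
  COMMIT;      -- final commit for the remainder
END;
/
```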
Slide 10
Bulk Collect Basics
• One of the best tools available for performance improvement
• Retrieves a large quantity of data in one operation rather than looping through a result set one row at a time
• Reduces context switches between the PL/SQL and SQL engines
• Requires a collection, such as a nested table
• Uses the BULK COLLECT clause in SELECT INTO and FETCH INTO statements, or in the RETURNING INTO clause
Example:
DECLARE
  TYPE dnames_tab IS TABLE OF VARCHAR2(30);
  v_depts dnames_tab;
BEGIN
  SELECT dept_names BULK COLLECT INTO v_depts FROM depts;
END;
• The PL/SQL manual (Rel 9.2) gives an example of a 90% improvement with a batch size of 5000 for INSERT. I have typically used 100 to 1000.
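For very large result sets, an unbounded BULK COLLECT pulls everything into PGA memory at once; a LIMIT clause keeps each fetch within the 100–1000 batch range mentioned above. A sketch reusing the depts table from the example:

```sql
DECLARE
  TYPE dnames_tab IS TABLE OF VARCHAR2(30);
  v_depts dnames_tab;
  CURSOR c IS SELECT dept_names FROM depts;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_depts LIMIT 1000;  -- at most 1000 rows per round trip
    EXIT WHEN v_depts.COUNT = 0;
    -- process v_depts here
  END LOOP;
  CLOSE c;
END;
/
```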
Slide 11
Use Bulk collect for fetching data
Slide 12
Bulk Update Basics
• One of the best tools available for performance improvement
• Executes multiple DML statements as a batch, rather than one at a time within a FOR loop
• Reduces context switches between the PL/SQL and SQL engines
• Requires a collection, such as a nested table
• Uses the FORALL statement to execute INSERT, UPDATE, and DELETE statements in batches
• PL/SQL manual – if the DML statement affects four or more database rows, the use of bulk SQL can improve performance considerably
Example:
DECLARE
  TYPE NumList IS VARRAY(20) OF NUMBER;
  depts NumList := NumList(10, 30, 70);
BEGIN
  FORALL i IN depts.FIRST..depts.LAST
    DELETE FROM employees_temp WHERE department_id = depts(i);
…
Slide 13
Use Bulk update for updating data
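The two techniques are typically combined: BULK COLLECT fetches a batch of keys, then FORALL applies the DML for the whole batch in one context switch. A sketch against the employees_temp table from the previous example; the salary update itself is illustrative, not from the case study:

```sql
DECLARE
  TYPE id_tab IS TABLE OF NUMBER;
  v_ids id_tab;
BEGIN
  SELECT employee_id BULK COLLECT INTO v_ids
    FROM employees_temp
   WHERE department_id = 30;

  -- 1 .. COUNT is safe even when the collection is empty
  FORALL i IN 1 .. v_ids.COUNT
    UPDATE employees_temp
       SET salary = salary * 1.1          -- illustrative update
     WHERE employee_id = v_ids(i);
END;
/
```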
Slide 14
Make updates more efficient – eliminate index access

Before (both statements probe the index on UABPYMT_AR_TRANS):
SELECT UABPYMT_BALANCE
  INTO v_amount
  FROM UABPYMT
 WHERE UABPYMT_AR_TRANS = ATransId;
…
UPDATE UABPYMT
   SET UABPYMT_BALANCE = UABPYMT_BALANCE - AAmount,
       UABPYMT_APPL_IND = DECODE( ( ABalance ), 0, 'Y', 'N' ),
       UABPYMT_USER_ID = v_user_id
 WHERE UABPYMT_AR_TRANS = ATransId;

After (fetch the rowid once, then update directly by rowid):
SELECT UABPYMT_BALANCE, rowid
  INTO v_amount, vrowid
  FROM UABPYMT
 WHERE UABPYMT_AR_TRANS = ATransId;
…
UPDATE UABPYMT
   SET UABPYMT_BALANCE = UABPYMT_BALANCE - AAmount,
       UABPYMT_APPL_IND = DECODE( ( ABalance ), 0, 'Y', 'N' ),
       UABPYMT_USER_ID = v_user_id
 WHERE rowid = vrowid;
Slide 15
Reduce Work

Before (subquery executed via an explicit cursor):
CREATE OR REPLACE TRIGGER U$_UABOPEN_DB
BEFORE INSERT OR UPDATE ON UABOPEN FOR EACH ROW
…
DECLARE
  CURSOR IsSpecial IS
    SELECT COUNT(*)
      FROM UZRDQUP
     WHERE UZRDQUP_STATE_CODE = ( SELECT UCBPREM_STAT_CODE_ADDR
                                    FROM UCBPREM
                                   WHERE UCBPREM_CODE = :new.uabopen_prem_code )
       AND UZRDQUP_ORIGIN = :new.uabopen_origin;
…
  OPEN IsSpecial;
  FETCH IsSpecial INTO v_temp;
  CLOSE IsSpecial;
  IF (v_temp > 0) THEN ...
  END IF;

After (single join, stopping at the first match with rownum = 1):
CREATE OR REPLACE TRIGGER U$_UABOPEN_DB
BEFORE INSERT OR UPDATE ON UABOPEN FOR EACH ROW
…
DECLARE
…
  SELECT COUNT(*)
    INTO v_temp
    FROM UZRDQUP a, UCBPREM b
   WHERE a.UZRDQUP_STATE_CODE = b.UCBPREM_STAT_CODE_ADDR
     AND b.UCBPREM_CODE = :new.uabopen_prem_code
     AND a.UZRDQUP_ORIGIN = :new.uabopen_origin
     AND rownum = 1;
  IF (v_temp > 0) THEN ...
  END IF;
Slide 16
Questions?