
    Oracle Business Intelligence

    Applications Version 11g

    Performance Recommendations

    An Oracle Technical Note, 1st Edition

    Jan 2015

    Primary Author: Pavel Buynitsky

    Contributors: Eugene Perkov, Amar Batham, Oksana Stepaneeva,

    Rakesh Kumar, Wasimraja Abdulmajeeth, Valery Enyukov


    Copyright 2014, Oracle. All rights reserved.


    Table of Contents

    Introduction
    Hardware Recommendations for Implementing Oracle BI Applications
        Storage Considerations for Oracle Business Analytics Warehouse
            Shared Storage Impact Benchmarks
            Conclusion
    Application Server Sizing and Capacity Planning
        Introduction
        WebLogic Admin Server: Memory Settings
        Managed BI Server: Memory Settings
        Managed ODI Server: Memory Settings
        Managed BI Server: Other Recommended Settings
    Source Environments Recommendations for Better Performance
        Introduction
        Change Data Capture Considerations for Source Databases
            Source Dependent DataStore with Oracle Golden Gate
            Materialized View Logs
            Database Triggers on Source Tables
        Extract Workload Impact on Data Sources
            Allocate Sufficient TEMP Space on OLTP Data Sources
            Replicate Source Tables to SDS Schema on Target Tier
        Custom Indexes in Oracle EBS for Incremental Loads Performance
            Introduction
            Custom OBIEE indexes in EBS 11i and R12 systems
            Custom EBS indexes in EBS 11i source systems
            Oracle EBS tables with high transactional load
            Additional Custom EBS indexes in EBS 11i source systems
    Oracle Warehouse Recommendations for Better Performance
        Database configuration parameters
        REDO Log Files Sizing Considerations
        Oracle RDBMS System Statistics
        Parallel Query configuration
        Oracle Business Analytics Warehouse Tablespaces
    Oracle BI Applications Best Practices for Oracle Exadata
        Database Requirements for Analytics Warehouse on Exadata
        Handling BI Applications Indexes in Exadata Warehouse Environment
        Gather Table Statistics for BI Applications Tables
        Oracle Business Analytics Warehouse Storage Settings in Exadata
        Parallel Query Use in BI Applications on Exadata
        Compression Implementation for Oracle Business Analytics Warehouse in Exadata
        OBIEE Queries Performance Considerations on Exadata
        Exadata Smart Flash Cache
    Oracle BI Applications High Availability
        Introduction
        High Availability with Oracle Data Guard and Physical Standby Database
    Conclusion


    Introduction

    Oracle Business Intelligence (BI) Applications Version 11g delivers a number of adapters to various business applications on Oracle database. Each Oracle BI Applications implementation requires careful planning to ensure the best performance during ETL, end user queries and dashboard executions.

    This article discusses performance topics for Oracle BI Applications 11g (11.1.1.7.1 and higher), using Oracle Data Integrator (ODI) 11g 11.1.1.7.1 and Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.x. Most of the recommendations are generic to BI Applications 11g content and its BI tech stack. Release-specific topics refer to exact version numbers.

    Note: This document is intended for experienced Oracle BI Administrators, DBAs and Applications implementers. It covers advanced performance tuning techniques in ODI, OBIEE and Oracle RDBMS, so all recommendations must be carefully verified in a test environment before they are applied to a production instance. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.

    Hardware Recommendations for Implementing Oracle BI Applications

    Depending on the target database size, Oracle BI Applications Version 11g implementations can be categorized as small, medium or large. This chapter covers hardware recommendations primarily for ensuring ETL performance, OBIEE report scalability, and future growth. Refer to Oracle BI Analytic Applications documentation for minimum hardware requirements and to Oracle Business Intelligence Enterprise Edition (OBIEE) documentation for OBIEE hardware deployment and scalability topics.

    Oracle Exadata has delivered the best performance for BI Applications ETL and OBIEE queries. This document covers BI Applications / Exadata specific topics in a separate chapter. Refer to Oracle Exadata documents for the hardware configuration and specifications that will work best for your BI Applications implementation.

    The Oracle Exalytics platform can effectively scale up OBIEE end user query performance. Exalytics topics and best practices are covered in a separate document.

    The table below summarizes hardware recommendations for Oracle BI Applications tiers by target volume ranges.

    Target Database Tier (Oracle RDBMS):

    | Configuration  | SMALL                     | MEDIUM                    | LARGE                     |
    | Target Volume  | Up to 200 Gb              | 200 Gb to 1 Tb            | 1 Tb and higher           |
    | # CPU cores    | 16                        | 32                        | 64*                       |
    | Physical RAM   | 32-64 Gb                  | 64-128 Gb                 | 256+ Gb*                  |
    | Storage Space  | Up to 400 Gb              | 400 Gb to 2 Tb            | 2 Tb and higher           |
    | Storage System | Local (PATA, SATA, iSCSI) or NAS, preferred RAID configuration | High performance SCSI or SAN with 16 Gbps HBA or higher, connected over fibre channel / 2x Gb Ethernet NIC | High performance SCSI or SAN with 24 Gbps HBA or higher, connected over fibre channel / 2x Gb Ethernet NIC |

    Application Server Tier (OBIEE, ODI, BIACM, LPG):

    | Configuration  | SMALL        | MEDIUM       | LARGE        |
    | # CPU cores    | 8            | 16           | 32           |
    | Physical RAM   | 24 Gb        | 32 Gb        | 64 Gb        |
    | Storage Space  | 100 Gb local | 200 Gb local | 400 Gb local |

    * Consider implementing Oracle RAC with multiple nodes to accommodate large numbers of concurrent users accessing web reports and dashboards.
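    For quick capacity planning, the database-tier thresholds in the table can be expressed as a small lookup. This is an illustrative sketch using the tier boundaries above; the function name and return shape are not part of any Oracle tooling:

```python
# Illustrative sizing helper derived from the target-database-tier table above.
# Thresholds and specs come from the document; the function is only a sketch.

def db_tier_sizing(target_volume_gb):
    """Return (category, CPU cores, RAM range) for the target database tier."""
    if target_volume_gb <= 200:
        return ("SMALL", 16, "32-64 Gb")
    elif target_volume_gb <= 1024:           # up to 1 Tb
        return ("MEDIUM", 32, "64-128 Gb")
    else:
        return ("LARGE", 64, "256+ Gb")      # consider Oracle RAC at this scale

print(db_tier_sizing(150))    # a small warehouse
print(db_tier_sizing(2048))   # a large warehouse
```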

    Important!

    The configurations above assume that the ODI Agent, Load Plan Generator (LPG) and BIA Configuration Manager (BIACM) are all collocated on the same hardware as OBIEE. The recommended specifications accommodate primarily the OBIEE workload; neither ODI nor LPG and BIACM generate noticeable overhead. If you plan to deploy OBIEE on a separate server (or farm), then you can use a less powerful configuration for ODI, LPG and BIACM. Refer to Oracle WebLogic documentation for detailed hardware requirements.

    The internal benchmarks did not show any noticeable workload from ODI agent processes. Oracle to Oracle configurations can effectively use a database link knowledge module, even further minimizing the impact from ODI processes.

    LPG and CM deployments on the Warehouse DB tier are not recommended.

    ODI deployments with agent processes, running on separate servers, or agents load balancing on multiple servers, are not covered in this document. Refer to BI Applications and ODI documentation for more information for such configurations.

    Depending on the number of planned concurrent users running OBIEE reports, you may have to plan for more memory on the target tier to accommodate the query workload.

    To ensure the queries scalability on OBIEE tier, consider implementing OBIEE Cluster or Oracle Exalytics. Refer to OBIEE and Exalytics documentation for more details.

    It is recommended to set up all Oracle BI Applications tiers in the same local area network. Deploying any of its tiers over Wide Area Network (WAN) may cause additional delays during ETL Extract mappings execution and impact Load Plan windows.

    Storage Considerations for Oracle Business Analytics Warehouse

    Oracle BI Applications ETL execution plans are optimized to maximize hardware utilization on the ETL and target tiers and reduce ETL runtime. A well-optimized infrastructure usually consumes more CPU and memory on the ETL tier and generates a rather heavy storage I/O load on the target tier during an ETL execution. The storage can easily become a major bottleneck as a result of actions such as:

    - Setting excessive parallel query processes (refer to the Parallel Query Configuration section for more details)
    - Running multiple I/O intensive applications, such as databases, on shared storage
    - Choosing sub-optimal storage for running BI Applications tiers

    Shared Storage Impact Benchmarks

    Sharing storage among heavy I/O processes can easily degrade ETL performance and extend ETL runtime. The following benchmarks measured the impact of sharing the same NetApp filer storage between two target databases, concurrently loading data in two parallel ETL executions.

    Configuration description:

    - Linux servers #1 and #2 have the following configuration:
      o 2 quad-core 1.8 GHz Intel Xeon CPUs
      o 32 GB RAM
    - Shared NetApp filer volumes, volume1 and volume2, are mounted as EXT3 file systems:
      o Server #1 uses volume1
      o Server #2 uses volume2

    Execution test description:

    - Set the record block size for I/O operations to 8K or 16K, the recommended block sizes in the target database.
    - Execute a parallel load using eight child processes to imitate the average workload during an ETL run.
    - Run the following test scenarios:
      o Test #1: execute the parallel load above on NFS volume1 using Linux server #1; keep Linux server #2 idle.
      o Test #2: execute the parallel load above on both NFS volume1 and volume2 using Linux servers #1 and #2.

    The following benchmarks describe performance measurements in KB / sec:

    - Initial Write: write a new file.

    - Rewrite: re-write in an existing file.

    - Read: read an existing file.

    - Re-Read: re-read an existing file.

    - Random Read: read a file with accesses made to random locations in the file.

    - Random Write: write a file with accesses made to random locations in the file.

    - Mixed workload: read and write a file with accesses made to random locations in the file.

    - Reverse Read: read a file backwards.

    - Record Rewrite: write and re-write the same record in a file.

    - Strided Read: read a file with a strided access behavior, for example: read at offset zero for a length of 4 Kbytes, seek

    200 Kbytes, read for a length of 4 Kbytes, then seek 200 Kbytes and so on.

    The test summary:

    | Test Type      | Test #1            | Test #2            |
    | Initial write  | 46087.10 KB/sec    | 30039.90 KB/sec    |
    | Rewrite        | 70104.05 KB/sec    | 30106.25 KB/sec    |
    | Read           | 3134220.53 KB/sec  | 2078320.83 KB/sec  |
    | Re-read        | 3223637.78 KB/sec  | 3038416.45 KB/sec  |
    | Reverse Read   | 1754192.17 KB/sec  | 1765427.92 KB/sec  |
    | Stride read    | 1783300.46 KB/sec  | 1795288.49 KB/sec  |
    | Random read    | 1724525.63 KB/sec  | 1755344.27 KB/sec  |
    | Mixed workload | 2704878.70 KB/sec  | 2456869.82 KB/sec  |
    | Random write   | 68053.60 KB/sec    | 25367.06 KB/sec    |
    | Pwrite         | 45778.21 KB/sec    | 23794.34 KB/sec    |
    | Pread          | 2837808.30 KB/sec  | 2578445.19 KB/sec  |
    | Total Time     | 110 min            | 216 min            |

    Initial Write, Rewrite, Read, Random Write, and Pwrite (buffered write operation) were impacted the most, while Reverse Read, Stride Read, Random Read, Mixed Workload and Pread (buffered read operation) were impacted the least by the concurrent load.

    Read operations do not require specific RAID sync-up operations; therefore read requests are less dependent on the number of concurrent threads.
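    The degradation of the write-side operations can be quantified directly from the table. The short script below (illustrative, using the published KB/sec figures) computes the percentage drop from Test #1 to Test #2:

```python
# Percentage slowdown from Test #1 (isolated load) to Test #2 (shared storage),
# using the benchmark figures from the table above.

results = {                      # test type: (Test #1 KB/sec, Test #2 KB/sec)
    "Initial write": (46087.10, 30039.90),
    "Rewrite":       (70104.05, 30106.25),
    "Random write":  (68053.60, 25367.06),
    "Pwrite":        (45778.21, 23794.34),
}

for name, (isolated, shared) in results.items():
    drop_pct = (isolated - shared) / isolated * 100
    print(f"{name}: {drop_pct:.0f}% slower on shared storage")
```

    The write operations lose roughly a third to two thirds of their throughput under the concurrent load, which is why the total runtime nearly doubles (110 min vs 216 min).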

    Conclusion

    You should carefully plan storage deployment, configuration and usage for the Oracle BI Applications environment. Avoid sharing the same RAID controller(s) across multiple databases. Set up periodic monitoring of your I/O system during both ETL and end user query load to catch potential bottlenecks.

    Application Server Sizing and Capacity Planning

    Introduction

    The Application Server tier can come under heavy load both from one-time administration tasks, such as ODI Load Plan generation, and from daily workload, such as OBIEE end user queries. It therefore needs to be sized for the combined workload of the BI Domain components below, as well as for future scalability to support more business users.

    The Application Server runs the Oracle BI Domain in Oracle WebLogic Server with the following services:

    - WebLogic Admin Server
    - Managed BI Server bi_server1 with the following deployed components:
      o Oracle BI Enterprise Edition (OBIEE)
      o Oracle BI Applications Configuration Manager (BIACM)
      o Functional Setup Manager (FSM)
      o Load Plan Generator (LPG)
    - Managed ODI Server odi_server1:
      o ODI Console
      o ODI Agent

    Important! It is not recommended to collocate the Application Server tier with the Data Warehouse tier for the same reasons.


    The next sections cover the sizing parameters of the above components for better performance.

    WebLogic Admin Server: Memory Settings

    The WebLogic Admin Server requires an initial heap of 256Mb and a max heap size of 1Gb to run the Admin Server, Enterprise Manager and EM Console:

    java -server -Xms256m -Xmx1024m -XX:MaxPermSize=512m -Dweblogic.Name=AdminServer

    Managed BI Server: Memory Settings

    Managed BI Server components consume most of the Application Server tier memory, so the server needs to be sized to accommodate the initial rollout and future workload. The total recommended memory heap size for managed server bi_server1 is: initial 2048Mb, max 6144Mb.

    java -server -Xms2048m -Xmx6144m -Dweblogic.Name=bi_server1

    OBIEE and LPG are the most critical components deployed under bi_server1, which may require the largest amounts of

    memory. They are covered in the section below.
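    When auditing the heap settings across the Admin, BI and ODI managed servers, a small parser for the -Xms/-Xmx flags can help compare them in one place. This is an illustrative helper; the function and its return shape are not part of any Oracle tooling:

```python
import re

def parse_heap_args(java_cmd):
    """Extract initial (-Xms) and max (-Xmx) heap sizes, in Mb, from a java command line."""
    sizes = {}
    for flag in ("Xms", "Xmx"):
        m = re.search(rf"-{flag}(\d+)([mMgG])", java_cmd)
        if m:
            value, unit = int(m.group(1)), m.group(2).lower()
            sizes[flag] = value * 1024 if unit == "g" else value
    return sizes

# The bi_server1 settings recommended above: initial 2048Mb, max 6144Mb.
cmd = "java -server -Xms2048m -Xmx6144m -Dweblogic.Name=bi_server1"
print(parse_heap_args(cmd))  # {'Xms': 2048, 'Xmx': 6144}
```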

    OBIEE Memory Recommendations

    Oracle BI Enterprise Edition end-user reports and dashboards, executed concurrently, may result in higher memory

    consumption. OBIEE Caching can mitigate the workload impact and reduce the memory footprint. Business-wide BI

    implementations with complex end user query patterns, ad-hoc reports from BI Answers, regular ibot queries, as well as

    stored reports and dashboards, running against multiple functional areas, typically get 25-40% OBIEE cache hit ratio.

    Important! Oracle has published a separate paper (Oracle BI EE 11g Architectural Deployment: Capacity Planning Doc ID

    1323646.1) with BI Server sizing calculations, so OBIEE scalability benchmarks are outside the scope of this technote.

    The internal benchmarks for 150 virtual users with a think time of 5 seconds, running non-cached reports against a medium-sized data warehouse, showed a peak of ~5.5Gb of memory used by OBIEE. So the recommended memory allocation for OBIEE, min = 2Gb and max = 6Gb, should be sufficient for most initial rollouts. Monitor your Application Server tier for workload and memory usage during BI report querying, and increase the memory settings as needed.

    Load Plan Generator Memory Recommendations

    LPG generates an execution plan for a chosen adapter and functional areas. It is usually run only once or twice, primarily during the BI Applications implementation phase. LPG requires up to 3Gb of max heap to complete generating a Load Plan; otherwise the LPG process may hang after running out of memory.

    Note: Load Plan generation is expected to be performed during implementation only, so LPG should not compete with OBIEE for available memory.

    Managed ODI Server: Memory Settings

    The Managed ODI Server requires an initial heap size of 1Gb and a max heap size of 2Gb to ensure its Agent can handle the ETL workload and concurrency in a generated Load Plan. Set these values in the server start arguments in the WebLogic Administration Console, for example:

    java -server -Xms1024m -Xmx2048m -Dweblogic.Name=odi_server1

    Managed BI Server: Other Recommended Settings

    Review the additional recommended configuration parameters for your OBIEE server:

    - Maximum number of rows fetched by BI Presentation Server = 65000
    - OBIEE logging properties:
      o BI Server loglevel max age days = 90
      o Query loglevel max age days = 90
      o Loglevel = Fine

    Source Environments Recommendations for Better Performance

    Introduction

    Oracle BI Applications data loads may cause additional CPU and memory overhead on a source tier. There may be a larger impact on the I/O subsystem, especially during full ETL loads. Using several I/O controllers, or a hardware RAID controller with multiple I/O channels, on the source side helps to minimize the impact on Business Applications during ETL runs and to speed up data extraction into the target data warehouse. This chapter covers important topics on how to minimize the OLTP impact from ETL.

    Change Data Capture Considerations for Source Databases

    Oracle BI Analytic Applications can use different techniques for capturing changed data in the source databases, minimizing the impact of ETL extracts on OLTP and improving incremental ETL performance. It effectively uses indexes on LAST_UPDATE_DATE columns in Oracle EBS and Image tables in Siebel CRM. However, some source databases may not have the required logic for capturing changed rows. As a result, incremental mappings would scan large tables, causing unnecessary workload on the source databases and extending incremental ETL runtime.

    This chapter discusses the following custom Change Data Capture (CDC) options:

    - Source Dependent DataStore (SDS) with Golden Gate
    - Materialized View Logs (Oracle RDBMS)
    - Database Triggers


    Note: you have to update your ODI repository to use the replicated persistent staging tables or materialized views instead of

    the original source tables in ODI scenarios.

    Source Dependent DataStore with Oracle Golden Gate

    Oracle BI Applications 11g provides out-of-the-box support for the Source Dependent DataStore (SDS) with Oracle Golden Gate (GG) option. This is the recommended configuration, especially if the source OLTP applications do not have native CDC support. This option provides the best flexibility and performance for CDC, and the least impact on source databases. Golden Gate parses each captured record and marks it as an insert, update or delete. Refer to the Golden Gate / OLTP source documentation for more details on integrating and configuring Golden Gate for your source database.

    Materialized View Logs

    Introduction

    Oracle Materialized View (MV) Logs capture the changing data in base source tables and supply the critical CDC volumes to the

    extract mappings.

    Important! MV Logs present additional challenges when used in OLTP environments. You should carefully test MV Log based CDC before implementing it in your production environment.

    Review the following constraints for using MV Logs:

    1. MV Logs can add overhead to business transaction performance if created on heavy-volume transactional tables in busy OLTP sources.
    2. Ensure regular MV refreshes to purge MV Logs. Otherwise they will grow in size and generate even more overhead for OLTP applications.
    3. Avoid sharing an MV Log between two or more fast refreshable MVs. The MV Log will not be purged until all dependent MVs are refreshed.

    Refer to Oracle documentation for more details on MV and MV Logs implementation.

    The next sections use an example of an MV Log on PS_PROJ_RESOURCE in PeopleSoft to speed up the incremental extract for the SDE_PSFT_ProjectBudgetFact mapping.

    MV Log CDC Implementation

    The PeopleSoft ESA Application does not maintain the DTTM_STAMP column in PS_PROJ_RESOURCE, which is used in the SDE_PSFT_ProjectBudgetFact extract logic. As a result, the optimizer uses an expensive full table scan during an incremental extract SQL execution.

    The following steps describe the CDC implementation using the MV Log approach:

    1. Create an MV Log on the PS_PROJ_RESOURCE source table:

    CREATE MATERIALIZED VIEW LOG ON PS_PROJ_RESOURCE NOCACHE LOGGING NOPARALLEL WITH SEQUENCE;

    2. Create a primary key (PK) constraint based on PS_PROJ_RESOURCE's unique index:

    ALTER TABLE PS_PROJ_RESOURCE ADD CONSTRAINT PS_PROJ_RESOURCE_PK PRIMARY KEY
    (BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, RESOURCE_ID) USING INDEX PS_PROJ_RESOURCE;

    3. Create a Materialized View using the PS_PROJ_RESOURCE definition and an additional LAST_UPDATE_DT column. The latter will be populated with SYSDATE values:

    CREATE TABLE OBIEE_PS_PROJ_RESOURCE_MV AS SELECT * FROM PS_PROJ_RESOURCE WHERE 1=2;
    ALTER TABLE OBIEE_PS_PROJ_RESOURCE_MV ADD (LAST_UPDATE_DT DATE DEFAULT SYSDATE);
    CREATE MATERIALIZED VIEW OBIEE_PS_PROJ_RESOURCE_MV
    ON PREBUILT TABLE
    REFRESH FAST ON DEMAND
    AS SELECT * FROM PS_PROJ_RESOURCE;

    4. Create an index on the MV LAST_UPDATE_DT column:

    CREATE INDEX OBIEE_PS_PROJ_RESOURCE_I1 ON OBIEE_PS_PROJ_RESOURCE_MV(LAST_UPDATE_DT);

    5. Create a database view on the MV, which will be used in the SDE fact Source Qualifier query:

    CREATE VIEW OBIEE_PS_PROJ_RESOURCE_VW AS SELECT * FROM OBIEE_PS_PROJ_RESOURCE_MV;

    6. Run a complete refresh for the MV. The subsequent daily ETLs will perform fast refreshes using the MV Log:

    exec dbms_mview.refresh('OBIEE_PS_PROJ_RESOURCE_MV', 'C');

    7. Update the SDE fact extract logic, replace the original table with the MV, and add an additional filter:

    LAST_UPDATE_DT > to_date('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')

    8. Create an additional step in your ODI scenario to add the MV refresh call:

    BEGIN
      DBMS_MVIEW.REFRESH('getTableName()', 'F');
    END;

    9. Save the changes and re-generate the updated scenario in ODI Studio.

    Database Triggers on Source Tables

    You can consider database triggers to capture new and updated records and populate auxiliary tables in a source database.

    This option requires careful implementation to minimize the overhead on OLTP environments, especially for high volume

    transaction tables.

    Here is an example of such a trigger for an Oracle database:

    CREATE OR REPLACE TRIGGER CDC_Trigger
    AFTER UPDATE OR INSERT ON Base_Table
    FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO AUX_TABLE VALUES (:new.TEST_ID, SYSTIMESTAMP);
      END IF;
      IF UPDATING THEN
        UPDATE AUX_TABLE SET LAST_UPDATE_DATE = SYSTIMESTAMP WHERE TEST_ID = :new.TEST_ID;
      END IF;
    END;
    /

    Review the additional considerations below:

    - Ensure data integrity between the primary source and auxiliary CDC tables in your design.
    - Adding a unique index on the auxiliary CDC table's primary column will speed up updates.
    - Measure the impact on your source OLTP workload carefully before you choose the trigger CDC approach, as it can easily generate significant overhead and impact transactional business users.
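    The insert-versus-update semantics of the trigger above can be modeled in a few lines. This is an illustrative Python sketch of the AUX_TABLE behavior, not Oracle code; timestamps are simplified to integers:

```python
# Minimal model of the trigger-maintained CDC table: one row per TEST_ID,
# carrying the timestamp of the latest insert or update on the base table.

aux_table = {}  # TEST_ID -> last-change timestamp

def on_row_change(test_id, timestamp):
    """Mirror of CDC_Trigger: insert a tracking row on INSERT,
    refresh its timestamp on UPDATE (both collapse to an upsert here)."""
    aux_table[test_id] = timestamp

on_row_change(101, 1)   # new row inserted in Base_Table
on_row_change(102, 2)   # another insert
on_row_change(101, 3)   # row 101 updated: its timestamp is refreshed

# Rows changed since timestamp 2 are the candidates for the next incremental extract:
changed = sorted(tid for tid, ts in aux_table.items() if ts > 2)
print(changed)  # [101]
```

    An incremental extract then joins the base table to this auxiliary table on TEST_ID and filters on the stored timestamp, instead of scanning the whole base table.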

    Extract Workload Impact on Data Sources

    ETL workload impact on OLTP data sources is one of the critical factors in ETL optimization and performance. ETL Administrators may face constraints on creating additional custom indexes on source tables or on employing database parallel processing to speed up their incremental ETLs. On the other hand, the target hardware, sized to handle a much larger workload from end user queries, can use spare resources to offload the data source and deliver critical improvements within incremental ETL windows.

    If you find critical extract mappings and cannot use more OLTP data source resources, consider the SDS option, which replicates all source tables to an SDS schema on the target tier.

    Allocate Sufficient TEMP Space on OLTP Data Sources

    Oracle BI Analytic Applications extract mappings may operate on large data volumes, compared to the small changes from OLTP transactional activity. As a result, OLTP data sources can run out of TEMP space during heavy-volume initial extracts. The required source TEMP space varies with OLTP size and processed volumes; the recommended TEMP space for BI Applications ETL ranges from 100Gb to 1Tb. Allocate sufficient storage for additional TEMP space in the OLTP environment: it is more practical to reclaim unused TEMP space after large-volume extracts complete than to restart long-running mappings from the very beginning because of a TEMP space shortage during the ETL.

    Replicate Source Tables to SDS Schema on Target Tier

    If you observe a significant load from some extract mappings on the OLTP source environment and face constraints on implementing a change data capture mechanism, consider replicating the participating source objects to the target warehouse. BI Applications offers the Source Dependent DataStore (SDS) with Golden Gate (GG) option, fully supported in Load Plans for both initial and incremental ETLs. Refer to BI Applications documentation for more details on SDS with GG deployment.

    Custom Indexes in Oracle EBS for Incremental Loads Performance

    Introduction

    Oracle EBS source database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications to capture incremental data changes. Some source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column, which hampers the performance of incremental loads. There are three categories of such source EBS tables:

    - Tables that do not have indexes on LAST_UPDATE_DATE in the latest EBS releases, but for which no performance implications have been reported with indexes on the LAST_UPDATE_DATE column.
    - Tables that have indexes on LAST_UPDATE_DATE columns, introduced in Oracle EBS Release 12.
    - Tables that cannot have indexes on LAST_UPDATE_DATE because of serious performance degradation in the source EBS environments.

    Custom OBIEE indexes in EBS 11i and R12 systems

    The first category covers tables which do not have indexes on LAST_UPDATE_DATE in any EBS release. The creation of custom indexes on LAST_UPDATE_DATE columns for tables in this category has been reviewed and approved by Oracle's EBS Performance Group. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below.


    If your source system is one of the following:

    - EBS R12

    - EBS 11i release 11.5.10

    - EBS 11i release 11.5.9 or lower and it has been migrated to OATM*

    then replace <idx_tablespace> with APPS_TS_TX_IDX prior to running the DDL.

    If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace

    <idx_tablespace> with <Owner>X, where <Owner> is the owner of the table which will be indexed on the LAST_UPDATE_DATE column.

    DDL script for custom index creation:

    CREATE INDEX AP.OBIEE_AP_EXP_REP_HEADERS_ALL ON AP.AP_EXPENSE_REPORT_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AP.OBIEE_AP_INVOICE_PAYMENTS_ALL ON AP.AP_INVOICE_PAYMENTS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AP.OBIEE_AP_PAYMENT_SCHEDULES_ALL ON AP.AP_PAYMENT_SCHEDULES_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AP.OBIEE_AP_INVOICES_ALL ON AP.AP_INVOICES_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AP.OBIEE_AP_HOLDS_ALL ON AP.AP_HOLDS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AP.OBIEE_AP_AE_HEADERS_ALL ON AP.AP_AE_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX CST.OBIEE_CST_COST_TYPES ON CST.CST_COST_TYPES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX GL.OBIEE_GL_JE_HEADERS ON GL.GL_JE_HEADERS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_ORGANIZATION_PROFILES ON AR.HZ_ORGANIZATION_PROFILES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_CONTACT_POINTS ON AR.HZ_CONTACT_POINTS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_CUST_SITE_USES_ALL ON AR.HZ_CUST_SITE_USES_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_LOCATIONS ON AR.HZ_LOCATIONS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_RELATIONSHIPS ON AR.HZ_RELATIONSHIPS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_CUST_ACCT_SITES_ALL ON AR.HZ_CUST_ACCT_SITES_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_CUST_ACCOUNTS ON AR.HZ_CUST_ACCOUNTS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_CUST_ACCOUNT_ROLES ON AR.HZ_CUST_ACCOUNT_ROLES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_PARTY_SITES ON AR.HZ_PARTY_SITES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_HZ_PERSON_PROFILES ON AR.HZ_PERSON_PROFILES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX ONT.OBIEE_OE_ORDER_HEADERS_ALL ON ONT.OE_ORDER_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX ONT.OBIEE_OE_ORDER_HOLDS_ALL ON ONT.OE_ORDER_HOLDS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PER.OBIEE_PAY_INPUT_VALUES_F ON PER.PAY_INPUT_VALUES_F(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PER.OBIEE_PAY_ELEMENT_TYPES_F ON PER.PAY_ELEMENT_TYPES_F(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PO.OBIEE_RCV_SHIPMENT_LINES ON PO.RCV_SHIPMENT_LINES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PO.OBIEE_RCV_SHIPMENT_HEADERS ON PO.RCV_SHIPMENT_HEADERS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX AR.OBIEE_AR_CASH_RECEIPTS_ALL ON AR.AR_CASH_RECEIPTS_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX WSH.OBIEE_WSH_DELIVERY_DETAILS ON WSH.WSH_DELIVERY_DETAILS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX WSH.OBIEE_WSH_NEW_DELIVERIES ON WSH.WSH_NEW_DELIVERIES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX FND.OBIEE_FND_LOOKUP_VALUES ON FND.FND_LOOKUP_VALUES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_PROJECT_STATUS ON PA.PA_PROJECT_STATUSES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_PROJECT_TYPES_ALL ON PA.PA_PROJECT_TYPES_ALL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX INV.OBIEE_MTL_SECONDARYINVENTORIES ON INV.MTL_SECONDARY_INVENTORIES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_PROJECT_ROLE_TYPES_B ON PA.PA_PROJECT_ROLE_TYPES_B(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_PROJECT_ROLE_TYPES_TL ON PA.PA_PROJECT_ROLE_TYPES_TL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_CLASS_CATEGORIES ON PA.PA_CLASS_CATEGORIES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX FND.OBIEE_FND_DESCR_FLEXCONTEXTSTL ON FND.FND_DESCR_FLEX_CONTEXTS_TL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX ONT.OBIEE_OE_ORDER_SOURCES ON ONT.OE_ORDER_SOURCES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX JTF.OBIEE_JTF_RS_RESOURCE_EXTNS ON JTF.JTF_RS_RESOURCE_EXTNS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PER.OBIEE_PER_ADDRESSES ON PER.PER_ADDRESSES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX INV.OBIEE_MTL_TRANSACTION_TYPES ON INV.MTL_TRANSACTION_TYPES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_PROJ_ELEMENTS ON PA.PA_PROJ_ELEMENTS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PA.OBIEE_PA_TASKS ON PA.PA_TASKS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX HXT.OBIEE_HXT_TASKS ON HXT.HXT_TASKS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX IBY.OBIEE_IBY_PAYMENT_METHODS_TL ON IBY.IBY_PAYMENT_METHODS_TL(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX INV.OBIEE_MTL_UOM_CONVERSIONS ON INV.MTL_UOM_CONVERSIONS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX PER.OBIEE_HR_ORGANIZATION_INF ON PER.HR_ORGANIZATION_INFORMATION(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX XLA.OBIEE_XLA_AE_HEADERS ON XLA.XLA_AE_HEADERS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX XLA.OBIEE_XLA_AE_LINES ON XLA.XLA_AE_LINES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX HXC.OBIEE_HXC_TRANSACTION_DETAILS ON HXC.HXC_TRANSACTION_DETAILS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX HXC.OBIEE_HXC_TIME_BUILDING_BLOCKS ON HXC.HXC_TIME_BUILDING_BLOCKS(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    CREATE INDEX HXC.OBIEE_HXC_TIME_ATTRIBUTE_USAGES ON HXC.HXC_TIME_ATTRIBUTE_USAGES(LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    There is one more custom index, recommended for Supply Chain Analytics, on the AP_NOTES.SOURCE_OBJECT_ID column:

    CREATE INDEX AP.OBIEE_AP_NOTES ON AP.AP_NOTES(SOURCE_OBJECT_ID) tablespace <idx_tablespace>;


    Important! You must use FND_STATS to compute statistics on the newly created indexes and update statistics on the

    newly indexed table columns in the EBS database.
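    As an illustration, statistics on a newly indexed table can be refreshed with FND_STATS from SQL*Plus; the AP owner, the AP_INVOICES_ALL table, and the sampling percentage below are examples only — repeat for each table that received a custom OBIEE_ index:

    ```sql
    -- Gather table statistics (index statistics are gathered via the default cascade)
    EXEC FND_STATS.GATHER_TABLE_STATS(ownname => 'AP', tabname => 'AP_INVOICES_ALL', percent => 10);
    ```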

    Important! All indexes introduced in this section have the prefix OBIEE_ and do not follow the standard Oracle

    EBS index naming conventions. If a future Oracle EBS patch creates an index on the LAST_UPDATE_DATE column of one of the

    tables listed above, Oracle EBS's Autopatch may fail. In such cases the conflicting OBIEE_ index must be dropped,

    and Autopatch can be restarted.
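    Before applying an EBS patch, you can list the custom indexes that might conflict. A sketch, assuming no other objects in your instance use the OBIEE_ prefix:

    ```sql
    -- List custom BI Applications indexes so they can be dropped before Autopatch
    SELECT owner, index_name, table_name
    FROM   dba_indexes
    WHERE  index_name LIKE 'OBIEE\_%' ESCAPE '\';
    ```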

    Custom EBS indexes in EBS 11i source systems

    The second category covers tables that have indexes on LAST_UPDATE_DATE officially introduced in Oracle EBS Release 12.

    All Oracle EBS 11i customers should create these indexes using the DDL script provided below. Do not change

    the index names, to avoid any future patch or upgrade failures on the source EBS side.

    If your source system is one of the following:

    - EBS R12

    - EBS 11i release 11.5.10

    - EBS 11i release 11.5.9 or lower and it has been migrated to OATM*

    then replace <idx_tablespace> with APPS_TS_TX_IDX prior to running the DDL.

    If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace

    <idx_tablespace> with <Owner>X, where <Owner> is the owner of the table which will be indexed on the LAST_UPDATE_DATE

    column.

    DDL script for custom index creation:

    CREATE INDEX PO.RCV_TRANSACTIONS_N23 ON PO.RCV_TRANSACTIONS (LAST_UPDATE_DATE) INITRANS 2 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_DISTRIBUTIONS_N13 ON PO.PO_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) INITRANS 2 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_LINE_LOCATIONS_N11 ON PO.PO_LINE_LOCATIONS_ALL (LAST_UPDATE_DATE) INITRANS 2 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_LINES_N10 ON PO.PO_LINES_ALL (LAST_UPDATE_DATE) INITRANS 2 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 4K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_REQ_DISTRIBUTIONS_N6 ON PO.PO_REQ_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) INITRANS 4 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_REQUISITION_LINES_N17 ON PO.PO_REQUISITION_LINES_ALL (LAST_UPDATE_DATE) INITRANS 4 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_HEADERS_N9 ON PO.PO_HEADERS_ALL (LAST_UPDATE_DATE) INITRANS 2 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 1M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX PO.PO_REQUISITION_HEADERS_N6 ON PO.PO_REQUISITION_HEADERS_ALL (LAST_UPDATE_DATE) INITRANS 4 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;

    CREATE INDEX AR.RA_CUSTOMER_TRX_N14 ON AR.RA_CUSTOMER_TRX_ALL (LAST_UPDATE_DATE) INITRANS 4 MAXTRANS 255 PCTFREE 10
    STORAGE (INITIAL 4K NEXT 4M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) tablespace <idx_tablespace>;


    Important! You should use FND_STATS to compute statistics on the newly created indexes and update statistics on the

    newly indexed table columns in the EBS database.

    Since all the custom indexes above follow the standard Oracle EBS index naming conventions, future upgrades will not be

    affected.

    *) Oracle Applications Tablespace Model (OATM):

    Oracle EBS release 11.5.9 and lower uses two tablespaces for each Oracle Applications product: one for the tables and

    one for the indexes. The old tablespace model's standard naming convention is the product's Oracle

    schema name with the suffix D for data tablespaces and X for index tablespaces. For example, the default

    tablespaces for Oracle Payables tables and indexes are APD and APX, respectively.

    Oracle EBS 11.5.10 and R12 use the new Oracle Applications Tablespace Model. OATM uses 12 locally managed

    tablespaces across all products. Indexes on transaction tables are held in a separate tablespace APPS_TS_TX_IDX,

    designated for transaction table indexes.

    Customers running pre-11.5.10 releases can migrate to OATM using the OATM Migration utility. Refer to Oracle Support

    Note 248857.1 for more details.
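    One quick way to tell whether a pre-11.5.10 source has already been migrated to OATM is to look for the shared transaction index tablespace; a sketch:

    ```sql
    -- If this returns a row, the instance uses OATM and APPS_TS_TX_IDX
    -- should be substituted for <idx_tablespace> in the DDL scripts above
    SELECT tablespace_name, status
    FROM   dba_tablespaces
    WHERE  tablespace_name = 'APPS_TS_TX_IDX';
    ```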

    Oracle EBS tables with high transactional load

    The following Oracle EBS tables are used for high-volume transactional data processing, so introducing indexes on

    LAST_UPDATE_DATE may cause additional overhead for some OLTP operations. For the majority of customer

    implementations the changes will not have any significant impact on OLTP application performance. Oracle BI Applications

    customers may consider creating custom indexes on LAST_UPDATE_DATE for these tables only after benchmarking

    incremental ETL performance and analyzing the OLTP application impact.

    To analyze the impact on the EBS source database, you can generate an Automatic Workload Repository (AWR) report during the

    execution of OLTP batch programs producing heavy inserts / updates into the tables below, and review the Segment Statistics

    section for resource contention caused by custom LAST_UPDATE_DATE indexes. Refer to Oracle RDBMS documentation for

    more details on AWR usage.
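    To bracket an OLTP batch run with AWR snapshots for this analysis, you can create the snapshots manually; a minimal sketch using the standard DBMS_WORKLOAD_REPOSITORY package:

    ```sql
    -- Take a snapshot immediately before starting the OLTP batch programs
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

    -- ... run the heavy insert / update batch programs ...

    -- Take a closing snapshot, then generate the report between the two
    -- snapshots (e.g. with $ORACLE_HOME/rdbms/admin/awrrpt.sql) and review
    -- the Segment Statistics section
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    ```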

    Make sure you use the following pattern for creating custom indexes on the tables listed below:

    CREATE INDEX <Owner>.OBIEE_<Table_Name> ON <Owner>.<Table_Name> (LAST_UPDATE_DATE) tablespace <idx_tablespace>;

    Prod Table Name

    AP AP_EXPENSE_REPORT_LINES_ALL

    AP AP_INVOICE_DISTRIBUTIONS_ALL

    AP AP_AE_LINES_ALL

    AP AP_PAYMENT_HIST_DISTS

    AR AR_PAYMENT_SCHEDULES_ALL

    AR AR_RECEIVABLE_APPLICATIONS_ALL

    AR RA_CUST_TRX_LINE_GL_DIST_ALL

    AR RA_CUSTOMER_TRX_LINES_ALL

    BOM BOM_COMPONENTS_B

    BOM BOM_STRUCTURES_B

    CST CST_ITEM_COSTS

    GL GL_BALANCES

    GL GL_DAILY_RATES


    GL GL_JE_LINES

    INV MTL_MATERIAL_TRANSACTIONS

    INV MTL_SYSTEM_ITEMS_B

    ONT OE_ORDER_LINES_ALL

    PER PAY_PAYROLL_ACTIONS

    PO RCV_SHIPMENT_LINES

    WSH WSH_DELIVERY_ASSIGNMENTS

    WSH WSH_DELIVERY_DETAILS

    Additional Custom EBS indexes in EBS 11i source systems

    Oracle EBS source database tables contain another mandatory column, CREATION_DATE, which can be used by Oracle BI

    Applications for capturing initial data subsets. You may consider creating custom indexes on CREATION_DATE if your initial

    ETL extracts a subset of historic data. Use the same guidelines for creating custom indexes on CREATION_DATE

    columns to improve initial ETL performance, after careful benchmarking of the EBS source environment performance.

    Additionally, the following two indexes help to improve ETL performance for the Order Management subject area:

    CREATE INDEX OBIEE_OE_ORDER_LINES_ALL_N1 ON OE_ORDER_LINES_ALL (LINE_ID, TOP_MODEL_LINE_ID, CREATION_DATE, BOOKED_FLAG);

    CREATE INDEX OBIEE_OE_PRICE_ADJUSTMENTS_N1 ON OE_PRICE_ADJUSTMENTS (LINE_ID, LIST_LINE_ID, APPLIED_FLAG);

    Oracle Warehouse Recommendations for Better Performance

    Database configuration parameters

    Oracle Business Intelligence Applications version 11g is certified with 64-bit Oracle RDBMS 11g and 12c. Since Oracle BI

    Applications extensively use bitmap indexes, partitioned tables, and other database features in both ETL and front-end query

    logic, it is important that Oracle BI Applications customers install the latest database releases for their Data Warehouse tiers:

    - Oracle 11g customers should use Oracle 11.2.0.4 or higher.

    - Oracle 12c customers should use Oracle 12.1.0.1 or higher.

    Both 11g and 12c customers can use the following template:

    db_name = <db_name>

    control_files = <path>/<db_name>/ctrl01.dbf, <path>/<db_name>/ctrl02.dbf

    db_block_size = 8192 # or 16K depending on Operating system specifics

    processes = 500

    db_files = 1024

    cursor_sharing = EXACT

    cursor_space_for_time = FALSE

    session_cached_cursors = 500

    open_cursors = 500

    nls_sort = BINARY

    trace_enabled = FALSE

    audit_trail = NONE

    _trace_files_public = TRUE


    timed_statistics = TRUE

    statistics_level = TYPICAL

    sga_target = 8G # Resize SGA & PGA targets to fit into avail RAM

    pga_aggregate_target = 4G # SGA + PGA should not exceed 70% of total RAM

    workarea_size_policy = AUTO

    db_block_checking = FALSE

    db_block_checksum = TYPICAL

    db_writer_processes = 2

    log_checkpoint_timeout = 1800

    log_checkpoints_to_alert = TRUE

    undo_management = AUTO

    undo_tablespace = <undo_tablespace_name>

    undo_retention = 90000

    job_queue_processes = 10

    parallel_adaptive_multi_user = FALSE

    parallel_max_servers = 16

    parallel_min_servers = 0

    star_transformation_enabled = TRUE

    query_rewrite_enabled = TRUE

    query_rewrite_integrity = TRUSTED

    _b_tree_bitmap_plans = FALSE

    plsql_code_type = NATIVE

    disk_asynch_io = FALSE

    fast_start_mttr_target = 3600

    Review the template file above and adjust your target database parameters specific to your data warehouse tier hardware.

    Note: init.ora template for Exadata / 11gR2 is provided in Exadata section of this document.

    REDO Log Files Sizing Considerations

    Initial ETL may cause higher than usual generation of REDO logs when loading large data volumes into a data warehouse

    database. If your target database is configured to run in ARCHIVELOG mode, you can consider two options:

    1. Switch the database to NOARCHIVELOG mode, execute the initial ETL, take a cold backup, and then switch the database

    back to ARCHIVELOG mode.

    2. Allocate up to 10-15% of additional space to accommodate archived REDO logs during the initial ETL.
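    Option 1 can be sketched as the following sequence, run as SYSDBA during a maintenance window (the cold backup between the two halves is assumed, not shown):

    ```sql
    -- Switch to NOARCHIVELOG before the initial ETL
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;

    -- ... run the initial ETL, shut down, take a cold backup ...

    -- Switch back to ARCHIVELOG afterwards
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    ```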

    Below is a calculation of the amount of REDO generated in an internal initial ETL run:

    redo log file sequence:

    start : 641 (11 Jan 21:10)

    end : 1624 (12 Jan 10:03)

    total # of redo logs : 983

    log file size : 52428800

    redo generated: 983*52428800 = 51537510400 (48 GB)

    Data loaded in the warehouse:

    SQL> select sum(bytes)/1024/1024/1024 Gb from dba_segments where owner='DWH' and segment_type='TABLE';

            Gb
    ----------
        280.49

    Most ODI scenario SQL statements perform conventional inserts into i$ interface tables during ETL runs. With sub-optimally sized REDO logs you may see many "log file switch (checkpoint incomplete)" wait events in your AWR reports during ETL runs.

    To minimize the impact of "log file switch (checkpoint incomplete)" wait events and improve performance for conventional inserts, increase the size of your REDO log files. You can query your database dictionary to find the optimal size (in Mb):

    select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;

    If your data warehouse hardware does not support asynchronous I/O, you can improve conventional insert performance by setting DBWR_IO_SLAVES in init.ora to a non-zero value.

    Internal benchmarks for running large inserts into an i$ table without asynchronous I/O support showed the best conventional insert performance with 2-3 DB Writer processes, 3 x 1Gb redo logs, and DBWR_IO_SLAVES = 1. Runtime in seconds by DBWR_IO_SLAVES value (blank cells were not benchmarked):

    Scenario                                                     0     1     2     4
    insert append (2 dbwr, 6x100M redo logs)                   113
    insert append (2 dbwr, 3x1G redo logs)                      71    67
    insert append (3 dbwr, 3x1G redo logs)                      80    69
    insert new rows, conventional (2 dbwr, 6x100M redo logs)   530   251   255   339
    insert new rows, conventional (4 dbwr, 6x100M redo logs)   587   257   252   258
    insert new rows, conventional (3 dbwr, 6x100M redo logs)   524   254   259   254
    insert new rows, conventional (3 dbwr, 3x1G redo logs)     236   160
    insert new rows, conventional (3 dbwr, 4x500M redo logs)         178
    insert new rows, conventional (3 dbwr, 6x500M redo logs)         175
    insert new rows, conventional (2 dbwr, 3x1G redo logs)           166

    Oracle RDBMS System Statistics

    Oracle introduced workload system statistics in Oracle 9i to gather important information about the system, such as single and

    multi-block read times, CPU speed, and various system throughputs. The optimizer takes system statistics into account when it

    computes the cost of query execution plans. Failure to gather workload statistics may result in sub-optimal execution plans for

    queries, excessive temporary space consumption, and ultimately impact BI Applications performance.

    Oracle BI Applications customers are required to gather workload statistics on both source and target Oracle databases prior

    to running the initial ETL.

    Oracle recommends two options to gather system statistics:

    - Run the dbms_stats.gather_system_stats('start') procedure at the beginning of the workload window, then the

    dbms_stats.gather_system_stats('stop') procedure at the end of the workload window.

    - Run dbms_stats.gather_system_stats('interval', interval=>N), where N is the number of minutes after which statistics

    gathering stops automatically.

    Important! Execute dbms_stats.gather_system_stats when the database is not idle. Oracle computes the desired system statistics

    only when the database is under significant workload. Usually half an hour is sufficient to generate valid statistics values.


    Parallel Query configuration

    BI Applications Load Plans may use the Oracle Parallel Query option for running some scenarios, computing statistics, and building

    indexes on target tables. ODI-generated SQL plans could easily get skewed with parallelism enabled, so by default none of the

    target tables have a defined degree of parallelism. Oracle BI Applications init.ora templates also set

    PARALLEL_ADAPTIVE_MULTI_USER = FALSE and manually define smaller parallel query settings.

    Important! You should carefully monitor your environment workload before changing any parallel query parameters.

    It could easily lead to increased resource contention, creating I/O bottlenecks, and increasing response time when the

    resources are shared by many concurrent transactions.

    Since the ODI EXEC_TABLE_MAINT_PROC creates indexes and computes statistics on target tables in parallel, concurrent

    execution may cause performance problems if the values of parallel_max_servers and parallel_threads_per_cpu are too high.

    You can monitor the system load from parallel operations by executing the following query:

    SQL> select name, value from v$sysstat where name like 'Parallel%';

    Reduce the parallel_threads_per_cpu and parallel_max_servers values if the system is overloaded.

    Oracle Business Analytics Warehouse Tablespaces

    By default, Oracle BI Applications installation uses a single Data tablespace for creating all target segments. To better manage

    your ETL data load and improve performance, consider using multiple tablespaces for different types of target segments.

    Oracle BI Applications segments can be categorized and deployed into the following tablespaces:

    - Temporary (database temporary segments): Initial ETL scenarios process very complex SQL with multiple join operations, which actively use temporary segments stored in the TEMP tablespace. TEMP can fill up very fast when processing multiple concurrent SQL statements with heavy joins.

    - SDS (SDS segments, tables and indexes): If you implemented the SDS option, use a separate tablespace for replicated source objects and their indexes. The SDS tablespace should be sized based on the source tables' footprint in OLTP.

    - Interface (ODI interface tables (c$, i$, e$) and their indexes): ODI interface tables are dropped and re-created for each ETL run. By separating them into a dedicated tablespace you can resize your interface tablespace after the initial ETL, or create the tablespace as compressed.

    - Stage (BI Apps stage tables (_DS, _FS, _HS, etc.), persistent stage (_PS) tables, and their indexes): Staging tables are always truncated in each ETL run, so they can be deployed in a separate tablespace. Note that the Stage tablespace can grow very large during the initial ETL. Persistent stage (_PS) tables are not truncated in incremental runs.

    - Target Data (target data segments): The Target Data tablespace should be used for data warehouse objects, such as facts, dimensions, hierarchies, etc., as well as aggregate tables populated by ODI scenarios.

    - Target Index (target index segments): The Target Index tablespace stores all indexes on data warehouse tables.

    Review the additional capacity planning considerations:


    1. BI Applications ODI scenarios use ODI interface tables (c$, i$, e$, etc.) for data processing, transformation, and error

    logging operations. The typical data movement in an 11g ETL can be presented as: source -> c$ -> stage table -> i$ -> target

    table. When sizing your data warehouse, you need to plan additional space for the ODI interface tables:

    - BI Applications Load Plan initial executions bypass i$ tables and load data directly into the target tables, so i$

    segments do not consume any space in initial ETLs.

    Important! You can conserve additional space and improve performance for your extract scenarios by

    switching from the default JDBC Load Knowledge Module (LKM) to the Database Link KM. The DBLink KM creates views on the

    source and c$ synonyms to the source views over a database link. The use of the DBLink KM further reduces ETL

    data movement, saves space, and improves extract (SDE) scenario performance. Refer to the ODI KM

    documentation for more details.

    - BI Apps Load Plans drop and re-create the interface tables within every single scenario for each ETL run.

    - If you use the JDBC LKM, estimate the extracted volumes for your largest facts executed in parallel, then sum

    up the volumes to find the maximum space consumed by c$ tables in your initial ETLs. Ignore this step if

    you use the DBLink KM.

    - Estimate your facts' incremental volumes processed concurrently, then sum them up to find the

    maximum space consumed by i$ tables.

    2. While stage segments consume space, almost equivalent to target segments footprint during initial ETL, they get

    truncated in subsequent incremental runs, so their space allocation will be driven by the incremental volumes. If all

    scenarios support incremental logic, then stage objects space may consume from 5 to 20% of its initially allocated

    tablespace. So, you can resize your stage tablespace after completing initial ETL.

    3. Depending on your hardware configuration, you may consider isolating the Stage tablespace and the target Data tablespace on

    different controllers. Such a configuration would help to speed up Target Load (SIL) mappings for fact tables by

    balancing I/O load across multiple RAID controllers.

    4. Temporary tablespace needs to be sized to accommodate for initial ETL. Since BI Applications scenarios do all

    transformations in database, they may produce heavy joins, and often use temporary segments for storing interim

    result sets, while going through execution plan operations. Make sure you allocate enough space in your Temporary

    tablespace(s) to accommodate for parallel processing during initial ETL. Typically Fact tables processing in parallel

    consumes the most TEMP space in initial loads.

    5. SDS tablespace sizing is not covered in this document, since its footprint depends on implemented functional areas.

    You can estimate its size by checking space of source tables and indexes, which will be replicated to SDS.

    6. During incremental loads, by default, the Load Plan drops and rebuilds indexes, so you should separate all indexes into a

    dedicated tablespace and, if you have multiple RAID / I/O controllers, move the INDEX tablespace to a separate

    controller.

    7. Note that the Target INDEX Tablespace may increase, if you enable more query indexes in your data warehouse.
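    To size (and later right-size) the Interface and Stage tablespaces, you can track actual space consumption after the initial ETL; a sketch, assuming the warehouse schema is named DWH:

    ```sql
    -- Space consumed by ODI interface tables (c$, i$, e$) after the initial ETL
    SELECT SUBSTR(segment_name, 1, 2) AS prefix,
           ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb
    FROM   dba_segments
    WHERE  owner = 'DWH'
    AND    (segment_name LIKE 'C$%' OR segment_name LIKE 'I$%' OR segment_name LIKE 'E$%')
    GROUP  BY SUBSTR(segment_name, 1, 2);
    ```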

    The following table summarizes uncompressed space allocation estimates in a data warehouse by its target data volume

    range:

    Target Data Volume        SMALL               MEDIUM              LARGE
    Target DATA tablespace    50 Gb and higher    300 Gb and higher   1 Tb and higher
    Target INDEX tablespace   30+ Gb              150+ Gb             400+ Gb
    Temporary tablespace      50+ Gb              100+ Gb             300+ Gb
    Stage tablespace          50+ Gb              200+ Gb             1+ Tb
    Interface tablespace      20 Gb               50 Gb               100+ Gb
    Total Warehouse Size      200 Gb and higher   800 Gb and higher   2.8 Tb and higher

    Important! You should use Locally Managed tablespaces with the AUTOALLOCATE clause. DO NOT use UNIFORM extent size, as it

    may cause excessive space consumption and result in slower query performance.

    Use the standard (primary) block size for your warehouse tablespaces. DO NOT build your warehouse on non-standard

    block size tablespaces.
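    A locally managed, autoallocated data tablespace can be created as in this sketch; the tablespace name, file path, and sizes are placeholders:

    ```sql
    -- Locally managed tablespace with system-managed (AUTOALLOCATE) extents;
    -- never specify UNIFORM SIZE for warehouse tablespaces
    CREATE TABLESPACE bia_target_data
      DATAFILE '/u01/oradata/dwh/bia_target_data01.dbf' SIZE 10G AUTOEXTEND ON NEXT 1G
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE
      SEGMENT SPACE MANAGEMENT AUTO;
    ```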

    Oracle BI Applications Best Practices for Oracle Exadata

    Database Requirements for Analytics Warehouse on Exadata

    Important! Oracle BI Applications Exadata customers must install Oracle RDBMS 11.2.0.3 or higher on Exadata hardware.

    Use the template file below for your init.ora parameter file for Business Analytics Warehouse on Oracle Exadata.

    ###########################################################################
    # Oracle BI Applications - init.ora template
    # This file contains a listing of init.ora parameters for 11.2 / Exadata
    ###########################################################################
    db_name = <db_name>
    control_files = <path>/<db_name>/ctrl01.dbf, <path>/<db_name>/ctrl02.dbf
    db_block_size = 8192 # or 16384 (for better compression)
    db_block_checking = FALSE
    db_block_checksum = TYPICAL
    deferred_segment_creation = TRUE
    user_dump_dest = <path>/admin/<db_name>/udump
    background_dump_dest = <path>/admin/<db_name>/bdump
    core_dump_dest = <path>/admin/<db_name>/cdump
    max_dump_file_size = 20480
    processes = 500
    sessions = 4
    db_files = 1024
    session_max_open_files = 100
    dml_locks = 1000
    cursor_sharing = EXACT
    cursor_space_for_time = FALSE
    session_cached_cursors = 500
    open_cursors = 1000
    db_writer_processes = 2
    aq_tm_processes = 1
    job_queue_processes = 2
    timed_statistics = true
    statistics_level = typical
    sga_max_size = 45G
    sga_target = 40G
    shared_pool_size = 2G
    shared_pool_reserved_size = 100M
    workarea_size_policy = AUTO
    pre_page_sga = FALSE
    pga_aggregate_target = 16G
    log_checkpoint_timeout = 3600
    log_checkpoints_to_alert = TRUE
    log_buffer = 10485760
    undo_management = AUTO
    undo_tablespace = UNDOTS1
    undo_retention = 90000
    parallel_adaptive_multi_user = FALSE
    parallel_max_servers = 128
    parallel_min_servers = 32
    # ------------------- MANDATORY OPTIMIZER PARAMETERS ----------------------
    star_transformation_enabled = TRUE
    query_rewrite_enabled = TRUE
    query_rewrite_integrity = TRUSTED
    _b_tree_bitmap_plans = FALSE
    _optimizer_autostats_job = FALSE

    Handling BI Applications Indexes in Exadata Warehouse Environment

    The Oracle Business Analytics Applications suite uses two types of indexes:

    - ETL indexes, for optimizing ETL performance and ensuring data integrity

    - Query indexes, mostly bitmaps, for end-user star queries

    Exadata Storage Index functionality cannot be considered an unconditional replacement for BI Apps indexes. Employ

    storage indexes only in those cases where BI Applications query indexes deliver inferior performance, and only after you have run

    comprehensive tests to ensure no regressions for all other queries without the query indexes.

    Do not drop any ETL indexes: you may not only impact your ETL performance but also compromise data integrity in your

    warehouse.

    The best practices for handling BI Applications indexes in an Exadata warehouse:

    - Turn on index usage monitoring to identify any unused indexes, and drop or disable them in your environment. Refer to the

    corresponding section in this document for more details.

    - Consider pinning the critical target tables in the smart flash cache.

    - Consider building custom aggregates to pre-aggregate more data and simplify queries.

    - Drop selected query indexes and disable them in ODI Load Plans, relying on Exadata Storage Indexes / full table scans, only after

    running comprehensive benchmarks and ensuring no impact on any other queries' performance.
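    The index usage monitoring mentioned above can be sketched with the standard MONITORING USAGE clause; the index name below is illustrative, and on 11g V$OBJECT_USAGE should be queried from the schema that owns the index:

    ```sql
    -- Enable usage tracking on a candidate query index
    ALTER INDEX W_GL_BALANCE_F_F1 MONITORING USAGE;

    -- Later, check whether the optimizer ever used it
    SELECT index_name, used, monitoring
    FROM   v$object_usage
    WHERE  index_name = 'W_GL_BALANCE_F_F1';
    ```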

    Gather Table Statistics for BI Applications Tables

    The early versions of BI Applications 11g used the FOR INDEXED COLUMNS syntax for computing BI Applications table statistics.

    Such syntax does not gather statistics for non-indexed columns participating in end-user query joins. If you choose to drop

    some indexes in an Exadata environment, there will be even more critical columns with NULL statistics. As a result, the optimizer

    may choose sub-optimal execution plans, resulting in slower performance.

    You should consider switching to the FOR ALL COLUMNS SIZE AUTO syntax in the DBMS_STATS.GATHER_TABLE_STATS call in

    EXEC_TABLE_MAINT_PROC if your implementation still uses FOR INDEXED COLUMNS. Contact Oracle Support Services for

    the corresponding patch with this change.
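    The recommended syntax corresponds to a call of this shape; the DWH schema and W_GL_BALANCE_F table names are illustrative:

    ```sql
    -- Gather statistics for all columns, letting Oracle decide histogram sizes
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'DWH', tabname => 'W_GL_BALANCE_F', method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
    ```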

    Oracle Business Analytics Warehouse Storage Settings in Exadata

The recommended database block size (db_block_size parameter) is 8K. You may also consider using a 16K block size, primarily for a better compression rate, as Oracle applies compression at the block level. Refer to the init.ora template in the section below.

- Make sure you use locally managed tablespaces with the AUTOALLOCATE option. DO NOT use UNIFORM extent size for your warehouse tablespaces.

- Use your primary database block size of 8K (or 16K) for your warehouse tablespaces. It is NOT recommended to use non-standard block size tablespaces for deploying a production warehouse.

- Use an 8Mb large extent size for partitioned fact tables and non-partitioned large segments, such as dimensions, hierarchies, etc. You will have to manually specify INITIAL and NEXT extent sizes of 8Mb for non-partitioned segments.


- Set deferred_segment_creation = TRUE to defer segment creation until the first record is inserted. Refer to the init.ora section below.
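The storage settings above can be sketched as follows; the tablespace name, datafile location, and table name are illustrative assumptions, not BI Applications defaults.

```sql
-- Locally managed tablespace with system-managed (AUTOALLOCATE) extents,
-- using the standard database block size
CREATE TABLESPACE BIAPPS_DATA
  DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- 8Mb INITIAL / NEXT extents for a large non-partitioned segment
ALTER TABLE W_PARTY_D MOVE STORAGE (INITIAL 8M NEXT 8M);

-- Defer segment creation until the first row is inserted
ALTER SYSTEM SET deferred_segment_creation = TRUE;
```

Note that ALTER TABLE ... MOVE invalidates the table's indexes, so rebuild them afterwards.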

    Parallel Query Use in BI Applications on Exadata

All BI Applications tables are created without any degree of parallelism in the BI Applications schema. Since the ODI load plan already manages parallel jobs, such as ODI scenarios or index creation, during an ETL, the use of Parallel Query in ETL mappings could generate more I/O overhead and cause performance regressions for ETL jobs.

Exadata hardware provides much better scalability for I/O resources, so you can consider turning on Parallel Query for slow queries by setting the PARALLEL attribute on the large tables participating in those queries. For example:

    SQL> ALTER TABLE W_GL_BALANCE_F PARALLEL;

You should benchmark the query performance prior to implementing such changes in your production environment.

Compression Implementation for Oracle Business Analytics Warehouse in Exadata

Table compression can significantly reduce a segment's size and improve query performance in an Exadata environment. However, depending on the nature of the DML operations in ETL mappings, it may result in slower mapping performance and larger consumed space. The following guidelines will help ensure a successful compression implementation in your Exadata environment:

- Consider implementing compression after running an initial ETL. The initial ETL plan contains several mappings with heavy updates, which could impact your ETL performance.

- Implement large fact table partitioning and compress inactive historic partitions only. Make sure that the active ones remain uncompressed.

- Choose Basic, Advanced, or HCC compression types for your compression candidates.

- Periodically review the allocated space for a compressed segment, and check such stats as num_rows, blocks, and avg_row_len in the user_tables view. For example, the following compressed segment needs to be re-compressed, as it consumes too many blocks:

Num_rows       Avg_row_len   Blocks      Compression
541823382      181           13837818    ENABLED

The simple calculation (num_rows * avg_row_len / 8K block size) + 16% block overhead gives ~13.8M blocks for an uncompressed segment, so compression is delivering no space savings here. This segment should be re-compressed to reduce its footprint and improve query performance.
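This check can be scripted against user_tables as sketched below; the query assumes an 8K block size and the same 16% block-overhead factor used in the calculation above.

```sql
-- Flag compressed segments whose current block count is no smaller than
-- their estimated raw data footprint (candidates for re-compression)
SELECT table_name, num_rows, avg_row_len, blocks,
       ROUND(num_rows * avg_row_len / 8192 * 1.16) AS est_uncompressed_blocks
  FROM user_tables
 WHERE compression = 'ENABLED'
   AND blocks >= ROUND(num_rows * avg_row_len / 8192);
```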

Refer to the Table Compression Implementation Guidelines section in this document for additional information on compression for the BI Applications Warehouse.
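The second guideline above, compressing only inactive historic partitions, could be implemented as sketched below; the table and partition names are illustrative, and COMPRESS FOR QUERY HIGH assumes Exadata HCC is available.

```sql
-- Compress an inactive historic partition with HCC
-- (partition name PART_2013 is a hypothetical example)
ALTER TABLE W_GL_BALANCE_F
  MOVE PARTITION PART_2013 COMPRESS FOR QUERY HIGH;

-- Rebuild the local index partitions invalidated by the move
ALTER TABLE W_GL_BALANCE_F
  MODIFY PARTITION PART_2013 REBUILD UNUSABLE LOCAL INDEXES;
```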

    OBIEE Queries Performance Considerations on Exadata

As mentioned before, BI Applications use query indexes, mostly BITMAP indexes, for ensuring an effective star transformation. As an alternative to star queries, you can consider taking advantage of Exadata and Oracle Database features to deliver better performance for your end user reports.

1. Implement large fact table partitioning. It is critical to ensure effective partition pruning; the Oracle optimizer will apply partition pruning based on the filtering conditions and partitioning keys.

    2. Consider compressing your historic partitions with hybrid columnar compression or advanced compression. Be careful

    with applying compression to the latest partitions, as they could explode in size after heavy updates.

3. Enable a degree of parallelism (DOP) for your fact tables. Do not set extremely high values for DOP; usually 8 to 16 is more than enough to speed up parallel processing on Exadata without impacting the overall system I/O.


4. Verify that your generated explain (and execution) plans use hash join operators rather than nested loops.

    Important! You should conduct comprehensive testing with all recommended techniques in place before dropping your query

    indexes.
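The DOP setting from step 3 can be applied per table, for example (table name is illustrative):

```sql
-- Set a bounded degree of parallelism on a large fact table
ALTER TABLE W_SALES_ORDER_LINE_F PARALLEL 8;

-- Verify the current setting
SELECT table_name, degree
  FROM user_tables
 WHERE table_name = 'W_SALES_ORDER_LINE_F';
```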

    Exadata Smart Flash Cache

The use of Smart Flash Cache in Oracle Business Analytics Warehouse can significantly improve end user query performance. You can consider pinning the most frequently used dimensions that impact your query performance. To manually pin a table in Exadata Smart Flash Cache, use the following syntax:

    ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE KEEP);

The Exadata Storage Server will cache data from the W_PARTY_D table more aggressively and will try to keep its data longer than cached data from other tables.

    Important! Use manual Flash Cache pinning only for the most common critical tables.

    Oracle BI Applications High Availability

    Introduction

    Both initial and incremental data loads into Oracle BI Applications Data Warehouse must be executed during scheduled

    maintenance or blackout windows for the following reasons:

    - End user data could be inconsistent during ETL runs, causing invalid or incomplete results on dashboards

    - ETL runs may result in significant hardware resource consumption, slowing down end user queries

The time to execute periodic incremental loads depends on a number of factors, such as the number of source databases, each source database's incremental volume, hardware specifications, environment configuration, etc. As a result, incremental loads may not always complete within a predefined blackout window, causing extended downtime.

Global businesses operating around the clock cannot always afford a few hours of downtime. Such customers can consider implementing a high availability solution using Oracle Data Guard with a physical standby database.

    High Availability with Oracle Data Guard and Physical Standby Database

An Oracle Data Guard configuration contains a primary database and supports up to nine standby databases. A standby database is a copy of a production database, created from its backup. There are two types of standby databases: physical and logical.

A physical standby database must be physically identical to its primary database on a block-for-block basis. Data Guard synchronizes a physical standby database with its primary by applying the primary database's redo logs. The standby database must be kept in recovery mode for Redo Apply, and it can be opened in read-only mode between redo synchronizations.

    The advantage of a physical standby database is that Data Guard applies the changes very fast using low-level mechanisms and

    bypassing SQL layers.

A logical standby database is created as a copy of a primary database, but it can later be altered to a different structure. Data Guard synchronizes a logical standby database by transforming the data from the primary database redo logs into SQL statements and executing them on the standby database. A logical standby database has to be open at all times to allow Data Guard to perform SQL updates.

Important! A primary database must run in ARCHIVELOG mode at all times.

Data Guard with the Physical Standby Database option provides Oracle BI Applications customers with both efficient, comprehensive disaster recovery and a reliable high availability solution. Redo Apply for a physical standby synchronizes the standby database much faster than SQL Apply for a logical standby. OBIEE does not require write access to the BI Applications Data Warehouse, either for executing end user logical SQL queries or for developing additional content in the RPD or Web Catalog.

Internal benchmarks on low-range, outdated hardware showed four times faster Redo Apply on a physical standby database compared to ETL execution on a primary database:

Step Name                                 Row Count    Redo Size   Primary DB Run Time   Redo Apply Time
SDE_ORA_SalesProductDimension_Full          2621803       621 Mb         01:59:31            00:10:20
SDE_ORA_CustomerLocationDimension_Full      4221350       911 Mb         04:11:07            00:16:35
SDE_ORA_SalesOrderLinesFact_Full           22611530     12791 Mb         09:17:19            03:16:04
Create Index W_SALES_ORDER_LINE_F_U1            n/a       610 Mb         00:24:31            00:08:23
Total                                      29454683     14933 Mb         15:52:28            03:51:22

The target hardware was intentionally configured on a low-range Sun server, with both the primary and standby databases deployed on the same server, to imitate a heavy incremental load. Modern production systems, with primary and standby databases deployed on separate servers, are expected to deliver up to 8-10 times better Redo Apply time on a physical standby database compared to the ETL execution time on the primary database.

    The diagram below describes Data Guard configuration with Physical Standby database:

- The primary instance runs in FORCE LOGGING mode and serves as the target database for routine incremental ETL and any maintenance activities, such as patching or upgrades.

- The physical standby instance runs in read-only mode during ETL execution on the primary database.

- When the incremental ETL load into the primary database is over, the DBA schedules the downtime or blackout window on the standby database for applying redo logs.

- The DBA shuts down the OBIEE tier and switches the physical standby database into RECOVERY mode.

- The DBA starts Redo Apply in Data Guard to apply the generated redo logs to the physical standby database.

- The DBA opens the physical standby database in read-only mode and starts the OBIEE tier:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    SQL> ALTER DATABASE OPEN;
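As a hedged sketch of the Redo Apply step above, managed recovery is typically started on the standby with the following command, which runs the apply process in the background:

```sql
-- Start Redo Apply on the physical standby and return control to the session
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```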


Easy-to-manage switchover and failover capabilities in Oracle Data Guard allow quick role reversals between the primary and standby databases, so customers can consider switching OBIEE from the standby to the primary and then applying redo logs to the standby instance. In such a configuration the downtime can be minimized to two short switchovers:

- Switch OBIEE from the standby to the primary after ETL completion into the primary database, and before starting Redo Apply on the standby database.

- Switch OBIEE from the primary to the standby before starting another ETL.

Additional considerations for deploying Oracle Data Guard with a physical standby for Oracle BI Applications:

1. FORCE LOGGING mode will increase the incremental load time into the primary database, since Oracle will log index rebuild DDL operations.

2. The primary database has to run in ARCHIVELOG mode to capture all redo changes.

3. Such a deployment results in a more complex configuration; it also requires additional hardware to host two large-volume databases and to store daily archived logs.

However, it offers these benefits:

    1. High Availability Solution to Oracle BI Applications Data Warehouse

    2. Disaster recovery and complete data protection

    3. Reliable backup solution
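Considerations 1 and 2 above correspond to standard database settings; enabling them on the primary can be sketched with the usual commands (shown for illustration only):

```sql
-- Enable ARCHIVELOG mode on the primary (requires an instance restart)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable FORCE LOGGING so that all changes, including index rebuild DDL,
-- generate redo for the standby
ALTER DATABASE FORCE LOGGING;
```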


Conclusion

This document consolidates best practices and recommendations for improving the performance of Oracle Business Intelligence Applications Version 11g. This list of areas for performance improvement is not complete. If you observe any performance issues with your Oracle BI Applications implementation, you should trace the various components and carefully benchmark any recommendations or solutions discussed in this document or other sources before implementing the changes in your production environment.


    Oracle Business Intelligence Applications Version 11g.x Performance Recommendations

    January 2015

    Primary Author: Pavel Buynitsky

    Contributors: Eugene Perkov, Amar Batham, Oksana Stepaneeva,

    Wasimraja Abdulmajeeth, Rakesh Kumar, Scott Lowe, Valery Enyukov

    Oracle Corporation

    World Headquarters

    500 Oracle Parkway

    Redwood Shores, CA 94065

    U.S.A.

    Worldwide Inquiries:

    Phone: +1.650.506.7000

    Fax: +1.650.506.7200

    oracle.com

    Copyright 2015, Oracle. All rights reserved.

    This document is provided for information purposes only and the

    contents hereof are subject to change without notice.

    This document is not warranted to be error-free, nor subject to any

    other warranties or conditions, whether expressed orally or implied

    in law, including implied warranties and conditions of merchantability

    or fitness for a particular purpose. We specifically disclaim any

    liability with respect to this document and no contractual obligations

    are formed either directly or indirectly by this document. This document

    may not be reproduced or transmitted in any form or by any means,

    electronic or mechanical, for any purpose, without our prior written permission.

    Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle

    Corporation and/or its affiliates. Other names may be trademarks

    of their respective owners.

