OUM
TA.100 SYSTEM MANAGEMENT GUIDE
HUS
Exadata and Exalogic Platform
Author: Oracle Consulting, HUS, CGI, Deloitte, Accenture
Creation Date: August 15, 2013
Last Updated: April 18, 2023
Document Ref:
Version: DRAFT 1e
1 DOCUMENT CONTROL
1.1 Change Record

Date | Author | Version | Change Reference
15.08.2013 | Marja Kärmeniemi, Oracle | Draft 1a | No previous document. Added chapter External Support (TS-040_PRODUCTION_SUPPORT_INFRASTRUCTURE_DESIGN)
21.08.2013 | Marja Kärmeniemi, Oracle | Draft 1b | Added tags (*) to show first-version chapters plus the initial party to insert content
2-4.12.2013 | Ursula Koski & Marja Kärmeniemi, Oracle | |
10.12.2013 | Timo Paananen, Accenture | Draft 1c | Minor updates to document readability
12.12.2013 | Timo Paananen | Draft 1d | Updated based on comments received in review held 12.12.2013
18.12.2013 | Ursula Koski & Marja Kärmeniemi, Oracle | Draft 1e | Updated based on comments received in review held 12.12.2013. A few updates to Timo's comment boxes for clarification
1.2 Reviewers
Name | Position
Timo Paananen | Accenture
Marja Kärmeniemi | Oracle
Heikki Kähkönen | HUS
Virve Honkanen | HUS
Contents
1 Document Control
1.1 Change Record
1.2 Reviewers
2 Introduction
2.1 Purpose
2.2 Scope
2.3 Related Documents - All
2.4 Glossary – All
2.5 List of Figures
2.6 List of Tables
3 System Management Procedures
4 Database Tier
4.1 CPU Allocation
4.2 Memory Allocation
4.3 Controlfiles
4.4 Archive Log Management
4.5 Tablespaces
4.6 Datafiles
4.7 Data/Index Segments
4.8 Initialisation Files
4.9 Schemas
4.10 Database Options
4.11 Object Growth Management
4.12 Partition Maintenance
4.13 Purging and Archiving
4.14 Statistics Collection
4.15 Backup and Recovery Responsibility
4.16 Database Backups
5 Maintenance
5.1 Patching
5.2 Purging Log/Trace Files
6 Monitoring and Alerting
6.1 Oracle Enterprise Manager
6.2 Metrics and Alerts
6.3 Incident Management
6.4 Performance Monitoring
6.5 Configuration Management
7 Applications Tier
7.1 Web Server Monitoring
7.2 Error Log Management (*) SysAdm ?
7.3 Management of Servers in the DMZ
8 Desktop Client Tier
9 Security and Accounts Management (*) - all
9.1 Security
9.2 Accounts
10 Hardware and Network Management (*) - HUS
11 Software Management (*) – all installing software
11.1 Scope
11.2 Procedures
12 Performance and Availability Management
12.1 Procedures
12.2 Testing Fault Tolerance
13 Capacity Planning
14 External Support – parties applying patches
15 Open and Closed Issues
15.1 Open Issues
15.2 Closed Issues
2 INTRODUCTION
2.1 Purpose
The purpose of this document is to provide the specific procedures required to operate, manage and monitor the new systems environment.
This document consolidates decisions and implementation details at a high level. Some content may later be moved to a more specific document, in which case a reference to that document is added here.
Note: Since production hardware and software installations had not been completed at the time this document was written, this document describes the to-be architecture when discussing production.
2.2 Scope
This document covers the management of the following systems, applications and technical infrastructure:
Oracle Exadata
Oracle ExaLogic
Enterprise IDM
Oracle Fusion Applications
This document does not include the management of the following systems:
Load balancer
Any other systems
2.3 Related documents - All
This document refers to the documentation listed below:
Id | Document | Location
1 | TA-070_INITIAL_ARCHITECTURE_AND_APPLICATION_MAPPING.docx | HUS Sharepoint
2 | Installation documents | HUS Sharepoint
3 | Password document | Passwords sent to HUS (Jussi Torkkola) after installation
4 | Oracle Exadata and Exalogic Security Hardening | HUS Sharepoint
5 | Oracle Exadata X3-2 Data Sheet.pdf | HUS Sharepoint
6 | Oracle Exadata X4-2 Data Sheet.pdf | HUS Sharepoint
7 | exalogic-elastic-cloud-x3-2-ds-1863831.pdf | HUS Sharepoint
2.4 Glossary – All
English | Finnish | Description
Basic Installation | Perusasennus | Installation of hardware, operating system and firmware to Exadata and Exalogic, plus one Oracle database to Exadata
Hardware (HW) | Laitteisto | Hardware, including processor and disk systems
Network | Verkko | Networks, both internal between HW subsystems and external, i.e. VLAN, WAN
Operating system (OS) | Käyttöjärjestelmä | Part of system software
Firmware | Rautaohjelmisto, firmis | Program code to control HW; sits "between" HW and OS
System Software | Varusohjelmisto | Software which is used to run business applications
Infrastructure | Infra-alusta | Includes hardware (HW), operating system (OS) and middleware software
Middleware | Varusohjelmistojen välikerros | Includes Database (DB), Application Server, SOA Suite, Identity Management (IDM), Business Intelligence (BI)
Application Software | Sovellusohjelmisto | Business application like Fusion Applications (FA)
System Management Software | Järjestelmän hallinnan ohjelmisto | System management software like Oracle Enterprise Manager (OEM)
Table: English – Finnish Glossary
2.5 List of Figures
This is a list of all illustrations in this document.
Figure 1 Schema Design for Custom Applications
Figure 2 Database Backup Design
Figure 3 Incremental Update Strategy
Figure 4 Oracle Enterprise Manager
Figure 5 Customer Exadata Estate
2.6 List of Tables
This is a list of all tables in this document.
Table 1 Proactive Monitoring Approach
Table 2 Recommended Patching Schedule
Table 3 Statistics Gathering Schedule
3 SYSTEM MANAGEMENT PROCEDURES
The organizational management processes used to support the EXA*/Fusion Apps environments need to be defined and documented here. A separate task/sub-project needs to be created to facilitate this.
Chapters 5 to 10 and 12 to 13 must also be reviewed and updated as necessary when maintenance procedures are defined.
4 DATABASE TIER
Since production hardware and environments had not been set up when this document version was being written, this chapter describes the to-be architecture when discussing production.
In the production environment, every database will maintain a minimum of one ‘standby’ copy located in the other Data Center.
Oracle DataGuard will be used to keep the standby copy in sync with the primary database using either ‘Maximum Availability’ or ‘Maximum Performance’ option.
Where a business application has a degree of read-only functionality, this can be directed at the ‘standby’ copy by using Oracle Active DataGuard.
4.1 CPU Allocation
Where multiple database instances run on the same database server, it is recommended to use 'Instance Caging' to prevent processes and jobs from over-consuming the available CPU.
In the Development environment, each of the X3-2 database servers has 16 CPU cores and 512 GB of memory for database processing (8 CPU cores with 256 GB of memory enabled per database server). On the storage servers there are 36 CPU cores for SQL processing (18 cores enabled) and 12 PCI flash cards (6 cards enabled) with 2.4 TB of Exadata Smart Flash Cache, plus 18 x 3 TB 7,200 RPM High Capacity disks (6 disks per storage server enabled). See Oracle Exadata X3-2 Data Sheet.pdf.
In the production environment, each of the X3-4 database servers has 48 CPU cores and up to 1 TB of memory for database processing (24 CPU cores and up to 512 GB of memory per database server), i.e. 24 CPU threads (2 * 6 cores, 2 threads per core). On the storage servers there are 36 CPU cores for SQL processing and 12 PCI flash cards with 9.6 TB (raw) of Exadata Smart Flash Cache, plus 36 x 4 TB 7,200 RPM High Capacity disks. To comply with the licensing rules, instance caging must be used to restrict the amount of CPU used in the production environment. See Oracle Exadata X4-2 Data Sheet.pdf.
Most lightweight applications are idle and consume fewer than 3 CPUs. Larger OLTP and DW applications can be CPU intensive and consume all the available CPU. By setting the 'CPU_COUNT' database instance parameter, the number of CPUs used by the foreground processes of an instance can be constrained (note: it does not affect Oracle background processes).
The following guidelines should be applied to all the database instances running on a database server:
Mission critical: sum of cpu_count for all instances <= 75% of total number of CPUs = 18 CPUs
CPU intensive: sum of cpu_count for all instances <= total number of CPUs = 24 CPUs
Light-weight: sum of cpu_count for all instances <= 3 * total number of CPUs = 72 CPUs
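As an illustration only (the CPU value is hypothetical), CPU_COUNT is set per instance; note that instance caging only takes effect when a Resource Manager plan is also active:
SQL> ALTER SYSTEM SET CPU_COUNT = 2 SCOPE=BOTH SID='*';  -- value hypothetical
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_PLAN' SCOPE=BOTH SID='*';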
4.2 Memory Allocation
Total memory allocation for all instances running on a database server must not exceed 75% of the total physical memory (96GB). This includes both SGA and PGA for each database; note that PGAs can exceed their target.
These memory allocations would apply to both the Primary (for preferred services) and Secondary (for available services) instances.
Unless the standby databases are to be used for reporting purposes (using Oracle Active DataGuard), the memory requirements may be reduced considerably, e.g. to 2GB SGA and 1GB PGA. However, when a database is switched over to its standby, the memory allocations will need to be increased to bring them in line with the values used by the old primary database instance.
Memory usage will be monitored using Oracle Enterprise Manager and Automatic Workload Repository (AWR) reports (cache hit ratios etc.) to determine whether the initial values need adjusting.
Once databases have 'settled down' (e.g. after 2-3 months of live running), the key memory parameters (DB_CACHE_SIZE, SHARED_POOL_SIZE etc.) should be fixed for each database and SGA_TARGET set to zero, to remove the overhead of automatic memory management.
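A minimal sketch of fixing the key parameters (the sizes shown are hypothetical; actual values should be derived from AWR advisories):
SQL> ALTER SYSTEM SET SGA_TARGET = 0 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 8G SCOPE=SPFILE SID='*';     -- size hypothetical
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 4G SCOPE=SPFILE SID='*';  -- size hypothetical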
4.3 Controlfiles
There will be a minimum of two controlfiles per database, each one in a separate ASM DiskGroup (+DATA, +RECO).
The controlfile names will be generated by Oracle Managed Files (OMF).
The controlfiles will be backed up each night as part of the standard backup. In addition, a ‘backup controlfile to trace’ copy will be taken each time a change is made to the database structure (e.g. adding an additional datafile).
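The trace copy can be produced with:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;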
4.4 Archive Log Management
Responsibility
Exadata DBA
Frequency
Tools Used
Enterprise Manager
RMAN
SQL*Plus
Key Metric(s)
All database changes are recorded in the online redo log files, which are needed for recovery in case of instance failure. In an Exadata (RAC) environment, each database instance is allocated its own redo thread using the online log files created for that instance. All online logs should be accessible to each database instance running in the cluster. To minimize the risk of data loss, online log files should be mirrored on different disks. Inadequately sized redo logs, or slow access to log files, can be a major bottleneck in a database. Additionally, all RAC databases should be configured to run in archivelog mode.
Most log switches should be more than 15 minutes apart.
If frequent 'Checkpoint not complete' messages are seen in the alert.log, it is recommended to recreate the online redo log files with a larger size.
Redo log files should be mirrored across different volumes/diskgroups.
Archive logging should be turned on and archive logs generated for each redo thread should be accessible to all instances at all times.
There will be a minimum of 4 redo log groups 4GB in size, with a minimum of 2 members per group with each member in a different ASM DiskGroup (+DATA, +RECO).
The Redo log filenames will be generated by Oracle Managed Files.
The design goal is to have redo log switches taking place every 15-30 minutes for busier databases. During the Performance Test phase, redo log switching will be monitored and the size and number of redo log groups adjusted to meet the design goal.
During production running, the database instance alert logs will be regularly checked for any occurrences of the 'checkpoint not complete' message, which would indicate that the redo log configuration needs to be revised upward.
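For example, additional 4GB groups with one member in each DiskGroup can be added as follows (thread and group numbers hypothetical; OMF generates the member names):
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+DATA','+RECO') SIZE 4G;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+DATA','+RECO') SIZE 4G;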
Procedure Steps
Database Archivelog Mode and Enable Database Force Logging
Archivelog mode and database force logging are prerequisites to ensure all changes are captured in the redo and applied during database recovery operations. Enable database archive logging and database force logging to prevent any nologging operations.
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FORCE LOGGING;
Optionally you may set logging attributes at the tablespace level if it is possible to isolate data that does not require recovery into a separate tablespace. For more information refer to “Reduce Overhead and Redo Volume During ETL Operations” in the technical white paper, Oracle Data Guard: Disaster Recovery for Oracle Exadata Database Machine.
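A sketch of that tablespace-level alternative (tablespace name hypothetical): force logging is removed at the database level and enabled only for tablespaces that must be fully protected, leaving the remaining tablespaces free to use nologging loads:
SQL> ALTER DATABASE NO FORCE LOGGING;
SQL> ALTER TABLESPACE APP_DATA FORCE LOGGING;  -- tablespace name hypothetical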
Error or Failure Condition
4.5 Tablespaces
All tablespaces will use default block size of eight kilobytes (8K).
All tablespaces will be locally managed and have a uniform extent size of 4MB.
Tablespaces can be ‘SMALLFILE’ tablespaces or ‘BIGFILE’ tablespaces.
A SMALLFILE tablespace consists of one or more datafiles, each a maximum of 32GB in size (based on an 8K block size), with a maximum of 1,022 files per tablespace.
A BIGFILE tablespace consists of a single datafile of up to 32TB in size.
A database can support a mix of SMALLFILE and BIGFILE tablespaces. BIGFILE tablespaces are simpler to manage and reduce the overall number of datafiles to be managed by each Oracle instance. From Oracle Recovery Manager (RMAN) 11g onwards, BIGFILE tablespaces can be backed up/restored/copied in parallel in chunks, using the 'section size <size>' option.
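For example, a BIGFILE tablespace can be backed up in parallel chunks with (tablespace name and section size hypothetical):
RMAN> BACKUP SECTION SIZE 64G TABLESPACE APP_DATA;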
4.5.1 DATABASE TABLESPACES
Every Oracle database will have at a minimum four tablespaces:
The SYSTEM tablespace is reserved for Oracle internal use and contains the internal data dictionary. It must NOT be used by applications.
The SYSAUX tablespace is reserved for use by Oracle utilities and toolsets, such as the Automatic Workload Repository (AWR) and Application Express (APEX). It must NOT be used by applications.
The UNDO tablespace is used by Oracle processes for maintaining a 'before image' of changed database blocks. There will be one UNDO tablespace for each instance of the database, so for a database running on a ½ Rack Database Machine there will be four UNDO tablespaces (UNDOTBS1, UNDOTBS2, UNDOTBS3, UNDOTBS4). These tablespaces will be BIGFILE tablespaces with an initial allocation of 1GB, autoextend set to a maximum of 100GB, and uniform extents of 4MB.
The TEMP tablespace is used by Oracle processes for storing temporary segments generated, for example, as part of an index creation or by a SQL statement with an 'ORDER BY'. This tablespace will be a BIGFILE tablespace with an initial allocation of 1GB, autoextend set to a maximum of 100GB, and uniform extents of 4MB.
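A minimal creation sketch matching the values above (OMF generates the file name):
SQL> CREATE BIGFILE TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 1G AUTOEXTEND ON MAXSIZE 100G EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;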
4.5.2 APPLICATION TABLESPACES
Each application will have a minimum of two tablespaces, one for tables and one for indexes. Where the application also uses Large Objects (BLOBs or CLOBs), a third tablespace will be used.
Where an application has a high number of reference/lookup tables and/or large transaction based tables, separate tablespaces should be used (Note: this is a joint application design and Database administration decision).
4.6 Datafiles
If SMALLFILE tablespaces are used, datafile sizes are limited to 32GB, due to the 8K block size.
Datafiles will be of a pre-determined size (calculated from capacity planning figures during the Database Design phase) with autoextend set to off.
Datafiles should be evenly spread across all of the ASM disks within the +DATA DiskGroup, so their size needs to be a multiple of 4MB * number of ASM disks (e.g. if there are 64 disks in the +DATA DiskGroup, then the minimum datafile size should be 256M, with increments thereafter).
Where SMALLFILE tablespaces grow beyond 32GB in size, multiple datafiles will need to be created.
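For example, with 64 disks in the +DATA DiskGroup (4MB * 64 = 256M granularity), an additional datafile could be added as (tablespace name and size hypothetical):
SQL> ALTER TABLESPACE APP_DATA ADD DATAFILE SIZE 8G;  -- 8G = 32 * 256M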
4.7 Data/Index Segments
All tables and indexes should use an extent size of 8MB, or a multiple of 8MB, to maximise the I/O capabilities of the Exadata Storage Servers.
4.8 Initialisation Files
Every database instance will use a shared initialisation file (‘spfile’) that is stored in ASM with local ‘pfile’ pointing to the shared ‘spfile’. Instance specific parameters (e.g. UNDO tablespace to use) will be prefixed with the instance name.
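A sketch of the local pfile and the instance-prefixed parameters (database and instance names hypothetical). The init<SID>.ora on each node contains only a pointer to the shared spfile:
spfile='+DATA/PROD/spfilePROD.ora'
and inside the spfile, instance-specific parameters are prefixed with the instance name:
PROD1.undo_tablespace='UNDOTBS1'
PROD2.undo_tablespace='UNDOTBS2'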
4.9 Schemas
The rules below apply to custom developed applications. Out-of-the-box application packages use their respective schema design.
To ensure ‘separation of duties’ three types of database schema will be used:
All data related objects (tables, partitions, indexes etc.) will be owned by one or more ‘DATA’ schemas (e.g. <appln>_DATA_SCHEMA). The ‘DATA’ schema(s) will grant access to these objects only to the relevant ‘CODE’ schema.
All code related objects (stored procedures, functions, types etc.) will be owned by a ‘CODE’ schema (e.g. <appln>_CODE_SCHEMA). This schema will be granted the necessary privileges to read and write to the relevant data objects owned by the ‘DATA’ schema.
All user/process access to the database will be through a ‘USER’ or ‘PROCESS’ based schema (e.g. <appln>_USER_SCHEMA). These schema(s) will be granted access to the appropriate code objects owned by the ‘CODE’ schema – i.e. no user will be given direct access to the underlying data structures as access will only be available through pre-defined stored procedures or stored functions.
Figure 1 Schema Design for Custom Applications
NOTE: No SQL or PL/SQL code should hard-code schema names. Instead, the 'DATA' schema will grant the required privileges to the 'CODE' schema(s), and the 'CODE' schema will create synonyms pointing to the 'DATA' schema.
For example:
SQL> grant select, insert, update, delete on APPL_DATA_SCHEMA.SEC_USERS to APPL_CODE_SCHEMA;
SQL> create synonym APPL_CODE_SCHEMA.SEC_USERS for APPL_DATA_SCHEMA.SEC_USERS;
4.10 Database Options
4.10.1 PARTITIONING
Partitioning can be used by applications on larger tables. This is an application specific decision and is outside of the scope of this document.
4.10.2 COMPRESSION
Traditionally, data has been organized within a database block in a ‘row’ format, where all column data for a particular row is stored sequentially within a single database block. Having data from
columns with different data types stored close together limits the amount of storage savings achievable with compression technology. An alternative approach is to store data in a ‘columnar’ format, where data is organized and stored by column.
Storing column data together, with the same data type and similar characteristics, dramatically increases the storage savings achieved from compression. However, storing data in this manner can negatively impact database performance when application queries access more than one or two columns, perform even a modest number of updates, or insert small numbers of rows per transaction.
Oracle’s Hybrid Columnar Compression technology is a new method for organizing data within a database block. As the name implies, this technology utilizes a combination of both row and columnar methods for storing data. This hybrid approach achieves the compression benefits of columnar storage, while avoiding the performance shortfalls of a pure columnar format.
A logical construct called the compression unit is used to store a set of hybrid columnar-compressed rows. When data is loaded, column values for a set of rows are grouped together and compressed. After the column data for a set of rows has been compressed, it is stored in a compression unit.
Conceptual Illustration of a Logical Compression Unit
Another view on the same Logical Compression Unit
The compression units used by Hybrid Columnar Compression typically span several data blocks and organize the data inside them into sets of columns, with each compression unit holding the column data for several rows.
To maximize storage savings with Hybrid Columnar Compression, data must be loaded using data warehouse bulk loading techniques. Hybrid columnar compressed tables can still be modified using conventional Data Manipulation Language (DML) operations, such as INSERT and UPDATE. Examples of bulk load operations commonly used in data warehouse environments are:
Insert statements with the APPEND hint
Parallel DML
Direct Path SQL*LDR
Create Table as Select (CTAS)
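A hypothetical example combining Hybrid Columnar Compression with the direct-path load operations listed above (table names hypothetical):
SQL> CREATE TABLE SALES_HIST COMPRESS FOR QUERY HIGH AS SELECT * FROM SALES;
SQL> INSERT /*+ APPEND */ INTO SALES_HIST SELECT * FROM SALES_STAGE;
SQL> COMMIT;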
Queries on hybrid columnar compressed data often run in the Exadata storage cells with Smart Scans, using a high performance query engine that utilizes special columnar processing techniques. Data sent back to the database server(s) is usually compressed (and is typically much less data than is read from disk) and the compressed data is subsequently processed by the database server(s). Note that data remains compressed not only on disk, but also remains compressed in the Exadata Smart Flash Cache, on Infiniband, in the database server buffer cache, as well as when doing back-ups or log shipping to Data Guard.
One of the key benefits of the hybrid columnar approach is that it provides both the compression and performance benefits of columnar storage without sacrificing the robust feature set of the Oracle Database. For example, while optimized for scan-level access, Oracle is still able to provide efficient row-level access, with entire rows typically being retrieved with a single I/O, because row data is self-contained within compression units.
In contrast, pure columnar formats require at least one I/O per column for row-level access. With data warehousing tables generally having hundreds of columns, it is easy to see the performance benefits of Hybrid Columnar Compression on Exadata. Further, tables using Hybrid Columnar Compression on Exadata still benefit from all of the high availability, performance, and security features of the Oracle Database.
4.11 Object Growth Management
Not yet covered in the project.
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
4.12 Partition Maintenance
Not yet covered in the project.
Responsibility
DBA
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
4.13 Purging and Archiving
Not yet covered in the project.
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
4.14 Statistics Collection
Not yet covered in the project.
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
4.15 Backup and Recovery Responsibility
To ensure maximum availability with minimal downtime, every database will be capable of running on either of the two servers of an Exadata 1/8 Rack Database Machine.
Oracle Data Guard will be used to replicate all changes from the primary database to a standby database that will be located on the Database Machine in the other Data Center.
Every primary database will have a minimum of one ‘running’ database instance. One instance will act as the ‘preferred’ instance for all application related Oracle Services, with the second acting as the ‘available’ instance, should the primary instance fail. The second instance will also be used by the Backup service.
Every standby database will have one ‘running’ database instance. This instance will be used by Data Guard and Backup services.
Backup Storage
All database backups will be stored on ASM or the Exalogic Oracle ZFS Storage Appliance.
There will be at least one ZFS Storage Appliance (Exalogic) in each Data Center, connected to the Database Machines via InfiniBand.
The ZFS Storage Appliance will be ‘zoned’ to ensure that storage is only presented to a single domain (Production, Pre-production, Development/Test) at a time.
Each Exadata Database Machine will be presented with a specified amount of ZFS based storage through Direct NFS mounted filesystems.
All the backups will be transported to NetBackup using appropriate agents and NetBackup MediaServers.
Frequency
Tools Used
Key Metric(s)
4.16 Database Backups
Backups will be performed daily on all databases, both on the primary database and the standby database so a backup copy is available for restore, should one Data Center become unavailable.
Oracle Recovery Manager (RMAN) will be used to perform the backups. Details of all backups and restores will be maintained in the control files.
Figure 2 Database Backup Design
All Production and Pre-production backups will be stored on the Exalogic Oracle ZFS Storage Appliance local to the Data Center, which is connected to the Database Machine via InfiniBand.
The ZFS Appliance will present its storage as eight NFS-based filesystems. Each RMAN backup will allocate eight channels for the backup – one for each of the backup filesystems.
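A minimal RMAN sketch of the eight-channel allocation (the mount point paths are assumptions; the real paths are the Direct NFS filesystems described above):

```sql
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/zfssa/backup1/%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK FORMAT '/zfssa/backup2/%U';
  # ... channels ch3 to ch7 follow the same pattern ...
  ALLOCATE CHANNEL ch8 DEVICE TYPE DISK FORMAT '/zfssa/backup8/%U';
  BACKUP AS COPY INCREMENTAL LEVEL 0 DATABASE;
}
```

One channel per filesystem spreads the backup I/O evenly across the eight NFS mounts.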
Oracle Block Change Tracking will be enabled, so only those blocks that have been changed since the last backup will be backed up to the Backup Storage.
The Oracle recommended ‘incremental update’ strategy is:
Day 1: Full copy of the database is created
All archived redo logs from the start of the full database copy are backed up
Day 2: Blocks changed since the Day 1 backup are backed up
All archived redo logs since Day 1 are backed up
Day 3: Changed blocks from Day 2 are applied to full backup copy
Blocks changed since Day 2 backup are backed up
All archived redo logs since Day 2 are backed up
Figure 3 Incremental Update Strategy
Day 4: Changed blocks from Day 3 are applied to full backup copy as of Day 2
Blocks changed since Day 3 backup are backed up
All archived redo logs since Day 3 are backed up
All archived redo logs older than 72 hours are deleted from the Flash Recovery Area
Day n: Changed blocks from Day n-1 are applied to full backup copy
Blocks changed since Day n-1 backup are backed up
All archived redo logs since Day n-1 are backed up
All archived redo logs older than 72 hours are deleted from the Flash Recovery Area
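The daily cycle above maps onto the standard RMAN incrementally updated backup pattern. A hedged sketch (the tag name is illustrative; the 72-hour retention comes from the text above):

```sql
RUN {
  # Roll the previous day's incremental into the image copy (no-op on Days 1 and 2)
  RECOVER COPY OF DATABASE WITH TAG 'incr_update';
  # Back up blocks changed since the last run; creates the Level 0 copy if none exists
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update' DATABASE;
  # Back up archived redo generated since the last run
  BACKUP ARCHIVELOG ALL NOT BACKED UP;
  # Remove archived redo older than 72 hours from the recovery area
  DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-3';
}
```

Running the same script every day produces exactly the Day 1 to Day n behaviour listed above.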
The backups for each database will be run from Oracle Recovery Manager, connecting to the target database using a pre-defined Oracle ‘backup’ database service. Oracle Recovery Manager will also connect to the Enterprise RMAN Catalog to record the details of the backup.
The ‘backup’ service for each database should be assigned to a ‘backup’ Database Resource Plan, to ensure the backups do not impact on other services using the same database nor impact on overall I/O across the Database Machine.
The primary database backup service will run on the database server for the ‘available’ (rather than ‘preferred’) instance.
The standby database backup service will run on the same database server as the ‘Data Guard’ service.
Database backups will be run sequentially, so two backups will not be running at the same time on the same Database Machine. This will be managed through Oracle Enterprise Manager.
Error or Failure Condition
5 MAINTENANCE
This chapter must be reviewed and updated when maintenance processes are defined.
In order to maximise the availability of the applications, it is important to monitor and pro-actively maintain the databases and the infrastructure on which they run.
As a prerequisite before any patching is done, it must be verified that any installed applications are certified to be compatible with upgraded/patched versions.
The table below lists the recommended approach to pro-active monitoring:

Hourly:
- Check Cloud Control console for alerts
- Check services are running on the correct node
- Check archiving of log files to disk
- Check overall performance
- Check Top SQL for anomalies
- Check application-specific log files and error/log tables for anomalies

Daily:
- Check backups have run successfully
- Check statistics have been gathered
- Check tablespace growth
- Check ASM disk group space usage
- Run AWR report for the last day and compare/contrast with previous day(s) and/or baselines
- Review application and service performance
- Review poorly performing SQL and tune
- Check Oracle Job queue for errors
- Review progress on outstanding Oracle SRs

Weekly (as Daily, plus):
- Run AWR report for the last week and compare/contrast with previous week(s) and/or baselines
- Review policy violations and resolve
- Archive/purge Oracle Clusterware, ASM and database log and trace files
- Review CPU load across RAC nodes and adjust services accordingly
- Review memory utilisation (based on AWR) and adjust accordingly
- Review I/O throughput and bottlenecks
- Archive old transaction data (> 6 months)
- Purge old log data (> 2 weeks)
- Review application changes and apply
- Rebuild key indexes

Monthly (as Weekly, plus):
- Change database schema passwords
- Review storage requirements for future growth
- Review server capacity for future growth
- Review metrics settings and adjust accordingly
- Review patching (see next section)
- Compare/contrast Dev, Test and Live schemas/code sets
- Compare/contrast Dev, Test and Live SQL performance profiles
Table 1 Proactive Monitoring Approach
5.1 Patching
Review My Oracle Support (MOS) Note 888828.1 at least once a month for new patches related to Exadata – in particular, those that are deemed critical. Critical patches can be found in MOS Note 1270094.1.
5.1.1 REGULAR PATCHING ACTIVITIES
The table below contains the recommended patching schedule:

Monthly:
- Review latest recommended patches for Exadata Database Machine (Note 888828.1)
- Review latest Critical Issues for Exadata (Note 1270094.1)
- Download latest ARU metadata files for Oracle Cloud Control
- Download latest OPatch patch (6880880)

Quarterly:
- Review the latest Critical Patch Updates from the current CPU release
- Review the contents of the latest Quarterly Database Patch for Exadata (QDPE – see Note 888828.1)
- Apply latest QDPE if appropriate
- Review latest Exadata Storage Server patches and apply if appropriate

Half Yearly:
- Apply the latest (or latest but one) QDPE
- Apply the latest (or latest but one) Storage Server patches (see Note 888828.1)
- Review latest InfiniBand Switch patches with a view to applying (see Note 888828.1)

Yearly:
- Review Major Releases and/or Patch Set releases and plan upgrades for the following year
- Review latest patches for additional Exadata components (KVM, PDU etc.) – see Note 888828.1

Other:
- Upgrade to latest Release to remain within Premier Support
Table 2 Recommended Patching Schedule
5.1.2 CPUS AND PSUS
Critical Patch Updates (CPUs) and Patchset Updates (PSUs) are released quarterly on the Tuesday closest to the 15th day of January, April, July and October, which allows time to plan for the testing and application of the identified patches. The schedule for release, and information about all previous CPUs, is available at http://www.oracle.com/technology/deploy/security/alerts.htm
5.1.3 OPATCH UTILITY
The Oracle Patch utility is used to apply database and Grid Infrastructure (clusterware and ASM) software changes. The latest version of OPatch should always be downloaded and applied prior to applying other patches.
To check on the latest version of OPatch that is available, log in to My Oracle Support and navigate to the ‘Patches and Updates’ page, enter ‘6880880’ as the patch number, set the platform to the appropriate value (e.g. Linux x86-64) and click on ‘Search’. Click on the patch number for the release of Oracle you want, then click on the ‘Download’ button.
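As a sketch (the zip file name varies by OPatch release and is illustrative here), updating OPatch is an unzip into the Oracle home followed by a version check:

```shell
# Download patch 6880880 from MOS first; the zip name below is an example only
cd $ORACLE_HOME
unzip -o /tmp/p6880880_112000_Linux-x86-64.zip   # refreshes the OPatch/ directory
OPatch/opatch version                            # confirm the new version is active
```

Repeat for each Oracle home (database and Grid Infrastructure) before applying other patches.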
5.1.4 EXADATA SPECIFIC PATCHING
Recommended bundle patches, named Quarterly Database Patch for Exadata (QDPE), will be released quarterly, aligned with and including the database Critical Patch Update (CPU) and Patch Set Update (PSU) releases. QDPE releases are targeted for planned patching activities and are the only patch bundles that will be applied. Oracle will also continue to make important fixes available in monthly or bimonthly interim bundle patches. These should only be installed if they are recommended by Oracle Support to address a specific issue that is fixed in an interim bundle patch prior to the next QDPE containing the fix.
There are three layers of patching on Oracle Database Machines:
Database Server (Oracle RAC and RDBMS related) – released monthly
Storage Server (Oracle Exadata related) – released quarterly
InfiniBand switch – released every 6-12 months
5.1.5 DATABASE SERVER PATCHING
Database Servers may have two types of patches – firmware and Operating System (Oracle Linux).
Firmware downloads for Sun Fire database servers can be obtained from Oracle Technology Network (OTN) at http://www.oracle.com/technetwork/systems/patches/firmware/index.html.
Note: do not update Exadata Storage Servers with firmware downloaded from OTN. Use only Exadata Storage Server patches.
Updates for Oracle Linux must only be obtained from the Unbreakable Linux Network (ULN). Do not automatically apply kernel updates because it may break compatibility with OFED software (see below). ULN channels are as follows:
OL 5.3 channel el5_u3_x86_64_patch
OL 5.5 channel el5_u5_x86_64_patch
Care should be taken when installing packages that reconfigure operating system parameters, such as the oracle-validated RPM. Parameter changes may remove Exadata-specific best practice configuration provided when the system was deployed.
5.1.6 DATABASE SOFTWARE PATCHING
Patches for Oracle Database 11gR2 and Oracle Clusterware 11gR2 are called Oracle Database Server software patches. Oracle Database Server software patches contain updates to Oracle
Database and Oracle Clusterware software. They are supplied as one or more downloadable patches from MOS.
Most updates are delivered in bundle patches created specifically for Exadata for Oracle Database and Oracle Clusterware. Bundle patches contain no Exadata-specific code and may be installed on non-Exadata systems. Note, however, that priority for requests to merge one-off fixes into Exadata bundle patches is given to Exadata customers.
Bundle patches are cumulative. Bundle patches will contain the most recently released Patch Set Update (PSU), which, in turn, contains the most recently released Critical Patch Update (CPU). Review MOS Note 888828.1 for details.
Oracle Database Server software patches are installed using the OPatch utility into the Oracle software home directory and/or the Grid Infrastructure software home directory. Prerequisites and instructions for installing a patch are provided in a README supplied with the patch. There may be supplementary documentation for a patch that is referenced in a MOS note. Oracle Database Server software patches may be installed in any order unless otherwise indicated in a patch README.
Note: Oracle Database Server software may require a minimum Exadata Storage Server software version. Refer to the Version Compatibility section in MOS Note 888828.1.
5.1.7 STORAGE SERVER PATCHING
An Exadata Storage Server patch contains updates to firmware, operating system, and/or Exadata Storage Server software. Updates to Exadata Storage Server must occur only with an Exadata Storage Server patch provided as a single downloadable patch from My Oracle Support (MOS). Do NOT manually update firmware or software on storage servers.
An Exadata Storage Server patch may contain updates to firmware and operating system to apply to database servers. This is known as a database server minimal pack.
Exadata Storage Server patches are typically installed using a script supplied with the patch called ‘patchmgr’. They may be installed in one of two ways, rolling or non-rolling:
Rolling - Storage server patching performed in a rolling manner is applied one storage server at a time while databases remain operational until all storage servers are patched.
Non-rolling - Storage server patching performed in a non-rolling manner is applied to all storage servers simultaneously with databases offline.
Prerequisites and instructions for installing the patch are provided in a README supplied with the patch. There may be supplementary documentation for a patch that is referenced in a MOS note.
Exadata Storage Server patches are supplied independent of Oracle Database Server patches (i.e. patches applied to RDBMS or Grid Infrastructure homes using OPatch). However, an Exadata Storage Server patch may require a specific Oracle Database Server patch level, or database server firmware or operating system version. Details are provided in the patch README.
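For orientation only (the exact options differ between patch releases, so the patch README always takes precedence), a rolling storage server patch run typically looks like:

```shell
# cell_group is a text file listing the storage server hostnames, one per line
./patchmgr -cells cell_group -patch_check_prereq -rolling   # dry-run prerequisite check
./patchmgr -cells cell_group -patch -rolling                # patch one cell at a time
```

In rolling mode databases stay up throughout; in non-rolling mode the -rolling option is omitted and databases must be down.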
5.1.8 INFINIBAND SWITCH PATCHES
An InfiniBand switch patch contains updates to the software and/or firmware for InfiniBand switches. Updates to InfiniBand switch software and/or firmware must occur only with the patch downloaded from MOS.
Prerequisites and instructions for installing a patch are provided in a README supplied with the patch.
The InfiniBand switch software version has no dependency on the Exadata Storage Server software version, unless otherwise indicated in this document or in the InfiniBand switch software or Exadata Storage Server software patch README.
5.2 Statistics Gathering
By default, Oracle gathers database statistics every day at 22:00 for every database object whose data has changed by more than ten percent.
If multiple databases will be sharing the same RAC cluster, the default scheduling will need to be changed to ensure that each statistics gathering run takes place at a different time from the other databases in the cluster. In addition, the statistics gathering will be run from the ‘available’ instance for each primary database rather than the ‘preferred’ instance. (Note: a suitable ‘service’ will need to be defined/registered to run Gather Statistics.)
For example, the table below lists the databases with their proposed start time – this may need to be adjusted once the durations are known:
Database Default Instance Default Server Estimated Duration Calculated Start Time
Table 3 Statistics Gathering Schedule
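A hedged sketch of moving statistics gathering off the default schedule (the job name and time are illustrative; this assumes the 11gR2 DBMS_AUTO_TASK_ADMIN and DBMS_SCHEDULER packages):

```sql
BEGIN
  -- Disable the default automatic optimizer statistics task for this database
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);

  -- Gather statistics at a database-specific time instead (here 23:30)
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_STATS_PRODDB',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS(options => ''GATHER AUTO''); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=23;BYMINUTE=30',
    enabled         => TRUE);
END;
/
```

To honour the ‘available’-instance requirement above, the job can be placed in a job class whose service attribute is the statistics service registered for that database.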
5.3 Purging Log/Trace Files
The Oracle Grid Infrastructure and Database processes generate a considerable amount of logging and tracing data that needs to be archived and/or purged on a regular basis.
By default all log and trace files for Oracle Grid Infrastructure will be purged after seven calendar days. This will be managed through a Unix shell script run locally on each server.
By default all log and trace files for each Oracle database will be purged after seven calendar days. This will be managed through a Unix shell script run locally on each server.
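A minimal sketch of such a purge script (the directory paths in the example calls are assumptions; each server's real diagnostic_dest and Grid Infrastructure log locations should be used):

```shell
# purge_diag: delete Oracle log/trace files older than a retention period.
purge_diag() {
    base="$1"          # directory tree to clean
    days="$2"          # retention in days
    find "$base" -type f \
         \( -name '*.trc' -o -name '*.trm' -o -name '*.log' \) \
         -mtime +"$days" -delete
}

# Example: purge database and Grid Infrastructure files older than 7 days
# purge_diag /u01/app/oracle/diag 7
# purge_diag /u01/app/11.2.0/grid/log 7
```

The script would be scheduled from cron on each server, one run per local Oracle and Grid Infrastructure home.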
6 MONITORING AND ALERTING
6.1 Oracle Enterprise Manager
Oracle Enterprise Manager (OEM) Cloud Control 12c will be used to monitor and manage the Exadata estate.
OEM Cloud Control 12c provides comprehensive monitoring and notifications to enable the Infrastructure Support team to proactively detect and respond to problems with Oracle Exadata Database Machine and its software and hardware components. When notified of these alerts, the team can easily view the history of alerts and associated performance metrics of the problem component, such as the network performance of an InfiniBand port or the disk activity of an Exadata storage cell, to identify the root cause of the problem.
Figure 4 Oracle Enterprise Manager
The OEM Plug-in for Exadata enables the real-time monitoring and management of all the Exadata components and provides a consolidated view of each Database Machine:
Database Servers
Hardware, operating system, Grid Infrastructure, databases
Storage Servers
Hardware, operating system, Exadata software
InfiniBand network
InfiniBand switches
Cisco Ethernet switch
Keyboard-Video-Mouse (KVM) Switch
Power Distribution Units (PDUs)
With direct connectivity into the hardware components of Exadata, Oracle Enterprise Manager can alert staff to hardware-related faults and log service requests automatically through integration with Oracle Automatic Service Requests (ASR) for immediate review by Oracle Support.
The Oracle Management Server (OMS) and supporting Repository will reside within the Management zone. Enterprise Manager Agents will be installed on all Production and Non-production Database
Machines to enable the Infrastructure team to monitor and manage all of the Exadata based databases from a single console.
Figure 5 Customer Exadata Estate
In addition to the ‘Production’ Enterprise Manager environment, there will be a second ‘Dev/Test’ OEM environment to support the testing of Enterprise Manager functionality and new software/configuration changes prior to implementing in the production environment.
There will also be a single Auto Service Request (ASR) service that will automatically upload Exadata component faults to Oracle Support. This service will support all Oracle/Sun based servers in the Production, Pre-Production and Dev/Test environments.
6.2 Metrics and Alerts
Enterprise Manager comes with a comprehensive set of performance and health metrics that allow monitoring of key components in the Exadata environment including databases, hosts, operating systems, and storage cells.
Most metrics have predefined limiting parameters called thresholds that cause alerts to be generated when metric values exceed them. A metric alert, which is a type of event, indicates that a warning or critical threshold for a monitored metric has been crossed. An event can also be generated for various availability states such as ‘Database Down’ or ‘Target Unreachable’.
When an event is generated, details about the event are accessible from the Incident Manager page. Administrators can be automatically notified via email and/or SMS when an event is generated. It is also possible to establish corrective actions to automatically resolve an event condition.
Each application using an Exadata based database will need to review the metrics and their thresholds to ensure they are suitable for their environment. They will also need to identify the administrators that are to be notified in the event of an alert being raised.
6.3 Incident Management
In addition to monitoring the Oracle Exadata environments, Enterprise Manager enables the management of all incidents in one central location. Incident Manager is the central point for summarizing, managing, diagnosing, and resolving events, incidents, and problems that impact the enterprise. An incident is an event that represents an issue requiring resolution, whereas a problem is an undiagnosed underlying root cause of one or more related incidents.
Incident Manager enables incidents to be assigned to specific personnel, thus distributing the workload among the administrators. Incidents can be prioritized, escalated, and tracked through various states of resolution. The person working an incident or problem can acknowledge ownership and provide information to the user community regarding the progress of the resolution.
In conjunction with Incident Manager, Incident Rules can be defined that enable you to manage the automation and implementation of operational practices that manage events, incidents, and problems.
By using incident rules, responses to incoming incidents and their updates can be automated. A rule consists of the selection criteria to identify the incidents the rule applies to, the conditions when the rule should be applied (for example, if an incident priority is changed to P1), and actions that need to be taken in response to the incident. The actions supported for incidents include: notifications, changing of the appropriate resolution management attributes, and ticket creation.
The Oracle Enterprise Manager Incident Management functionality will need to be integrated into the Infrastructure Management process.
6.4 Performance Monitoring
Comprehensive database monitoring enables the DBA team to identify the problem areas in a database environment that are degrading performance. Enterprise Manager uses data from the Automatic Workload Repository (AWR) to display performance information and initiate database alerts. The user interface provides several real-time performance charts and drill-downs for the targets that are being managed.
The Automatic Workload Repository retains detailed performance data for eight days. To enable ‘like for like’ performance comparisons over longer periods of time, regular AWR Baselines should be taken. These Baselines should cover key processing periods (e.g. batch processing window, peak user access period, end of month processing).
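For example (the snapshot ids are illustrative; real ids come from DBA_HIST_SNAPSHOT), a named baseline for a key processing period can be created with:

```sql
-- Preserve the AWR snapshots covering the month-end batch window as a named baseline
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 1201,
    end_snap_id   => 1215,
    baseline_name => 'month_end_batch');
END;
/
```

Snapshots inside a baseline are exempt from the eight-day AWR purge, so the period remains available for later comparison.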
6.5 Configuration Management
Enterprise Configuration Management deals with the collection, storage, and monitoring of configuration data tied to managed entities within the enterprise. A host, for example, has configuration item types related to its hardware and software components—number of CPUs, memory, IO devices, OS platform and version, installed software products, and so forth. Changes to configuration data invariably happen, typically because of common events like patches and upgrades. At some point a change to one component can affect the overall system in a negative way. Detecting the root cause becomes paramount.
The Oracle Enterprise Manager Agents running on each of the managed servers will gather configuration information about all of the targets on that server.
By using this data, OEM enables you to compare configurations of a target with configurations of another target of the same type. The comparisons can be done on the current configuration or configurations previously saved (perhaps, for example, just before applying a patch or doing an upgrade).
It also enables the monitoring of change activity across the enterprise. The history is a log of changes to a managed entity (target) recorded over a period of one year; it includes changes both to configurations and to relationships. Relationships are the associations that exist among managed entities.
While viewing a configuration history you can:
Track changes to targets over time by specifying and refining search criteria.
View change history and manipulate how the information is presented.
Annotate change records with comments that become part of the change history.
Schedule a history search to capture future changes based on the same criteria.
This information can then be used in conjunction with the Enterprise Configuration Management tool to detect anomalies between the expected configuration(s) and the actual configuration(s).
7 APPLICATIONS TIER
This chapter should be completed when the actual implementation project is ongoing.
7.1 Web Server Monitoring
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
7.2 Error Log Management (*) SysAdm ?
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
7.3 Management of Servers in the DMZ
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
8 DESKTOP CLIENT TIER
Client Browser: Startup
Responsibility
Frequency
Tools Used
Key Metric(s)
Procedure Steps
Step 1
Step 2
Error or Failure Condition
9 SECURITY AND ACCOUNTS MANAGEMENT (*) - ALL
9.1 Security
Guidelines for platform security are documented in a separate Oracle Exadata and Exalogic Security Hardening document.
Virus Control
9.2 Accounts
This chapter should give guidelines for Exadata and Exalogic account management.
There is a separate document about using Oracle Enterprise Manager for controlling access to Exadata and Exalogic. Security hardening is detailed in reference document #4.
10 HARDWARE AND NETWORK MANAGEMENT (*) - HUS
General
Automatic Service Request (ASR)
There will be a single Auto Service Request (ASR) service that will automatically upload Exadata and Exalogic component faults to Oracle Support. This service will support all Oracle/Sun based servers in the Production, Pre-Production and Dev/Test environments.
The HUS mailbox associated with My Oracle Support automatic service requests is [email protected]. Alerts for automatic service requests are sent to this mailbox and need follow-up.
11 SOFTWARE MANAGEMENT (*) – ALL INSTALLING SOFTWARE
11.1 Scope
This chapter should be used together with My Oracle Support guidance and notes. Its purpose is only to highlight special considerations related to Exadata and Exalogic and to reference application-related documentation.
11.2 Procedures
Security Patching - Database
Security patching frequency is xx. The frequency may change subject to Oracle Support policies.
There are several Oracle Homes. Some instances share a common Oracle Home.
Oracle Homes:
- IDM1
- IDM2
- EBSB
- EBST
- Fusion apps
  - Fusion apps1
  - Fusion apps2
- Etc.
Version Upgrading – Database
Security Patching – Application Server
Version Upgrading – Application Server
Fusion Application Patching – extra considerations
Scope
Application Tier
Database Tier
EBS Database Patching – extra considerations
Scope
Database Tier
Enterprise IDM Patching – extra considerations
Scope
Application Tier
Database Tier
12 PERFORMANCE AND AVAILABILITY MANAGEMENT
This chapter covers performance management. Performance management also includes capacity management, and availability management also includes backup and recovery.
12.1 Procedures
12.2 Testing Fault Tolerance
13 CAPACITY PLANNING
This chapter describes how capacity is planned to be monitored and augmented as needs arise.
14 EXTERNAL SUPPORT – PARTIES APPLYING PATCHES
Support Information
CSI Numbers:
Exadata HW CSI – 19085705
Exadata SW CSI – 19080315
Exalogic HW CSI – 19080310
Exalogic SW CSI – 19085704
Fusion Applications – 19179841
Support Level: Premier Support
Patches Applied to Date
See attachment for current list of patches applied to date.
15 OPEN AND CLOSED ISSUES
15.1 Open Issues
ID Issue Resolution Responsibility Target Date Impact Date
1 Use of HCC compression must be defined by a separate decision-making process (Responsibility: HUS)
2 The party responsible for maintenance must be identified (Responsibility: HUS)
3 Maintenance procedures must be defined and documented (Responsibility: HUS)
4 Backup strategy and implementation must be verified (Responsibility: HUS)
5 A decision needs to be made whether this document needs to be divided into separate documents; this decision partly depends on the responsible parties (Responsibility: HUS)
15.2 Closed Issues
ID Issue Resolution Responsibility Target Date Impact Date