HPE 3PAR StoreServ 20800, Oracle RAC stretched cluster with Peer Persistence

Technical white paper

Contents

Executive summary
Solution overview
Solution components
  HPE ProLiant DL580 Gen8 Server
  HPE 3PAR StoreServ Storage System
  HPE SN6000B Fibre Channel Switch
Oracle RAC with Peer Persistence solution overview
  Peer Persistence overview
  Oracle RAC 12c in a stretched cluster overview
High Availability and failover scenarios
  Planned migration from the active array to the passive array
  Server failure in a stretched cluster
  Failure of primary array
HPE 3PAR infrastructure for the cloud
  Multi-tenancy
Lower cost of ownership
  HPE 3PAR Thin Provisioning with Oracle 12c
  HPE 3PAR StoreServ Management Console
Key findings
  Oracle RAC configuration
  HPE 3PAR Peer Persistence configuration
Summary
Bill of materials
  Hardware bill of materials
  Software bill of materials
Terminology

Executive summary
Oracle databases are at the heart of enterprise environments. For online transaction processing (OLTP), the Oracle DB must deliver reliable performance and mission-critical availability. SAN storage performance and availability are critical components of this offering. For Oracle DB environments, HPE 3PAR StoreServ 20800 offers extreme OLTP performance with mission-critical availability and tier-1 data services. This reference architecture demonstrates performance, high availability, and lower TCO for Oracle environments, both traditional and private cloud, with the all-new HPE 3PAR StoreServ 20800 array.

Target audience
This white paper is intended for Hewlett Packard Enterprise customers and channel partners, Oracle DB administrators, and presales as well as solution architects seeking a SAN storage solution that delivers reliable performance and mission-critical availability for their Oracle database. Familiarity with Oracle Real Application Clusters (RAC) concepts is assumed.

This white paper describes testing performed on HPE 3PAR InForm OS 3.2.2 MU1 during 2015.

Solution overview
This white paper describes how Oracle 12c RAC and HPE 3PAR Peer Persistence can be deployed to provide customers with a highly available stretched cluster, a cluster that spans two data centers. A stretched RAC cluster with HPE 3PAR Peer Persistence protects services from site disasters and extends storage load balancing to the multi-site data center level. The stretched cluster described can span metropolitan distances (up to 5 ms round-trip latency on the Fibre Channel [FC] replication link, roughly 500 km of fiber between sites), allowing administrators to move storage workloads seamlessly between sites, adapting to changing demand while continuing to meet service-level requirements.
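The 5 ms round-trip budget maps to distance roughly as follows. This is a back-of-the-envelope sketch assuming light propagates through optical fiber at about 200 km per millisecond (roughly two-thirds of c); real links consume additional budget in switches, transceivers, and any DWDM gear, so practical distances are shorter.

```python
# Approximate propagation speed of light in optical fiber.
FIBER_KM_PER_MS = 200.0

def max_site_separation_km(rtt_budget_ms: float) -> float:
    """One-way fiber distance that consumes the entire round-trip budget."""
    one_way_ms = rtt_budget_ms / 2.0
    return one_way_ms * FIBER_KM_PER_MS

# The 5 ms synchronous replication budget, ignoring equipment delays:
print(max_site_separation_km(5.0))  # 500.0 km of fiber
```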

Oracle 12c RAC combines servers to create a resilient client connection and compute infrastructure for Oracle databases. HPE 3PAR Peer Persistence combines HPE 3PAR Storage systems for multi-site flexibility and availability. HPE 3PAR Remote Copy, a component of Peer Persistence, provides synchronous replication between the arrays for storage disaster tolerance. Peer Persistence adds the ability to redirect host IO from the primary storage system to the secondary storage system transparently.

Solution highlights
• Oracle RAC

– Active/active server cluster, sharing host performance and availability

– Load-balanced and failover-protected client connections with Single Client Access Name (SCAN) and Transparent Application Failover (TAF) services

– Shared database buffer cache across a network link

• HPE 3PAR Peer Persistence

– HPE 3PAR Remote Copy maintains synchronous copies of critical volumes over metropolitan distances

– Transparent failover/failback of replicated storage

– The ability to redirect host IO to either data center

– Highly available storage at the primary and secondary sites

– Industry-leading array performance
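Clients typically reach the SCAN and TAF services noted above through a connect descriptor of the following shape. This is an illustrative tnsnames.ora fragment only; the alias, SCAN host name, service name, and retry values are hypothetical and not taken from the tested environment.

```
DB01_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db01-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB01)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5)
      )
    )
  )
```

With TYPE = SELECT, in-flight queries resume on a surviving instance after a node failure; in-flight transactions still roll back and must be retried by the application.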

Figure 1. Stretched Oracle Real Application Cluster

Solution components
• HPE ProLiant DL580 Gen8 Server

• HPE 3PAR StoreServ 20800 Storage System

• HPE SN6000B Fibre Channel Switch

HPE ProLiant DL580 Gen8 Server
Two DL580 Gen8 servers were located at each site in this solution. The HPE ProLiant DL580 Gen8 Server is an enterprise-grade four-socket x86 server offering breakthrough performance, rock-solid reliability, and compelling consolidation and manageability efficiencies. It is ideal for mission-critical applications, business intelligence, and online database applications. This solution is also compatible with the HPE ProLiant DL580 Gen9 Server.

Key features
• With Intel® Xeon® E7-4850 v2 processors, the HPE DL580 Gen8 offers blazing-fast results with enhanced processor performance

• More memory slots (96 DIMMs)

• Greater IO bandwidth (9 PCIe Gen3 slots)

• Security and data protection for system resiliency that businesses can depend on

• Intelligent manageability through HPE OneView, iLO 4, and user-inspired features

• Faster, lower cost infrastructure management

Figure 2. HPE ProLiant DL580 Gen8 Server


HPE 3PAR StoreServ Storage System
Two HPE 3PAR StoreServ 20800 Storage systems served as the highly available storage platforms in this solution. The two arrays are connected with redundant 16 Gb FC SAN inter-switch links (ISLs), creating a federated storage infrastructure. The HPE 3PAR StoreServ 20000 Storage family offers tier-1 flash-ready arrays that accommodate the performance demands of consolidated workloads. With flash configurations, a single HPE 3PAR StoreServ can deliver 3 million IOPS at sub-millisecond latencies and is scalable up to 15 PiB.

The HPE 3PAR StoreServ Gen5 Thin Express ASIC enables solid-state accelerated thin technologies, including inline deduplication to reduce used storage requirements by up to 75%.1 The Thin Express ASIC offloads tasks from the controller CPUs, lowering latency and making the most of the 8-node Mesh-Active architecture. It also enables Persistent Checksum throughout the StoreServ datapath, protecting against media and transmission errors with no impact on performance. The HPE 3PAR ASIC alleviates the performance contention caused by mixed transactional and high-throughput IO demands. The HPE 3PAR StoreServ 20000 arrays are designed for performance and reliability under the multi-tenant workloads of consolidated and virtualized data centers.

HPE 3PAR StoreServ 20800 Storage offers a large amount of cache, up to 1.8 TB on-node, which supports multiple data services simultaneously: HPE 3PAR Remote Copy, HPE 3PAR Priority Optimization, HPE 3PAR Peer Persistence, and HPE 3PAR Adaptive Optimization (AO). The ability to scale up to eight controller nodes, combined with the large on-node cache, enables high levels of consolidation to support multiple applications within the same storage array. Port scalability supports data services such as HPE 3PAR Remote Copy, Peer Persistence, Peer Motion, and File Persona without port constraints. Together with six-nines availability and tier-1 enterprise resiliency, this makes the array an ideal platform for consolidation and high-availability use cases.

One HPE 3PAR StoreServ 20800 Storage System is located at each site.

Note that although testing for this reference architecture was done on the StoreServ 20800 array, all other HPE 3PAR StoreServ arrays are capable of running this solution; the entire HPE 3PAR StoreServ family offers the same firmware and feature set.

Key features
• Flash-ready hardware and software for potential performance greater than 3 million IOPS at sub-millisecond latencies in a single array.

• Scalable up to 15 PiB and up to eight controllers in a Mesh-Active cluster.

• 7x or greater density, with up to 5.5 PiB of usable storage in a single expansion rack.

• HPE Remote Copy with Peer Persistence expands Asymmetric Logical Unit Access (ALUA) support to the data center layer by linking storage arrays across metropolitan distances. Peer Persistence enables arrays to serve as disaster recovery (DR) sites for specific volumes while still serving active data to both local and remote servers. Planned and DR failover is seamless, allowing application data access to move from one site to another without downtime.

• HPE 3PAR Priority Optimization allows management of consolidated workloads by enabling granular administration of IO performance. Priority Optimization enables setting minimum and maximum limits for IOPS and throughput at the Virtual Volume Set level. As demands grow, service providers can manage how the array's performance is divided, enable growth without contention, and support SLA commitments.

• AO makes tuning workloads to the right storage tier automatic. AO moves the hottest data to the fastest disks and cold blocks to less expensive tiers.

• Guaranteed 99.9999% data availability to meet the most demanding service-level agreements. See the HPE 3PAR 6-Nines Guarantee Program for more information.

1 Thin Deduplication Brochure, HPE 3PAR StoreServ Architecture


Figure 3. HPE 3PAR StoreServ Storage2

HPE SN6000B Fibre Channel Switch
Four HPE SN6000B Fibre Channel switches were used in the test environment to provide redundant SAN connectivity to local storage and ISL connections between the emulated sites. Two SN6000B FC switches are provisioned at each site. The HPE SN6000B Fibre Channel Switch meets the demands of hyper-scaled storage with 16 Gb FC technology and a design that enables flexibility. It is configurable with 24, 36, or 48 ports in a 1U package.

Key features
• In-flight compression and encryption provide efficient link utilization and security.

• Metro cloud connectivity features, along with integrated DWDM and dark fiber support (optional license).

• Supports multi-tenancy and non-stop operations.

• Provides a flexible, simple SAN solution with industry-leading technology.

Figure 4. HPE SN6000B Fibre Channel Switch

2 HPE 3PAR StoreServ offering: hpe.com/us/en/storage/3par.html


Oracle RAC with Peer Persistence solution overview
This solution was designed to demonstrate a highly available Oracle RAC environment that, with HPE 3PAR Peer Persistence, can share resources across data centers without being constrained by local boundaries. HPE 3PAR Peer Persistence protects Oracle databases from site-wide disasters and hardware failures while adding the flexibility of managing IO workloads across data centers. Refer to the HPE SAN Design Reference Guide and the white paper on disaster-tolerant solutions with HPE 3PAR Remote Copy, which provide in-depth information on SAN design for an extended cluster.

Peer Persistence overview
HPE 3PAR Peer Persistence manages host connect paths and the replication direction of HPE 3PAR Remote Copy. Volumes are grouped together into sets referred to as Remote Copy Groups. Remote Copy replicates writes to Remote Copy Groups between storage systems. When a server issues writes to volumes in a Remote Copy Group, the storage system orders the IO consistently across the volumes in the group. The writes are then replicated to the remote array's target volumes. When the remote storage system has written the blocks to cache on at least two controller nodes, the write is acknowledged back to the source array, and then to the host that issued the write. The HPE 3PAR Remote Copy Software User Guide details Peer Persistence and Remote Copy setup as well as requirements.
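Setting up a synchronous Remote Copy Group of this kind follows a pattern like the sketch below, using the HPE 3PAR CLI. This is an illustrative sequence only: the group, volume, and target names (DB01rc, dbvol.*, 3parB) are placeholders, and the exact options should be checked against the Remote Copy Software User Guide for your InForm OS release.

```shell
# Run against the Site-A (source) array; 3parB is the configured Remote Copy target.
creatercopygroup DB01rc 3parB:sync         # create a group in synchronous mode
admitrcopyvv dbvol.0 DB01rc 3parB:dbvol.0  # pair each source VV with its target VV
admitrcopyvv dbvol.1 DB01rc 3parB:dbvol.1
startrcopygroup DB01rc                     # begin the initial synchronization
showrcopy groups DB01rc                    # verify the group reaches the Synced state
```

Because all volumes of an ASM disk group sit in one group, the array orders and replicates their writes consistently, which is what makes the remote copy crash-consistent for ASM.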

Quorum Witness and failover policies
Peer Persistence uses a Quorum Witness to monitor for HPE 3PAR Storage system failure. The Quorum Witness is a virtual machine that runs at a third site and has network connections to both storage systems. Hyper-V and VMware® are both supported hypervisors for the Quorum Witness virtual machine. The Quorum Witness and the Remote Copy fibre link provide the key inputs that trigger Peer Persistence Remote Copy Group failover, determining whether replication should stop or the secondary site should become primary. Table 1 lists the circumstances that cause Remote Copy Group replication to stop and failover from the active to the passive site.

Table 1. Peer Persistence failover table

Circumstance                                                                       Replication stopped   Automatic failover
Array-to-array Remote Copy links failure                                           Yes                   No
Single site to Quorum Witness network failure                                      No                    No
Single site to Quorum Witness network and array-to-array Remote Copy link failure  Yes                   Yes
Both sites to Quorum Witness network failure                                       No                    No
Both sites to Quorum Witness network and array-to-array Remote Copy link failure   Yes                   No
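The decision rule behind Table 1 can be stated compactly: replication stops whenever the array-to-array link is down, and automatic failover additionally requires that exactly one site has lost its Quorum Witness connection. The snippet below encodes that rule for illustration only; the actual arbitration is performed by the arrays and the Quorum Witness, not by host-side software.

```python
def peer_persistence_outcome(rc_link_up: bool,
                             site_a_sees_qw: bool,
                             site_b_sees_qw: bool) -> tuple:
    """Return (replication_stopped, automatic_failover) per Table 1."""
    qw_losses = (not site_a_sees_qw) + (not site_b_sees_qw)
    replication_stopped = not rc_link_up
    # Failover needs exactly one site cut off from the Quorum Witness
    # AND a broken array-to-array Remote Copy link.
    automatic_failover = (not rc_link_up) and qw_losses == 1
    return replication_stopped, automatic_failover

# Third row of Table 1: one site loses the QW network and the RC link is down.
print(peer_persistence_outcome(False, False, True))   # (True, True)
# Last row: both sites lose the QW network and the RC link is down.
print(peer_persistence_outcome(False, False, False))  # (True, False)
```

Note that a split-brain scenario (both sites isolated from the witness) deliberately stops replication without failing over, since neither array can prove the other is dead.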

Remote Copy Group and Oracle Automatic Storage Management considerations
The Oracle Automatic Storage Management (ASM) volume manager stripes data across all the disks in an ASM disk group. Because of this, it is important that all the disks in an array-generated copy of an ASM disk group have the same point-in-time version of the data. To achieve this consistency, put all of an ASM disk group's volumes in the same HPE 3PAR Remote Copy Group. A Remote Copy Group can contain multiple ASM disk groups, as long as it contains the complete set of volumes for each ASM disk group. No ASM disk group should be made of volumes from multiple Remote Copy Groups.

Note
Only external redundancy was tested. Other ASM configurations would require a different Remote Copy Group configuration.

A one-to-one ASM disk group to Remote Copy Group configuration is recommended to simplify documentation and management. Naming the Remote Copy Groups and ASM disk groups with similar names further simplifies documentation and management.

Oracle has a general recommendation3 of no more than two ASM disk groups per RAC cluster. However, there are benefits to using more ASM disk groups (mapped to individual Remote Copy Groups) in a Peer Persistence environment. For example, if two databases are in contention for IO on a local storage system, Peer Persistence allows one of those databases to be switched to a remote storage system, but only if it is isolated in its own Remote Copy Group. After the switch, only write IOs are replicated back to the busy storage system, so the two databases no longer contend for read IO on the local storage system.

3 Oracle ASM 11gR1 Best Practices


Oracle RAC 12c in a stretched cluster overview
In terms of Oracle Grid installation and database management, the setup of Oracle RAC in a stretched cluster is largely the same as a single-site RAC cluster. Refer to Oracle's install guides for hardware requirements and operating system tuning recommendations. Oracle has produced a white paper4 with relevant information on cluster interconnect latency and server-cluster design in a stretched cluster.

Disk device files
The two HPE 3PAR StoreServ Storage systems with Peer Persistence manage the data replication and storage availability in a manner that is transparent to ASM and RAC. The HPE 3PAR Storage systems present mirrored volumes from both sites as alternate paths to what appears to be a single volume. Volumes in HPE 3PAR Peer Persistence Remote Copy Groups can be considered shared storage from the RAC/ASM installer's perspective.

Both UDEV and ASMLIB disk devices were validated in this configuration. The ASM Filter Driver was not implemented in this solution. The ASM disk groups should be created with external redundancy to take advantage of the storage systems' performance and fault tolerance. For supported HPE 3PAR Peer Persistence software combinations, see the HPE Single Point of Connectivity Knowledge (SPOCK) website.5
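A UDEV rule for the ASM devices typically takes a shape like the fragment below, matching the multipath device by its WWID and assigning a stable name with grid-owned permissions. The WWID, symlink name, and ownership values here are placeholders, and the matching key (DM_UUID vs. other properties) can vary by distribution and multipath-tools version, so verify against your platform's documentation.

```
# /etc/udev/rules.d/99-oracle-asm.rules  (example values only)
# Match the device-mapper multipath device for one ASM volume by WWID,
# create a stable symlink, and grant the grid owner read/write access.
ACTION=="add|change", ENV{DM_UUID}=="mpath-360002ac0000000000000001700001234", \
    SYMLINK+="oracleasm/asm-data01", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

Stable names and permissions matter here because the same volume is reached over both Site-A and Site-B paths; the rule keys on the volume's identity, not on any one path.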

CRS and Voting disks
Oracle recommends creating OCR and Voting files in multiple locations when using ASM-based redundancy. In this configuration (external redundancy), the OCR and Voting files reside in a single ASM disk group called OCR_VOTE. The OCR_VOTE ASM disk group is provisioned from a RAID 5 volume and replicated to the second storage system with RAID 5 on the target device. This protects the OCR and Voting files from hardware failure and mirrors them between the two sites. In the event of a site failure, the IO path fails over to the surviving site, which has a synchronously updated copy of the files. Because Peer Persistence joins the storage systems into a single federated storage source, there is no need for a third site with a Voting file. Note that RAID protection and Remote Copy do not protect against user or software errors that result in deletion or corruption.

Flex ASM
Oracle introduced Flex ASM with the 12c release. Flex ASM allows an ASM instance on Server-A to deliver storage blocks to Server-B over the cluster interconnect or a dedicated ASM network. This means Server-B need not run an ASM instance or have a direct SAN connection. If Server-A fails, a different ASM server will provide Server-B with disk data. If the ASM server is at a remote data center, the IO could incur latency from both the ASM network transfer and the storage replication copy. Oracle Flex ASM was not used in this solution because of the potential latency issues.

High Availability and failover scenarios
In this environment, Oracle RAC 12c manages server availability and the availability of user network connections. HPE 3PAR StoreServ with Peer Persistence provides the storage availability and flexibility. The use cases illustrated in this section are a high-level view of the major functions of this solution.

Planned migration from the active array to the passive array
During normal operation, IO from all four of the servers is routed to the primary volumes in the HPE 3PAR StoreServ at Site-A. This means that IO traffic from servers at Site-B is routed between sites to the primary side of the Remote Copy Group at Site-A. Figure 5 shows the server IO paths in red and the replication IO path in green. Writes to the primary volumes at Site-A are replicated to the secondary volumes at Site-B.

4 Oracle RAC and Oracle RAC One Node on Extended Distance (Stretched) Clusters
5 HPE Single Point of Connectivity Knowledge (SPOCK) website


Figure 5. Normal replication direction

The Oracle ASM disk group containing the online redo logs consists of volumes that are in the HPE 3PAR Remote Copy Group named FRA01rc. Similarly, the Oracle ASM disk group containing the database data files maps to the HPE 3PAR Remote Copy Group DB01rc.

In this test, both FRA01rc and DB01rc are manually switched over, meaning the replication direction is reversed and the primary (writable) volumes become secondary (visible but not writable). The Linux® Multipath driver recognizes the change in IO path status and directs IO to the new active volumes. Figure 6 shows the reversed paths after the manual switch.

Figure 6. Reversed replication
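The manual switchover exercised here is typically driven with the HPE 3PAR CLI along the lines of the sketch below. The group names match FRA01rc and DB01rc described above, but this is a sketch, not a reproduction of the tested procedure; confirm syntax and preconditions (groups synced, Quorum Witness healthy) in the Remote Copy Software User Guide for your release.

```shell
# Issued against the array that currently holds the primary role.
setrcopygroup switchover FRA01rc   # reverse roles for the FRA disk group's volumes
setrcopygroup switchover DB01rc    # reverse roles for the data disk group's volumes
showrcopy groups DB01rc            # confirm the replication direction has reversed
```

After the switchover the hosts' multipath layer, not the database, absorbs the change: the formerly ghost paths become ready and IO follows them, as described next.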


The Linux Multipath driver's ALUA functionality recognizes the Site-A paths as ready and the Site-B paths as inactive ghost paths. When the Remote Copy replication direction is switched, the multipath driver fails over to the Site-B paths, which now have a ready status. The ready and ghost path status can be monitored on the Linux systems with the multipath command. Figure 7 shows part of the multipath -ll output before and after the switchover.

Figure 7. Multipath status
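A check like the one in Figure 7 is easy to script. The snippet below classifies paths from multipath -ll output into ready (active-site) and ghost (standby-site) sets; the sample output is a trimmed, synthetic excerpt for illustration, and real output varies by multipath-tools version.

```python
# Synthetic excerpt in the shape of `multipath -ll` output for one 3PAR volume.
SAMPLE = """\
360002ac0000000000000001700001234 dm-3 3PARdata,VV
`-+- policy='round-robin 0' prio=50 status=active
  |- 1:0:0:1 sdb 8:16 active ready  running
  `- 2:0:0:1 sdf 8:80 active ready  running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 1:0:1:1 sdd 8:48 active ghost  running
  `- 2:0:1:1 sdh 8:112 active ghost running
"""

def classify_paths(output: str) -> dict:
    """Map path state ('ready' or 'ghost') to the device names in that state."""
    states = {"ready": [], "ghost": []}
    for line in output.splitlines():
        fields = line.split()
        for state in states:
            if state in fields:
                # On a path line, the device name (e.g. sdb) sits three
                # fields before the state token: '1:0:0:1 sdb 8:16 active ready'.
                states[state].append(fields[fields.index(state) - 3])
    return states

print(classify_paths(SAMPLE))  # ready: sdb, sdf (active site); ghost: sdd, sdh
```

After a switchover the same check would show the ready and ghost sets swapped, which is exactly the transition visible in Figure 7.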

Figure 8 shows redo log activity on the four DB01 instances during the manual switchover. The switch did not cause a significant impact on transaction activity.

Figure 8. Redo log activity during switch over


The OLTP workload generator was emulating 1000 users during the switchover. Figure 9 shows the relative TPS during the manual switchover.

Figure 9. OLTP user TPS during switchover

The operation to fail back to the normal replication direction is functionally identical to the failover described above. The switchover is transparent to user access and database activity. This enables read activity to be moved to the peer data center without impacting service levels.

Server failure in a stretched cluster
Oracle Clusterware and Oracle RAC 12c provide the services to maintain client connections and perform online cluster reformation if a server fails. In this scenario, power was failed on one node of the four-node stretched cluster (two nodes reside at each site). Oracle RAC reformed the cluster with the remaining three nodes, allowing the emulated users to continue working in the database. Figure 10 shows the server part of the solution without the storage.

Figure 10. Oracle RAC Server cluster


After the server failure, Oracle RAC automatically performed an online reconfiguration of the cluster to include only the surviving nodes. The Oracle Net functionality allowed the new and re-established client connections to be directed to the remaining database instances. Figure 11 shows the virtual IP for the failed server has moved to a surviving node.

Figure 11. Reformed 3-node cluster

Figure 12 shows redo log activity as the server running the DB012 instance is power failed. There is a brief drop in transactions being applied, followed by the cluster being reconfigured without the DB012 instance. The redo activity recovered to near the original rate with the three remaining instances.

Figure 12. Redo log activity

Failure of primary array
In this test case, an OLTP workload was started against the cluster database and then the primary storage system was crashed. An automatic failover occurs when the Quorum Witness loses contact with an array and the Remote Copy link is down. Replication is stopped and the surviving storage system is made the primary side of the Remote Copy pair. Figure 13 shows the normal state of the environment with the Site-A array serving the host IO path. Data written to Site-A is synchronously replicated to the paired volumes at Site-B before the write is acknowledged back to the host.


Figure 13. Site-A normal replication

Figure 14 shows the IO route after the primary array is down. Replication has stopped, and the volumes at Site-B are now able to serve data to the RAC servers. The Linux multipath driver on the server nodes failed the paths to Site-A and continues IO over the newly enabled Site-B paths. The Remote Copy Group is resynced from the Site-B array and replication is restarted after the Site-A array is back up.

Figure 14. IO route after failover
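Path state after a failover can be confirmed from any RAC node with `multipath -ll`. The sketch below parses sample output to count active and failed paths; the mpatha device, WWID, and sd* path names are made-up sample data, and on a live node the variable would be replaced by the actual `multipath -ll` output.

```shell
# Count active vs. failed paths from `multipath -ll`-style output.
# The device layout below is hypothetical sample data; on a live node,
# use: sample_output=$(multipath -ll)
sample_output='mpatha (360002ac0000000000000001700001449) dm-2 3PARdata,VV
size=500G features="0" hwhandler="1 alua" wp=rw
|-+- policy="round-robin 0" prio=50 status=active
| |- 1:0:0:1 sdb 8:16 active ready running
| `- 2:0:0:1 sdd 8:48 active ready running
`-+- policy="round-robin 0" prio=10 status=enabled
  |- 1:0:1:1 sdc 8:32 failed faulty running
  `- 2:0:1:1 sde 8:64 failed faulty running'
active=$(printf '%s\n' "$sample_output" | grep -c 'active ready')
failed=$(printf '%s\n' "$sample_output" | grep -c 'failed faulty')
echo "active=$active failed=$failed"
```

In this sample, the two Site-A paths have failed and IO continues over the two active Site-B paths.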

Figure 15 shows that transactions continue during the failover to Site-B. There is an increase in redo log activity after the secondary paths become active and the queued IOs are retried on the Site-B paths. The synchronous Remote Copy maintains the same point-in-time data image at both sites, so no database downtime or recovery is required. After the primary site is back up, a recovery operation on the Remote Copy Groups will re-sync the data. This recovery operation can be performed online. The performance impact of the recovery operation will vary depending on the amount of data being transferred and the concurrent online activity.


Figure 15. Redo log during array crash
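Once the failed array returns to service, the online re-sync described above can be driven from the HPE 3PAR CLI. A minimal sketch, using the ASMLIBrc Remote Copy Group name from this solution; exact options vary by HPE 3PAR OS release, so confirm the invocation against the CLI reference before use.

```shell
# Check Remote Copy state and group roles after the outage
showrcopy groups

# Re-sync the group from the surviving (now primary) array;
# performed online while the database continues running
setrcopygroup recover ASMLIBrc
```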

HPE 3PAR infrastructure for the cloud

A fundamental requirement in Cloud Service Management is to maintain configuration flexibility and the ability to scale quickly to meet demand. The HPE 3PAR StoreServ Storage system has features beyond the Peer Persistence described in this paper that help cloud administrators adapt to changing requirements and provide infrastructure services with an impressive return on the hardware investment.

Multi-tenancy

Priority Optimization
The HPE 3PAR StoreServ Storage system can scale to 15 PB of disk space and is designed to serve storage to multiple and varied applications. In SSD configurations, it is capable of 3 million IOPS at sub-millisecond latency. HPE 3PAR Priority Optimization is designed to help the administrator apply the performance potential of the array to the applications that need it. Priority Optimization allows the definition of Service Level Objectives and Service Level caps for IOPS and bandwidth.

For example, policies could be used to cap the bandwidth consumed by a DWH job so that backups can stream, or to guarantee random small-block IOPS performance to a critical online application regardless of what else is running. These rules are applied at the Virtual Volume Set and Virtual Domain levels. VVsets are groups of virtual volumes that map well to the Oracle construct of ASM disk groups. With HPE 3PAR Priority Optimization, ASM disk group performance can be portioned out to meet the requirements of applications, removing concerns about performance contention in the array.
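Rules of this kind are set from the HPE 3PAR CLI with `setqos`. A hedged sketch; the VVset names are hypothetical, and the flag spellings and limit formats should be verified against the CLI reference for your HPE 3PAR OS version.

```shell
# Cap the bandwidth the DWH VVset may consume so backups can stream
setqos -bw 200m vvset:DWH_vvset

# Cap a reporting VVset's IOPS to protect a critical OLTP application
setqos -io 5000 vvset:REPORTS_vvset
```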

Peer Persistence

HPE 3PAR Peer Persistence is another tool for managing storage performance. When a Virtual Volume Set is in contention for IOPS or bandwidth on one array, it can be transparently switched to a peer array with the Peer Persistence switchover function. If an application is being throttled by a Priority Optimization rule or by contention on one array, the Peer Persistence switchover command moves the active read IO to the peer array; the only write activity remaining on the original array arrives through the Remote Copy link. This also enables the migration of application data from an array built for performance to an array designed and priced for less-demanding access. Peer Persistence enables tiered performance tuning at the data center layer by providing the ability to move application data between performance and value arrays.
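The switchover function is available from the CLI as a single command. A sketch only, using the ASMLIBrc group name from this paper; confirm the required invocation point (primary or secondary array) in the Peer Persistence documentation for your HPE 3PAR OS release.

```shell
# Transparently move the primary role for this Remote Copy Group to
# the peer array; host IO shifts paths with no application downtime.
setrcopygroup switchover ASMLIBrc
```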

Lower cost of ownership

HPE 3PAR Thin Provisioning with Oracle 12c
Thin Provisioning reduces the amount of physical storage consumed: forecast storage space requirements can be configured up front, but physical capacity and its overhead are not dedicated until the space is actually written.

Oracle installations can take advantage of Thin Provisioning by pointing archive log destinations to thinly provisioned volumes. Thinly provisioned archive log destinations create an automatically expanding place for archive logs, avoiding the issue of full archive log destinations that could cause a database to hang. Thin Provisioning works well with archive log destinations on ASM disk groups.
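Pointing an archive log destination at a thinly provisioned ASM disk group is a one-line change. A sketch, assuming a hypothetical +ARCH01 disk group built on thinly provisioned volumes:

```shell
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCH01' SCOPE=BOTH SID='*';
EOF
```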


To take advantage of Thin Provisioning inside the database, create Oracle data files on thinly provisioned volumes using the AUTOEXTEND parameter. Extending a data file is a resource-intensive task for Oracle, regardless of the underlying storage; the array itself incurs negligible overhead when allocating thin space on the fly. During data file extension, there might be a noticeable impact on database performance. To avoid frequent extensions and their associated performance overhead, without leaving an unnecessary amount of space stranded, choose an AUTOEXTEND increment sufficiently large for data growth.

Oracle’s coarse ASM allocation unit is 1 MB by default. For optimal space usage, consider setting AUTOEXTEND in whole multiples of the allocation unit.

Note
Example SQL command for data file creation on thin volumes:

CREATE BIGFILE TABLESPACE "TBLSPACE01" DATAFILE '+DATA01' SIZE 100G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
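The alignment rule generalizes to any allocation unit size. The sketch below rounds a desired AUTOEXTEND increment up to a whole multiple of the AU; the 4 MB allocation unit and 102 MB increment are illustrative values only.

```shell
# Round an AUTOEXTEND NEXT size up to a whole multiple of the ASM
# allocation unit so each extension maps cleanly onto AUs.
au_mb=4          # ASM allocation unit in MB (illustrative; default is 1)
next_mb=102      # desired AUTOEXTEND increment in MB (illustrative)
aligned_mb=$(( (next_mb + au_mb - 1) / au_mb * au_mb ))
echo "NEXT ${aligned_mb}M"
```

With a 1 MB AU, as in the tablespace example above, any whole-MB increment such as NEXT 100M is already aligned.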

HPE 3PAR StoreServ Management Console

The StoreServ Management Console (SSMC) is a converged management and reporting interface for the HPE 3PAR product family. Virtual Volume provisioning and exporting can be accomplished with a few mouse clicks. The intuitive SSMC interface simplifies complex operations such as Remote Copy Group creation and Peer Persistence management. The SSMC is designed to simplify tasks and put all the important configuration information at the administrator’s fingertips.

Figure 16 shows the front dashboard page of the SSMC.

Figure 16. SSMC dashboard


Managing and monitoring Peer Persistence configurations is made simple with the SSMC. Figure 17 shows a map of the ASMLIBrc Remote Copy Group. The map shows the source array, 3par1449, at the top. It also shows how it maps to Remote Copy Groups and volumes down to the target array, 3par1450, at the remote site.

Figure 17. ASMLIBrc map

Switching the replication direction between data centers can be accomplished with just a pull-down menu selection. Figure 18 shows the overview page of the ASMLIBrc group. In the Actions pull-down menu, Switchover is selected. This simple GUI operation performs the complex task of changing replication direction between data centers and moving the entire read IO overhead to Site-B, as described in the Planned Migration section earlier.

Figure 18. ASMLIBrc switchover


The Remote Copy link switches direction after a pop-up window is acknowledged. The multipath functionality on the Linux servers recognizes the new primary paths and the operation is complete. In Figure 19, the Overview area shows the new source array is now 3par1450.

Figure 19. ASMLIBrc after switchover

Efficient array management helps lower the cost of ownership by reducing the time it takes for the IT staff to understand the technology and complete tasks. HPE 3PAR SSMC is the single interface you need to manage multiple HPE 3PAR Storage systems in standalone or Peer Persistence configurations. The contemporary, browser-based array management console is customer-inspired and easy to use. For more information about the SSMC, see the HPE 3PAR StoreServ Management Console Administrator’s Guide.

Key findings

Oracle RAC configuration
• All Oracle ASM disk groups should be created with external redundancy to take advantage of the storage system’s high availability and performance features. Only external redundancy was tested.

• Both ASMLIB and UDEV configured Linux disk devices work well with Peer Persistence and the Linux Multipath daemon.

• The volumes in a given Oracle ASM disk group must all be in the same HPE 3PAR Remote Copy Group. Corrupted data could result from mapping an ASM disk group over multiple Remote Copy Groups. Multiple ASM disk groups can be configured into a single HPE 3PAR Remote Copy Group. Configuring one ASM disk group to one similarly named Remote Copy Group is recommended for simplified mapping of the solution layers.
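Keeping one ASM disk group in one similarly named Remote Copy Group, as recommended above, can be scripted with the HPE 3PAR CLI. A sketch, assuming a DATA01 disk group made of volumes DATA01.0 and DATA01.1 replicated to the target array 3par1450; the volume and group names are hypothetical, so verify the options against the CLI reference for your HPE 3PAR OS release.

```shell
# One synchronous Remote Copy Group per ASM disk group
creatercopygroup DATA01rc 3par1450:sync

# Admit every volume of the DATA01 disk group into that one group
admitrcopyvv DATA01.0 DATA01rc 3par1450:DATA01.0
admitrcopyvv DATA01.1 DATA01rc 3par1450:DATA01.1

# Enable Peer Persistence behavior and start replication
setrcopygroup pol auto_failover,path_management DATA01rc
startrcopygroup DATA01rc
```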

HPE 3PAR Peer Persistence configuration
• Peer Persistence manual switchover is a valuable tool in managing performance. The ability to move potentially competing IO to different data centers without downtime is a new level of application performance tuning.

• In an Oracle RAC stretch cluster built on top of an HPE 3PAR Peer Persistence solution, the OCR and Vote volumes do not need to be at a third site. When OCR and Vote are replicated between two remote storage systems using HPE 3PAR Peer Persistence, they essentially follow the primary array, even in the event of a site failure.

• Install the network switch that connects the Quorum Witness server in the same rack, and on the same power, as the HPE 3PAR Storage system. Also install the switches for one of the SANs in the same rack. This will enable Peer Persistence failover in the unlikely event of a rolling site failure that results in the loss of redundant SANs before the storage system loses connectivity to the remote Quorum Witness.


Summary

HPE 3PAR Peer Persistence provides adaptable performance, data protection, and disaster tolerance at the data center layer. As an expandable, high-performance, rock-solid storage cluster, HPE 3PAR Peer Persistence is the ideal storage foundation for a critical Oracle RAC environment. Peer Persistence enables federation of arrays at metropolitan distances, expanding the solution beyond single-site vulnerability. This solution offers tested resilience and flexibility at every layer: Oracle RAC protects against server failures, while HPE 3PAR StoreServ arrays deliver storage redundancy and performance manageability.

Bill of materials

Major components used in this paper are listed here. The list is not exhaustive and is only intended to provide guidance. For example, power distribution units (PDUs) and interconnecting cables are not listed. Often these items will already be present or can be ordered as needed. Table 2 lists the hardware components and Table 3 lists the software and applications used.

Hardware bill of materials

Table 2. Hardware bill of materials

Quantity Component Description

HPE 3PAR StoreServ 20800 Storage (Primary Side)

1 HPE 3PAR StoreServ 20800 Storage 8-node array for primary Oracle Peer Persistence (SN1449)

16 HPE 3PAR StoreServ 20000 4-port 16Gb Fibre Channel Host Bus Adapter • 4 ports for RC • 4 for host connect

16 HPE 3PAR StoreServ 20000 12Gb SAS 24-Drive 2U SFF (2.5in) Drive Enclosure Disk enclosures

64 HPE 3PAR StoreServ 20000 300 GB SFF HDD 15K HDD

HPE 3PAR StoreServ 20800 Storage (Target Side)

1 HPE 3PAR StoreServ 20800 Storage 8-node array target for Oracle Peer Persistence (SN1450)

16 HPE 3PAR StoreServ 20000 4-port 16Gb Fibre Channel Host Bus Adapter • 4 ports for RC • 4 for host connect

16 HPE 3PAR StoreServ 20000 12Gb SAS 24-Drive 2U SFF (2.5in) Drive Enclosure Disk enclosures

64 HPE 3PAR StoreServ 20000 300 GB SFF HDD 15K HDD

HPE ProLiant Servers

4 HPE ProLiant DL580 Gen8 • 2 Intel Xeon E7-4850 v2 processors (15 cores/30 threads per processor) • 512 GB RAM • Embedded HPE Ethernet 1Gb 4-port 331FLR Adapter • 2 HPE SN1000E 16Gb 2P FC HBA (1 port used/HBA) • 1 HPE NC523SFP 10Gb 2-port Server Adapter

1 HPE ProLiant DL380p Gen8 • 2 processors • 128 GB RAM • Windows Server® 2012 R2 • Hyper-V • HPE 3PAR Quorum Witness • Oracle Client 12.1.0 • Load Generator

HPE Networking/Fibre Channel SAN

1 HPE ProCurve 2824 (J4903A) 1Gb network switch • Array management ports and Quorum Witness network

1 HPE ProCurve 6108 (J4902A) 1Gb Network switch • User network

2 HPE ProCurve 6600-24XG (J9265A) 10Gb network switch • Oracle RAC cache fusion network. Two for redundancy (1 used in test environment)

4 HPE SN6000B Fibre Channel switches • 16 Gb FC technology • Configurable in 24, 36, or 48 ports in a 1U package


Software bill of materials

Table 3. Software bill of materials

HPE Software

1 HPE 3PAR OS 3.2.2 MU1 + Patch 02

1 HPE StoreServ Management Console (SSMC) 2.3.0

Unlimited HPE 3PAR 20800 Replication Software Suite license

Other Software

4 Oracle RAC 12c Release 1 Enterprise Edition

4 Red Hat® Enterprise Linux 6.6

Terminology Table 4 provides a list of common terms used in this paper.

Table 4. Glossary of common terms

Term Description

ASM Oracle Automatic Storage Management is a volume manager and filesystem that manages disk devices for combined storage space and RAID 1 or RAID 0 availability. Oracle database and log files reside in ASM disk groups similar to files in an LVM volume.

RAC Oracle Real Application Cluster is an active/active server clustering solution designed to provide a highly available framework for accessing and hosting Oracle databases.

Redo log Oracle redo logs are crucial transaction logs. Redo logs contain records of all Oracle user data changes.

Remote Copy HPE 3PAR Remote Copy is a unique replication technology that allows you to protect and share data from any application—simply, efficiently, and affordably. Remote Copy dramatically reduces the cost of remote data replication and DR.

TPVV; CPVV; FPVV Thinly Provisioned VV (TPVV) is a volume that allocates system storage as needed and has associated snapshot space, while a Common Provisioned VV (CPVV) is fully provisioned and has associated snapshot space. Fully Provisioned VV (FPVV) is fully provisioned but without associated snapshot space.

Fibre Channel Fabric; FC SAN One or more FC switches connected together creates an FC Fabric.

Two individual switches not connected are two separate FC Fabrics.

An FC SAN is composed of one or more FC Fabrics.

VVs Virtual Volumes. A logical disk created from a CPG that is exported to one or more hosts for user access. Exported VVs are also called VLUNs.

VVset A set of VVs. A VV can reside in multiple VVsets. A VVset is a logical construct that can have rules applied to it for such operations as exporting to hosts (or host sets) or Priority Optimization (a QoS feature).



© Copyright 2016, 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Intel Xeon is a trademark of Intel Corporation in the U.S. and other countries. Windows Server is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other third-party marks are property of their respective owners.

4AA6-3907ENW, November 2018, Rev. 1

For more information

HPE 3PAR StoreServ Storage

HPE Storage Solutions for Oracle DB

HPE ProLiant DL580 Gen8 Server

HPE 3PAR Peer Persistence Software

HPE StoreFabric SN6000B 16Gb 48/24 Bundled Fibre Channel Switch

HPE SN6000B Fibre Channel Switch QuickSpecs

HPE 3PAR Thin Technologies White Paper

HPE 3PAR StoreServ Management Console

HPE SAN Design Reference Guide

Disaster-tolerant solutions with HPE 3PAR Remote Copy

Oracle resources

Oracle Database 12c

Oracle RAC on Stretched Clusters

Learn more at hpe.com/storage

