HPE Reference Architecture for Oracle 12c license savings with HPE 3PAR StoreServ All Flash and ProLiant DL380 Gen9

Reference Architecture


Contents

Executive summary
Introduction
Solution overview
Solution components
Hardware
HPE ProLiant DL380 Gen9 server
Software
Oracle Database
Best practices and configuration guidance for the Oracle on HPE 3PAR solution
Workload description
SLOB workload results
HammerDB workload results
Capacity and sizing
Analysis and recommendations
Implementing a proof-of-concept
HPE Database Performance Profiler (DPP)
Summary
Appendix A: Bill of materials
Appendix B: Multipath configuration
Appendix C: udev device permission rules
Appendix D: udev rules
Resources and additional links


Executive summary

Information Technology departments are under pressure to add value to the business, improve existing infrastructure, enable growth, and reduce cost without compromising performance. The demands on database implementations continue to escalate: faster transaction processing, scalable capacity, and increased flexibility are required to meet the needs of today’s business.

This Reference Architecture describes the performance implications of deploying Oracle 12c on an HPE 3PAR StoreServ All Flash array with HPE ProLiant DL380 Gen9 servers, and shows how such a deployment can deliver Oracle license savings of 20%-50%1 compared with the same configuration on spinning media. In a head-to-head comparison, the HPE 3PAR StoreServ All Flash array provided up to 29% better throughput than the all-spinning-media solution. These results help customers plan for the appropriate level of performance and continue to meet the SLAs their businesses require.

Target audience: This Hewlett Packard Enterprise Reference Architecture (RA) is designed for IT professionals who use, program, manage, or administer large databases that require high performance. Specifically, this information is intended for those who evaluate, recommend, or design new and existing high-performance IT architectures. Additionally, CIOs may be interested in this document as an aid in determining when to implement an HPE 3PAR StoreServ All Flash solution for their Oracle environment, and in the performance characteristics associated with that implementation.

This RA describes testing completed in December 2016.

Document purpose: The purpose of this document is to describe a Reference Architecture that demonstrates license cost savings with HPE 3PAR StoreServ All Flash storage.

Disclaimer: Products sold prior to the separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. on November 1, 2015 may have a product name and model number that differ from current models.

Introduction

In today’s marketplace, Oracle database license costs comprise the largest portion of the expenditure when deploying a new application, or even when re-hosting an existing one.

Because Oracle licenses its 12c database on a per-core basis, organizations need to deploy the fewest cores required to satisfy the business need, and to fully utilize the cores they do license.

This Reference Architecture’s objective is to provide a series of proof points showing that an HPE 3PAR StoreServ All Flash deployment allows higher utilization of CPU resources and/or a reduction in the number of cores needed to deliver a commensurate level of performance and throughput, compared to the same 3PAR StoreServ with 15K RPM spinning media. Using CPU cores more effectively is key because it allows the core count to be reduced while the same level of throughput is maintained, and a reduction in cores directly reduces the license costs associated with the Oracle database.
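To make the saving concrete, here is a hedged sketch of the per-core arithmetic. The per-processor list price and the 0.5 core factor for x86 used below are illustrative assumptions for this sketch, not quoted Oracle or HPE pricing.

```shell
# Illustrative Oracle per-core license arithmetic; the price and core
# factor are assumed figures, not quoted pricing.
license_cost() {    # license_cost <physical cores> <core factor> <price per processor license>
  awk -v c="$1" -v f="$2" -v p="$3" 'BEGIN { printf "%d\n", c * f * p }'
}

license_cost 40 0.5 47500   # two 20-core servers, all cores licensed -> 950000
license_cost 32 0.5 47500   # the same servers after a 20% core reduction -> 760000
```

At these assumed figures, a 20% core reduction saves 190,000 in list-price licenses alone, before annual support fees are considered.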

This RA compares the performance and throughput of two HPE ProLiant DL380 Gen9 servers, both utilizing the same HPE 3PAR StoreServ 8400 array. In one configuration, the 3PAR LUNs exported to the servers were created using 80 x 480GB multi-level cell (MLC) SSDs; in the other, they were created using 160 x 300GB 15K RPM disk drives.

The comparison scenarios tested are as follows:

1. Maximum I/O throughput – For this set of tests, we used the Silly Little Oracle Benchmark (SLOB) test tool. SLOB generates an Oracle System Global Area (SGA)-intensive workload that eliminates application contention and stresses the I/O subsystem.

2. Storage comparison testing – This testing simulated a real-world Oracle DB deployment and compared the performance of DB servers using spinning media with that of the same servers using MLC SSDs.

3. Core count reduction testing – For this testing, server cores were disabled in the DB servers to determine the number of cores needed to sustain an equivalent level of throughput performance between the two configurations.
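For orientation, a SLOB run of the kind described in test 1 looks roughly like the following. This is an illustrative sketch only: the tablespace name, schema count, and slob.conf values shown are assumptions, not the parameters HPE used, and should be checked against the SLOB kit's own README.

```shell
# Illustrative only -- not the exact parameters used in this RA.
# slob.conf (excerpt): a mixed read/write profile that stresses storage I/O
#   UPDATE_PCT=25        # percent of statements that modify data
#   RUN_TIME=300         # seconds per run
#   SCALE=16G            # active data set per schema (assumed value)

./setup.sh IOPS 128      # load 128 SLOB schemas into a tablespace named IOPS
./runit.sh 128           # drive the workload from all 128 schemas
```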

1 HPE achieved a 20% core reduction while testing this solution. The 50% reduction in core count was achieved by a customer who deployed a 3PAR All Flash solution. For more details on the customer’s experience, please contact your HPE representative.
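The core-count reduction described in test 3 can be performed with the standard Linux CPU hotplug interface, among other mechanisms (the RA does not state which was used; disabling cores in firmware is another option). A minimal sketch, where the helper name is ours and writing to sysfs requires root:

```shell
# Sketch: select which logical CPUs to take offline via the Linux CPU
# hotplug interface. The helper name is invented for illustration.
cpus_to_offline() {   # print the highest-numbered $2 CPU ids of $1 total
  local total=$1 n=$2
  seq "$((total - n))" "$((total - 1))"
}

cpus_to_offline 20 4  # -> 16 17 18 19, one per line

# Applying it on a live host (root required):
#   for cpu in $(cpus_to_offline "$(getconf _NPROCESSORS_CONF)" 4); do
#     echo 0 > /sys/devices/system/cpu/cpu${cpu}/online
#   done
```

Note that how disabled cores are counted for licensing purposes is a contractual question; verify with Oracle before relying on soft-offlining for license reduction.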


Solution overview

For this Reference Architecture, HPE chose a four-node HPE 3PAR StoreServ 8400. This array comes with 64 GiB of cache per node pair, for a total of 128 GiB of cache. It can also mix SSD media and spinning media in the same array, which was appealing for this RA because we wanted to demonstrate the impact on performance of moving from a spinning-media configuration to an All Flash configuration while changing nothing else.

During testing, we found that configuring the redo logs and undo tablespace as RAID-5 LUNs provided better performance than configuring them as RAID-10 LUNs, due to the serial nature of that type of I/O. We also found that tablespaces and indexes were best served by multiple RAID-10 LUNs, due to the randomness of their I/O patterns.
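As a sketch, that LUN layout could be provisioned with the HPE 3PAR CLI along the following lines. The CPG and volume names are invented for illustration, and the commands and options are recalled from memory; verify them against the HPE 3PAR CLI reference for your 3PAR OS release before use.

```shell
# Hypothetical names; verify options against the HPE 3PAR CLI reference.
createcpg -t r1 SSD_r10_cpg               # RAID-1(0) CPG for tablespaces and indexes
createcpg -t r5 SSD_r5_cpg                # RAID-5 CPG for redo and undo
createvv -cnt 8 SSD_r10_cpg data 512g     # 8 x 512GB LUNs for data and indexes
createvv -cnt 8 SSD_r5_cpg  redo 256g     # 8 x 256GB LUNs for redo and undo
createvlun data.0 10 oradb1               # export one LUN to a host, e.g., oradb1
```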

Two HPE ProLiant DL380 Gen9 servers were used to run Oracle simultaneously. Each was connected to the same HPE 3PAR StoreServ 8400, and LUNs were created on the StoreServ for each of the two servers. No LUNs or data were shared between servers.

The HPE 3PAR StoreServ 8400 was connected to the HPE ProLiant DL380 Gen9 servers with two SN6000B 48/24 16Gb Fibre Channel switches. Dual switches were used for redundancy.


Figure 1 provides a detailed, logical layout of how the servers were connected via the Fibre Channel network to the storage, along with how the storage was allocated and provisioned.

HPE 3PAR StoreServ 8400
• 4-node
• 128 GiB of cache
• 12 expansion shelves
• 80 x 480GB MLC SSDs provisioned to each server as:
• 8 x 512GB RAID-10 LUNs for tablespaces and indexes
• 8 x 256GB RAID-5 LUNs for redo and the undo tablespaces
• 160 x 300GB 15K RPM disk drives provisioned to each server as:
• 8 x 512GB RAID-10 LUNs for tablespaces and indexes
• 8 x 256GB RAID-5 LUNs for redo and the undo tablespaces

HPE ProLiant DL380 Gen9 (each of two)
• 2 x Intel Xeon E5-2689 v4 10-core 3.1 GHz processors
• 512GB of memory
• 2 x HPE 480GB 12G SAS RI-3 SSDs for OS
• 2 x HPE 82E 8Gb dual-port Fibre Channel HBAs
• 1 x dual-port 10GbE network card (only one port used during testing)

2 x HPE SN6000B SAN switches

Figure 1. Physical layout of the HPE ProLiant DL380 Gen9 servers and the HPE 3PAR StoreServ 8400 environment


Solution components

The solution utilized a single 4-node HPE 3PAR StoreServ 8400. Depending upon which test was being executed, HPE configured either the spinning media or the SSD media; the purpose was to demonstrate the difference in performance offered by one media type versus the other, with no difference in any other parameter. The same number of expansion shelves and the same amount of 3PAR cache were used in each configuration. However, 160 spinning disk drives were used for the spinning-media tests, while only 80 SSDs were used to configure the LUNs for the SSD portion of the test.

HPE used two HPE ProLiant DL380 Gen9 servers running the Oracle database. These were used in tandem to drive a heavier I/O load on the HPE 3PAR StoreServ 8400 than could be driven using only one server. Each server was connected to the HPE 3PAR StoreServ 8400 via two 8Gb Fibre Channel HBAs. The same system and Oracle configuration parameters were used in both cases, so from the servers’ perspective there was no difference between the two media types.

The HPE 3PAR StoreServ 8400 had the following configuration:

• 4 nodes
• 128 GiB of cache
• 12 expansion shelves
• 80 x 480GB MLC SSDs
• 160 x 300GB 15K RPM disk drives

Each of the HPE ProLiant DL380 Gen9 servers had the following configuration:

• 2 x Intel® Xeon® E5-2689 v4 10-core processors with a clock speed of 3.1GHz
• 512GB of memory
• 2 x HPE 480GB 12G SAS RI-3 SSDs for OS and ORACLE_HOME
• 2 x HPE 82E 8Gb dual-port Fibre Channel HBAs
• 1 x dual-port 10GbE network card (only one port was used during testing)

Connecting the servers to the storage were two HPE SN6000B 48/24 SAN switches.

Software:

• Oracle 12c Enterprise Edition, version 12.1.0.2.0
• Red Hat® Enterprise Linux® (RHEL), version 7.3


Hardware

HPE 3PAR StoreServ All Flash array

Figure 2. HPE 3PAR StoreServ 4-node All Flash

The one thing we know about the future is that it is unpredictable. So how do you prepare for data growth and new business initiatives in such an environment? You need storage that gives you the power to master the unpredictable. Storage that is scalable and can grow with you. Storage that is adaptable, so it supports a variety of different applications and workloads without sacrificing performance. Storage that supports non-disruptive data mobility, so the right data is in the right place at the right time, and painful data migrations are a thing of the past. And you need it all for a price you can afford.

HPE 3PAR StoreServ 8000 storage delivers the performance advantages of a purpose-built, flash optimized architecture without compromising resiliency, data services, or data mobility. A flash optimized architecture reduces the performance bottlenecks that can choke hybrid and general-purpose disk arrays. However, unlike other purpose-built flash arrays, HPE 3PAR StoreServ 8000 does not require you to introduce an entirely new architecture into your environment to achieve flash optimized performance. As a result, you don’t have to sacrifice rich, Tier-1 data services, quad-node resiliency, or flexibility to get midrange affordability. A choice of all flash, converged flash, and tiered flash models gives you a range of options that support true convergence of block and file protocols, all flash array performance and the use of spinning media to further optimize costs.

The HPE 3PAR StoreServ Architecture was designed to provide cost-effective single-system scalability through a cache-coherent, multi-node clustered implementation. This architecture begins with a multifunction node design and, like a modular array, requires just two initial controller nodes for redundancy. However, unlike traditional modular arrays, enhanced direct interconnects are provided between the controllers to facilitate Mesh-Active processing. Unlike legacy Active/Active controller architectures—where each LUN (or volume) is active on only a single controller—this Mesh-Active design allows each LUN to be active on every controller in the system, thus forming a mesh. This design delivers robust, load-balanced performance and greater headroom for cost-effective scalability, overcoming the trade-offs typically associated with modular and monolithic storage arrays.

With rich capabilities, the lowest possible cost for all flash performance, and non-disruptive scalability to four nodes, HPE 3PAR StoreServ 8000 storage eliminates tradeoffs. You no longer need to choose between affordability and Tier-1 resiliency or flash optimized performance and Tier-1 data services. That’s because HPE 3PAR StoreServ 8000 storage shares the same flash optimized architecture and software stack with the entire family of HPE 3PAR StoreServ arrays, so you’ll not only get an industry-leading storage platform, but a storage platform that you can grow into, not out of.

When combined with high-density SSDs, HPE 3PAR compaction and compression technologies lower the cost of flash storage to below that of traditional 10K spinning media.


In cases where there is a large amount of duplicate data, HPE 3PAR Thin Deduplication software also improves write throughput and performance. Other storage architectures that support deduplication are not able to offer these benefits at the same capacity and scale at the same performance level.2

HPE 3PAR StoreServ compression technology is particularly useful for Oracle databases, because Oracle databases pre-allocate the storage space they consume with a repeating pattern. Thin Compression removes recurring patterns at the bit level by replacing them with a pointer to the dictionary, which references every page that has been compressed. In an Oracle proof of concept, HPE was able to achieve a 2:1 compression ratio. Compression is part of Adaptive Data Reduction, a collection of technologies that come standard with 3PAR StoreServ and are designed to reduce the data footprint. Adaptive Data Reduction includes Zero Detect, deduplication, compression, and Data Packing. Used alone or in combination, these technologies maximize flash capacity, reduce total cost, and improve flash media endurance.

Unique technologies extend your flash investments

HPE innovations around flash not only help bring down the cost of flash media, but HPE 3PAR Gen5 Thin Express ASICs within each node also provide an efficient, silicon-based zero-detection mechanism that “thins” your storage and extends your flash media investments. These ASICs power inline deduplication for data compaction that removes allocated but unused space without impacting your production workloads, which has the added benefit of extending the life of flash-based media by avoiding unnecessary writes. The unique adaptive read and write feature also extends the life of flash drives by automatically matching host I/O size for reads and writes.

In addition, while other architectures generally reserve entire drives as spares, the HPE 3PAR architecture reserves spare chunklets within each drive. Sparing policies are adjusted automatically and on the fly to avoid using flash for sparing, thus lengthening media lifespan and helping to drive down performance costs. A five-year warranty on all HPE 3PAR StoreServ flash drives protects your storage architecture investment.

Databases

Database performance and availability are so critical that many organizations deploy generous capacity and hire expensive management resources to maintain the required service levels. HPE 3PAR StoreServ storage removes these inefficiencies. For example, with HPE 3PAR Thin Persistence software and the Oracle Automatic Storage Management (ASM) Storage Reclamation Utility (ASRU), your Oracle databases stay thin by automatically reclaiming stranded database capacity. HPE also offers cost-effective Oracle-aware snapshot technologies, which also benefit from HPE 3PAR compression cost savings.

Quality of Service (QoS)

Quality of service is an essential component for delivering modern, highly scalable multi-tenant storage architectures. The use of QoS moves advanced storage systems away from the legacy approach of delivering I/O requests with “best effort” in mind and tackles the problem of “noisy neighbors” by delivering predictable tiered service levels and managing “burst I/O” regardless of other users in a shared system. Mature QoS solutions control service metrics such as throughput, bandwidth, and latency without requiring the system administrator to manually balance physical resources. These capabilities eliminate the last barrier to consolidation by delivering assured QoS levels without having to physically partition resources or maintain discrete storage silos.

HPE 3PAR Priority Optimization software enables service levels for applications and workloads as business requirements dictate, enabling administrators to provision storage performance in a manner similar to provisioning storage capacity. This allows the creation of differing service levels to protect mission-critical applications in enterprise environments by assigning a minimum goal for I/O per second and bandwidth, and by assigning a latency goal so that performance for a specific tenant or application is assured. It is also possible to assign maximum performance limits on workloads with lower service-level requirements to make sure that high-priority applications receive the resources they need to meet service levels.
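As a sketch of what this looks like operationally, Priority Optimization rules are set from the 3PAR CLI. The vvset names below are invented, and the command options are recalled from memory; verify both against the HPE 3PAR CLI reference before use.

```shell
# Hypothetical vvset names; verify setqos options against the HPE 3PAR
# CLI reference for your 3PAR OS release.
setqos -pri high -io 50000 vvset:oracle_prod   # protect the production LUN set
setqos -pri low  -io 5000  vvset:oracle_dev    # cap a lower-priority workload
```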

HPE 3PAR Thin Provisioning

Since its introduction in 2002, HPE 3PAR Thin Provisioning software has been widely considered the gold standard in thin provisioning. This thin provisioning solution leverages the system’s dedicate-on-write capabilities to make storage more efficient and more compact, allowing customers to purchase only the storage capacity they actually need and only as they actually need it. 3PAR Thin Provisioning is a complementary technology to the Adaptive Data Reduction technologies mentioned above.

HPE 3PAR Thin Persistence

HPE 3PAR Thin Persistence software is an optional feature that keeps thin provisioned virtual volumes (TPVVs) and read/write snapshots of TPVVs small by detecting pages of zeros during data transfers and not allocating space for the zeros. This feature works in real time, analyzing the data before it is written to the source TPVV or read/write snapshot of the TPVV. Freed blocks of 16 KB of contiguous space are returned to the source volume, and freed blocks of 128 MB of contiguous space are returned to the CPG for use by other volumes.

2 Subject to qualification and compliance with the HPE 3PAR Get Thinner Guarantee Program Terms and Conditions, which will be provided by your HPE Sales or Channel Partner representative.

HPE 3PAR Peer Persistence

HPE 3PAR Peer Persistence can be deployed to provide customers with a highly available stretched cluster, a cluster that spans two data centers. A stretched Oracle RAC cluster with HPE 3PAR Peer Persistence protects services from site disasters and expands storage load balancing to the multi-site data center level. The stretched cluster can span metropolitan distances (up to 5 ms roundtrip latency for the Fibre Channel [FC] replication link, generally about a 500 km roundtrip), allowing administrators to move storage workloads seamlessly between sites, adapting to changing demand while continuing to meet service-level requirements.

HPE 3PAR Peer Persistence combines HPE 3PAR storage systems for multi-site level flexibility and availability. HPE Remote Copy synchronous replication between arrays offers storage disaster tolerance. HPE Remote Copy is a component of HPE 3PAR Peer Persistence. Peer Persistence adds the ability to redirect host I/O from the primary storage system to the secondary storage system transparently.

Figure 3 is a pictorial representation of fine-grained virtualization and system-wide striping.

Figure 3. HPE 3PAR StoreServ 8400 data layers

HPE ProLiant DL380 Gen9 server

Figure 4. HPE ProLiant DL380 Gen9 server

As the world’s best-selling server3, the HPE ProLiant DL380 server is designed to adapt to the needs of any environment, from large enterprise to remote office/branch office. With Gen9, the DL380 just got better, offering enhanced reliability, serviceability, and continuous availability, backed by a comprehensive warranty.

With the HPE ProLiant DL380 Gen9 server, you can deploy a single platform to handle a wide variety of enterprise workloads:

• Storage-centric applications, such as Oracle Database—Remove bottlenecks and improve performance

• Data warehousing/analytics—Find the information you need, when you need it, to enable better business decisions

• Big Data—Manage exponential growth in your data volumes—structured, unstructured, and semi-structured

• Virtualization—Consolidate your server footprint by running multiple virtual machines on a single DL380

• Customer relationship management (CRM)—Gain a 360-degree view of your data to improve customer satisfaction and loyalty

• Enterprise resource planning (ERP)—Trust the DL380 Gen9 to help you run your business in near real time

• Virtual desktop infrastructure (VDI)—Deploy remote desktop services to provide your workers with the flexibility they need to work anywhere, at any time, using almost any device

• SAP®—Streamline your business processes through consistency and real-time transparency into your end-to-end corporate data

The HPE ProLiant DL380 Gen9 delivers industry-leading performance and energy efficiency, enabling faster business results and quicker returns on your investment. The HPE ProLiant DL380 Gen9 posts up to a 21% performance gain using Intel E5-2600 v4 processors4 versus the previous-generation E5-2600 v3 processors, and up to a 23% performance gain with 2400MHz DDR4 memory5. Power-saving features, such as ENERGY STAR® rated systems and 96 percent efficient Titanium HPE Flexible Slot power supplies, help to drive down energy consumption and costs.

Software

Oracle Database

Oracle Database 12c is available in a choice of editions that can scale from small to large single servers and clusters of servers. The available editions are:

• Oracle Database 12c Standard Edition 2: Delivers unprecedented ease-of-use, power and price/performance for database applications on servers that have a maximum capacity of two sockets6.

• Oracle Database 12c Enterprise Edition: Available for single or clustered servers with no socket limitation. It provides efficient, reliable and secure data management for mission-critical transactional applications, query-intensive big data warehouses and mixed workloads6.

For this paper, Oracle Database 12c Enterprise Edition was installed. In addition to all of the features available with Oracle Database 12c Standard Edition 2, Oracle Database 12c Enterprise Edition has the following options:

• Oracle Active Data Guard

• Oracle Advanced Analytics

• Oracle Advanced Compression

• Oracle Advanced Security

• Oracle Database In-Memory

• Oracle Database Vault

3 CQ1’16 IDC Server Tracker.
4 Intel performance testing, intel.com/content/www/us/en/benchmarks/intel-data-center-performance.html, comparing measurements on platforms with two E5-2600 v3 versus two E5-2600 v4 processors. November 2015.
5 The 23% better memory performance is based on similar capacity DIMMs running on HPE servers compared to a non-HPE server with DDR4 memory. HPE internal labs estimate, March 2016.
6 Source: Oracle Database 12c Product Family white paper. For more information, refer to: https://docs.oracle.com/cd/B28359_01/license.111/b28287/editions.htm

• Oracle TimesTen Application-Tier Database Cache

• Oracle Label Security

• Oracle Multitenant

• Oracle On-line Analytical Processing

• Oracle Partitioning

• Oracle Real Application Clusters

• Oracle RAC One Node

• Oracle Real Application Testing

• Oracle Spatial and Graph

Best practices and configuration guidance for the Oracle on HPE 3PAR solution

HPE ProLiant BIOS settings

• Hyper-Threading – Enabled

• Intel Turbo Boost – Enabled

• HPE Power Profile – Maximum Performance

RHEL configuration

• Create udev rules to set the following device options for the LUNs and required settings for the Oracle volumes (per values in Appendices C and D).

– Set the sysfs “rotational” value for SSD disks to 0.

– Set the sysfs “rq_affinity” value for each device to 2.

Note: Request completions all occurring on core 0 caused a bottleneck; setting the rq_affinity value to 2 resolved this problem.

– Set “I/O scheduler” to noop.

– Set permissions and ownership for Oracle volumes.

• Volume size – Virtual volumes should all be the same size and type for each Oracle ASM disk group.

• Use the recommended multipath parameters (see Appendix B) to maintain high availability while also maximizing performance and minimizing latencies.
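As an illustrative sketch, the udev settings above might be captured in a rules file such as the following. The `3PARdata` vendor match, the `ora_*` device-mapper names, and the `oracle:dba` ownership are assumptions, not values from this paper; adapt them to your environment and validate against the actual rules in Appendices C and D.

```
# /etc/udev/rules.d/99-oracle-3par.rules (illustrative sketch)

# Mark 3PAR LUNs as non-rotational, select the noop I/O scheduler, and set
# rq_affinity=2 so request completions are not all pinned to core 0
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*", ATTRS{vendor}=="3PARdata", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2"

# Set ownership and permissions on the multipath devices used for Oracle ASM
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="ora_*", OWNER="oracle", GROUP="dba", MODE="0660"
```

After editing rules, reload them with `udevadm control --reload` and `udevadm trigger` so the settings apply without a reboot.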

HPE 3PAR StoreServ space allocation

Both servers used the storage in the same manner. When spinning media was used, all LUNs were created on the spinning media; the same is true when SSD storage was used. Each server had the following LUN definitions:

• 8 x 512GB RAID-10 LUNs for the database tablespaces, indexes, and undo tablespace. This was labeled within Oracle ASM as DATA.

• 8 x 256GB RAID-5 LUNs for the redo log and undo tablespace. This was labeled within Oracle ASM as REDO.

Figure 5 is a graphical depiction of how the storage was laid out and how it was connected to each node.

Figure 5. Depiction of how storage was laid out and connected to each server from a single HPE 3PAR StoreServ 8400

Oracle configuration best practices

Since the primary goal of this testing was to show the reader the expected performance implications of deploying All Flash versus all spinning media, we kept all Oracle configuration parameters exactly the same between test scenarios.

The Oracle database configuration highlights for each node were as follows:

• Set the buffer cache memory size large enough to avoid most of the physical reads. For this testing, the buffer cache size was set to 64GB.

• Create two large redo log files of 300GB to minimize log file switching and reduce log file waits. Note: Customer implementations should size their log files so that log file switches occur at a frequency that meets their business needs.

• Create an undo tablespace of 200GB.

• Set the number of processes to a level that will allow all intended users to connect and process data. During the testing we used 3000 for this parameter.

• Set the number of open cursors to a level that will not constrict Oracle processes. This was set to 3000 during testing.

• During SLOB testing, it was found that there were a number of log buffer space waits. In order to alleviate this, the parameter LOG_BUFFER was set to 1G.
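Collected together, the settings above would look roughly like the following init-parameter fragment. This is a sketch based only on the values quoted in this section; whether the buffer cache is sized directly via DB_CACHE_SIZE or indirectly via SGA_TARGET depends on the memory management mode in use.

```
# Illustrative Oracle init parameter fragment (values from this testing)
db_cache_size = 64G     # buffer cache large enough to absorb most physical reads
processes     = 3000    # allow all intended users to connect and process data
open_cursors  = 3000    # avoid constraining Oracle processes
log_buffer    = 1G      # alleviated the log buffer space waits seen under SLOB
```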

Workload description

HPE tested Oracle using both SLOB and HammerDB as workload drivers. HammerDB provides a real-world type scenario that consumes CPU for the application logic as well as I/O. SLOB, on the other hand, removes all of the application logic and instead concentrates the tests on I/O via the Oracle SGA.

When testing with SLOB, all tests were run from the local servers. When testing with HammerDB, all tests were driven from a load driver server.

When a SLOB test is configured, we specify how many schemas are to be loaded, along with the size of each schema. Each individual schema is a set of objects that are created by and associated with a specific user. From a SLOB perspective, each schema has a series of tables and indexes that are accessed by a specific user.

For these tests, we loaded 60 schemas and each schema was 15GB in size. The total database size was 1.5TB. We then ran the SLOB tests against 50 of the schemas.

When we ran a SLOB test, we specified the number of schemas to run against and the number of threads per schema. The schema counts chosen were 1, 5, 10, 15, 20, 25, 30, 35, 40, 45 and 50. The thread counts were 1, 8, and 16. Each thread was a separate work stream into a schema. For example, in the case of 1 schema and 16 threads, there were 16 Oracle processes manipulating data within a single schema. Likewise, when we had 50 schemas and 16 threads, there were 16 Oracle processes per schema across 50 schemas, for a total of 800 Oracle processes performing reads, writes and updates.
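The total number of concurrent Oracle processes in a SLOB run is simply the product described above, as this small sketch shows:

```python
def slob_processes(schemas: int, threads_per_schema: int) -> int:
    """Each thread is a separate Oracle process working against one schema."""
    return schemas * threads_per_schema

# The two extremes used in these tests:
print(slob_processes(1, 16))   # 16 processes against a single schema
print(slob_processes(50, 16))  # 800 processes across 50 schemas
```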

HammerDB is an open-source tool. The HammerDB tool implements an OLTP-type workload (60 percent read and 40 percent write) with small I/O sizes of a random nature. The transaction results were normalized and used to compare test configurations. Other metrics measured during the workload came from the operating system and/or standard Oracle Automatic Workload Repository (AWR) statistics reports.

The OLTP test, performed on a database 1.8TB in size, was both highly CPU-intensive and moderately I/O-intensive. The environment was tuned for maximum user transactions. After the database was tuned, the transactions were recorded at different connection levels. Because customer workloads vary in characteristics, the measurement was made with a focus on maximum transactions.

We used several different Oracle connection counts for our tests. The results of various user count tests can be seen in the following graphs.

When testing with HammerDB, we tested both for maximum performance and CPU consumed, comparing MLC SSDs (All Flash) directly head-to-head against 15K spinning media. HPE also tested by reducing the cores on the system under test using All Flash until we were at a performance level that was equal to or slightly above that delivered when using the 15K spinning media on the same system utilizing all cores available.

SLOB workload results

The following graph, figure 6, shows the SLOB results of the 1-50 schema tests using 1 thread per schema. In this graph, the spinning media result for each schema count is set to 100%. The All Flash results are relative to the spinning media result for each schema count. The results are based on transactions per second as reported by Oracle AWR reports.

The 1 and 5 schema count All Flash results were very good. After 5 schemas, we exceeded the ability of the DB block buffers to move data as quickly as was required.

With the spinning media, the DB block buffers were never able to keep up with the demand. Because of this, all transactions were performed at hardware I/O speeds.

Figure 6. Spinning media versus SSD results, based on each schema count, 1 thread per schema

[Chart data for Figure 6: “Spinning Media vs All Flash results, 1 thread per schema, relative to each schema count”. Series: Spinning Results, SSD Only, Spinning Avg CPU Util, SSD Avg CPU Util. All Flash throughput relative to the spinning media result (100%) at each schema count:

Schemas:    1      5      10     15    20    25    30    35    40    45    50
All Flash:  3308%  7131%  1854%  946%  747%  589%  526%  479%  452%  413%  404%

Callout: the SSD result is more than 7000% better than the corresponding spinning media result.]

The next graph, figure 7, shows the results, still utilizing SLOB as the load driver, for the same schema counts, but this time 8 active threads per schema were used. The All Flash results are relative to the spinning media results for the same schema count.

Again, after the 5 schema result for SSD, the DB block buffers were unable to keep up with the demand.

On the other hand, the DB block buffers were better able to keep up with the demand with the spinning media at the 1 schema data point, which allowed some of the transactions to be satisfied using that buffer.

Figure 7. Spinning media versus SSD results, based on each schema count, 8 threads per schema

[Chart data for Figure 7: “Spinning Media vs All Flash results, 8 threads per schema”. Series: Spinning Results, SSD Only, Spinning Avg CPU Util, SSD Avg CPU Util. All Flash throughput relative to the spinning media result (100%) at each schema count:

Schemas:    1     5      10    15    20    25    30    35    40    45    50
All Flash:  341%  4489%  549%  332%  287%  267%  250%  242%  241%  236%  240%

Callout: we see SSD providing a 4300% throughput improvement over the corresponding spinning media result.]

The following graph, figure 8, shows the 16 thread per schema results for spinning media and All Flash. The All Flash results are relative to the spinning media results for the same schema count.

Again we saw the phenomenon of running out of DB block buffer cache after the 5 schema test with SSD and after the 1 schema test with spinning media.

Figure 8. Spinning media versus SSD results, based on each schema count, 16 threads per schema

[Chart data for Figure 8: “Spinning Media vs All Flash results, 16 threads per schema”. Series: Spinning Results, SSD Only, Spinning Avg CPU Util, SSD Avg CPU Util. All Flash throughput relative to the spinning media result (100%) at each schema count:

Schemas:    1     5     10    15    20    25    30    35    40    45    50
All Flash:  168%  611%  549%  307%  266%  247%  234%  232%  226%  233%  233%

Callout: in this case, SSD beats spinning media by over 500%.]

HammerDB workload results

The following graph, figure 9, depicts the performance that was achieved with each user count. The results are anchored to the 25 connection count for spinning media for the single-instance Oracle database, which has been set to 100%. All other percentages are relative to that result.

Figure 9. Spinning media versus SSD results based on 25 connection count spinning result

[Chart data for Figure 9: “Spinning & SSD Results based on spinning 25 connections”. X-axis: number of Oracle connections (25 to 400); Y-axes: throughput percentage and CPU utilization. Series: Spinning, SSD, Spinning CPU Utilization, SSD CPU Utilization. Callout: by virtue of SSD’s lower latency, for all connection counts, SSD allows fuller use of the available CPU resources.]

The same data is contained in the following graph, figure 10. The difference is that each spinning media result for a given connection count is set to 100%. The result using the All Flash solution is in direct relationship to the spinning media result for the same connection count.

The largest increase in performance occurred at 25 connections, with smaller improvements after that. This is because, at lower connection counts, a larger amount of processor resources was available to take advantage of the shorter latency provided by the HPE 3PAR StoreServ All Flash solution.

The bottleneck that kept the All Flash solution from performing that much better than the spinning media solution was a lack of CPU resources.

Figure 10. Results from tests relative to each connection count

[Chart data for Figure 10: “Spinning & SSD Results based on Each Connection Count”. X-axis: number of Oracle connections (25 to 400); Y-axes: throughput percentage and CPU utilization. Series: Spinning, SSD, Spinning CPU Utilization, SSD CPU Utilization. Callout: at every connection count, SSD provides a performance improvement versus spinning media.]

Capacity and sizing

Next, let’s take a look at what happens to the performance numbers when we start reducing cores in the configuration. Keep in mind that, in this scenario, nothing has been altered other than removing cores. In reality, if you were to go with a lower core count, you could select processors with a higher clock speed, which would further improve throughput. However, HPE wanted to keep the configuration consistent and demonstrate the value of an HPE 3PAR StoreServ All Flash array alone for an Oracle implementation.

The next graph, figure 11, shows the results when running HPE 3PAR All Flash with 32 total cores versus running HPE 3PAR with spinning media with 40 total cores. Since Oracle licenses Oracle 12c Enterprise Edition by the core, a reduction in the number of cores required to deliver Oracle services to the business equates to a reduction in the license and support fees charged by Oracle. The core count was reduced by turning off 2 cores per processor, a total of 4 cores per server; each server connected to the HPE 3PAR All Flash array therefore had 16 cores, for a total of 32 cores. The results in this graph are relative to each spinning media connection count, which has been set to 100%.

Figure 11. HammerDB results for spinning media using 40 server cores versus All Flash using 32 server cores relative to each connection count

[Chart data for Figure 11: “Spinning Results 40 total cores vs. SSD Results 32 total cores”. X-axis: 25 and 50 Oracle connections; Y-axes: throughput percentage and CPU utilization. Series: Spinning, SSD, Spinning CPU Utilization, SSD CPU Utilization. Callout: even with fewer cores, the SSD-based solution provides a performance improvement over the spinning media-based solution.]

Even with a reduced core count, the HPE 3PAR StoreServ All Flash provides better transactional latency. Improving transactional latency enables more transactions to be entered per person per minute, hour, or day, and also allows fewer people to enter the same number of transactions. The following graph, figure 12, shows the transactional latency of the spinning solution running with 40 cores versus the All Flash solution running with 32 cores.

Figure 12. Transactional latency. Spinning media utilizing 40 cores, and All Flash using 32 cores

[Chart data for Figure 12: “Transactional Latency, Spinning 40 cores vs. SSD 32 cores”. X-axis: 25 and 50 Oracle connections; Y-axis: latency in milliseconds (0 to 2). Series: Spinning Media Transactional Latency, All Flash Transactional Latency.]

Analysis and recommendations

As shown in the above graphs, utilizing HPE 3PAR StoreServ All Flash technology provides the ability to reduce core counts with a commensurate reduction in Oracle licensing costs.

But let’s take a look at one other metric: the user experience. Time spent waiting for a transaction to complete is wasted time and frustrates the user. The following graph, figure 13, shows the transactional latency attributable to both spinning media and All Flash storage, measured as the total time it takes a transaction to complete. When we set up our HammerDB test scenario, we configured each connection with no think or type time. As a result, we believe each connection represents up to 17,000 real-world users. Each transaction is made up of multiple Oracle SQL executions. The latency represented in this graph is the total time spent on each transaction only.

While both 3PAR StoreServ environments provide good response time, the All Flash experience is consistently lower latency than the spinning media result.

Figure 13. Transactional latency. Both spinning media and All Flash are using 40-cores

Another advantage of an All Flash solution over a spinning media solution has to do with power and cooling. Table 1 shows the two types of configurations we have been discussing in this paper, each with the type of storage configured as used during the testing. The spinning media solution was a 4-node configuration with 6 expansion shelves; each shelf had 20 disk drives, for a total of 160 disk drives. The All Flash solution was also a 4-node configuration with 6 expansion shelves; each shelf had 10 MLC SSDs, for a total of 80 SSDs. As the table shows, the All Flash solution consumes less energy and produces less heat.

Table 1. Power and cooling comparison

Type of storage    Idle power (W)    Transactional power (W)    Idle heat (BTU/hr)    Transactional heat (BTU/hr)

Spinning media     2128              2578                       7251                  8797

All Flash          1256              1730                       4283                  5901
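As a consistency check, the heat columns in Table 1 track the power columns via the standard 1 W ≈ 3.412 BTU/hr conversion, which supports reading the power figures as watts and the heat figures as BTU/hr (an inference; the original table does not state units). The relative savings can then be computed:

```python
# Power figures from Table 1, read as watts (an inference from the BTU/hr ratio)
power_watts = {"spinning": {"idle": 2128, "transactional": 2578},
               "all_flash": {"idle": 1256, "transactional": 1730}}

BTU_PER_WATT_HR = 3.412  # standard conversion factor

for load in ("idle", "transactional"):
    # Fractional reduction in power draw moving from spinning media to All Flash
    reduction = 1 - power_watts["all_flash"][load] / power_watts["spinning"][load]
    print(f"{load}: All Flash draws {reduction:.0%} less power than spinning media")
```

This works out to roughly a 41% reduction at idle and a 33% reduction under transactional load.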

[Chart data for Figure 13: “Transactional Latency”. X-axis: number of Oracle connections (25 to 400); Y-axis: latency in milliseconds (0 to 8). Series: Spinning Media Transactional Latency, All Flash Transactional Latency.]

Another advantage for the All Flash solution has to do with data center space utilization. In the above example, the spinning media solution had only 32 open drive slots. Once an additional 32 drives have been added to the configuration, additional expansion shelves would be required.

In comparison, the All Flash solution had 112 open drive slots, meaning that 112 additional SSDs can be added before requiring additional expansion shelves.

We also need to consider Oracle licensing. Table 2 shows the cost of Oracle licenses, both the new purchase price and the ongoing yearly support cost. This calculation is based on Oracle’s published global December 15, 2016 price list7. For a new purchase of the Oracle database plus support, the total savings in Oracle licensing after three years would be $547,200 (US list), covering the initial-year license fee and 2 years of ongoing support.

Also listed in table 2 is data from an actual customer example in which the customer was able to reduce their core count by 50%. With this 50% reduction, the total savings after the initial license fee and 2 years of ongoing support would be $1,368,000.

Table 2. Oracle database license cost comparison

Type of storage               Number of cores    Initial license cost    Ongoing yearly support

Spinning media                40                 $1.9M US List           $418K

All Flash8                    32                 $1.52M US List          $334.4K

All Flash customer example9   20                 $950K US List           $209K

Implementing a proof-of-concept

As you can see from the performance results, differences between implementations and data access patterns can cause a given environment’s performance to vary from what was tested as part of this paper. As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches the planned production environment as closely as possible. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.

HPE Database Performance Profiler (DPP)

HPE also offers DPP to qualified customers. DPP uses performance data collected on existing Oracle databases to identify performance bottlenecks that are inflating software license costs. The resulting assessment includes specific recommendations as well as a summary estimating the potential benefits of the proposal. Please contact your HPE Account Manager for full details.

Summary

HPE 3PAR StoreServ All Flash is a capable storage array on which to deploy the Oracle database. It allows customers to reduce the number of cores required to deploy an Oracle solution, which in turn reduces the license costs associated with Oracle.

Based upon the expandability of the HPE 3PAR StoreServ 8400, customers can purchase only what they need to meet today’s requirements, confident that the array can be expanded to meet future needs. This allows the business to grow at its own pace, acquiring storage as it is needed rather than overspending on present-day equipment solely to plan for future growth.

Customers can use this Reference Architecture to determine, at a high level, the impact of deploying the HPE 3PAR StoreServ All Flash solution.

7 Oracle published global price list: oracle.com/us/corporate/pricing/technology-price-list-070617.pdf
8 These prices reflect the core count reduction achieved based on our testing.
9 These prices reflect the core count reduction achieved by an HPE customer who has deployed an HPE StoreServ All Flash array. For more details regarding this real-world customer experience, please see your HPE representative.

Appendix A: Bill of materials

Note Part numbers are at time of testing and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details. hpe.com/us/en/services/consulting.html

Table 3a. Bill of materials for the HPE 3PAR StoreServ 8400 as used

Qty Part number Description

HPE 3PAR StoreServ 8400

1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack

1 H6Z03A HPE 3PAR StoreServ 8400 4N Stor Cnt Base

24 K2P97A HPE 3PAR 8000 300GB SAS 15K SFF HDD

12 K2Q95A HPE 3PAR 8000 480GB SFF SSD

1 L7B69A HPE 3PAR 8400 OS Suite Base LTU

168 L7B70A HPE 3PAR 8400 OS Suite Drive LTU

1 L7B81A HPE 3PAR 8400 Virtual Copy Base LTU

168 L7B82A HPE 3PAR 8400 Virtual Copy Drive LTU

1 L7B71A HPE 3PAR 8400 Data Opt St v2 Base LTU

168 L7B72A HPE 3PAR 8400 Data Opt St v2 Drive LTU

1 L7D49A HPE Smart SAN for HPE 3PAR 8xxx LTU

2 QK753B HPE SN6000B 16Gb 48/24 FC Switch

48 QK724A HPE B-series 16Gb SFP+SW XCVR

6 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl

72 K2P97A HPE 3PAR 8000 300GB SAS 15K SFF HDD

36 K2Q95A HPE 3PAR 8000 480GB SFF SSD

4 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl

40 K2P97A HPE 3PAR 8000 300GB SAS 15K SFF HDD

24 K2Q95A HPE 3PAR 8000 480GB SFF SSD

2 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl

24 K2P97A HPE 3PAR 8000 300GB SAS 15K SFF HDD

8 K2Q95A HPE 3PAR 8000 480GB SFF SSD

1 TK808A HPE Rack Front Door Cover Kit

48 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl

8 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

4 H5M58A HPE Basic 4.9kVA/L6-30P/C13/NA/J PDU

1 BW932A HPE 600mm Rack Stabilizer Kit

1 BW932A B01 Include with complete system

1 BW906A HPE 42U 1075mm Side Panel Kit

1 BD362AAE HPE 3PAR StoreServ Mgmt/Core SW E-Media

1 BD363AAE HPE 3PAR OS Suite Latest E-Media

1 TC472AAE HPE Intelligent Inft Anlyzer SW v2 E-LTU

Table 3b. Bill of materials for the HPE ProLiant DL380 Gen9 servers used as database servers

Rack and server infrastructure

| Qty | Part number | Description |
|-----|-------------|-------------|
| 1 | H6J66A | HPE 42U 600x1075mm Advanced Shock Rack |
| 1 | H6J66A 001 | HPE Factory Express Base Racking Service |
| 2 | 719064-B21 | HPE DL380 Gen9 8SFF CTO Server |
| 2 | 719064-B21 ABA | U.S. - English localization |
| 2 | 817949-L21 | HPE DL380 Gen9 E5-2689v4 FIO Kit |
| 2 | 817949-B21 | HPE DL380 Gen9 E5-2689v4 Kit |
| 32 | 805353-B21 | HPE 32GB 2Rx4 PC4-2400T-L Kit |
| 4 | 816562-B21 | HPE 480GB 12Gb SAS RI-3 SFF SC SSD |
| 2 | 749974-B21 | HPE Smart Array P440ar/2G FIO Controller |
| 2 | 727054-B21 | HPE Ethernet 10Gb 2-port 562FLR-SFP+Adpt |
| 2 | 733660-B21 | HPE 2U SFF Easy Install Rail Kit |
| 4 | AJ763B | HPE 82E 8Gb Dual-port PCI-e FC HBA |
| 4 | 720478-B21 | HPE 500W FS Plat Ht Plg Pwr Supply Kit |
| 2 | BD505A | HPE iLO Adv incl 3yr TSU 1-Svr Lic |
| 1 | H8B55A | HPE Mtrd Swtchd 14.4kVA/CS8365C/NA/J PDU |
| 1 | H6J85A | HPE Rack Hardware Kit |
| 1 | BW930A | HPE Air Flow Optimization Kit |
| 1 | BW930A B01 | Include with complete system |
| 1 | BW906A | HPE 42U 1075mm Side Panel Kit |

Table 3c. Bill of materials for the HPE 3PAR StoreServ 8450 All Flash solution only

HPE 3PAR StoreServ 8450

| Qty | Part number | Description |
|-----|-------------|-------------|
| 1 | BW904A | HPE 42U 600x1075mm Enterprise Shock Rack |
| 1 | BW904A 001 | HPE Factory Express Base Racking Service |
| 1 | H6Z25A | HPE 3PAR StoreServ 8450 4N Stor Cnt Base |
| 20 | K2Q95A | HPE 3PAR 8000 480GB SFF SSD |
| 1 | L7C17A | HPE 3PAR 8450 OS Suite Base LTU |
| 80 | L7C18A | HPE 3PAR 8450 OS Suite Drive LTU |
| 1 | L7C29A | HPE 3PAR 8450 Virtual Copy Base LTU |
| 80 | L7C30A | HPE 3PAR 8450 Virtual Copy Drive LTU |
| 1 | L7C19A | HPE 3PAR 8450 Data Opt St v2 Base LTU |
| 80 | L7C20A | HPE 3PAR 8450 Data Opt St v2 Drive LTU |
| 1 | L7D49A | HPE Smart SAN for HPE 3PAR 8xxx LTU |
| 2 | QK753B | HPE SN6000B 16Gb 48/24 FC Switch |
| 48 | QK724A | HPE B-series 16Gb SFP+SW XCVR |
| 6 | H6Z26A | HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl |
| 60 | K2Q95A | HPE 3PAR 8000 480GB SFF SSD |
| 1 | K2R28A | HPE 3PAR StoreServ SPS Service Processor |
| 1 | TK808A | HPE Rack Front Door Cover Kit |
| 48 | QK735A | HPE Premier Flex LC/LC OM4 2f 15m Cbl |
| 8 | QK734A | HPE Premier Flex LC/LC OM4 2f 5m Cbl |
| 4 | H5M58A | HPE Basic 4.9kVA/L6-30P/C13/NA/J PDU |
| 1 | BW932A | HPE 600mm Rack Stabilizer Kit |
| 1 | BW906A | HPE 42U 1075mm Side Panel Kit |
| 1 | BD362AAE | HPE 3PAR StoreServ Mgmt/Core SW E-Media |
| 1 | BD363AAE | HPE 3PAR OS Suite Latest E-Media |
| 1 | BD365AAE | HPE 3PAR SP SW Latest E-Media |
| 1 | TC472AAE | HPE Intelligent Inft Anlyzer SW v2 E-LTU |

Appendix B: Multipath configuration The following multipath parameters were included in the /etc/multipath.conf file; these are the settings for RHEL 7.3 with HPE 3PAR Persona 2 (ALUA). Note that the multipath device files were given aliases so that all nodes see the same device names and so that the names are human readable.

defaults {
    polling_interval    10
    user_friendly_names yes
    find_multipaths     yes
}

devices {
    device {
        vendor               "3PARdata"
        product              "VV"
        path_grouping_policy group_by_prio
        path_selector        "round-robin 0"
        path_checker         tur
        features             "0"
        hardware_handler     "1 alua"
        prio                 alua
        failback             immediate
        rr_weight            uniform
        no_path_retry        18
        rr_min_io_rq         1
        detect_prio          yes
    }
}

blacklist {
    devnode "^(ram|zram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}

multipaths {
    multipath {
        wwid  360002ac00000000000000072000190e2
        alias data01
    }
    multipath {
        wwid  360002ac00000000000000073000190e2
        alias data02
    }
    multipath {
        wwid  360002ac00000000000000074000190e2
        alias data03
    }
    multipath {
        wwid  360002ac00000000000000075000190e2
        alias data04
    }
    multipath {
        wwid  360002ac00000000000000076000190e2
        alias data05
    }
    multipath {
        wwid  360002ac00000000000000077000190e2
        alias data06
    }
    multipath {
        wwid  360002ac00000000000000078000190e2
        alias data07
    }
    multipath {
        wwid  360002ac00000000000000079000190e2
        alias data08
    }
    multipath {
        wwid  360002ac00000000000000082000190e2
        alias redo01
    }
    multipath {
        wwid  360002ac00000000000000083000190e2
        alias redo02
    }
    multipath {
        wwid  360002ac00000000000000084000190e2
        alias redo03
    }
    multipath {
        wwid  360002ac00000000000000085000190e2
        alias redo04
    }
    multipath {
        wwid  360002ac00000000000000086000190e2
        alias redo05
    }
    multipath {
        wwid  360002ac00000000000000087000190e2
        alias redo06
    }
    multipath {
        wwid  360002ac00000000000000088000190e2
        alias redo07
    }
    multipath {
        wwid  360002ac00000000000000089000190e2
        alias redo08
    }
}
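The multipaths section maps each WWID to a friendly alias. As an illustration only (not part of the reference configuration), a small Python sketch can parse such a section offline to verify the WWID-to-alias mapping before rolling the file out to all nodes; it handles the simple stanza layout shown here, not the full multipath.conf grammar:

```python
import re

def parse_multipath_aliases(conf_text):
    """Extract wwid -> alias pairs from a multipath.conf 'multipaths' section.

    Illustrative sketch: it assumes one wwid and one alias per stanza
    and no nested braces inside a 'multipath { ... }' block.
    """
    pairs = {}
    # 'multipath\s*\{' does not match the enclosing 'multipaths {' header,
    # so only the individual stanzas are captured.
    for block in re.findall(r"multipath\s*\{([^}]*)\}", conf_text):
        wwid = re.search(r"wwid\s+(\S+)", block)
        alias = re.search(r"alias\s+(\S+)", block)
        if wwid and alias:
            pairs[wwid.group(1)] = alias.group(1)
    return pairs

sample = """
multipaths {
    multipath {
        wwid  360002ac00000000000000072000190e2
        alias data01
    }
    multipath {
        wwid  360002ac00000000000000073000190e2
        alias data02
    }
}
"""
print(parse_multipath_aliases(sample))
```

A check like this catches duplicated WWIDs or aliases before an inconsistent file reaches a cluster node.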


Appendix C: udev device permission rules For the HPE 3PAR StoreServ 8400 storage configuration, a udev rules file named /etc/udev/rules.d/99-oracleasm.rules was created to set the required ownership of the Oracle ASM LUNs:

ENV{DM_NAME}=="data01", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data02", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data03", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data04", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data05", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data06", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data07", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data08", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo01", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo02", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo03", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo04", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo05", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo06", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo07", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo08", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
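The sixteen rules above differ only in the device name, which follows the multipath aliases (data01-data08, redo01-redo08). As a sketch (not part of the documented configuration), the file can be generated rather than hand-edited, which avoids typos when the LUN layout changes; the owner, group, and mode defaults below mirror the values used in this appendix:

```python
def asm_udev_rules(groups=("data", "redo"), count=8,
                   owner="oracle", group="oinstall", mode="660"):
    """Generate udev permission rules matching the hand-written file above.

    The names (data01..data08, redo01..redo08) track the multipath
    aliases; owner/group/mode follow the Oracle ASM ownership used here.
    """
    template = ('ENV{{DM_NAME}}=="{name}", OWNER:="{owner}", '
                'GROUP:="{group}", MODE:="{mode}"')
    return [template.format(name=f"{g}{i:02d}", owner=owner,
                            group=group, mode=mode)
            for g in groups for i in range(1, count + 1)]

rules = asm_udev_rules()
print(len(rules))   # 16 rules, one per ASM LUN
print(rules[0])
```

Writing the joined list to /etc/udev/rules.d/99-oracleasm.rules reproduces the file shown above.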

Appendix D: udev rules A rules file was created to set the rotational flag, the I/O scheduler, and rq_affinity for the 3PAR devices. The name of this file was /etc/udev/rules.d/10-3par.rules.

ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1", ATTR{queue/nr_requests}="128"
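The KERNEL=="dm-*" match in this rule uses shell-style glob patterns, so it selects every device-mapper node (and only those). As a rough stand-in for udev's matcher on such simple patterns, Python's fnmatch can illustrate which candidate device names the pattern would catch; the device names below are examples, not from the tested configuration:

```python
from fnmatch import fnmatch

# udev's KERNEL== comparison uses shell-style globbing; fnmatch is an
# approximation for simple patterns like "dm-*". Note that any name
# beginning with "dm-" matches, which is why the PROGRAM check above
# additionally filters for the 3PARdata vendor string.
candidates = ["dm-0", "dm-12", "sda", "sr0", "dm-backup"]
matched = [name for name in candidates if fnmatch(name, "dm-*")]
print(matched)
```

This shows why the rule pairs the broad KERNEL match with a vendor test: the glob alone would apply the queue settings to every device-mapper device, not just the 3PAR LUNs.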


Sign up for updates

© Copyright 2017-2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Oracle is a registered trademark of Oracle and/or its affiliates. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. SAP is a registered trademark of SAP AG in Germany and other countries.

a00002706enw, May 2018, Rev. 1

Resources and additional links HPE Reference Architectures hpe.com/info/ra

HPE Servers hpe.com/servers

HPE Storage hpe.com/storage

HPE 3PAR Thin Technologies http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA3-8987ENW

HPE 3PAR Peer Persistence Software http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA4-3533ENW

HPE Networking hpe.com/networking

HPE Technology Consulting Services hpe.com/us/en/services/consulting.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

