
Leveraging EMC CLARiiON CX4 with Enterprise Flash Drives

for Oracle Database Deployments Applied Technology

Abstract

This white paper examines the performance considerations of placing Oracle Databases on Enterprise Flash Drives versus conventional hard disk drives, as well as discusses the best practices for placing partial database containers on Flash drives.

October 2009


Copyright © 2008, 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com

All other trademarks used herein are the property of their respective owners.

Part Number h5967.3


Table of Contents

Executive summary
Introduction
    Audience
Technology overview
    CLARiiON CX4
Oracle Databases and Enterprise Flash Drives
    Database workloads that are the best fit for EFDs
    Analyzing the Oracle AWR report or Statspack report
        Load profile section
        Oracle wait events
        Tablespace I/O stats
Use case comparison
    Use Case #1: Extremely read-intensive workload
    Use Case #2: Oracle OLTP workload comparison
    Use Case #3: Short-stroked Oracle OLTP workload
    Use Case #4: Oracle DSS workload on EFDs
    Use Case #5: Moving partial database to EFDs
        Redo Logs on EFDs? (or not)
        Oracle TEMP tablespace on EFDs
        Moving high Object Intensity objects to EFDs
Impact of LUN cache settings
EFDs and an Information Lifecycle Management (ILM) strategy
Conclusion
References


Executive summary

One of the major features that EMC introduced in the CLARiiON® CX4 series is the availability of Enterprise Flash Drives (EFD). EMC® CLARiiON is the first midrange array with support for this emerging generation of data storage device. With this capability, EMC creates a new ultra-performing “Tier 0” storage that removes the performance limitations of rotating disk drives. By combining enterprise-class Flash drives optimized with EMC technology and advanced CLARiiON functionality, organizations now have a new tier of storage previously unavailable in a midrange storage platform.

EFDs dramatically increase the performance of latency-sensitive applications. Also known as solid state drives (SSD), EFDs contain no moving parts and appear as standard drives to existing CLARiiON management tools, allowing administrators to manage Tier 0 without special processes, custom tools, or extra training. Tier 0 EFDs are ideally suited for applications with high transaction rates and those requiring the fastest possible retrieval and storage of data, such as currency exchange and electronic trading systems, or real-time data acquisition and processing. They also prove to be extremely good for highly read-intensive workloads like search engine databases. EFDs can deliver millisecond application response times and up to 30 times more I/O operations per second (IOPS) than traditional rotating Fibre Channel hard disk drives. Additionally, EFDs consume significantly less energy per IOPS than traditional hard disk drives, significantly improving TCO by reducing the data center energy and space footprints.

Database performance has long been constrained by the I/O capability of rotating hard disk drives (HDD), and the performance of the HDD has been limited by the intrinsic mechanical delays of head seek and rotational latency. EFDs, however, have no moving parts and therefore no seek or rotational latency delays, which dramatically improves their ability to sustain a very high number of IOPS with very low overall response times.

Figure 1 shows the theoretical IOPS rates that can be sustained by traditional HDDs, based on average seek and latency times, as compared to EFD technology. Over the past 25 years, the rotational speeds of HDDs have improved from 3,600 rpm to 15,000 rpm, yielding only a fourfold improvement in IOPS while other computer technologies, such as CPU speeds, saw double-digit growth. EFD technology represents a significant leap in performance and can sustain up to 30 times the IOPS of traditional HDD technology.


Figure 1. Relative IOPS of various drive technologies (3,600 rpm, 5,400 rpm, 7,200 rpm, 10k rpm, and 15k rpm HDDs vs. Flash; Flash delivers up to 30x the IOPS of HDD)

The introduction of the industry’s first Enterprise Flash Drives into the EMC CLARiiON midrange storage array enables these disk arrays to meet the growing demand for higher transaction rates and faster response times. Companies with stringent latency requirements no longer have to purchase large numbers of the fastest rotating Fibre Channel disk drives and only utilize a small portion of their capacity (known as short stroking) to satisfy the IOPS performance requirements of very demanding random workloads.

Relational databases are often at the core of business applications. Increasing their performance, while keeping storage power consumption and footprint to a minimum, significantly reduces the total cost of ownership (TCO) and helps to alleviate growing data center constraints. The deployment of Tier 0 EFDs together with slower, higher-capacity tiers such as rotating Fibre Channel and SATA drives enables customers to structure the application data layout so that each tier of storage meets the I/O demands of the application data it hosts.

Introduction

This white paper examines some of the use cases and best practices for using EFDs with Oracle Database workloads. Proper use of EFDs delivers vastly increased performance to the database application when compared to traditional rotating Fibre Channel drives, both in transaction rate per minute and in transaction response time. Recommendations for identifying the right database components to place on EFDs, consistent with Oracle engineering’s findings, are also covered in this paper.

Audience

This white paper is intended for Oracle Database administrators, storage architects, customers, and EMC field personnel who want to understand the implementation of CLARiiON EFDs in Oracle Database environments to improve the performance of business applications.


Technology overview

CLARiiON CX4

The EMC CLARiiON CX4 series with UltraFlex™ technology is based on a new, breakthrough architecture and extensive technological innovation, providing a midrange storage solution that is highly scalable and meets the price points of most midrange customers. The CX4 is the fourth-generation CX series, and continues EMC’s commitment to maximizing customers’ investments in CLARiiON technology by ensuring that existing resources and capital assets are optimally utilized as customers adopt new technology.

The new CLARiiON CX4 systems, as shown in Figure 2, are the next generation in the CX series, extending EMC's leadership in the midrange enterprise storage market. The CX4 delivers immediate support for the latest generation of disk drive technologies, such as EFDs, 4 Gb/s FC drives for high performance, and SATA II for high capacity. CLARiiON CX4 is the first midrange storage system that can support all of these latest generations of disk drive technologies. CLARiiON CX4 with the latest release of FLARE® (R28) has been optimized for maximum performance and tiered storage functional flexibility. A complete introduction to the CX4 series is beyond the scope of this paper; the “References” section provides further reading. A few major features of the CLARiiON CX4 series are listed in Figure 2. EFDs are supported on all four models of CLARiiON CX4 listed here.

| Model   | Maximum drives | Memory | Standard front-end ports  | Maximum front-end ports (Fibre Channel and/or iSCSI) |
|---------|----------------|--------|---------------------------|------------------------------------------------------|
| CX4-960 | 960            | 32 GB  | 8 Fibre Channel / 4 iSCSI | 32                                                   |
| CX4-480 | 480            | 16 GB  | 8 Fibre Channel / 4 iSCSI | 24                                                   |
| CX4-240 | 240            | 8 GB   | 4 Fibre Channel / 4 iSCSI | 20                                                   |
| CX4-120 | 120            | 6 GB   | 4 Fibre Channel / 4 iSCSI | 16                                                   |

Figure 2. CLARiiON CX4 models


The enterprise-class EMC Flash drives supported by CLARiiON CX4 are constructed with nonvolatile semiconductor NAND Flash memory and are packaged in a standard 3.5-inch disk drive form factor used in existing CLARiiON disk drive array enclosures. These drives are especially well suited for latency-sensitive applications that require consistently low read/write response times. EFDs also benefit from the advanced capabilities that CLARiiON provides, including local and remote replication, Virtual LUN Migration, Navisphere® Quality of Service Management, and five 9s availability.

CLARiiON storage arrays currently support EFDs in three sizes: 73 GB, 200 GB, and 400 GB. These drives have slightly different application performance characteristics depending on the nature of the workload: their read characteristics are nearly identical, but their write characteristics differ somewhat. The 400 GB drives are optimized for capacity and provide a very good $/GB ratio, whereas the 73 GB and 200 GB drives are optimized for performance and offer better $/IOPS. Customers can choose among these drives depending on their space and performance requirements.

Oracle Databases and Enterprise Flash Drives

Database workloads that are the best fit for EFDs

There are no simple, definitive rules that would readily identify applications that best suit the EFDs, but we can follow some guidelines. It is very important to understand the load profile of an application before putting it on the EFDs, taking into consideration the fact that most databases have different workload profiles during different times of the day. The EFDs are suitable for highly read-intensive and extremely latency-sensitive applications, and using these drives against the wrong target may not yield the desired benefit for the investment. It is important to understand the following terminology to help with deciding whether the EFDs are suitable for certain workloads.

• Write cache: Most storage systems have a large write-side cache; write IOPS from a host are generally absorbed by the cache and incur no delay from physical disk access. CLARiiON storage arrays have write caches sized to match the disk count supported by the controller, and support enabling and disabling write cache at the LUN level if needed.

• Read hit: A read request from a database host can be served by the storage system immediately if the data already exists in the storage cache because of a recent read or write, or due to prefetch. A read serviced from the storage cache without causing disk access is called a read hit. If the requested data is not available in the storage cache, the CLARiiON must retrieve it from disk; this is referred to as a read miss.

• Short-stroked drives: Some extremely latency-sensitive applications use this technique on regular Fibre Channel drives to obtain low latencies. Data is laid out on many partially populated disks in order to reduce spindle head movement, providing high IOPS at very low latency.

Workloads with high CLARiiON cache read-hit rates are already serviced at memory access speed, and deploying them on Flash drive technology may not show a significant benefit. Workloads with low CLARiiON cache read-hit rates that exhibit random I/O patterns, with small I/O requests of up to 16 KB, and that require high transaction throughput will benefit most from the low latency of EFDs.
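As a rough illustration of this screening logic, the following sketch applies the rule-of-thumb criteria above (low array read-hit rate, small mostly random I/O of up to 16 KB, high throughput demand) to a workload profile. The field names and thresholds are illustrative assumptions, not CLARiiON or Oracle metrics.

```python
# Hypothetical screening helper based on the guidelines in this section.
# All field names and thresholds are illustrative assumptions.

def efd_candidate(read_hit_pct, avg_io_kb, random_pct, required_iops):
    """Return True if the workload profile matches the EFD-friendly pattern:
    low array cache read-hit rate, small mostly random I/O, high IOPS demand."""
    low_cache_benefit = read_hit_pct < 30   # reads mostly miss the array cache
    small_io = avg_io_kb <= 16              # small-block I/O (up to 16 KB)
    mostly_random = random_pct >= 70        # random rather than sequential access
    high_demand = required_iops >= 5000     # needs high transaction throughput
    return low_cache_benefit and small_io and mostly_random and high_demand

# Example: a read-miss-heavy OLTP profile
print(efd_candidate(read_hit_pct=12, avg_io_kb=8, random_pct=95, required_iops=15000))  # True
```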

Database and application managers can easily point to mission-critical applications that directly improve business revenue and productivity when business transaction throughput is increased and service latencies are reduced. Cognizant of these applications, storage administrators often resort to “short stroking” more drives to ensure the highest possible I/O service level for them. EFDs can provide two very important benefits for such applications:

• A single EFD can replace many short-stroked drives through its ability to provide a very high transaction rate (IOPS). This reduces the total number of drives needed for the application, saves power by eliminating many spinning disks, and ultimately reduces floor space in the data center as well.

• EFDs provide very low latency, so applications where predictable low response time is critical, and where not all the data can be kept in the host or CLARiiON cache, benefit greatly from using such drives.


Because of the absence of rotating media in EFDs, their transfer rate is extremely high and data is served much faster than the best response time that can be achieved even with a large number of short-stroked hard drives.

The Oracle workload profile obtained by using either Statspack or the AWR report can be used to determine the potential for placing either the entire database or parts of it on EFDs. These tools are two variants of the same fundamental measurement technique, which monitors the Oracle-level performance counters. AWR is supported on the later versions of the Oracle Database (10g and 11g), whereas Statspack has existed since Oracle8i. Both are delta-based tools: they sample the Oracle performance counters at two different points in time and calculate the average performance metrics based on the total elapsed time between samples. Collecting the samples over a long interval can dilute the average values, so the tool should be run during the busiest time of the database application over a short period (typically 30-minute samples are good enough). Some experienced DBAs may resort to directly querying the underlying Oracle V$ views used by these tools to obtain performance metrics dynamically, on a more real-time basis, instead of using these tools.
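For readers who want to sample the underlying V$ views directly, the following sketch shows one way to take two samples of V$SYSSTAT and derive per-second rates, mirroring the delta-based approach that AWR and Statspack use between snapshots. It assumes the python-oracledb driver and an account with SELECT access to the V$ views; the connection details are placeholders.

```python
# Minimal delta-sampling sketch over V$SYSSTAT (assumes the python-oracledb
# driver and SELECT privilege on the V$ views; connection details are placeholders).
import time
import oracledb

STAT_NAMES = ("physical reads", "physical writes", "session logical reads", "user calls")

def sample(cursor):
    # Fetch the current cumulative values of a few load-profile counters.
    cursor.execute(
        "SELECT name, value FROM v$sysstat "
        "WHERE name IN ('physical reads', 'physical writes', "
        "'session logical reads', 'user calls')"
    )
    return dict(cursor.fetchall())

interval = 60  # seconds; keep samples short and taken during the busiest period
with oracledb.connect(user="perfmon", password="***", dsn="dbhost/orclpdb") as conn:
    cur = conn.cursor()
    before = sample(cur)
    time.sleep(interval)
    after = sample(cur)

for name in STAT_NAMES:
    rate = (after[name] - before[name]) / interval
    print(f"{name:>22}: {rate:,.1f} per second")
```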

Analyzing the Oracle AWR report or Statspack report

Various sections of the performance reports can be used to identify the workload profile. This section describes each area of interest and what to look for in those areas.

Load profile section

This section defines the overall workload profile during the sampling interval. The key parameters to identify here are Logical reads, Physical reads, and Physical writes.

Table 1. Workload profile

|                  | Per Second | Per Transaction | Per Exec | Per Call |
|------------------|------------|-----------------|----------|----------|
| DB Time(s)       | 68.8       | 46.4            | 0.22     | 0.23     |
| DB CPU(s)        | 2.5        | 1.7             | 0.01     | 0.01     |
| Redo size        | 4,256.8    | 2,875.4         |          |          |
| Logical reads    | 18,421.9   | 12,443.7        |          |          |
| Block changes    | 13.5       | 9.1             |          |          |
| Physical reads   | 17,459.4   | 11,793.6        |          |          |
| Physical writes  | 2.1        | 1.4             |          |          |
| User calls       | 302.3      | 204.2           |          |          |
| Parses           | 5.7        | 3.8             |          |          |
| Hard parses      | 0.3        | 0.2             |          |          |
| W/A MB processed | 45,301.9   | 30,600.7        |          |          |
| Logons           | 0.2        | 0.1             |          |          |
| Executes         | 307.3      | 207.6           |          |          |
| Rollbacks        | 0.0        | 0.0             |          |          |
| Transactions     | 1.5        |                 |          |          |

A read-intensive workload is one where a significant number of reads are physical (reads served from storage rather than the Oracle cache), as reported in the AWR. This workload will typically benefit the most from EFDs. It is important, however, to determine whether these IOPS can be reduced simply by tuning the database cache before exploring the EFD alternative: it is much easier to increase the memory in the system or increase the Oracle buffer cache than to throw EFDs at the problem. The Buffer Pool Advisory section of the Oracle performance report indicates the impact of increasing the database buffer cache. In the current example, the database was configured to use 4 GB of buffer cache, and Table 2 indicates that even when the buffer pool is doubled, the Estimated Physical Reads remain the same. This is a case where the improvement can only be obtained by faster storage, and hence an ideal candidate for EFDs.

Table 2. Buffer Pool Advisory

| P | Size for Est (M) | Size Factor | Buffers for Estimate | Est Phys Read Factor | Estimated Physical Reads |
|---|------------------|-------------|----------------------|----------------------|--------------------------|
| D | 384              | 0.09        | 47,340               | 1.49                 | 897,413                  |
| D | 1,152            | 0.28        | 142,020              | 1.06                 | 635,008                  |
| D | 1,920            | 0.47        | 236,700              | 1.01                 | 603,605                  |
| D | 3,456            | 0.84        | 426,060              | 1.00                 | 600,590                  |
| D | 4,096            | 1.00        | 504,960              | 1.00                 | 600,590                  |
| D | 4,608            | 1.13        | 568,080              | 1.00                 | 600,590                  |
| D | 5,376            | 1.31        | 662,760              | 1.00                 | 600,590                  |
| D | 6,144            | 1.50        | 757,440              | 1.00                 | 600,590                  |
| D | 6,912            | 1.69        | 852,120              | 1.00                 | 600,590                  |
| D | 7,680            | 1.88        | 946,800              | 1.00                 | 600,590                  |

Oracle wait events

This section lists the top five Oracle foreground wait events. Most DBAs concentrate on this section when they tune their databases, as these are the low-hanging fruit that return the maximum benefit for the tuning effort. The key parameter to identify in this section is the “db file sequential read” wait event. The name is counter-intuitive: it actually indicates random single-block reads rather than sequential access. This event tracks how many times the database had to wait for a single-block I/O to finish during the sampling interval, and its average wait time. In the following example, the database was spending around 88% of its time waiting for these I/Os to finish, with an average wait time of 14 ms. Placing this database on EFDs is going to significantly improve the average latency if the application users are complaining about slow responsiveness of the system. A high latency value here does not always indicate a problem; ultimately, it is the actual user experience that decides whether the overall business transaction service performance is acceptable.

Table 3. Top 5 Timed Foreground Events

| Event                   | Waits     | Time(s) | Avg wait (ms) | % DB time | Wait Class |
|-------------------------|-----------|---------|---------------|-----------|------------|
| db file sequential read | 1,169,805 | 16,171  | 14            | 87.92     | User I/O   |
| db file parallel read   | 34,303    | 1,819   | 53            | 9.89      | User I/O   |
| DB CPU                  |           | 290     |               | 1.58      |            |
| log file sync           | 79,538    | 54      | 1             | 0.29      | Commit     |
| db file scattered read  | 1,997     | 27      | 14            | 0.15      | User I/O   |
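As a sketch of the check described above, the helper below flags a report whose top foreground events are dominated by single-block random reads ("db file sequential read") with a high average wait, using the Table 3 figures as example input. The thresholds are illustrative assumptions, not Oracle or EMC guidance.

```python
# Hypothetical check over parsed "Top 5 Timed Foreground Events" rows;
# the thresholds are illustrative only.

def random_read_bound(events, min_pct_db_time=50.0, min_avg_wait_ms=8.0):
    """events: iterable of (event_name, pct_db_time, avg_wait_ms) tuples."""
    for name, pct_db_time, avg_wait_ms in events:
        if name == "db file sequential read":
            return pct_db_time >= min_pct_db_time and avg_wait_ms >= min_avg_wait_ms
    return False

top_events = [
    ("db file sequential read", 87.92, 14),   # values from Table 3
    ("db file parallel read",    9.89, 53),
    ("DB CPU",                   1.58,  0),
    ("log file sync",            0.29,  1),
    ("db file scattered read",   0.15, 14),
]
print(random_read_bound(top_events))  # True: random reads dominate DB time with high latency
```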


Tablespace I/O stats

Two more sections toward the end of the report provide I/O statistics at the data container level, which can be used to identify the right migration targets for EFDs. The first is called “Tablespace I/O stats” and the second is “File I/O stats.” The entries at the top of these tables (with the highest I/O rate) will provide the maximum benefit when migrated to EFDs. Data containers with higher average response times are also likely good candidates for moving to EFDs. These can be data files, indexes, temp files, and so on. This approach provides an alternative to moving the whole database to EFDs. Keep in mind that these reported numbers are averages and do not necessarily reflect bursts of I/O activity on the tablespaces, which may in fact spike to significantly higher values for certain periods within the sampling interval.

Table 4. Tablespace I/O stats

| Tablespace | Reads     | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes    | Av Writes/s | Buffer Waits | Av Buf Wt(ms) |
|------------|-----------|------------|-----------|------------|-----------|-------------|--------------|---------------|
| DATA1      | 4,464,451 | 4,730      | 19.11     | 1.00       | 1,440,601 | 1,526       | 347          | 47.20         |
| DATA2      | 454,647   | 482        | 11.05     | 1.03       | 288,654   | 306         | 142          | 10.35         |
| DATA3      | 425,990   | 451        | 17.80     | 1.00       | 186,795   | 198         | 48           | 2.71          |
| INDEX1     | 265,040   | 281        | 10.30     | 1.09       | 234,963   | 249         | 14           | 10.00         |
| INDEX2     | 188,572   | 200        | 15.07     | 1.03       | 168,047   | 178         | 3            | 3.33          |
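To illustrate how these statistics can be turned into a shortlist, the sketch below ranks tablespaces by a simple score (read rate multiplied by average read latency), using the Table 4 rows as sample data. The scoring heuristic is an assumption made for illustration, not an EMC or Oracle formula.

```python
# Hypothetical shortlist of EFD migration candidates from Tablespace I/O stats.
# The score (read rate x average read latency) is an illustrative heuristic.

rows = [
    # (tablespace, av_reads_per_sec, av_rd_ms)  -- values from Table 4
    ("DATA1",  4730, 19.11),
    ("DATA2",   482, 11.05),
    ("DATA3",   451, 17.80),
    ("INDEX1",  281, 10.30),
    ("INDEX2",  200, 15.07),
]

ranked = sorted(rows, key=lambda r: r[1] * r[2], reverse=True)
for name, reads_per_sec, rd_ms in ranked:
    print(f"{name:<8} {reads_per_sec:>6} reads/s   {rd_ms:>6.2f} ms avg read")
```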

Use case comparison

The following are different use case scenarios that have been tested by EMC engineering, demonstrating how properly leveraging EFDs can decidedly benefit these typical business deployments. Note that both 73 GB and 400 GB EFDs were used for testing; the type of drive used is called out in each use case description. Also note that all testing was done on a CLARiiON CX4-960 running FLARE release 28 or later.

Use Case #1: Extremely read-intensive workload

The following use case documents the improvement that EFDs can bring to a highly read-intensive and latency-sensitive application. It is common practice to deploy these kinds of applications on a huge number of short-stroked Fibre Channel spindles. This particular application was deployed on 150 Fibre Channel spindles to meet its very low latency requirement, and still was able to achieve only 4 ms latency. This use case shows how the same application can be deployed on 6 x 73 GB EFDs, resulting in more transactions and better latency. The following is the beginning of an AWR report (10-minute sample). The environment was Oracle Database 11g/ASM/Oracle Enterprise Linux with a CLARiiON CX4-960.

Table 5. Results of Use Case #1

Cache sizes (4 GB buffer cache):

|                  | Begin  | End    |                     |
|------------------|--------|--------|---------------------|
| Buffer Cache     | 4,096M | 4,096M | Std Block Size: 8K  |
| Shared Pool Size | 640M   | 640M   | Log Buffer: 41,488K |

Load profile:

|                 | Per Second |
|-----------------|------------|
| Redo size       | 4,256.8    |
| Logical reads   | 18,421.9   |
| Block changes   | 13.5       |
| Physical reads  | 17,459.4   |
| Physical writes | 2.1        |
| User calls      | 302.3      |
| Executes        | 307.3      |
| Transactions    | 1.5        |


Instance efficiency percentages (target 100%):

| Metric             | Value  | Metric           | Value  |
|--------------------|--------|------------------|--------|
| Buffer Nowait %    | 100.00 | Redo NoWait %    | 100.00 |
| Buffer Hit %       | 50.88  | In-memory Sort % | 100.00 |
| Library Hit %      | 86.13  | Soft Parse %     | 95.64  |
| Execute to Parse % | 98.16  | Latch Hit %      | 99.99  |

About 50% of the I/Os are served from storage, but the Buffer Pool Advisory indicates no help from increasing the buffer cache.

Top 5 Timed Events

| Event                        | Waits      | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class |
|------------------------------|------------|---------|--------------|-------------------|------------|
| db file sequential read      | 10,803,192 | 42,450  | 4            | 95.92             | User I/O   |
| DB CPU                       |            | 1,575   |              | 3.56              |            |
| db file scattered read       | 32,266     | 219     | 7            | 0.50              | User I/O   |
| control file sequential read | 25,993     | 7       | 0            | 0.02              | System I/O |
| db file parallel read        | 704        | 5       | 7            | 0.01              | User I/O   |

The top wait event (at 96%) is random reads with a 4 ms response time. This application is latency-sensitive and requires a huge number of Fibre Channel drives; it will benefit greatly from Flash drives.


The next tables show the Oracle workload profile after moving the data onto just six EFDs.

Table 6. Results of Use Case #1 after moving the database to six EFDs

Cache sizes:

|                  | Begin  | End    |                     |
|------------------|--------|--------|---------------------|
| Buffer Cache     | 4,096M | 4,096M | Std Block Size: 8K  |
| Shared Pool Size | 640M   | 640M   | Log Buffer: 41,488K |

Load profile:

|                 | Per Second |
|-----------------|------------|
| Redo size       | 6,564.8    |
| Logical reads   | 57,548.5   |
| Block changes   | 27.2       |
| Physical reads  | 53,055.7   |
| Physical writes | 3.6        |
| User calls      | 937.8      |
| Executes        | 944.3      |
| Transactions    | 2.4        |

Instance efficiency percentages (target 100%):

| Metric             | Value  | Metric           | Value  |
|--------------------|--------|------------------|--------|
| Buffer Nowait %    | 100.00 | Redo NoWait %    | 100.00 |
| Buffer Hit %       | 75.53  | In-memory Sort % | 100.00 |
| Library Hit %      | 88.20  | Soft Parse %     | 96.58  |
| Execute to Parse % | 99.21  | Latch Hit %      | 99.97  |

Top 5 Timed Events

| Event                        | Waits      | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class |
|------------------------------|------------|---------|--------------|-------------------|------------|
| db file sequential read      | 33,689,960 | 40,945  | 1            | 87.76             | User I/O   |
| DB CPU                       |            | 5,631   |              | 12.07             |            |
| db file scattered read       | 30,079     | 60      | 2            | 0.13              | User I/O   |
| control file sequential read | 25,993     | 6       | 0            | 0.01              | System I/O |
| latch free                   | 6,621      | 5       | 1            | 0.01              | Other      |

The response time dropped from 4 ms to 1 ms, even with 96% fewer disks.


This test clearly shows the significant improvement a few EFDs can bring to enterprise applications. Imagine the amount of power and tile-space saved in a data center by moving to EFDs on top of the performance gains! The overall improvement in this case is well beyond the 30x promised by EFDs considering the 25x difference in spindle count and more than 3x improvement in the overall transaction throughput.

Table 7. EFDs over FC drives with Use Case #1

| Metric         | 150 FC drives | 6 EFDs |
|----------------|---------------|--------|
| Physical reads | 17,459        | 53,055 |
| Read latency   | 4 ms          | 1 ms   |

The explosion of managed business information in the Internet age has created new demand for powerful search engines that can effectively process vast volumes of accumulated business data. EFDs are a good fit for this market segment because of these characteristics.

Use Case #2: Oracle OLTP workload comparison

EFDs are extremely good for highly read-intensive applications and are impressive even on uncached RAID 5 for applications with a read/write mix that favors writes. In either case they provide good total cost of ownership (TCO) because of the other benefits they bring, such as a significant reduction in energy costs and significant improvements in latency, along with the performance benefits. A typical OLTP Oracle application will have some write activity because of DML operations like updates and inserts.

To compare the performance of EFDs with hard disk drives in OLTP environments, identical Oracle Database 11g instances on ASM were deployed onto two separate RAID 5 (5+1) groups, one created from 6 x 73 GB EFDs and the other from 6 x 300 GB 15k rpm Fibre Channel drives. An OLTP workload with a 60/40 read/write ratio was used to generate identical loads against both databases, with a 64-bit Oracle Enterprise Linux server driving the activity. The storage controller read cache was turned off to keep the test realistic given the smaller database size used in this experiment. It is important to note that the read/write ratio used in this test reflects a worst-case OLTP scenario, as most OLTP databases have a read-to-write ratio of 90:10.

Figure 3. Transactions per minute comparison of EFD vs. HDD (Flash Tx/min vs. HDD Tx/min over the duration of the test)


As Figure 3 shows, in the configuration used for the test, the EFDs sustained an average of 19,000 transactions per minute (TPM) and the Fibre Channel drives sustained only around 2,400 TPM, roughly an eight-fold improvement in sustained TPM over Fibre Channel drives. In addition it is interesting to note that the response time on the EFDs was also one-seventh of that observed on Fibre Channel drives.

This use case points to an important fact: changing only the disks in the configuration can result in a significant improvement in IOPS and response time. However, as with any tuning effort, it is extremely important to understand the real issue with the application. Improving just the I/O by a given factor may not always result in a multi-fold improvement in overall system response. Oftentimes, removing the storage bottleneck simply uncovers another bottleneck somewhere else. The overall improvement in the application will be related to how badly it was impacted by the storage bottleneck that was removed by moving to EFDs.

Use Case #3: Short-stroked Oracle OLTP workload

While Use Case #2 shows the performance benefits of EFDs in an apples-to-apples scenario, oftentimes DBAs and storage architects will not replace Fibre Channel drives with an equal number of EFDs. Moreover, Use Case #2 was done under ideal conditions where the entire storage array was running just one workload and non-standard cache settings were used. It is very common in production environments to share the storage array among several different applications. To understand the performance of EFDs in a real-world environment, a background load was added to the tests, which drove the storage processors constantly to 40% to 45% utilization and kept the caches saturated. An OLTP workload with a 60/40 read/write mix was then run on this system under load against two configurations: one with 75 heavily short-stroked 300 GB Fibre Channel drives, and one with 6 x 73 GB EFDs. The following cache settings were used, which are the default settings.

Table 8. Cache settings (default)

|        | SP read cache | SP write cache | LUN read cache | LUN write cache |
|--------|---------------|----------------|----------------|-----------------|
| 75 FC  | ON            | ON             | ON             | ON              |
| 6 EFD  | ON            | ON             | OFF            | OFF             |

Figure 4. Relative transactions per minute comparison of 6 EFDs vs. 75 FC drives (75 FC = 1.00, 6 EFD = 0.65)


The per-disk efficiency of EFD can be calculated with the following formula, since the spindle count changes between runs:

Per-disk efficiency = Disk reduction factor × Performance improvement factor

Per-disk efficiency = (75 / 6) × 0.65 ≈ 8 times

Figure 4 shows that, even under these extreme conditions, EFDs delivered an overall per-disk improvement of roughly 800%, which is extremely good given the extra savings in data center energy and tile space that these EFDs deliver. A deeper analysis of the performance data from this run reveals that the uncached EFD LUNs were causing unnecessary Oracle “log file sync” wait events, resulting in relatively fewer transactions and underutilization of the EFDs. Adhering to Oracle’s recommendations regarding the placement of redo log files, they were moved back to a separate set of Fibre Channel spindles in the next run.

Figure 5 shows that with redo logs on their own separate Fibre Channel drives, EFDs delivered an impressive overall improvement of 1,200%. This test also confirms Oracle’s recommendation about moving redo log files to EFDs (covered in the next section): it is better to leave the Oracle redo log files on cache-backed Fibre Channel drives rather than move them to EFDs.

Per-disk efficiency = (75 / 6) × 0.98 ≈ 12 times
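The same calculation can be expressed as a small helper; this is a minimal sketch that reproduces the two figures above.

```python
def per_disk_efficiency(hdd_count, efd_count, relative_performance):
    """Per-disk efficiency = disk reduction factor * performance improvement factor."""
    return (hdd_count / efd_count) * relative_performance

print(per_disk_efficiency(75, 6, 0.65))  # ~8.1x: uncached EFD LUNs with redo logs on EFD
print(per_disk_efficiency(75, 6, 0.98))  # ~12.3x: redo logs moved back to FC drives
```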

Figure 5. Relative transactions per minute comparison of 6 EFDs vs. 75 FC drives, with redo logs on separate FC drives (75 FC = 1.00, 6 EFD = 0.98)


Use Case #4: Oracle DSS workload on EFDs

EFDs also provide better performance for sequential workloads than rotating Fibre Channel drives. The improvement for sequential workloads may not be as pronounced as for small random workloads; however, it is still significantly higher than when rotating Fibre Channel drives are used. It is hard to find a 100 percent sequential Oracle Database workload in the real world, as most of the time the workload becomes large random I/O in nature because of the following factors:

• Parallel queries – The degree of parallelism on a table, which defaults to twice the number of cores in the database server, controls how many concurrent processes are scheduled against it.

• User concurrency – With the consolidation of several small database instances into a single large database instance, it is highly likely that more than one query is running against a given dataset at any given time.

Rotating Fibre Channel drives perform well when a single stream of queries runs against them, whereas adding multiple concurrent streams causes additional seek and rotational latencies, reducing the overall per-disk bandwidth that the drive can support. In contrast, EFDs, because of the absence of any moving parts, provide sustained bandwidth irrespective of the number of concurrent queries running against them. To compare the performance of EFDs with rotating Fibre Channel drives in DSS environments, identical Oracle Database 11g instances on ASM were deployed onto two separate RAID 5 (4+1) configurations, one created from 5 x 400 GB EFDs and the other from 40 x 300 GB 15k rpm rotating Fibre Channel drives. Several query streams were run against these databases. Figure 6 shows that a single RAID 5 group of 5 x 400 GB EFDs finished the queries faster than the rotating Fibre Channel drives in each case.

Figure 6. Relative total query time comparison of 5 EFDs vs. 40 FC drives

Most overnight and quarter-end reporting activity tends to be DSS in nature. DBAs and storage administrators constantly struggle to keep up with narrow reporting windows. Even highly tuned systems that work fine during regular operating hours may not, at times, be able to meet the batch window requirements. EFDs can easily shrink the batch windows, depending on how much the system is impacted by a slower I/O subsystem during that period. Sometimes, even reducing the batch window in half by partially moving databases to EFDs can prove to be a better investment than costly system upgrades and time-consuming application tuning engagements.


Use Case #5: Moving partial database to EFDs

It is always a good idea to move the entire database to EFDs if possible; however, sometimes it may not be economically viable to move the entire database because of constraints such as its size. This section discusses methods to identify the parts of the database that will yield the most benefit when moved to EFDs. The following Oracle guidelines can be used to help with the data movement decisions.

Table 9. Oracle Flash drive recommendations (from Oracle OpenWorld 2008 presentation)

| EFD-friendly DB workloads | Not as cost-effective on EFD |
|---------------------------|------------------------------|
| Random reads: B-Tree leaf access; ROWID look-up into table; access to out-of-line LOB; access to overflowed row; index scan over unclustered table. Compression increases I/O intensity (IOPS/GB). | Serial reads |
| Random writes: row update by PK; index maintenance; reduced checkpoint interval. | Redo log files: sequentially read and written, and commit latency is already handled by the cache in the storage controller |
| TEMP (sort areas and intermediate tables): sequentially read and written, but I/O is done in 1 MB units, not enough to amortize seeks; lower latency: get in, get out. | Undo tablespace: sequentially written, randomly read by Flashback, but the reads are for recently written data that is likely to still be in the buffer cache |
|                           | Large table scans (if single stream) |

Redo Logs on EFDs? (or not)

It is a common misconception that Oracle online redo logs will benefit from being moved to EFDs; the experimental data indicates the opposite. Testing has shown that moving redo logs onto EFDs results in only a small improvement. It is better to leave them on write-cache-backed Fibre Channel drives rather than moving them onto EFDs, and to use the EFDs for other I/O-intensive parts of the database such as indexes or data.

Oracle TEMP tablespace on EFDs

Oracle uses this space mainly for data aggregations and sorting. When the database engine cannot fit a sort in memory, it spills to disk to store intermediate results. Oracle typically issues large sequential I/Os against these tablespaces in the context of a single user; when multiple users perform concurrent sorts, the I/O becomes largely random in nature. Even though EFDs do not provide as much benefit for large random I/O as they do for small random operations, they are still far ahead of what regular rotating Fibre Channel drives can deliver. Depending on the availability of space on EFDs, Oracle applications will benefit from moving the temporary tablespaces to them. Temporary tablespace files should only be moved to EFDs after all the other I/O-intensive parts have been moved.


Moving high Object Intensity objects to EFDs

Oracle defines a parameter called “Object Intensity” (OI) to help identify the right database objects to move to EFDs. These objects can be Oracle tablespaces, data files, indexes, and so on. This parameter simply defines the relative IOPS received by a given object compared to its size. Moving high-OI objects to EFDs makes sense because they are accessed frequently, so the move significantly improves latency.

Object Intensity (OI) = Object-IOPS / Object-GB
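A minimal sketch of this calculation is shown below; the object names, I/O rates, and sizes are hypothetical placeholders rather than measurements from this paper.

```python
# Object Intensity (OI) = object IOPS / object size in GB, as defined above.
# The objects listed here are hypothetical placeholders.

objects = [
    # (name, avg_iops, size_gb)
    ("ORDERS_PK_IDX",  520,   0.5),
    ("ORDERS",        4500, 110.0),
    ("CUSTOMERS",      300,  48.0),
    ("AUDIT_HISTORY",   40, 200.0),
]

def object_intensity(avg_iops, size_gb):
    return avg_iops / size_gb

# Rank candidates: the highest-OI objects give the largest latency payoff
# per gigabyte of EFD capacity consumed.
for name, iops, size_gb in sorted(objects,
                                  key=lambda o: object_intensity(o[1], o[2]),
                                  reverse=True):
    print(f"{name:<14} OI = {object_intensity(iops, size_gb):8.2f} IOPS/GB")
```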

To study the performance impact of moving data containers with high Object Intensity, a bigger 1 TB database was created on 45 x 15k rpm Fibre Channel spindles. The Object Intensity analysis was done after the baseline run to identify the potential migration candidates for EFDs. An initial observation of the Tablespace I/O stats table reveals that TABLE1, which occupies 30% of the database size and receives about 70% of the I/O, should receive the maximum benefit from moving to EFDs.

Table 10. Tablespace I/O stats

| Tablespace | Reads     | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes    | Av Writes/s | Buffer Waits | Av Buf Wt(ms) |
|------------|-----------|------------|-----------|------------|-----------|-------------|--------------|---------------|
| TABLE1     | 4,464,451 | 4,730      | 19.11     | 1.00       | 1,440,601 | 1,526       | 347          | 47.20         |
| TABLE2     | 454,647   | 482        | 11.05     | 1.03       | 288,654   | 306         | 142          | 10.35         |
| TABLE3     | 425,990   | 451        | 17.80     | 1.00       | 186,795   | 198         | 48           | 2.71          |
| TABLE4     | 265,040   | 281        | 10.30     | 1.09       | 234,963   | 249         | 14           | 10.00         |
| INDEX1     | 188,572   | 200        | 15.07     | 1.03       | 168,047   | 178         | 3            | 3.33          |
| INDEX2     | 222,456   | 236        | 10.85     | 1.00       | 128,160   | 136         | 1            | 20.00         |

The Object Intensity for all these objects is shown in Table 11. A close look at the Object Intensity table reveals that TABLE4 receives a very large number of IOPS per GB of size compared to TABLE1. By moving these high Object Intensity objects to EFDs, a secondary cache equivalent is created for them, thereby speeding up their access. The relative benefit realized by moving the top three items in this table should be greater than for any other data container, considering the size of the objects: moving bigger objects to EFDs would require more drives, resulting in a larger investment in EFDs. The Object Intensity approach can significantly reduce the amount of data to be moved to EFDs while providing the maximum performance benefit, thereby reducing the overall EFD investment. It is important to note that the top three entries in Table 11 do not even occupy 2% of the database.

Table 11. OI ratios

| Tablespace | Av Reads/s | Av Writes/s | Total I/O | Object size (GB) | OI ratio |
|------------|------------|-------------|-----------|------------------|----------|
| TABLE4     | 281        | 249         | 530       | 0.343            | 819.24   |
| INDEX2     | 236        | 136         | 372       | 1.75             | 134.86   |
| INDEX1     | 200        | 178         | 378       | 3.35             | 59.70    |
| TABLE1     | 4,730      | 1,526       | 6,256     | 109              | 43.39    |
| TABLE5     | 158        | 136         | 294       | 4.76             | 33.19    |
| TABLE2     | 482        | 306         | 788       | 52               | 9.27     |

In an ideal tiered Oracle deployment, the objects with very high Object Intensity should be placed on EFDs, those with moderate Object Intensity should be placed on Fibre Channel drives, and finally objects with the least Object Intensity value should be placed on SATA drives.


Figure 7 shows the performance improvements that partial database movements to EFDs can bring. Various parts of a 1 TB OLTP database deployed on 45 x 15k rpm Fibre Channel spindles were moved individually to ASM disk groups created on 6 x 73 GB EFDs to measure the benefit of moving each particular part of the database to EFDs.

Figure 7. Performance benefits from moving a partial database (relative transactions per minute: all on FC = 1.00; logs on EFD = 1.01, minimal improvement; high OI objects on EFD = 1.18, with just 2% of the data moved; top tablespace on EFD = 2.13, with the 30% of data carrying 70% of the I/O moved. OI = Object Intensity, TS = tablespace)

The data in Figure 7 confirms Oracle’s recommendations about redo log files. The overall gain from moving the logs onto EFDs was only 1%. The logs can safely be left on their own set of Fibre Channel drives while the EFDs are used for other latency-sensitive parts of the database to maximize the benefit. Moving high Object Intensity objects to EFDs always provides a cost-effective performance benefit: in this test, an 18% gain was observed when only 2% of the data (the top three objects with high Object Intensity) was moved to EFDs. Whenever the DBA or storage administrator is faced with a choice of moving only certain objects of the database because of space constraints, they should resort to the Object Intensity approach for maximum benefit. Moving the top tablespace, which received 70% of the I/O, provided over a 2.13 times improvement. It is important to note that a total of 30% of the data had to be moved to get that 2.13 times improvement, which is less cost-effective than moving the high-intensity objects consuming just 2% of the database size.


Database tuning is an iterative process. Tuning just the I/O part of the problem may not always deliver the expected performance benefits, not because of the performance capabilities of EFDs but because of various other bottlenecks. The Oracle Database can be bottlenecked on various resources such as CPU, memory, network, and I/O, and sometimes even by inefficient SQL and poor application logic. It is very important to understand the real issue before deploying EFDs. Usually a database transaction is a very complex operation involving:

• Several I/O operations of varied size and type
• Several CPU-intensive operations processing the data obtained from the storage system
• Inherent serialization: databases have to commit change vectors to transaction logs before proceeding, and a data block read cannot happen until the index block is fetched
• The fact that a transaction cannot be faster than its slowest sub-operation

Figure 8 explains why applications sometimes may not see relatively higher performance gains even after moving the entire database to EFDs. Consider a case where a transaction took a total of 35 ms to finish when the database was deployed on Fibre Channel drives. Assume the transaction comprised three I/O operations targeting three different data containers, consuming 15 ms, 6 ms, and 9 ms, respectively, along with some CPU-intensive operations to process the data. Moving the database to EFDs optimized only the I/O part of the transaction; nothing changed for the CPU-intensive operations. Hence, the application could only realize an overall gain of 5x (7 ms instead of 35 ms) even though the I/O operations were completing in 1/15th the time (2 ms instead of 30 ms).

Figure 8. Transaction response time improvements from moving the entire database (before: total = 35 ms, of which storage = 30 ms; after: total = 7 ms, of which storage = 2 ms; the application got a 5x improvement (35/7) even though storage got 15x (30/2))
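The reasoning behind Figure 8 can be captured in a short Amdahl-style model: only the storage portion of the transaction is accelerated, so the overall gain is bounded by the time spent outside storage. A minimal sketch using the 35 ms example above:

```python
# Only the storage portion of a transaction is accelerated by EFDs;
# CPU and other processing time is unchanged.

def transaction_speedup(total_ms, storage_ms, storage_speedup):
    other_ms = total_ms - storage_ms
    new_total_ms = other_ms + storage_ms / storage_speedup
    return total_ms / new_total_ms

# 35 ms transaction with 30 ms spent on storage; storage is 15x faster on EFDs:
print(transaction_speedup(35.0, 30.0, 15.0))  # 5.0 -> the overall gain is 5x, not 15x
```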

In the case of a partial database migration, depending on how much data is moved to EFDs, only the I/O operations targeting the EFDs complete faster; transactions still have to wait for the other operations on the slower Fibre Channel drives to finish. This may result in a smaller improvement in the overall transaction response time. Figure 9 explains this scenario, where the I/O operation against the data left on Fibre Channel drives still needs 6 ms to finish irrespective of the huge improvement in the EFD-based I/Os, resulting in smaller gains in overall transaction response time.


Figure 9. Transaction response time improvements from moving a partial database (before: total = 35 ms, of which storage = 30 ms; after: total = 12.5 ms, of which storage = 7.5 ms; the application got only about a 2.4x overall improvement, even though the I/O moved to Flash finished in roughly 1/16th the time (24/1.5 = 16), because part of the data was left on Fibre Channel drives)

Impact of LUN cache settings

EMC CLARiiON storage arrays support enabling and disabling both read and write caches at LUN granularity. The default recommendation is to turn off both read and write caches on all the LUNs that reside on EFDs, for the following two reasons:

• EFDs are extremely fast, so when the read cache is enabled for the LUNs residing on them, the read cache lookup for each read request adds significantly higher overhead than it does for FC drives, in an application profile that is not expected to get many read cache hits at any rate. It is therefore faster to read the block directly from the EFD.

• In a real-world scenario, where the CX4 is shared by several applications and, especially, deployed with slower SATA drives, the write cache may become fully saturated, placing the EFDs in a forced flush situation, which adds latency. In these situations, it is better to write the block directly to the EFDs than to the write cache of the storage system.

Even though the standard recommendation is not to enable caches for the LUNs residing on EFDs, DBAs and storage administrators can still choose to enable write cache for those LUNs if they are aware of the implications. This may help them get the maximum benefit from EFDs in some dedicated environments where the storage system is not shared across many applications. Careful analysis and benchmarking are highly recommended before deviating from the default settings. It was also noticed in our testing that the 400 GB EFDs benefit more from enabling write cache for the OLTP workload profiles used here, due to their optimization for capacity versus performance. Figure 10 shows that, when the entire database including the redo logs was deployed on EFDs with write cache enabled, the system delivered an improvement of almost 1,700 percent with both the 400 GB and the 73 GB EFDs, with write cache having a more positive performance impact on the 400 GB EFDs. The huge improvement obtained by just enabling write cache for the EFDs cannot be guaranteed for every application; it is heavily dependent on the nature of the application and its data access patterns. The benchmark used in this study saw significant improvements not just because of writes to cache but also because of write cache re-hits and reads from the write cache, which may not occur with every real-world workload.


Table 12. Cache settings (for this experiment)

|        | SP read cache | SP write cache | LUN read cache | LUN write cache |
|--------|---------------|----------------|----------------|-----------------|
| 75 FC  | ON            | ON             | ON             | ON              |
| 6 EFD  | ON            | ON             | OFF            | ON              |

Figure 10. Relative transactions per minute comparison of 6 EFDs vs. 75 FC drives

Per-disk efficiency with cache on = (75 / 6) × 1.3 ≈ 17 times

EFDs and an Information Lifecycle Management (ILM) strategy

Most enterprise application data is temporal in nature: the most recent data gets accessed more often than older data. It is very common to move the frequently accessed data to faster storage and, similarly, the less frequently accessed data to cheaper storage such as SATA. This type of data classification is the beginning of an Information Lifecycle Management (ILM) strategy. Data classification is important in order to provide applications with the most cost-effective storage tier to support their workload needs. It can be done by placing each application on the storage tier that fits it best, but it can also be achieved by using multiple storage tiers within the same application.

A common way for deploying a single database over multiple tiers is by file type. For example archive logs and backup images can use SATA drives while redo logs and data files can use Fibre Channel HDD. As EFDs add another high performing tier called “Tier 0,” it is now possible to place latency-critical data files, indices, or temp files on this tier as discussed earlier.

However, when the database is large, in order to achieve optimum utilization of drive resources, it might be better to place only the data that is accessed most frequently and/or has the most demanding latency requirements on EFDs. Many databases can achieve this by using table partitioning.

Using analysis techniques such as those presented in this paper, the customer can determine the most active tablespaces and files, which are the ones EFDs are uniquely able to help. Placing the LUNs for these tablespaces on EFDs will provide significant benefit without requiring the entire database to be on EFDs.


Table partitioning also creates subsets of the table, usually by date range, which can be placed in different data files, each belonging to a specific storage tier. Table partitioning is commonly used in data warehouses, where it enhances index and data scans. However, with an ILM strategy in mind, customers should consider the advantages of using table partitioning for OLTP applications. While partitioning allows the distribution of the data over multiple storage tiers, including EFDs, it does not address the data movement between tiers. Data migration between storage tiers is outside the scope of this paper; solutions in this space are available using CLARiiON virtual LUN technology, the Oracle online redefinition feature, or host volume management features. CLARiiON virtual LUN technology can be used to perform a seamless migration of database parts to EFDs with zero interruption to the host and the applications running on it. An example of using table partitioning is shown in Figure 11 and Figure 12.

Figure 11. An Oracle partitioned table

Figure 12. ILM strategy example


Conclusion

Incorporation of EFDs into CLARiiON CX4 provides a new Tier 0 storage layer that is capable of delivering very high I/O performance at a very low latency, which can dramatically improve OLTP throughput and maintain very low response times. With comprehensive qualification and testing to ensure reliability and seamless interoperability, Tier 0 is supported by all the key CLARiiON software applications such as data replication and remote protection.

Magnetic disk drive technology no longer defines the performance boundaries for mission-critical storage environments. The costly approach of spreading workloads over dozens or hundreds of underutilized disk drives is no longer necessary.

CLARiiON now combines the performance and power efficiency of EFD technology with traditional disk drive technology in a single array managed with a single set of software tools, to deliver advanced functionality, ultra-performance, and expanded storage tiering options.

References

• Specification Sheet: EMC CLARiiON CX4 Model 960 Networked Storage System
• EMC CLARiiON CX4 Series Ordering Information and Configuration Guidelines (restricted audiences)
• Introduction to the EMC CLARiiON CX4 Series Featuring UltraFlex Technology white paper
