Insights into EMC Symmetrix V-Max

This paper provides insights that help identify various issues, limitations, and restrictions that apply to EMC Symmetrix V-Max and V-Max SE disk systems. This kind of important information is generally not found in product overview presentations or marketing brochures.


Notices

© Copyright 2009 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.

Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. The information provided in this document is distributed “AS IS” without any warranty, either express or implied. IBM EXPRESSLY DISCLAIMS any warranties of merchantability, fitness for a particular purpose, or non-infringement. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products.

IBM makes no representations or warranties, expressed or implied, regarding non-IBM products and services, including those designated as System Storage Proven.

Some information in this presentation addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM’s current investment and development activities as a good faith effort to help with our customers’ future planning.

Trademarks

DS6000, DS8000, ESCON, FICON, FlashCopy, GDPS, Geographically Dispersed Parallel Sysplex, HyperSwap, IBM, System Storage, System i, System p, System z, TotalStorage, and z/OS are trademarks of International Business Machines Corporation. Other company, product, and service names may be trademarks or service marks of others.

Published November 10, 2009 (2009-11-10). Comments on this document can be emailed to David Sacks, Senior Storage Consultant, [email protected].


Table of Contents

How to Use This Paper
IBM DS8000 – EMC Symmetrix V-Max Feature Name Cross-reference
Identifying Current Issues
  Some Sources of Information
Scalability / Upgradability Questions
  Limited Combinations of Host Ports, Cache and Disks
  Model Upgrade Limitations
  Cache Effective Size Limitation
  Cache-to-Disk Capacity Ratio Issue
  V-Max SE Incompatibilities with V-Max
  Specification Sheet Does Not Account for Capacity Reserved by the System
  Floor Space Inefficiency
Availability Questions
  Lack of Comprehensive Fault Tolerance
  Disruptive Changes – General Considerations
  Disruptive Hardware Changes
  Disruptive Logical Configuration Changes
  Disruptive Microcode Changes
  Write Data in Cache May Not Always be Protected by a Redundant Copy
  Track Tables Issues
  Long Battery Recharge Time
  Customer Control over Power On/Off
  Single Failure can Reduce Front-End Ports, Back-End Ports, and Cache Capacity at the Same Time
Performance Questions
  Performance Verification Barriers
  Performance vs. Capacity Scalability Issue
  Internal Bandwidth
  4Gb/s Device Path Speed Considerations
  Issues if More than Four Engines are Configured
  Cache Performance Issues
  Data Striping Inefficiencies
  Symmetrix Optimizer Issues
  Component Failure Performance Impact
  Preferred Host-to-Engine Path Question
  Unshared Processor Inefficiencies
Management Questions
  BIN File Issues
  ControlCenter Issues
  Migration/Coexistence Issues
  Standard Volume Management Complexities
  Metavolume Management Complexities
  Copy Feature Source-Target Restrictions
  Dynamic Cache Partitioning Issues
  Mirror Position Constraints
  Invista Issues
  PowerPath Host-Based Encryption Issues
  Flash Drive Restrictions
  Logical Configuration Changes Requiring EMC Assistance
TimeFinder (Internal Volume Replication) Questions
  TimeFinder Management Complexity
  Copy on First Write Performance Issues
SRDF (Remote Copy) Questions
  Maximum SRDF/Synchronous Distance Issues
  SRDF/S Link Protocol Can Elongate Response Time
  SRDF I/O Parallelism Limitation
  SRDF Volume Consistency Group Requirements
  SRDF/A Remote Site Data Loss
  SRDF/A Cache Usage
  SRDF/A Consistency Groups - Host Software Overhead
  SRDF/A Remote Site Data Integrity Issues
  SRDF/A and TimeFinder/Snap Restriction
  SRDF/A Write Folding Inapplicability
  SRDF/A Link Bandwidth Issues
  SRDF/A Link Outage Management Issues
  SRDF/A DSE Sharing Issue
  Concurrent SRDF/Star and Concurrent SRDF Performance Impact
  Multi-hop and Cascaded SRDF Issues
  SRDF/EDP intermediate system is not “diskless”
Virtual (Thin) Provisioning Questions
  Feature Effectiveness Issues
  Functional Limitations
  Performance Issues
  Pooling Complexity
  TCO Considerations
Total Cost of Ownership Questions
  Hardware and Software Warranty Durations
  Hardware Upgrade Warranty Duration Limitation
  Warranty versus Prepaid Maintenance
  Hardware Upgrade Charges
  Rules Requiring you to Buy More Hardware than Needed
  Unusable Disk Capacity can Raise Costs
  Disk Mirroring (RAID-1/10) can Raise Hardware and Software Costs
  RAID-5 3D+P (3+1) Costs
  BIN File Potential Charges
  Predicting Software Costs
  PowerPath Pricing Concerns
  System Resale Restrictions
  Fibre Channel / SRDF Cost Issue
  Fibre Channel FCP - FICON Conversion
  GbE and iSCSI Ports – Total Costs
  Cache Cost Issues
  Enterprise Storage Platform Charge
IBM System i Support Questions
  System i Commitment Issues
  i5/OS V6R1 Functions – Support Issues
  Cache Restrictions
  SAN Port Sharing Restrictions
  Disk Capacity Support Limitations
  Copy Services Automation
IBM System p Support Questions
  Remote Copy Integration with HACMP Issue
  System p / AIX Support Limitations
IBM System z Support Questions
  History of Inconsistent Mainframe Support
  System z Disk Storage Functions Not Supported by Symmetrix
  General purpose functions not supported for System z
  Compatibility – General Considerations
  FlashCopy, PPRC, XRC Compatibility Questions
  Mainframe Function Support “Contact” issue
  GDPS Support Questions
  AutoSwap Issues
  GDDR Issues
  Mainframe Snap Usability Issue
Miscellaneous Product and Vendor Questions
  EMC Criticizes its own V-Max Announcement
  Vendor Lock-in / Proprietary Orientation
  Limited Solution Perspective
  Lack of Total Solution Offering
  Value for Money?
  Inconsistent Messages


How to Use This Paper

This paper is intended to provide disk system customers with insights into EMC Symmetrix V-Max - insights that are generally not found in marketing brochures, standard product presentations, and public specification sheets. Closer examination of V-Max reveals numerous issues that customers would likely prefer knowing about before they make a disk system decision. The term issues refers to topics such as product limitations, design attributes that can adversely impact system operations, restrictions, management challenges, and so on.

Issues are presented in the form of questions customers could ask their Symmetrix vendor. Many questions are followed by background information that discusses considerations behind the questions and, in many cases, identifies sources that help provide answers.

Information in this paper is largely based on best-effort interpretation of related product documentation. As IT users likely know from experience, technical documentation is not always as clear as users desire. In addition, the disk system industry is very active, and changes to products and documentation occur frequently. Further, this paper does not attempt to identify all issues customers may experience with Symmetrix. For these reasons, readers are encouraged to use this paper as a starting point - not the last word - for identifying points of interest warranting further investigation.


IBM DS8000 – EMC Symmetrix V-Max Feature Name Cross-reference

This feature name cross-reference table is based on generic feature descriptions. Details vary between the different vendors’ features. Only selected features are shown. This table is provided as a convenience for readers unfamiliar with either or both DS8000 and Symmetrix.

Feature description | DS8000 names | Symmetrix names
Ability to support open system and IBM System z servers (mainframes) on the same disk system | Standard capability, not named | Enterprise Storage Platform (ESP)
Internal volume copy – full size copies | FlashCopy | TimeFinder/Clone, Compatible Native Flash for Mainframe
Internal volume copy – space-efficient copies | FlashCopy SE | TimeFinder/Snap
Remote mirroring – synchronous | Metro Mirror | SRDF/S, Compatible Peer
Remote mirroring – asynchronous | Global Mirror, z/OS Global Mirror | SRDF/A, Compatible Extended
Remote mirroring – 3 site | Metro/Global Copy, Metro/Global Mirror, z/OS Metro/Global Mirror | Multi-hop SRDF, cascaded SRDF, SRDF/EDP, SRDF/Star
Host-based multi-path I/O drivers | Subsystem Device Driver (SDD) | PowerPath


Identifying Current Issues

Some Sources of Information

In order to identify many current restrictions and limitations that apply to Symmetrix, is EMC willing to show me the latest Symmetrix Enginuity Release Notes and Solutions Enabler Release Notes documents?

Background. EMC often announces or documents functions with the implication that they are fully operational. However, there are often numerous restrictions, limitations, and facilities that are not supported. Some of these issues are documented in the usual system and feature manuals (not in marketing literature), though customers often don’t read such material until after the sale. Additional – and often very significant – issues are documented in EMC Release Notes documents available to EMC customers. Customers who don’t read these documents risk making assumptions about product capabilities that may not match product realities.


Scalability / Upgradability Questions

Limited Combinations of Host Ports, Cache and Disks

In one Symmetrix system, can hardware resources such as the size of cache, the number of disk drives, and the number of host ports be flexibly configured independently of each other to meet my needs? If only some combinations of these resources are supported, won’t that potentially limit my options and increase my costs? For example, are there cases where, if I want to add more host connection ports, I’m required to also buy other hardware components I do not need?

Background. In Symmetrix V-Max, only some combinations of numbers of host ports, cache sizes, and numbers of disks are allowed. These resources cannot be configured or scaled independently of each other. This lack of hardware flexibility can result in unanticipated costs as customers grow and need more of one hardware resource but are forced to also buy more of other hardware resources they don’t need. For example, each V-Max engine supports a fixed maximum amount of cache, so adding cache beyond that requires buying additional engines, which include other required components such as host ports, disk paths, processors, and a minimum number of disks that must be paid for even if they are not needed. Similarly, if a customer needs more host ports than the maximum number supported by the installed V-Max engines, the customer must buy entire additional engines, not just additional host ports. Customers might want to refer to EMC’s Symmetrix V-Max Product Guide, which has more information on this topic.

Model Upgrade Limitations

What is the frequency at which EMC has been announcing new generations of Symmetrix models? Is an investment in previous models preserved by allowing them to be upgraded in-place to the replacement model? What are the in-place upgrade options if I outgrow the model I install?

Background. Since 2003 EMC has announced five generations of Symmetrix: the original DMX models (in 2003), DMX-2 models (in 2004), DMX-3 (standard model in 2005 and the DMX-3 950 model in 2006), DMX-4/DMX-4 950 (in 2007), and V-Max (in 2009). Model upgrades between generations are generally not supported. (Examples: a DMX-3 cannot be upgraded to a DMX-4; a DMX-4 cannot be upgraded to a V-Max.) Model upgrades within generations are also generally not supported. (Example: a V-Max SE cannot be upgraded to a standard V-Max.) If a customer outgrows an installed Symmetrix V-Max SE, they may find themselves limited to either buying another, additional system, or buying the larger V-Max and moving their data to it as a “forklift” replacement of the V-Max SE. To work around EMC’s lack of support for in-place upgrades, a customer may feel compelled to protect against outgrowing a system earlier than desired by installing a larger, more costly Symmetrix model than actually needed.

Cache Effective Size Limitation

In V-Max, why is only half the physical cache capacity what EMC calls “usable” or “effective”? Doesn’t this mean I need to pay for twice as much cache in Symmetrix compared to what I actually need? In particular, if I’m moving to V-Max from a previous Symmetrix model that did not have this cache design inefficiency, do I now need twice the amount of cache just to maintain the same cache effectiveness I already achieved in the previous system with much less cache? How will the need for twice the cache impact my costs for future cache size upgrades? How will it impact post-warranty maintenance charges? Why does EMC acknowledge this cache limitation in its public DMX-4 specification sheets but not mention it in the V-Max specification sheet, even though it also applies to V-Max?


Background. While V-Max was announced to support up to 1TB of physical cache capacity, only half of that is “usable” (EMC’s own term). For example, in cases where other vendors can propose 128GB of cache, EMC would need to propose 256GB of cache just to be approximately even. (This does not count other V-Max cache efficiency issues beyond the scope of this question.)

It wasn’t until 2005 that EMC first introduced mirrored cache protection for writes in Symmetrix cache, finally matching the cache protection others in the industry had been delivering since the 1980s. The implementation, however, is surprisingly inefficient. Instead of mirroring writes only, V-Max, like the older DMX-3 and DMX-4, keeps two copies of both reads and writes in cache, so that a 64GB cache, for example, is essentially two 32GB caches that each contain the same data. From another perspective, 128GB of cache is needed to provide 64GB of “effective” or “usable” cache capacity; EMC has used both terms. This inefficient aspect of the V-Max/DMX-4/DMX-3 cache design represents additional and seemingly unnecessary cost to the customer. A higher number of parts also increases the potential for a part failure, an undesirable consequence especially considering the extra parts are not strictly necessary.

While EMC appears to have reduced if not eliminated most mention of this inefficiency in V-Max documentation shared with customers, when EMC was more forthcoming about this design issue it wrote “The Symmetrix DMX-4 is available with global memory capacity ranging from 16 GB to 512 GB (256 GB effective).” (Source: DMX-4 Product Guide, December 2008.) No comparable disclosure was identified in the Symmetrix V-Max Product Guide, April 2009.

There is no apparent performance benefit to this design. There is actually a performance downside, since generating a copy of read data in cache adds internal performance overhead (e.g., increased utilization of the internal paths in the system and extra activity to cache boards, which are limited in the number of concurrent operations they support). There is also no apparent availability benefit. While keeping a copy of writes – standard industry practice these days – helps protect against a cache failure causing loss of data, any read data in cache that is lost due to a cache failure is readily accessed from disk (and staged back into cache). Unless Symmetrix cache failures are expected to be unusually frequent, it is difficult to see how the cost and complexity of mirroring read data in cache is justified. Since no benefit of this additional cache has been identified, it appears to be required by design issues peculiar to Symmetrix. Why EMC has not addressed this ongoing cache inefficiency is not known. Other vendors’ disk systems do not have this design issue.
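To make the halving concrete, here is a minimal sketch of the effective-cache arithmetic as this paper describes it; the divide-by-two factor reflects the mirrored reads-and-writes design discussed above and is an illustration, not an EMC-published formula:

```python
# Minimal sketch: effective cache under the mirrored-cache design described
# above, where two copies of both reads and writes are kept in cache.
def effective_cache_gb(physical_gb: float) -> float:
    """Half of physical cache is 'usable' when all cache contents are mirrored."""
    return physical_gb / 2.0

assert effective_cache_gb(64) == 32    # a 64GB cache behaves like two 32GB copies
assert effective_cache_gb(128) == 64   # 128GB physical -> 64GB "effective"
print(effective_cache_gb(1024))        # 1TB physical -> 512GB effective
```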

Cache-to-Disk Capacity Ratio Issue

What is the ratio of maximum effective cache size to maximum physical disk capacity? How does that compare to other disk systems?

Background. High aggregate disk capacities supported by a disk system do not necessarily translate to the ability of that system to support high performance for those capacities if other system resources do not scale in proportion. Lack of that kind of resource scaling may make support for high aggregate disk capacities impractical for many users.

EMC may draw attention to the V-Max maximum cache size, currently 1TB. However, that size, as explained elsewhere and as EMC itself acknowledges, supports only about 512GB of effective cache (due to V-Max cache design inefficiencies). From a performance perspective, what matters is how much (effective) cache is supported relative to the disk capacity supported, not the absolute cache size in isolation. Symmetrix V-Max supports up to 2,400 disks, which would be 2.4PB using 1TB SATA disks. The (effective) cache-to-disk capacity ratio is therefore 0.5TB : 2.4PB, which is about 1:4800 (1GB of cache for every 4,800GB of disk capacity).

In contrast, other enterprise-class disk systems support a higher ratio of maximum cache size to maximum disk capacity. For example, the IBM DS8700 supports up to 384GB of cache for up to 1,024 disks, which is about 1PB of physical disk capacity using 1TB SATA disks. In this case, the cache:disk capacity ratio, at about 1:2600, is almost two times better than the V-Max ratio. (Note that this discussion considers only cache size and not cache management efficiency, an area where DS8000 excels.) While using 1TB disks makes the math easier to follow, the same principle applies for other disk capacities, including if usable capacities are considered rather than physical capacities (by adjusting for RAID, for example).
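The ratio arithmetic above can be checked with a short sketch; the figures are those quoted in the text, 1TB SATA disks are assumed, and the helper name is illustrative:

```python
# Sketch of the cache-to-disk capacity ratios discussed above.
def cache_to_disk_ratio(effective_cache_tb: float, disk_capacity_tb: float) -> float:
    """Return N for a 1:N ratio of effective cache to physical disk capacity."""
    return disk_capacity_tb / effective_cache_tb

vmax   = cache_to_disk_ratio(0.5, 2400)    # 2,400 x 1TB disks -> 1:4800
ds8700 = cache_to_disk_ratio(0.384, 1024)  # 1,024 x 1TB disks -> ~1:2667, "about 1:2600"
print(f"V-Max 1:{vmax:.0f}, DS8700 1:{ds8700:.0f}")
```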

V-Max SE Incompatibilities with V-Max

In what ways is the entry-level V-Max SE not compatible with the standard V-Max model?

Background. There are at least the following differences between the standard V-Max and the entry-level V-Max SE. Customers may want to ask EMC for a complete description of differences.

• Hardware scalability: As expected, V-Max SE supports fewer host ports, fewer drives, and smaller maximum cache capacity than the standard V-Max.

• Upgradeability: V-Max SE cannot be upgraded in-place to any configuration of a standard V-Max.

• Power: V-Max SE supports single phase power but limits maximum capacity in this case to 120 drives. The standard V-Max does not support single phase power.

• Maximum logical volumes: V-Max SE supports up to 42,000 logical volumes, while the standard V-Max model supports up to 64,000 logical volumes. This could be a problem if a customer tries to use a V-Max SE as an SRDF target system for a V-Max (or group of V-Max systems) that grows beyond 42,000 volumes.

• Drives per loop: V-Max SE, which has only a single engine, supports up to 8 redundant loops (i.e., 8 loop pairs) with up to 45 disks per loop pair, for a maximum of 8 x 45 = 360 disks. But the standard V-Max model supports only 30 disks per loop pair in a single-engine configuration (and also per engine for the first 4 engines; V-Max supports up to 45 disks per loop pair only if more than four engines are installed). The implication of the standard V-Max design is that 30 disks per loop pair performs significantly better than 45 disks per loop pair (a 50% increase). While it may seem obvious that fewer disks per loop can provide better performance, it is important to note that the 30-disk limit is mandatory for V-Max, and not merely for the first engine but for the first four engines. The implication seems to be that more than 30 disks per loop can degrade performance and should be avoided. Configuring a V-Max SE with the same maximum of 30 disks per loop pair allowed in V-Max would reduce maximum V-Max SE capacity by 33% to only 240 disks (see the sketch below). Awareness of this issue may influence V-Max SE users’ expectations for practical capacity scalability.
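A small sketch of the drives-per-loop arithmetic in the last bullet, using only figures from the text, for illustration:

```python
# V-Max SE maximum drive counts under the two disks-per-loop-pair limits
# discussed above. The SE has one engine with 8 redundant loop pairs.
LOOP_PAIRS = 8

max_at_45 = LOOP_PAIRS * 45  # 360 disks: the published V-Max SE maximum
max_at_30 = LOOP_PAIRS * 30  # 240 disks: if the standard V-Max 30-disk limit applied
reduction = 1 - max_at_30 / max_at_45
print(f"{max_at_45} -> {max_at_30} disks ({reduction:.0%} reduction)")  # 33% reduction
```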

Specification Sheet Does Not Account for Capacity Reserved by the System

Does the disk capacity documented in the V-Max specification sheet account for all capacity taken away for use by the system?

Background. Unless users are diligent in their planning and familiar with how Symmetrix reserves capacity from user-ordered disks, they may expect to have more usable capacity in V-Max than is the case. Having less usable capacity for data than expected based on the specification sheet implicitly raises costs and can result in running out of capacity earlier than anticipated. Compensating for the lost capacity can raise costs due to needing more disks, potentially more components to support those disks, increased charges for post-warranty hardware maintenance, and increased charges for system features priced based on capacity. The V-Max specification sheet (07/09 H6176.1) is at http://www.emc.com/collateral/hardware/specification-sheet/h6176-symmetrix-vmax-storage-system.pdf .


Consider the spec sheet data for 96 146GB disks, the minimum number of disks supported for a single-engine standard V-Max model. The specification sheet claims that in this case capacity for a configuration with RAID-5 3+1 arrays formatted for open systems is 9.47TB. This is apparently calculated as follows: formatting disks for open systems, as shown on the spec sheet, results in 143.5GB per disk; a minimum of 8 disks must be reserved as spare disks; and 25% of remaining capacity is used for RAID-5 3+1 parity data, so that 75% of remaining capacity is usable for customer data:

(96 disks – 8 spare disks) x 143.5GB/disk x 0.75 = 9.47TB

This matches the value in the specification sheet. The problem is this does not account for disk capacity required by the system and taken away from user-ordered capacity. The following two such requirements are known:

1. Symmetrix vault devices are required to hold data destaged from cache in cases such as loss of external power. 200GB (0.2TB) is required per pair of directors for vault space. Each V-Max engine has one pair of directors. (It could not be determined whether this capacity includes RAID protection overhead; only RAID-1 (mirroring) is supported for vault drives.)

2. An additional .064TB is required for the Symmetrix File System (one per system).

Let’s consider only the vault space, since the Symmetrix File System requirement of 0.064TB is relatively small. The lost capacity of 0.2TB per engine for vault devices is equivalent to losing 0.2TB / 0.1435TB = 1.39 146GB disks per engine. In an 8-engine V-Max system the vault device requirement is 8 x 0.2TB = 1.6TB, which is equivalent to losing 1.6TB / 0.1435TB = 11 disks’ worth of capacity. As noted above, vault devices can be RAID-protected only by RAID-1 (mirroring), and it is unclear whether the 0.2TB requirement per engine accounts for that RAID protection.
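The spec-sheet arithmetic and the vault adjustment can be reproduced with a short sketch. All constants come from the text; whether the vault figure already includes RAID-1 overhead is unclear (as noted above), so no further adjustment is made for it here, and the function names are illustrative:

```python
# Sketch reproducing the V-Max spec-sheet capacity calculation above,
# then subtracting the system-reserved capacity the sheet omits.
DISK_TB_OPEN = 0.1435        # 146GB disk formatted for open systems (143.5GB)
SPARES = 8                   # minimum number of reserved spare disks
RAID5_3P1_USABLE = 0.75      # RAID-5 3+1: 25% of capacity holds parity

VAULT_TB_PER_ENGINE = 0.2    # vault space per director pair (one pair per engine)
SYMM_FILE_SYSTEM_TB = 0.064  # Symmetrix File System, one per system

def spec_sheet_tb(disks: int) -> float:
    """Usable capacity as the specification sheet computes it."""
    return (disks - SPARES) * DISK_TB_OPEN * RAID5_3P1_USABLE

def adjusted_tb(disks: int, engines: int) -> float:
    """Spec-sheet capacity minus system-reserved vault and file-system space."""
    return spec_sheet_tb(disks) - engines * VAULT_TB_PER_ENGINE - SYMM_FILE_SYSTEM_TB

print(f"{spec_sheet_tb(96):.2f}TB")   # 9.47TB, matching the specification sheet
print(f"{adjusted_tb(96, 1):.2f}TB")  # ~9.21TB once reserved capacity is removed
```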

Floor Space Inefficiency

Can storage bays (i.e., frames holding drives) in a standard model V-Max be added and completely filled before adding more storage bays? If not, won’t that raise costs and unnecessarily add to floor space requirements?

Background. The rules for configuring V-Max systems can easily result in needing multiple storage bays, each of which is only partially full, requiring more bays – and thus more floor space – than should strictly be needed just to hold a given number of drives. The issue arises because the system is designed such that all disk capacity associated with any particular V-Max engine must a) reside in bays (frames) on only the left side or only the right side of the central system bay (frame), and b) reside in only the top half or only the bottom half of those bays. For example, storing the maximum number of drives supported by one engine requires two or three storage bays, each only half-populated with drives associated with that engine. (The requirement is for two or three bays because the maximum number of drives supported by a given engine varies based on the engine position in the system bay.)


Availability Questions

Lack of Comprehensive Fault Tolerance

Are any major Symmetrix components not protected by redundancy?

Background.

1) Service Processor. One Symmetrix component known to lack redundancy is the Symmetrix console that EMC calls the service processor. The service processor is required for various system operations. For example, it is required to run the Symmetrix Optimizer, which is used to help address Symmetrix performance problems. From a press article:

Novus Consulting Group, a third-party implementer of storage products, has many large customers using EMC storage systems and frequently runs into the same problems. The service processor on the Symm runs on a laptop attached to the array, and according to Matt Parkinson, solutions architect with Novus, like any laptop it will die from time to time. He said that while the [Symmetrix] box stays up and running, once the laptop dies, users can't make any configuration changes to the system. “Technically, it is still running, but if you can't make any changes on the business side, that's considered downtime.”

-- EMC users make Symm 7 wish list, by Jo Maitland, http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1090186,00.html

Other storage systems offer redundancy protection for their system console. EMC has not yet matched this level of protection, long supported by other vendors’ storage systems.

2) Potential lack of comprehensive RAID redundancy. Unlike other high-end disk systems, RAID protection is generally optional in Symmetrix. A given proposal may omit RAID protection for some volumes, or may attempt to substitute SRDF/S remote volumes in place of local RAID protection. Lack of local RAID protection can reduce the price, but leaves data at increased exposure to loss or to performance issues which have their own costs. Customers accustomed to standard RAID protection for all disks in other disk systems may not be aware of the potential lack of comprehensive RAID protection in Symmetrix. Or, customers may be tempted to disable RAID protection for some volumes in order to increase usable capacity.

Disruptive Changes – General Considerations

This topic is a preamble to subsequent topics on changes to V-Max hardware, microcode, and logical configurations that are disruptive to normal V-Max operations. In this discussion, a disruptive change to a storage system is defined as a properly performed intentional change to the system that prevents normal system operation for a period of time noticeable to users or applications. Examples of disruptions to normal operation include loss of data access, loss of function, and reduced performance. Disruptions may apply to a portion of a storage system or to an entire storage system.

It’s virtually certain that no vendor of high-end disk systems has a record of never requiring a disruptive change – though vendor marketing literature may give the impression that such disruptions never occur. It is important to realize that a vendor claim to “support” nondisruptive changes is not the same as claiming all changes are always nondisruptive without exception. That point applies to the industry in general, not just to EMC Symmetrix V-Max. Once a vendor claims or implies that all intentional changes to a disk system are nondisruptive, finding even one counterexample opens a door to further investigation.


Identifying changes that are disruptive to V-Max operations is not an easy task; no single list of all such changes has been found. The following topics therefore provide examples of changes that are disruptive to V-Max operation, with no attempt to provide a complete list. Some sources of information customers can refer to include EMC’s Symmetrix Release Notes and Solutions Enabler Release Notes, which often identify restrictions or exceptions to nondisruptive changes. (Release Notes are available to customers from EMC.) Prospective customers may want to review a year or two of recent Release Notes to help identify restrictions and exceptions that have been or are currently in place.

Disruptive Hardware Changes

EMC writes that “Symmetrix V-Max comes standard with non-disruptive everything. EMC’s philosophy has always been to anticipate whatever might interrupt information access and prevent it”. (Source: EMC Symmetrix V-Max Storage System Data Sheet H6193 03/09.) However, are there exception cases where hardware changes are disruptive?

Background. Consider the following exception cases. Customers may want to ask EMC to identify all exceptions. (Reference for many items in this section: EMC Symmetrix V-Max Series Product Guide, April 2009.)

1. “Physical memory [cache] is expanded by increasing memory on an existing V-Max Engine, and is nondisruptive in environments that include redundant paths to the host.” (No comparable restriction was identified for DMX-4 in the Symmetrix DMX-4 Product Guide.) If multiple paths are configured from a host, it could not be determined whether those multiple paths could be to the same engine or must be to different engines. (If the paths must be to different engines, that could be a problem for users with a Symmetrix configuration that has only one V-Max engine.)

2. EMC states “Symmetrix systems support nondisruptive replacement of all major components, including: [list of components]”. Except for Symmetrix engineers, it will likely be a challenge for anyone to identify system components not on the list. The following significant components are notable by their absence from the list: “back end I/O modules”. Each V-Max engine has two back end I/O modules that connect the directors to the drives; therefore there are up to 16 such components in a fully configured V-Max system with eight engines. Assuming EMC was careful in building such an important list, it appears that replacement of any components not in the list – and the back end I/O modules are not in the list – may be a disruptive change.

3. The list of components that can be replaced nondisruptively, referred to above, includes “V-Max Engines”. This implies there are cases where an entire engine needs to be replaced. (Other documentation also refers to engine replacement.) However, the V-Max SE model has only one engine, and a standard V-Max system can be configured with only one engine. If there is only one engine, it is difficult to see how it can be replaced nondisruptively.

4. If a RAID array is configured under one engine and that engine fails or needs to be replaced, all data in the RAID array is inaccessible. If a RAID array is spread across multiple (not necessarily all) engines, loss of just one of those engines can make all data in the RAID array inaccessible. For example, assume a V-Max has seven or fewer engines and a RAID-5 7+P array (8 volumes) is configured. A RAID-5 array can maintain data access if at most one volume in the array is inaccessible (by dynamically regenerating the data on the inaccessible volume using data on the remaining volumes). However, in this configuration (up to 7 engines and an 8-volume RAID-5 array), at least one engine must manage two or more volumes in the array. If such an engine fails or needs to be replaced, access to two or more of the eight volumes is disabled, resulting in loss of access to all data in the RAID-5 array (see the sketch after this list).

5. The list of components that can be replaced nondisruptively, referred to above, includes “Service Processor — keyboard, video display, mouse (KVM), uninterruptible power supply (UPS).” The heart of the service processor (e.g., CPUs, memory) is notably absent from this list as are other parts of a notebook computer, including the mechanical fan. If those parts need to be replaced, Symmetrix functions dependent on the service processor are apparently disrupted.
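The engine-count argument in item 4 is a pigeonhole argument and can be checked mechanically. In this sketch, round-robin placement is used because it is the most even spread possible; if even that puts two members on one engine, any placement must. The placement function is hypothetical, not EMC’s actual layout algorithm:

```python
# Pigeonhole check for item 4: spread the 8 members of a RAID-5 7+P array
# over N <= 7 engines as evenly as possible; some engine still holds >= 2
# members, so losing that engine disables more volumes than RAID-5 tolerates.
from collections import Counter

def worst_engine_load(members: int, engines: int) -> int:
    placement = [m % engines for m in range(members)]  # most even: round-robin
    return max(Counter(placement).values())

for engines in range(1, 8):  # 1 through 7 engines
    assert worst_engine_load(8, engines) >= 2
print("with 7 or fewer engines, some engine always holds >= 2 of the 8 members")
```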


Disruptive Logical Configuration Changes

EMC writes that “Symmetrix V-Max comes standard with non-disruptive everything. EMC’s philosophy has always been to anticipate whatever might interrupt information access and prevent it”. (Source: EMC Symmetrix V-Max Storage System Data Sheet H6193 03/09.) Are there exceptions to the ability of Symmetrix to support nondisruptive changes to its logical configuration?

Background. Multiple logical configuration changes appear to be disruptive. Customers may want to ask EMC to identify all such cases. Some known cases:

• Changing the IP address of a gigabit Ethernet port

• Converting a standard hypervolume directly to a metavolume

• Configuration changes to Open Replicator volumes

• Expanding the capacity of an existing striped metavolume in cases where the protect_data option is not supported, and in cases where it is supported but required system resources are not available. Protect_data is not supported for thin provisioned devices. Where protect_data is supported, it preserves data on the metavolume being expanded by first making a copy on a set of TimeFinder BCV volumes. The capacity to support the BCVs needs to be available, which can be especially problematic for large metavolumes because they can be multiple terabytes in size; RAID protection for the BCVs is supported only for RAID-1 (mirroring), which doubles the BCV capacity requirement.

Disruptive Microcode Changes

EMC writes that “Symmetrix V-Max comes standard with non-disruptive everything. EMC’s philosophy has always been to anticipate whatever might interrupt information access and prevent it”. (Source: EMC Symmetrix V-Max Storage System Data Sheet H6193 03/09.) Are there exceptions to the ability of Symmetrix to support nondisruptive changes to internal software (Enginuity microcode)?

Background. EMC writes this on the topic:

All Enginuity upgrade and configuration changes are enabled only in the context of a careful change management process that keeps application environments functioning and avoids error conditions. In rare cases during an upgrade, the application may function with slightly degraded service for a short period of time. [What cases? How degraded? For how short a period of time?]...The list of upgrade and change activities that can be undertaken while the Symmetrix system is operational is the best in the industry. [What is the evidence to substantiate that claim? Doesn’t the wording imply there are gaps?] However, in these very complex environments some technology upgrade issues cut across the bounds of Enginuity to host operating systems and applications. As a result, in some rare circumstances [which circumstances?] the Symmetrix system may need to be brought offline to complete an elective change. -- Enginuity: The EMC Symmetrix Storage Operating Environment, April, 2009.

EMC Symmetrix Release Notes often identify conditions under which nondisruptive changes to Symmetrix microcode are not supported.

Write Data in Cache May Not Always be Protected by a Redundant Copy

Are there circumstances when Symmetrix operates with only one copy of write data in cache, so that if a cache component fails then data can be lost?

Background. Beginning with DMX-3, EMC finally introduced mirroring of write data in Symmetrix cache to match the data integrity protection other disk systems had been delivering for many years. However, in some scenarios Symmetrix places only one copy of write data in cache and does not protect the data with another copy. If the only copy of write data is lost due to a component failure, or due to a bit error beyond the ability of ECC (error correction code) to correct, it is then up to the customer to figure out how to recover the lost data.


For V-Max, EMC states “Dual write technology is maintained by the system. In the event of a director/memory pair failure, data is obtained from the redundant copy.” (Source: EMC Symmetrix V-Max Product Guide, April 2009.) No claim has been identified indicating Symmetrix handles the failure by providing protection of write data via another copy stored somewhere else. Such lack of redundancy protection in V-Max would be consistent with how EMC describes this issue in DMX-4: “The DMX-4 global memory directors work in pairs. The hardware writes to the primary global memory director first, and then automatically writes data to the secondary global memory director. All reads are from the primary memory director. Upon a primary or secondary global memory director failure, all directors drop the failed global memory director and switch to a nondual write mode.” [Emphasis added] (Source: EMC Symmetrix DMX-4 Product Guide, December 2008.)

Track Tables Issues

Does V-Max address the Symmetrix track table issue identified in this press article?

Novus Consulting Group, a third-party implementer of storage products, has many large customers using EMC storage systems and frequently runs into the same problems...

Another problem he runs into concerns the robustness of the track tables in the Enginuity operating system on the DMX. These tables keep track of how out of sync one volume is with another in products like TimeFinder and SRDF. "When you change attributes on certain devices, that track table gets blown away … that's lots of data you have to resync from scratch -- that can take days," he said.

-- EMC users make Symm 7 wish list, online at http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1090186,00.html

Long Battery Recharge Time

How long does it take to fully recharge system batteries that have become fully discharged? What is the impact on Symmetrix system availability during the recharge period?

Background. The Symmetrix DMX-4 Product Guide documented the value for the DMX-4 models. The V-Max Product Guide does not document this information. Customers may want to ask EMC about this.

System batteries in Symmetrix models are used to destage data from cache to special disk locations (“vault devices”) following detection of a loss of external power, so that data can be preserved until power is restored. Recharging fully discharged batteries can take up to 8 hours in DMX-3 and DMX-4. (Sources: DMX-3 Product Guide, DMX-4 Product Guide.)

Customers may want to compare these relatively long battery recharge times in Symmetrix to the shorter battery recharge times supported in competitive disk systems.

Customer Control over Power On/Off

Has V-Max addressed the reported limitation of DMX models that prevents customers from powering a Symmetrix on and off under their own control? Background. Symmetrix DMX customers report that EMC limits Symmetrix power on/off to EMC representatives, and charges for these actions. This can affect availability, since the EMC representative presumably has to come on-site. See http://groups.google.com/group/bit.listserv.ibm-main/browse_thread/thread/354e480995d81e7a?fwc=1


Single Failure can Reduce Front-End Ports, Back-End Ports, and Cache Capacity at the Same Time

In V-Max, each engine has two directors, each of which supports host connection (front-end) ports, drive connection (back-end) ports, and cache. If a director fails, won’t this mean losing some of all three resources at once, with a resulting performance impact? Won’t some remaining resources, such as some of the remaining host ports and disk paths, become single points of failure? Isn’t this a step back from previous Symmetrix models, which separated these three resources onto different directors? Isn’t the concurrent loss of these different resources a condition mostly found in midrange-class disk systems like EMC’s CLARiiON? Background. The previous-generation standard DMX-4 model had separate directors for front-end ports, back-end ports, and cache, so loss of a director impacted only one of those resources. The lower-end DMX-4 950 model had common components for sets of front-end and back-end ports, but even that model had separate cache directors.


Performance Questions

Performance Verification Barriers

Why won’t EMC support vendor-neutral performance benchmarks of Symmetrix? Why is it so difficult to get detailed, verifiable information on the Symmetrix performance that applications can attain, beyond marketing claims from EMC that say in effect “Symmetrix is fast – trust me”?

Background. Any vendor can claim to have high, or the highest, performance – claims without evidence are not particularly useful to customers. Jon Toigo, a storage industry consultant, writes: “EMC imposes...restrictions on its high-end gear; publish an unsanctioned comparative test of a high-end EMC array and you invalidate your warranty”. (Source: Wanted: Trustworthy Test Data, http://www.networkcomputing.com/storage/other/wanted-trustworthy-test-data.php)

Steve Duplessie, another well-known storage industry analyst, writes: "Imagine my surprise when we decided to put the DMX and Hitachi Data Systems' hottest box head to head in a multilevel performance bake-off and EMC said, 'No.'...This is how the EMC of old acted--and just in case you forgot--no one liked the EMC of old… At the end of day, if you make performance an issue, you need to give the user some validation points. We can't use real installations (on the record), because EMC makes people sign non-disclosures that are very intimidating." (Source: Storage Magazine, http://searchstorage.techtarget.com/magazineFeature/0,296894,sid5_gci1257705,00.html)

EMC has also chosen not to join the premier vendor-independent storage industry organization that develops and promotes open, standardized, audited performance assessments of disk systems. The Storage Performance Council (SPC, http://www.storageperformance.org) is a multi-vendor industry organization focused on developing and promoting vendor-neutral, industry-standard benchmarks to help customers compare application performance across different disk systems; in this way it is similar to the TPC organization that defines standardized server benchmarks. SPC membership includes Dell, Hitachi Data Systems, Hewlett-Packard, IBM, NetApp, Sun, and others, and many vendors have published benchmark results at http://www.storageperformance.org. EMC has to date not joined the SPC. If Symmetrix provides high performance relative to competitors’ systems, as EMC typically asserts, why wouldn't EMC be eager to publish SPC benchmark results to substantiate its claims?

http://www.searchstorage.com, a Web site dedicated to storage industry news, published an article titled "EMC walks away from SPC". The article quotes Roger Reich, the SPC founder: "We've worked very hard to come up with the benchmark to aid customers by giving them a fundamental selection methodology for storage subsystem products. The goal is to give the customer honest-to-god, quantitative fact about competitors in the market, so they can make an intelligent purchasing decision...The SPC simply wasn't in line with the goals of EMC." (Source: http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci205977,00.html)

Storage industry analysts have expressed support for the Storage Performance Council:

"We believe customers should demand that all disk subsystem vendors submit to the SPC testing. If a vendor refuses, then it is time to look for a new supplier." - Hardware or HyperWare? Validating Disk System Performance Claims, Data Mobility Group "Performance 'benchmarketing' is a long-practiced art in this business," says Randy Kerns, senior analyst [formerly] at Evaluator Group. "Don't believe anything regarding performance benchmarks from vendors. Skepticism is a good thing. Use the SPC benchmarks, and if a company doesn't have any, they probably don't compare favorably with the competition." - Is NetApp SANbagging?, Byte and Switch, by Todd Spangler (http://www.networkcomputing.com/storage/other/is-netapp-sanbagging.php)

For an informative press article on the Storage Performance Council, see Measuring Storage Performance at http://www.enterprisestorageforum.com/hardware/features/article.php/3671466


EMC's frequent focus on a few internal Symmetrix attributes such as internal bandwidth and numbers of microprocessors appears to be an attempt to avoid addressing the obvious questions about I/O throughput and response times that applications can attain, even though those are the performance measurements actually of value to customers.

Performance vs. Capacity Scalability Issue

V-Max supports up to around 3 times the usable capacity of DMX-4, but product documentation acknowledges that V-Max I/Os-per-second capability was increased only by about 2 times compared to DMX-4 – which already had a reputation for performance issues when configured with high capacities. Doesn’t this indicate that V-Max is susceptible to performance issues as capacity increases? Isn’t it important to distinguish the maximum number of disk drives a disk system can hold from the performance capabilities of the system?

Background. It is essential to distinguish maximum supported disk system capacity from the performance the system can deliver against that capacity. It is one thing to allow a disk system to contain a relatively large number of disks “under the covers”, but quite another to deliver relatively high levels of performance for such configurations. (Of course, determining what performance is acceptable depends on the customer and application.) “Symmetrix DMX-4 storage systems…have a reputation for slowing down as they load up more and more data.” (Source: Oracle and EMC Fine-Tune Flash Storage for Oracle Database, Chris Preimesberger, eWeek, http://www.eweek.com/c/a/Data-Storage/EMC-FineTunes-Flash-for-Oracle-DBs/) Increasing maximum usable capacity in V-Max by a higher ratio than performance capability, compared to the predecessor DMX-4 model, would seem to make the situation worse, not better.

Considering the issue from the perspective of access density raises additional questions about the ability of V-Max to support high levels of performance at high data capacities. Access density is a measure of I/O rate per unit of capacity, typically expressed as I/Os per gigabyte per second (IO/GB/s). The industry average is around 0.6 I/Os per GB per second, and many customers run more than 1 I/O per GB per second. Consider a simple configuration with much less maximum usable capacity than Symmetrix V-Max can support, to keep the math easy to follow: 1 PB = 1,000 TB = 1,000,000 GB. Using the average access density, that implies a system-wide I/O rate of 1,000,000 GB x 0.6 IO/GB/s = 600,000 I/Os per second. That appears to be more than four times the I/O throughput DMX-4 can deliver for real-world OLTP-like workloads at anything near acceptable response times – and remember that even EMC states that V-Max supports only about two times the I/Os per second of DMX-4. If a given system has higher than average access density, or more usable capacity than in this example, the required system I/O rate is higher still.
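As a minimal sketch of this sizing arithmetic (the 1 PB capacity and the 0.6 and 1.0 IO/GB/s densities are the illustrative figures used above, not measurements of any particular system):

    # Hypothetical sizing sketch: I/O rate implied by capacity and access density.
    def required_iops(usable_gb, access_density_io_per_gb_s):
        """I/Os per second implied by usable capacity and access density."""
        return usable_gb * access_density_io_per_gb_s

    capacity_gb = 1_000_000                 # 1 PB of usable capacity, as in the example above
    for density in (0.6, 1.0):              # industry average vs. a busier profile (IO/GB/s)
        print(f"{density} IO/GB/s over 1 PB -> {required_iops(capacity_gb, density):,.0f} IO/s")

Run as-is, this prints 600,000 and 1,000,000 I/Os per second, making plain how quickly capacity growth can outrun a 2x throughput increase.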

Internal Bandwidth

Why does EMC often focus on an aspect of component performance inside the Symmetrix system rather than on the external performance applications can attain in terms of response time and throughput?

Background. In recent years EMC and the vendors selling the high-end Hitachi-based disk systems have fought over bragging rights for a storage system attribute commonly called “internal bandwidth”. There is no industry-standard definition of what internal bandwidth means, though in practice it usually refers to the speed of connections between some internal storage system components. (The vendors choose which components to consider and which to ignore. Not surprisingly, vendors want to draw attention to faster components and not to slower components.) This discussion hopes to make clear that internal bandwidth is not by itself a useful measure for determining the performance applications can see, or for comparing the performance capabilities of different storage systems.

The V-Max specification sheet refers to “virtual matrix bandwidth” while the DMX-4 specification sheet uses the term “data bandwidth”. While the spec sheets do not further define these terms, both appear to be attempts to claim a value for “internal bandwidth”.


Without a common definition, and when comparing different storage system architectures, the opportunity is ripe for vendors to use internal bandwidth claims to try to impress anyone who isn’t a disk system engineer who knows better.[1] If an internal bandwidth number sounds impressive, a vendor can use it to draw attention away from questions about the (external) performance applications can attain – which are, after all, the real performance questions customers should be concerned about.

For example, EMC’s DMX-4 specification sheet claims that maximum DMX-4 “data bandwidth” is 128GB/s (the value varies based on configuration). The 128GB/s value is calculated by adding the speeds of the 128 1GB/s internal paths to cache. That calculation ignores the critical fact that vendor documentation explains that only 32 of those paths could ever be active at the same time, reducing the maximum claimed data bandwidth by 75% to 32GB/s; other factors reduce that number further. The maximum external sequential throughput applications could potentially attain is far less in any case.

Asking a storage system vendor about performance and being told about internal bandwidth is almost like asking an automobile salesperson how many miles per gallon (or kilometers per liter) a car can deliver and being told how fast the pistons move.

The following items identify many of the drawbacks of trying to use an internal bandwidth value as an indicator of system performance.

• Internal bandwidth is a throughput measure that says nothing about response time; one disk system could have a similar or higher internal bandwidth than another, yet deliver slower response time per I/O request.

• Storage system performance is influenced by many factors other than the speed of connections between some internal components. For example:
  - configuration (e.g., number and speed of host ports, number and speed of disks, cache size)
  - data striping efficiency (e.g., whether striping is implemented, over how many disks, and the strip size (i.e., the size of the portion of a stripe that resides on each disk))
  - cache management efficiency (e.g., slot size, management algorithms)
  - internal volume replication efficiency
  - external volume replication efficiency
  - specialized performance accelerators (e.g., SCSI command tag queuing; IBM z/OS HyperPAV support)
  - internal software (microcode) version

• A given storage system may respond differently to different I/O workloads. Variables here include transfer (block) size, read:write ratio, cache hit ratios, random vs. sequential content, and how the system responds when multiple applications with various characteristics are all running and contending for system resources at the same time. It is how a storage system handles its total I/O load, not the specification-sheet speed of internal paths between selected components, that determines external application I/O performance.

[1] For example, HDS claims the internal bandwidth (which HDS calls “aggregate bandwidth”) of its USP V system is 106GB/s. However, the SPC-2 industry-standard, audited benchmark result published by HDS at http://www.storageperformance.org/results shows that the system delivers less than 10GB/s of external performance for applications – over 90% less than the internal bandwidth claim. This raises serious questions about the usefulness of any vendor’s internal bandwidth claim as an indicator of application I/O performance.
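Returning to the 128GB/s “data bandwidth” example above, a minimal sketch of that arithmetic (the 128 paths, 1GB/s per path, and 32-path concurrency limit are the figures reported above, treated here as given):

    paths = 128                   # internal paths to cache (reported DMX-4 figure)
    path_speed_gb_s = 1.0         # GB/s per path
    concurrent_paths = 32         # paths that can be active at the same time

    claimed = paths * path_speed_gb_s                  # spec-sheet style: sum over all paths
    sustainable = concurrent_paths * path_speed_gb_s   # bounded by the concurrency limit
    print(f"claimed {claimed:.0f} GB/s vs. ~{sustainable:.0f} GB/s concurrently active "
          f"({1 - sustainable / claimed:.0%} lower), before other limiting factors")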

4Gb/s Device Path Speed Considerations

Will my applications see any noticeable increase in performance due to the internal 4Gb/s device path speeds in V-Max, compared to 2Gb/s paths?

Background. This topic is frequently misunderstood and so deserves a discussion to bring out the facts.

It can be tempting to point to a specification number such as 4 Gb/s disk paths (a.k.a. device paths) and say something like "4 Gb/s is twice 2 Gb/s, so performance must be twice as good". But is it? What do these speeds mean?

Considerations:

1) Performance is a system attribute. Vendors often try to draw attention to a particular internal component in their system that they think is faster than the comparable component in a competitor's system - they never seem to point to their own system's performance bottlenecks or inefficiencies.

Consider that some midrange disk systems (e.g., IBM DS4000 or EMC CLARiiON) offered 4Gb/s disk paths before high-end disk systems had similar support. No one seemed to doubt that the high-end disk systems were still much faster. Increasing the speed of one internal component in isolation, unless it was a major bottleneck, is unlikely to increase a system's overall performance capability which depends on total system design and the particular configuration.

2) Device path speed does not affect cache hit speed. When reading data directly from cache and writing data directly to cache there is no delay for the device path at all regardless of path speed. In most modern disk systems all writes are cached. For sequentially read data, cache management algorithms prefetch data to avoid applications having to wait for it to be accessed from the device where it is stored.

3) Disk and solid state media generally run under 2Gb/s. It is very important to understand that the device path speed is not the same as the speed at which data can be read from or written to the device media (e.g., a disk surface in a disk drive, or the electronic memory in a solid-state drive, inside the device module). Media speed is typically much slower than path speed – we’ll see some examples in a moment.

The 2Gb/s or 4Gb/s speeds of this discussion are the speeds of the individual paths between each storage system device adapter and each device module. (The paths are usually based on FC-AL (Fibre Channel Arbitrated Loop) technology. The device adapter is also sometimes called a “director” or “back-end director”.)

Devices include electronic buffers that provide a speed-matching function to handle the difference between the speed of the device media and the speed of the path between the device module and a device adapter; paths from adapters should be viewed as connecting to these device module buffers, not directly to device media. In the case of disk drives, for example, manufacturers build disk assemblies that support a given capacity and mechanical speed, then add components (including the speed-matching buffer) to support a particular connection interface and speed (e.g., 2Gb/s FC-AL, 4Gb/s FC-AL, or xGb/s SAS). For a given class of drive, the physical disk modules themselves often (if not always) have identical underlying performance specifications (e.g., rotation speed, seek times, media data transfer rates) regardless of the module’s connection interface and speed.

EMC’s specification sheet for Symmetrix V-Max (www.emc.com) shows that the “Internal Data Rate (Mb/s)” (i.e., media speed) for 4Gb/s disks varies from a maximum of 1,142 Mb/s to 2,225 Mb/s. (The minimum rates shown for each drive are far lower.) That means even the maximum media speed varies from around 1Gb/s to only a little more than 2Gb/s – all far less than 4Gb/s. For flash (solid state) drives the “Internal Data Rate” is stated to be only 800 Mb/s to 1,600 Mb/s, still under 2Gb/s. (Source: http://www.emc.com/collateral/hardware/specification-sheet/h6176-symmetrix-vmax-storage-system.pdf)

Having a path faster than the device media speed does not make the media faster. The device’s buffer provides a speed-matching function. For example, as a block is read from the media, the bits are stored in the buffer and only after sufficient bits are accumulated does the buffer connect to the path to transmit those bits. The idea is to use the path efficiently, but the time to read the data (from the point of view of the user) is still limited by the media speed. (The buffer can also accept write data at path speed and later write the bits to the (slower) media; this function may or may not be enabled depending on the storage system. In any case, since writes are handled at system cache speed, applications don’t wait for writes to be written to either a device buffer or device media.)

Note: Because device media speed is well under 4Gb/s (400MB/s), if a customer specifies a requirement for “4Gb/s end-to-end”, it cannot be satisfied by any disk system using devices with specifications similar to those cited above.

4) For random read I/Os not satisfied from system cache, the difference in performance due to 2Gb/s vs. 4Gb/s path speed is likely insignificant. As discussed above, the device media speed, typically less than 2Gb/s, is the speed at which data can be read from the media (e.g., a disk surface). Path speed, no matter how much faster, does not change that. But, for the sake of argument, let’s assume data could be read from the media at 4Gb/s; that would be the case even when media speed is slower if the data being read happens to be in the device buffer, a relatively rare occurrence given the small buffer size. How much difference might this make in read I/O response time?

Consider an 8KB block, a common transfer size for random I/O. To transfer this block at 4Gb/s (which is about 400MB/s, since FC-AL uses 10-bit bytes) takes approximately 8,000 bytes x (1 sec / 400MB) = 0.00002 seconds = 0.02ms. (Some protocol overheads are not accounted for in this approximation, so actual transfer time is slightly longer.) To transfer the same block at 2Gb/s takes about 0.04ms. Both times are tiny fractions of one millisecond and differ by only 0.02ms.

For disk drives, the mechanical motion portion of disk response time adds whole milliseconds to this. For example, at 15K RPM, ½ a rotation (which is the average rotational delay) takes 2 milliseconds. Seek times can vary from 0 to several milliseconds. Queuing delays can add several more milliseconds (especially for busy volumes or disks).

Therefore, even assuming media data transfer speed was 4Gb/s (though it is in fact much slower), the performance benefit of 4Gb/s versus 2Gb/s transfer speeds does not appear to be anything anyone is likely to notice. For random I/Os to disks, the time for data transfer is only a very small percentage of the time a disk is busy with an I/O request unless the requested data happens to already be in the relatively small module buffer. Rotation + seek times + queuing times add multiple milliseconds, of which a sub-millisecond path transfer time is virtually a rounding error.
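A minimal sketch of this response-time arithmetic (the 3.5ms average seek is an illustrative assumption; the 2ms rotational delay and the roughly 400MB/s data rate of a 4Gb/s FC path follow the figures above; queuing is excluded):

    BLOCK_BYTES = 8_000              # 8KB random-I/O transfer

    def transfer_ms(path_mb_per_s):
        """Milliseconds to move one block at a given path data rate."""
        return BLOCK_BYTES / (path_mb_per_s * 1_000_000) * 1_000

    rotation_ms = 2.0                # half a rotation at 15K RPM
    seek_ms = 3.5                    # assumed average seek (illustrative)
    for path_mb_per_s in (200, 400): # 2Gb/s and 4Gb/s paths
        total = seek_ms + rotation_ms + transfer_ms(path_mb_per_s)
        print(f"{path_mb_per_s} MB/s path: transfer {transfer_ms(path_mb_per_s):.2f} ms "
              f"of a ~{total:.2f} ms I/O")

The transfer-time difference between the two path speeds is 0.02ms against a multi-millisecond I/O, which is the rounding error described above.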

5) When might device path speed matter? When the path is so highly utilized that I/O response times are unacceptably impacted. This is most likely in an environment with multiple concurrent sequential I/O streams with large data blocks on the same FC-AL loop(s). Most commercial I/O workloads are dominated by random I/Os.

What about high random I/O activity on one loop? Assume the devices on one loop are all active with 8KB random I/Os. Since each transfer takes about 0.04ms on a 2Gb/s loop, 1,000 transfers a second on that one loop contribute only about 4% loop utilization (1,000 x 0.00004 sec = 0.04 sec of busy time per second). If path utilization isn’t a problem at 2Gb/s path speeds, then it's unlikely there will be a perceptible benefit from 4Gb/s path speeds.
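The same utilization arithmetic as a sketch (the 1,000 IO/s rate and 0.04ms transfer time are the figures from the paragraph above):

    transfer_sec = 0.00004        # one 8KB transfer at 2Gb/s (~0.04 ms)
    ios_per_sec = 1_000           # random I/Os per second on the loop

    busy_sec_per_sec = ios_per_sec * transfer_sec
    print(f"loop utilization: {busy_sec_per_sec:.0%}")    # prints 4%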

6) What EMC really thinks about path speed. From a blog by an EMC employee:

http://thestorageanarchist.typepad.com/weblog/2007/07/0019-dmx-4-and-.html

The major Symmetrix-related hardware component of this launch is the new DMX-4, and more specifically, its new 4Gb point-to-point FC back-end infrastructure. Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives. You may have noticed that there weren't any specific performance claims attributed to the new 4Gb FC back-end. This wasn't an oversight, it is in fact intentional. The reality is that when it comes to massive-cache storage architectures, there really isn't that much of a difference between 2Gb/s transfer speeds and 4Gb/s. Transmit times are really only a tiny portion of I/O overhead, and just don't make that much difference when a massively-cached system is pre-fetching reads, buffering/delaying writes and reordering I/O requests to minimize seek times. Not that 4Gb/s won't help some applications, but most people just won't see any noticeable difference. [Emphasis added]

Issues if More than Four Engines are Configured

Why does V-Max support up to 30 drives on each of the first 32 FC-AL loop pairs (i.e., first four engines), then increase that by 50% to up to 45 drives on each of the remaining 32 loop pairs (i.e., last four engines)? Why isn’t the layout more balanced? Doesn’t the significant imbalance indicate performance can noticeably decrease if more than four engines (which support a maximum of 960 drives) are configured? If more than four engines are configured, won’t the imbalance in drives per loop complicate data placement and performance tuning as administrators have to take into consideration which size loop data resides on? Alternatively, might more engines and storage frames need to be configured to keep the number of disks per loop at a reasonable level, raising costs?


Background. If EMC were to claim that loop length is not a performance consideration, then why does it allow only short loops until more than four engines are configured, since offering the longer loops first would reduce cost for users?
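A sketch of the drive counts implied by the figures in the question above (8 loop pairs per engine, 30 drives per loop pair on the first four engines and 45 on the rest; these are the reported values, treated here as assumptions):

    LOOP_PAIRS_PER_ENGINE = 8

    def max_drives(engines):
        """Maximum drives for a given engine count under the reported layout."""
        short_loops = min(engines, 4) * LOOP_PAIRS_PER_ENGINE * 30      # engines 1-4
        long_loops = max(engines - 4, 0) * LOOP_PAIRS_PER_ENGINE * 45   # engines 5-8
        return short_loops + long_loops

    for engines in (1, 4, 5, 8):
        print(f"{engines} engine(s): up to {max_drives(engines)} drives")

The jump from 30 to 45 drives per loop pair beyond four engines (960 drives) is the imbalance the question raises.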

Cache Performance Issues

What performance inefficiencies have been identified in the Symmetrix cache design? Will customers have to buy more cache, or do more manual tuning, to compensate?

Background. The following list identifies performance inefficiencies in the Symmetrix cache design. This list may not be complete.

Half of cache is not “usable”. A portion of Symmetrix cache is reserved for system use; that is not uncommon in disk systems. Of the remaining cache, nearly all of it would normally be expected to be available to hold application data to improve performance. However, in V-Max, only about half of the cache capacity is what EMC refers to using terms such as “effective” or “usable”, meaning about half of cache is not usable. This is because Symmetrix unproductively mirrors read data in cache, not only write data. Mirroring write data in cache is important to protect it from loss prior to being destaged to RAID-protected devices. However, mirroring read data in cache is not productive, because if such data were lost due to a cache failure it is easily retrieved directly from disk. Keeping two copies of read data in cache wastes valuable cache capacity, which is why other disk systems, including EMC’s own CLARiiON, mirror only write data in cache, not read data. It is ironic that EMC sometimes attacks disk system designs where a portion of cache is temporarily offline in relatively rare operational circumstances, considering that in V-Max half of system cache is unusable all the time. EMC was more forthcoming about this significant issue in the DMX-4 specification sheet, which indicates that of the maximum 512GB of cache supported by DMX-4, only 256GB is “effective”; EMC does not mention the issue in its V-Max specification sheet, though other documentation indicates it continues to apply to V-Max.

Inefficient slot size further reduces usable cache. It is common in cache designs to divide the cache capacity available to hold application data into a set of equal-size “slots”. (Other terms sometimes used are “segments” or “pages”.) A slot is used to hold a data block being written by a host, or to hold a data block being read by a host. In the case of reads, the slot may also be used to hold some nearby data (e.g., adjacent data from the same physical disk track) depending on cache management algorithms. (If one slot is too small to hold a data block, the system can simply assign multiple slots to hold that block.) Symmetrix cache is divided into relatively large 64KB slots. The problem is that common random-I/O block sizes for real-world applications are relatively small 4KB and 8KB blocks. This means that small, random blocks occupy only a small portion of a Symmetrix cache slot yet are assigned an entire slot, potentially wasting more cache space than they use. That leaves less cache space available to hold other active data, thus impacting performance. Consider that since EMC already indicates only about half of V-Max cache is usable, if for a given system writes are mainly random and use only half (or less) of each slot, then only about 25% of the ordered cache size is actually used to contain frequently referenced data. (Many other disk systems use much smaller cache slot sizes to manage cache space more efficiently. It is interesting to note that the XIV disk system, roughly fifteen years newer than Symmetrix but designed by the same architect, uses a small, efficient 4KB cache slot size.)

Unproductive overheads. Mirroring read data in cache adds internal performance overhead, because every block of data being read from disk into cache must be copied to two different caches over two different internal paths. That reduces the internal path bandwidth available for productive work. Also, since cache supports a finite number of concurrent accesses, placing copies of read data in two different cache locations appears to reduce the number of concurrent productive accesses cache can support.

Cache available for writes. EMC claims that up to 80% of Symmetrix cache can be used to hold write data. That is misleading: the value is reduced to 40% because write data is mirrored in cache, is reduced further by wasted space in the large 64KB Symmetrix cache slots, and is reduced further still because Symmetrix keeps a great deal of system information in cache.
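A rough model of the arithmetic above, assuming an illustrative 512GB of ordered cache and the half-slot occupancy used in the text (pure 8KB blocks in 64KB slots would be worse still):

    ordered_gb = 512                      # illustrative ordered cache size
    after_mirroring_gb = ordered_gb / 2   # read and write data both kept in two caches
    slot_occupancy = 0.5                  # assumed fraction of each 64KB slot holding data

    effective_gb = after_mirroring_gb * slot_occupancy
    print(f"{ordered_gb} GB ordered -> {after_mirroring_gb:.0f} GB usable -> "
          f"~{effective_gb:.0f} GB holding data ({effective_gb / ordered_gb:.0%} of ordered)")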


Data Striping Inefficiencies

How is data striping supported in Symmetrix? In what cases is striping automated in Symmetrix, and in what cases does it require special (manual) configuration planning? How efficient is the size of each strip? (A "strip" is the portion of each stripe that resides on each physical device. Relatively large strip sizes can reduce the performance benefit of striping.)

Background. Data striping places blocks of consecutive data across multiple physical devices – a time-honored technique to improve performance. Historically, striping has been an option supported by volume managers in many operating systems as well as by many storage systems. Symmetrix has some limitations, inefficiencies, and management complexities in its data striping capabilities. Customers accustomed to disk systems that automatically stripe all data may not realize this about Symmetrix until actually trying to configure it. Note that the strip sizes identified below were documented in the Symmetrix DMX-4 Product Guide; the Symmetrix V-Max Product Guide does not contain that information – customers may want to ask EMC about V-Max strip sizes in case they have changed. EMC documentation sometimes calls the strip size the “stripe size”.

• Non-RAID: “Hypervolumes” are the required building blocks of logical volumes. A hypervolume resides entirely on one physical disk, is not striped, and is not RAID-protected.

• RAID-1: Data in hypervolumes configured as RAID-1 is not striped.

• RAID-10 for open systems volumes, also referred to as “RAID-1/0”: To implement this, users must group hypervolumes into “metavolumes” (a.k.a. “metadevices”). Metavolumes can optionally be striped in a RAID-10 fashion. While documentation is not consistent, the strip size (i.e., logically contiguous bytes on one disk) appears to be nearly 1MB or larger. Such large strip sizes can mean that disk utilization is not as balanced as it would be with smaller strip sizes, and that small files may not benefit from striping. (For example, a single busy 1MB file could reside entirely within one strip on one disk and create a disk performance hot spot.)

• RAID-10 for IBM System z (mainframe) volumes: This data is striped across only 4 disks, and the strip size is a relatively large 3390-emulated disk cylinder, about 900KB (nearly 1MB). Again, large strip sizes can mean that disk utilization is not as balanced as it would be with smaller strip sizes, and that small files may not benefit from striping.

• RAID-5: In this case the strip size appears to be a more reasonable 256KB (4 tracks/strip x 64KB/track = 256KB/strip). A drawback is that EMC reportedly often recommends that Symmetrix RAID-5 arrays be limited to four disks each (a 3D+P configuration) rather than using arrays of eight disks (7D+P) – this recommendation may be due to performance or other issues unique to Symmetrix when using 7D+P arrays. 3D+P arrays have several downsides compared to 7D+P arrays, as the capacity sketch after this list illustrates. They limit stripes to spanning only four disks, offering less overall I/O balancing and less potential I/O parallelism for a given logical volume, thus reducing the potential performance benefit. They also raise costs, because more physical disks are required to support a desired amount of usable capacity (the equivalent of 1 disk out of every 4 is taken for parity, versus only 1 out of 8 in a 7D+P array). The cost increase applies to the disk drives, to licensed features priced based on raw capacity, and to post-warranty maintenance charges for both. Finally, 3D+P arrays reduce the maximum usable capacity of one disk system more than 7D+P arrays do.
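A minimal capacity sketch of the 3D+P versus 7D+P trade-off (the 42TB usable target and 1TB drive size are illustrative assumptions, chosen so both array shapes divide evenly):

    import math

    def drives_needed(usable_tb, data_disks, parity_disks, drive_tb):
        """Physical drives required to reach a usable capacity with a given array shape."""
        arrays = math.ceil(usable_tb / (data_disks * drive_tb))
        return arrays * (data_disks + parity_disks)

    usable_tb, drive_tb = 42, 1.0
    for data, parity in ((3, 1), (7, 1)):
        print(f"{data}D+{parity}P: {drives_needed(usable_tb, data, parity, drive_tb)} "
              f"drives for {usable_tb} TB usable")

Here the 3D+P layout needs 56 drives where 7D+P needs 48, and the extra raw capacity flows through to capacity-priced licensed features and maintenance as noted above.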

Symmetrix Optimizer Issues

Isn't the priced Symmetrix Optimizer feature designed to help address bottlenecks inherent in the Symmetrix design? Why should I have to pay extra for a product to circumvent disk "hot spots" that tend to be caused, in many cases, by the design of the Symmetrix system (because of its limited data striping and relatively large, inefficient strip sizes in some cases where striping is supported)? Considering how it works, doesn't the Optimizer itself add internal overhead, add costs for software, add costs for the additional disk capacity required to support volume swaps, and require administrator time to manage its numerous parameters to try to avoid making performance even worse? Wouldn't it be better to install a disk system that efficiently stripes all data to reduce hot spots automatically and proactively, rather than a disk system that requires priced software to try to handle hot spots after they have already degraded performance? Or, more generally, wouldn't it be better to install more system resources (e.g., disks and cache) to avoid performance problems rather than invest in a feature that only addresses problems after the fact and adds its own management complexities? Is the Optimizer coordinated in any way with the Dynamic Cache Partitioning feature (for example, could two volumes from applications that are assigned separate cache partitions still interfere with each other if the Optimizer places them on the same physical disk)? What types of volumes are not supported by the Optimizer?

Background. The number and nature of these questions shows that there is more to the subject than many customers consider. If data striping were easy to implement in Symmetrix and effective at minimizing disk hot spots, there would likely be little value in the Optimizer; the Optimizer may well be an example of turning a design weakness into a selling opportunity. Types of Symmetrix volumes not supported by the Optimizer include BCV volumes, IBM System i volumes, striped CKD volumes, and more. Customers may want to ask EMC for a full list of unsupported volume types.

Component Failure Performance Impact

EMC often attacks competitive disk systems for having reduced performance if a major hardware component fails, but doesn’t this issue apply to Symmetrix as well? Since Symmetrix hardware has so many complex major components, what is the impact on application performance when one of these components fails? For example, what is the impact on performance if one engine fails in a two engine system? A V-Max system has only two Matrix Interface Boards which support communications between all the V-Max engines – what is the performance impact if one fails? Background. EMC’s attacking competitive systems for having reduced performance if a major component fails may be an attempt by EMC to deflect attention from the fact that the issue also applies to Symmetrix. Note that the Symmetrix V-Max Product Guide includes “V-Max Engines” on the list of components that can be replaced nondisruptively. That either means an entire engine can fail, or a part of an engine can fail that requires taking the entire engine offline. In any case, there would be an impact on performance due to simultaneous loss of some host ports, some cache, some processors, and some paths to devices.

Preferred Host-to-Engine Path Question

Each V-Max engine has resources it manages directly, including host ports, processors, cache, disk paths, and disk drives. What is the impact on performance when a host sends an I/O request to a V-Max engine that does not directly manage the resources needed to satisfy the request, so that the engine has to communicate with another V-Max engine to help satisfy it? Is this unbalanced access to data (i.e., sometimes accessible directly by one engine, sometimes requiring communication between engines) an inefficiency compared to the DMX design, where every director had a direct path to every cache board?

Background. (Note: This question is raised due to a characteristic of the V-Max architecture. It has not been determined whether there is an issue here that users need to be concerned about.) The V-Max design inherently supports asymmetrical access to data unless there is only a single engine in the system, or host ports and the data accessed over those ports are segregated on a per-engine basis. A given engine owns a set of host ports, cache, processors, and paths to a subset of drives. (Only in configurations with just a single engine does that engine have direct connections to all cache and drives.) A host could send an I/O request to an engine that can satisfy the request from data in its cache or using a directly managed drive. Or, a host could send an I/O request to an engine that cannot do this and instead must communicate with another engine to satisfy the request; for reads, the data would need to be returned to the first engine so it could be transferred back to the requesting host over the requesting path. It seems possible that performance (e.g., the response time to complete the request, and internal system overhead) could vary depending on whether such inter-engine communication is needed. It may be preferable from a performance perspective to configure V-Max so that hosts send a given I/O request only to the engine known to directly manage the drives where the data resides, because other configurations may have reduced performance; if so, that could increase management complexity. If volumes are striped across drives owned by multiple engines, it may not even be feasible to limit I/O requests to preferred paths.


This kind of situation is sometimes seen in midrange disk systems. Some midrange disk systems with dual controllers have a best practice called “preferred path” where performance is better if an I/O request is received by the controller that is assigned to directly manage the desired drive rather than by the other controller.

Unshared Processor Inefficiencies

There are 16 processors per engine, meaning 8 per director. Can processors be shared across directors? Within a director, are subsets of processors dedicated to only certain resources? If processor sharing is limited in these ways, doesn't that complicate tuning and result in inefficient processor utilization?

Background. When processors are isolated to managing only some system resources, processor utilization can be skewed. For example, some processors may be very busy, resulting in degraded performance, while other processors sit idle but are prohibited by the system design from helping the busy processors. This has historically been a drawback in Symmetrix and appears not to be addressed in V-Max. This design is less efficient than storage system designs that support extensive processor sharing – e.g., a pool of shared processors that can be applied to work wherever processing power is needed. Lack of processor sharing in Symmetrix may mean more processing power is needed, compared to other system designs, to compensate for processor power that sits idle. More manual monitoring and tuning may be needed to identify and address performance problems caused by skewed processor utilization.


Management Questions

Other management issues are identified throughout this paper and may not be repeated here.

BIN File Issues

Can EMC reconcile the inconsistent information it has published about the status of the BIN file in V-Max? How concerned should V-Max users be about the issues that have historically surrounded this aspect of Symmetrix, including its potential impact on system availability?

Background. In one V-Max document EMC states “the BIN file requirement, necessary in previous versions of Enginuity, is removed.” (Source: EMC Symmetrix V-Max Product Guide, April 2009.) However, other sources indicate this is not the case. For one, an EMC employee who identifies himself as Chief Strategy Officer of EMC’s Symmetrix Product Group has publicly written that for V-Max “The BIN File still exists. It just won't be as big of a pain anymore.” (Source: http://thestorageanarchist.typepad.com/weblog/2009/04/1057-symmetrix-v-max-scale-up-scale-out-scale-away.html) Given EMC’s seemingly inconsistent messages about the current role and status of the BIN file in V-Max, and the serious issues associated with the BIN file in Symmetrix models prior to V-Max, the following discussion is based on information about the BIN file in those prior models. Customers may want to consult with EMC to learn more about the role of the BIN file in V-Max.

The Symmetrix BIN file is also sometimes referred to as the Symmetrix configuration file or database. It resides in the Symmetrix service processor, which EMC personnel access through an application called SymmWin. (Each Symmetrix has only a single service processor – if it fails, configuration changes and other functions that depend on the service processor are disabled.)

Multiple Symmetrix users have expressed dissatisfaction with issues concerning Symmetrix BIN files. Searching the Web and Google Groups for symmetrix “bin file” identifies numerous mentions of and issues with this file. EMC’s marketing literature does not appear to discuss the BIN file or its impact on customers.

It appears that many customers use EMC assistance to maintain the BIN file. It is not clear why customers choose not to make logical configuration changes they are supposedly allowed to make on their own using EMC’s tools, since EMC reportedly charges a fee when asked to make the changes, and since sources indicate there can be a 5-day delay when EMC assistance is requested. This prompts questions customers may want to ask: Is the process customers are supposed to use to make configuration changes overly complex, or overly error-prone? Do many desired configuration changes lack customer interfaces and so require direct BIN file modification by EMC personnel? Are there other considerations that explain why so many customers pay EMC to make configuration changes for them?

In 2006, a major U.S. insurance company reported losing multiple volumes of data due to an error in the BIN file caused by EMC. While the customer was reluctant to share details, the cause of the data loss appears to be related to the requirement to replace the entire BIN file when making a change to even one device, so that errors introduced into the BIN file can impact the entire system. That is what happened to this customer. EMC reportedly told the customer that this kind of problem is not preventable due to the way the BIN file is designed. The customer subsequently replaced its Symmetrix systems with IBM DS8000 systems.

ControlCenter Issues

Does ControlCenter really provide a single interface to manage my storage environment? How easy is it to install and use?


Background. There are numerous storage management activities that are not supported by ControlCenter, so in practice customers need multiple "consoles" and interfaces to manage storage. (That point is not unique to ControlCenter.) ControlCenter support for non-EMC storage is limited – often to monitoring only, with little if any active management (e.g., no ability to make configuration changes or control various system features). Even many of EMC's own storage management products, such as those acquired from Legato, are not integrated into ControlCenter.

While it had some positive things to say about ControlCenter, an InfoWorld analysis says "its reliance on agents and the limited support for competing hardware shoot it down as an integration solution for all but EMC-centric storage networks". (Source: Vendors square off in SAN integration challenge - EMC, HP, IBM, and Fujitsu Softek grapple with synching up three heterogeneous SAN, InfoWorld, online at http://www.infoworld.com/article/03/09/12/36FEsanint_1.html?source=searchresult)

A press article that discusses user experiences with ControlCenter is EMC users suffer software headaches, online at http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci962393,00.html?track=NL-52&ad=481878

Another article states: “[In 2008] Howard Elias, who runs EMC's management software division…spoke about upgrades to both Smarts and Control Center. But particularly as it pertained to Control Center, the talk was almost strictly about how EMC managed EMC products. So much so that at one point Elias used the word heterogeneous to refer to managing two different EMC product lines from the same management tool…So while EMC can better manage EMC (and partner's) stuff, it doesn't appear that heterogeneous storage management is anything approaching a priority for the storage giant.” (Source: http://www.informationweek.com/blog/main/archives/2008/05/emcs_own_nosoli.html)

Migration/Coexistence Issues

Can the same level of Enginuity microcode run on V-Max and earlier models, so that a migration from an older Symmetrix to V-Max does not require changing hardware and microcode at the same time? Background. V-Max requires a new level of Enginuity, 5874, that is not supported on earlier Symmetrix models. Some functions work only on V-Max; some functions work only on older models. For example, TimeFinder/Mirror is not supported on V-Max, and some Virtual LUN capabilities supported by V-Max are not supported on older models. If a customer has both V-Max and older Symmetrix models, a customer-written or ISV-written program or script that invokes Symmetrix functions may need to be sensitive to which Symmetrix model is being accessed, or the customer might determine they need to limit themselves as much as possible to only those functions supported by all systems. (This kind of coexistence consideration is not necessarily unique to Symmetrix.)

Standard Volume Management Complexities

What complexities are involved in managing standard logical volumes? Background. Standard volumes, called “hypervolumes” or “devices”, are used to hold application data. The design behind the complexity of managing these volumes dates back to the early 1990s. Examples:

• The Symmetrix manual with information about creating hypervolumes has a table identifying over 40 basic configuration variations the administrator must understand.

• The user must divide each physical disk into multiple hypervolumes that will fit on that one disk. This can result in logical volumes of incompatible sizes or in wasted capacity.

• The size of a hypervolume cannot be easily changed. (“[M]aking changes to hyper sizes can be a torturous process.” (Source: http://thestoragearchitect.com/2009/02/09/enterprise-computing-dmx-4-is-dead-long-live-dmx-5/))


• Each hypervolume must occupy physically contiguous disk space on a single physical disk. If sufficient contiguous space is not available to create a hypervolume of the desired size, the user must resort to managing metavolumes (discussed later).

• Unless all hypervolumes are the same size, deleting hypervolumes can leave stranded islands of capacity that may be difficult to reuse.

• If a LUN is needed that is larger than the maximum supported hypervolume size (about 256GB in V-Max, and smaller in previous Symmetrix systems), the user must resort to managing metavolumes (discussed later).

• If a LUN is needed that is larger than the size of an available physical disk (e.g., trying to create a 200GB hypervolume when only 146GB disks are configured), the user must deal with the complexities of “metavolumes” (discussed later).

• Data striping, which helps manage performance, can be complex to implement. For example, data in individual hypervolumes is not striped. Without adequate striping, Symmetrix can be prone to disk hot spots, requiring extra tuning attention such as moving hypervolumes around to try to deal with the problem reactively.

• Users must define spare disks, located to make efficient use of Symmetrix sparing facilities.

• Users must define vault capacity on disks under each director pair for system use, to help protect write data in cache following loss of external power.

Metavolume Management Complexities

What complexities are involved when managing metavolumes (a.k.a. metadevices)? Background. A Symmetrix metavolume is a group of hypervolumes that appears to be a single logical volume (e.g., one LUN) from a host’s perspective. Metavolumes are used to provide larger capacities than individual hypervolumes support, to allow volume capacity expansion, and to provide a form of RAID-10 (striped mirror) protection. Symmetrix metavolumes have multiple limitations and management complexities. This list is not intended to be complete. Examples:

• Configuring metavolumes reduces the maximum number of logical volumes that can be presented to hosts.

• In 2008 Symmetrix added support to create a metavolume in one step rather than multiple steps. Supported sizes are based on a single system-wide user-defined increment. Creating metavolumes with more flexibility than that requires (manually) creating multiple hypervolumes and grouping them together into one metavolume.

• Hypervolumes are limited in size, so the user may need to create multiple hypervolumes to provide the desired total metavolume capacity. For example, since the maximum hypervolume size is only about 256GB in V-Max (and smaller in earlier models), it would take at least 8 hypervolumes to form a 2TB metavolume, plus additional hypervolumes for RAID protection.

• Metavolume expansion appears to be limited to open systems volumes (LUNs). There does not seem to be a way to expand the capacity of IBM System z (mainframe) volumes.

• Expansion of a striped metavolume is limited to equal-sized increments.

• Hypervolume capacity cannot be expanded directly – a hypervolume must first be converted to a metavolume. That process appears to be disruptive, because the volume must be made unavailable to hosts (i.e., “unmapped”, in EMC’s term) during the conversion.


• Metavolumes can be migrated using the Virtual LUN feature, but this applies only to an entire metavolume including all members; individual members cannot be migrated.

• Expanding an existing metavolume can be disruptive if the customer does not have the additional hardware resources required to support nondisruptive expansion (see the capacity sketch after this list). The following description explains how the situation was addressed in Symmetrix prior to V-Max; the approach appears to depend on TimeFinder/Mirror, which is not supported in V-Max, and it could not be determined whether V-Max addresses the situation in other ways. Customers should check with EMC about the status of this capability. Background: Metavolumes can be striped or concatenated. (It appears that only concatenated metavolumes can be dynamically expanded without additional temporary disk capacity to preserve data; however, striped metavolumes can provide better performance.) In order to preserve data when expanding a striped metavolume, temporary capacity called a “BCV metavolume” is required, which must be the same size and structure as the metavolume being expanded. Further, users will likely want to RAID-protect all the volumes in the BCV metavolume, since it holds the only copy of the original metavolume’s data while the original metavolume is being expanded. Only disk mirroring (RAID-1), not RAID-5, is supported for protecting BCV metavolumes, doubling the BCV metavolume’s required physical capacity. For example, it appears that nondisruptively expanding an existing 1TB (one terabyte) striped metavolume while preserving data requires two additional terabytes for a RAID-1-protected BCV metavolume. This is an implicit cost of supporting nondisruptive expansion of striped metavolumes: the customer has to keep a supply of reserved disk space on hand, sized to match the largest metavolume that may need to be expanded.
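A sketch of the reserved capacity implied by that procedure (the 1TB metavolume is the example from the text; the doubling follows from the RAID-1-only restriction on BCV protection):

    meta_tb = 1.0              # striped metavolume being expanded
    bcv_tb = meta_tb           # BCV metavolume must match its size and structure
    raid1_factor = 2           # RAID-1 doubles the BCV's physical capacity

    reserved_tb = bcv_tb * raid1_factor
    print(f"expanding a {meta_tb:.0f} TB striped metavolume requires ~{reserved_tb:.0f} TB "
          f"of additional physical disk held in reserve")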

Note: In the IBM DS8000 and XIV disk systems, there are just “volumes” - avoiding the complexity and artificial split between standard volumes and metavolumes in Symmetrix. For example, the IBM DS8000 allows creating a volume of any supported size, or expanding volume capacity, in one step without needing to separately create and then combine multiple smaller volumes together.

Copy Feature Source-Target Restrictions

For internal (TimeFinder) and remote (SRDF) volume copy features, are there restrictions to be aware of in terms of the relationships between source and target volumes?

Background. The source and target volume structures must match. That is, if the source is a hypervolume, then the target must be a hypervolume. If the source is a metavolume, then the target must be a metavolume that consists of the same number of hypervolumes as the source. If either the source or the target is a “thin” volume, then both must be thin volumes. While a TimeFinder/Clone target can potentially be larger than the source, restore operations require the source and target to be the same size.

Dynamic Cache Partitioning Issues

Are there any potential drawbacks to the dynamic cache partitioning feature?

Background. Cache partitioning, in general, is a facility that allows a storage system administrator to dedicate a portion of cache to a subset of logical volumes. Partitioning can help control aspects of application performance, but it has limitations and adds management complexities of its own.

Following IBM’s (for DS8000) and Hitachi’s (for TagmaStore USP) announcements of partitioning capabilities in 2004, Symmetrix did not introduce Dynamic Cache Partitioning until three years later, in 2007. The timing suggests EMC may not consider this an especially important capability.

Cache partitioning is only one aspect of I/O workload partitioning. To implement comprehensive partitioning, customers must also separate applications by host ports, host adapters, disk adapters, drive paths (FC-AL loops), and RAID arrays based on application performance objectives; cache partitioning does not help with these tasks.

It is not clear how “dynamic” this feature is. How quickly can the partitions adapt to changing cache demands by moving some cache from one partition to another? If not quickly enough for a given environment, might costs go up because each cache partition may need to be configured with enough cache to handle that partition’s peak loads, compared to a global cache that need only be large enough to handle system-wide peak loads?

Typically, a cache board can handle only a fixed number of concurrent operations. Is there any way to dedicate a given cache board to specified partitions, or will cache board access (as opposed to cache capacity isolation) remain a point of performance contention?

It is interesting to note that EMC often cites the benefits of a global cache, and emphasizes that V-Max supports a global cache. Partitioning cache, by its nature, is inconsistent with the potential benefits of a shared global cache that can truly be dynamically responsive to changing data access patterns.

Mirror Position Constraints

What are Symmetrix logical volume “mirror positions”? Can they still be an issue in V-Max environments?

Background. Note: Design changes in V-Max appear to have at least partially addressed this issue, but the limit of four mirror positions per volume remains. It is not clear whether this limit will still be an issue in some environments.

In Symmetrix, each logical volume has what EMC calls four “mirror positions”. Various functions can require mirror positions. Historically, some Symmetrix functions could not be performed when a given volume had no available mirror positions. This can complicate customer use of Symmetrix. Customers may want to ask EMC about the implications of this issue in their particular environment.

Invista Issues

Why does customer acceptance of Invista remain low? What issues surround the Invista disk system virtualization product?

Background. Invista’s capabilities are relatively weak when compared to alternatives such as IBM’s SAN Volume Controller (SVC). That is reflected in the relatively low customer acceptance of Invista.

In December, 2007, an EMC spokesperson said that only “about 200 customers are currently running Invista.” (Source: http://storage.itworld.com/4620/emc-invista-vmware-virtualization-071210/page_1.html)

From a June 2008 press article: “EMC hasn't sold much of Invista, although pronouncing it ‘ready for production’ last December [2007] when version 2.0 rolled out. Its one reference customer, Purdue University, used Invista to migrate data from an older Symmetrix array to a newer one.” (Source: http://searchstorage.techtarget.com/generic/0,295582,sid5_gci1317416,00.html)

Consider:

• Invista has relatively limited support for non-EMC disk systems. (The IBM SVC supports a more extensive variety of IBM and non-IBM disk storage systems.)

• EMC often resorts to portraying Invista’s out-of-band design as superior to in-band designs (of SVC and other virtualization offerings in the industry). However, this seems to be a red herring as EMC struggles to identify some relative advantage for Invista. All virtualization solutions add some overhead to I/O requests. The in-band IBM SVC, for example, is efficiently designed to add only microseconds to each I/O request, and the SVC’s onboard cache generally improves performance compared to configurations without SVC. (In this regard, consider the published SPC (www.storageperformance.org/results) benchmark results for SVC. EMC has not published SPC benchmark results for Invista.)

• The Invista volume clone facility appears to lack functions such as consistency group support (which would allow a point-in-time copy of multiple volumes to be made nondisruptively), incremental (refresh) copy, cascaded copies (copies of copies), and space-efficient snapshots. (SVC FlashCopy supports consistency groups, incremental copies, cascaded copies, and space-efficient snapshots.)

• Invista lacks native remote disk mirroring facilities. One alternative is to use native disk system remote mirroring features, but those would be managed outside of Invista and would differ for different disk systems, reducing the potential value of virtualization. Or, EMC may recommend its RecoverPoint product, which is a separate, external appliance in the network. RecoverPoint uses an in-band design, which is somewhat ironic considering EMC’s attempts to claim out-of-band designs are better. (SVC has native support for both synchronous and asynchronous remote replication. Also, with SVC, features are essentially licensed (by capacity) to the site even if individual disk systems come and go, while with Invista customers need to license the native disk system features, which then need to be re-licensed on any new systems that replace installed systems.)

• EMC’s multipath driver for Invista is the priced PowerPath host-based driver; selected non-EMC drivers are also supported. PowerPath has both initial charges and post-warranty maintenance charges. (IBM’s SVC includes a multipath driver (SDD) at no extra charge; other selected drivers are also supported.)

• Invista requires specialized SAN switches with specialized hardware. This limits SAN switch decisions to those vendors and models that support Invista, and can increase deployment cost if new switches are required to implement Invista. (In contrast, SVC does not require such specialized function in SAN switches and can normally be deployed using the SAN switches already installed.)

• Invista does not support thin provisioning to reduce costs and capacity requirements. (IBM’s SVC supports thin provisioning through its Space-Efficient Virtual Disks function.)

The above list of considerations is not intended to be complete.

Some relevant media reports:

Source: EMC users push for better power consumption, http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1255851,00.html. Excerpt:

"We have not pushed it significantly," Tucci said in response to a question about the lackluster sales of Invista. It's been available for about a year with little to no customer traction. "We remain totally convinced that the best place to virtualize storage is in the network … with Version 2.0 we will be just fine," he said.

Source: Invista's not invisible; EMC unveils version 2.0 http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1285382,00.html?track=NL-52&ad=614683&asrc=EM_NLN_2731042&uid=84607 Excerpts:

Another user was underwhelmed by Invista 2.0. The user, who asked that he and the insurance company he works for not be identified, said he put his test development system behind Invista after EMC gave it to his company to try. He said had it worked, he would have used it on his production system to migrate between arrays. But he described the setup as "a nightmare," and said he was put off by Invista's lack of native replication and no support for other vendors besides IBM and HDS. "They gave it to us for free, and I wouldn't spend any of our money on it," he said. As [Arun Taneja, an analyst with the Taneja Group] points out, EMC reaps hundreds of millions of dollars a year from sales of Symmetrix Remote Data Facility (SRDF) software and other replication products and isn't about to jeopardize those sales by letting Invista handle replication by itself. "Anything that suggests there's an alternative to SRDF or other EMC software to move data between DMXs or from DMX to Clariion or from DMX to an IBM DS4000, that's blasphemy, right?" he said.

Source: EMC Tries Again With Invista http://www.redorbit.com/news/technology/1175569/emc_tries_again_with_invista/index.html Excerpts:

Invista has sold dismally since it was launched over two years ago. Three months ago the company estimated that it has sold around only 200 Invistas. Last week EMC refused to talk sales numbers, but said that total permanent production implementations are just "two or three dozen." [Ovum analyst Carl Greiner states] "They'll get traction eventually, because people go to the big suppliers for this sort of technology. But they were late to the storage virtualization party, and they didn't understand the market like they should have. EMC is not going to own this market like they usually want to do," Greiner said.

Source: Product guide: Storage virtualization http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9116160. This Computerworld article, October 6, 2008, mentioned IBM’s SVC and the Hitachi disk system virtualization feature, but did not mention Invista at all. The product information cited appears to have come from the vendors themselves, and EMC apparently chose to cite only its other products; if so, EMC’s decision not to even mention Invista may be indicative of the product’s relatively low customer acceptance.

PowerPath Host-Based Encryption Issues

Because encryption is a processor-intensive activity, won’t performing this function in the host add significantly to processor utilization, potentially reduce processor useful life, possibly lead to application performance issues, and complicate capacity planning? Won’t users be motivated to limit encryption to only some data to reduce that overhead, leaving other data exposed and requiring time and effort for a data classification process? Isn’t the claimed benefit of transmitting encrypted data over the SAN offset by the long life of the associated encryption keys (in contrast to short-lived keys for typical network transmission)? If SANs are entirely within a data center, isn’t the risk of someone intercepting data while it is traveling on the SAN negligible in any case? Wouldn’t the available technology that encrypts data on disk drives using a processor in each drive module be a preferred approach, similar to the approach already proven successful for tape technology?

Background. Disk drive-based encryption has many advantages over host-based encryption. See http://fdesecurityleaders.com/. The PowerPath (host-software) based approach adopted by EMC appears to have more drawbacks than potential benefits, and appears to be contrary to industry direction:

+ Can potentially support a variety of disk systems (but this is limited to the disk systems the software supports; PowerPath encryption has limited host support)
+ Protects data-in-flight over the disk system-to-host path (but paths are usually inside the data center)

- Keys are exposed outside the disk drives or disk system
- Capability is limited to hosts supported by the software; only a few hosts are currently supported
- Host software must be installed, maintained, and paid for, for every host that can access the data
- Host cycles are consumed; encryption is a processor-intensive process
- Due to the above issues, users may restrict encryption to only some data, requiring a data classification process
- Potential lock-in to one vendor’s host software

Flash Drive Restrictions

What restrictions are there for implementing flash drives in Symmetrix?

Background. EMC informs customers of multiple restrictions for using flash drives compared to disk drives. Customers should contact EMC for the current list of restrictions.

Logical Configuration Changes Requiring EMC Assistance

What changes to the Symmetrix logical configuration cannot be performed by the customer and require EMC assistance? How much advance notice does EMC need? Are these changes done for a fee?

Background. It is difficult to identify all such activities. While various product manuals mention such items, no single consolidated list has been identified.

TimeFinder (Internal Volume Replication) Questions

TimeFinder Management Complexity

What are some of the complexities involved in managing the various Symmetrix TimeFinder features?

Background. The following items contribute to TimeFinder’s management complexity:

• Support for volume consistency groups, which help protect data integrity for related data that spans multiple volumes, is not a standard part of the TimeFinder features but is a separately priced feature. This is the case even when volume consistency groups reside in a single Symmetrix.

• Managing TimeFinder consistency groups that span multiple Symmetrix systems may require priced EMC host-resident software, e.g., PowerPath.

Copy on First Write Performance Issues

Under what conditions does the Symmetrix implementation of Copy on First Write, an algorithm used for clones and snapshots, cause application I/O delays?

Background. Copy on First Write (COFW) is an algorithm commonly used to support managing logical point-in-time copies of volumes. COFW helps reduce the space needed for target volumes and/or helps reduce internal data movement from source to target volumes. Data is moved from source to target a) when the original point-in-time source data would be overlaid by application write I/Os and b) in some implementations, when write I/Os are sent to target volumes. If the source-to-target data move occurs in response to a write I/O before the host is sent an I/O-complete indication, host I/O response time can be elongated significantly.

Prior to DMX-3, the use of COFW by TimeFinder/Clone and TimeFinder/Snap resulted in delays for write I/Os to both source and target volumes. As of DMX-3, EMC has only partially addressed the problem, so applications can still suffer elongated write I/O response times in some cases. The partial EMC solution is called Asynchronous Copy on First Write (ACOFW), though the name fails to indicate that it does not eliminate I/O delays in all cases. For writes to source volumes, the source-to-target copy is still done synchronously in some cases, according to a description of the feature at http://oraclestorageguy.typepad.com/oraclestorageguy/2007/12/new-asynchronou.html in a December 2007 article titled New Asynchronous Copy on First Write (ACoFW) Eliminates Snapshot Write Penalty.

For writes to target volumes: EMC customers can read about the function and restrictions in an EMC paper titled Feature Specification, TimeFinder/Clone and TimeFinder/Snap TimeFinder Asynchronous Copy-on-First-Write Source and Target, January 2008. The paper appears to contain a contradiction by stating “ACOFW is also supported with TimeFinder/Snap virtual devices (VDEVs)” and also stating “ACOFW to the target device is not enabled for writes to VDEVs as part of an active TimeFinder/Snap session.”
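
To make the synchronous-copy penalty concrete, here is a minimal, vendor-neutral Python sketch of the COFW idea. The track granularity and in-memory structures are illustrative assumptions, not EMC’s implementation:

class CofwSnapshot:
    def __init__(self, source):
        self.source = source     # live volume: track number -> data
        self.preserved = {}      # point-in-time data saved on first write

    def write(self, track, data):
        # On the FIRST write to a track after the snapshot, the original
        # data must be copied aside before the write can complete; this
        # synchronous copy is what elongates host write response time.
        if track not in self.preserved and track in self.source:
            self.preserved[track] = self.source[track]   # the costly copy
        self.source[track] = data

    def read_snapshot(self, track):
        # Snapshot reads come from preserved data if the track was
        # overwritten, otherwise from the (unchanged) source.
        return self.preserved.get(track, self.source.get(track))

vol = {0: "A", 1: "B"}
snap = CofwSnapshot(vol)
snap.write(0, "A2")            # triggers the copy-on-first-write
print(snap.read_snapshot(0))   # prints "A": the point-in-time image is intact

An “asynchronous” variant defers the copy-aside until after the host is acknowledged, which removes the delay only when the deferral is actually permitted; as noted above, ACOFW still performs the copy synchronously in some cases.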

SRDF (Remote Copy) Questions

Maximum SRDF/Synchronous Distance Issues

What is the longest distance supported/recommended between two sites using SRDF/Synchronous mode? If this distance is relatively limited compared to competitive offerings, is that evidence of implementation inefficiencies that can cause slower response times, making longer distances impractical?

Background. Much EMC documentation indicates the recommended limit is 200km. See a 2007 press release at http://www.emc.com/about/news/press/us/2007/07162007-5180.htm. Response times for write I/Os in a synchronous mode remote mirroring environment depend on multiple factors including protocol efficiency, distance, link speeds, and internal processing overheads per write. In the IBM DS8000, the efficient design of synchronous remote mirroring (the Metro Mirror feature), including low internal processing overheads per write I/O, allows the system to support up to 300km (180 miles) between sites with good response times. This distance can potentially be increased via an RPQ.

SRDF/S Link Protocol Can Elongate Response Time

How many round trips does the SRDF/S protocol require to transmit data to the remote system? For cases where this is more than one round trip, doesn’t that elongate write I/O response time compared to disk systems that need only one round trip?

Background. Historically, SRDF/S used a two round-trip protocol. With newer microcode, SRDF/S uses a one round-trip protocol in some cases only, not all cases.

SRDF I/O Parallelism Limitation

How many write I/Os are allowed at one time to one open systems LUN in SRDF/S (synchronous mode)? In SRDF/A (asynchronous mode)?

Background. For SRDF/S, there is a major limitation on write I/O parallelism to a given logical volume. As of Enginuity 5773, SRDF/S supports only up to 8 concurrent writes per logical volume, and only if the writes are within 256 512-byte blocks of each other. (Source: EMC Symmetrix Remote Data Facility (SRDF) Product Guide, March 2009.) Another EMC publication indicates the writes must also arrive through different host ports. Given the different restrictions in different manuals, customers may want to contact EMC for clarification. For SRDF/A, it could not be determined whether parallel writes to a given volume are supported.
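
The documented rule is narrow enough to express in a few lines. This hedged Python sketch checks a set of in-flight writes against the two conditions stated in the Product Guide (the per-port condition from the other EMC publication is omitted, and the block addresses are hypothetical):

MAX_CONCURRENT_WRITES = 8
PROXIMITY_BLOCKS = 256            # 512-byte blocks, i.e., a 128KB window

def can_run_concurrently(write_blocks):
    # write_blocks: starting block addresses of the in-flight writes.
    if len(write_blocks) > MAX_CONCURRENT_WRITES:
        return False
    return max(write_blocks) - min(write_blocks) <= PROXIMITY_BLOCKS

print(can_run_concurrently([1000, 1100, 1200]))   # True: within 256 blocks
print(can_run_concurrently([1000, 5000]))         # False: too far apart

Sequential workloads may stay within the 128KB window, but random writes spread across a large LUN rarely will, which is why this limitation matters for parallelism.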

SRDF Volume Consistency Group Requirements

Why doesn’t SRDF/S have consistency group support built-in to protect the data integrity of related data that resides on more than one volume – why is a separate feature required? Why does SRDF/A have this support built-in for only some environments – why is a separate feature required for any but the simplest configurations?

SRDF/A Remote Site Data Loss

Why does SRDF/A have such a relatively high amount of data loss at the remote site following unplanned termination of data propagation from the local site (e.g., due to a local site disaster)? Won't it require a significant customer effort to deal with the amount of data that can be lost in an SRDF/A environment? What is the cost of that lost data to the business?

Background. Unlike more efficient asynchronous remote copy designs, in SRDF/A data is not sent by the local Symmetrix to the remote Symmetrix as soon as possible after it is received. Instead, transmission is artificially delayed: data is collected and held back for a period of time in the local system. The local collection of data to be sent is called a “delta set”. The delay between delta set transmissions is a user-specified amount of time; the default is 30 seconds, and EMC seems to recommend using that default. Not until the end of the delay period does the collected delta set begin being transmitted to the remote site.

This design adds additional seconds of data loss in case of an unplanned outage, because none of the data in the delta set being sent is usable for remote site restart or recovery unless and until the entire delta set is successfully received by the remote Symmetrix. Data collected in the local system that hasn’t been transmitted at all is obviously lost if a disaster occurs. If transmission of a delta set is in process but a disaster occurs before transmission completes – if even one record in that delta set is not received by the remote system – then the entire portion of the delta set that was received is discarded by the remote system. Therefore, the amount of data loss at the remote site can be estimated as the sum of the collection time (default of 30 seconds) plus all transmission time up to the moment that transmission failed. EMC itself acknowledges that, using the default 30-second delta set collection time, this can mean up to one minute (60 seconds) of data loss. (If the rate of incoming writes exceeds the bandwidth of the communications links – perhaps during peak periods – the data loss can be even higher.)

The problem this can cause can be expressed in terms of lost business. Assume a business processes 1000 transactions per second. Then each second of data loss at the remote site represents 1000 lost transactions. If one vendor’s remote copy implementation results in, say, under five seconds of data loss while SRDF/A results in more than thirty seconds of loss, the difference to the business is measured in tens of thousands of transactions. Can implementing a disk system with such relatively high data loss built into its d/r design be justified by the customer?
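
The estimate described above reduces to a one-line formula. The following Python sketch applies the reasoning in this section; the 30-second cycle and the 1000 transactions-per-second rate come from the text, while the transmit time is a hypothetical input:

def estimated_data_loss_seconds(cycle_time_s, transmit_time_s):
    # Worst case: a full collection cycle is lost, plus the partially
    # transmitted delta set that the remote system must discard.
    return cycle_time_s + transmit_time_s

loss_s = estimated_data_loss_seconds(30.0, 30.0)   # EMC's own 'up to 60 s' case
print(loss_s, "seconds of writes lost")
print(int(loss_s * 1000), "transactions lost at 1000 tx/s")   # 60000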

SRDF/A Cache Usage

Given that SRDF/A maintains two “delta sets” in local Symmetrix cache, and two more in remote Symmetrix cache:

a) Won’t this reduce the cache space available to production applications? Won’t the performance impact be worst during peak I/O loads, when having cache available for application data matters most?

b) Is it true that if cache fills up (which EMC calls a “cache full” condition), SRDF/A may terminate, which disables current d/r protection, or application performance may be slowed to the equivalent of a synchronous protocol even if the distance makes the performance impact on local applications very severe? (In 2007 EMC added new facilities to SRDF/A (i.e., a delta set extension option) to help avoid a cache full condition in some situations, but those optional facilities add cost and complexity to SRDF/A not found in competitive asynchronous remote copy designs that address the issue more effectively.)

c) Given the issues it can create, why did EMC design SRDF/A to use so much cache when other vendors have successfully designed asynchronous remote copy features that require little to no additional cache?

SRDF/A Consistency Groups - Host Software Overhead

To support the ongoing process of forming consistency groups of data (SRDF/A delta sets) in environments where write order-dependent data (such as a database and its log) resides in multiple Symmetrix systems – and even in some configurations where a consistency group resides within one Symmetrix – must EMC host software be installed and maintained to coordinate this process? How much utilization does this add to my servers? How much time and effort is required to maintain this software? Since EMC’s past philosophy has often been to move storage functions off of hosts and onto disk systems, isn’t the requirement for constant host-based involvement with an active disk system function a step backward?

Background. EMC requires customers to obtain a license for the SRDF/CG (consistency group) feature to support SRDF/A multi-session consistency (MSC) configurations. MSC configurations are those where data consistency must be maintained across multiple Symmetrix systems (e.g., for a database, parts of which reside on more than one local Symmetrix system), and even within a Symmetrix system in some configurations.

SRDF/A Remote Site Data Integrity Issues

Isn't the remote copy of data in an SRDF/A environment exposed to loss of data integrity if a local unplanned outage occurs during the time the remote copy of data is being resynchronized from the local Symmetrix following a temporary suspension of data propagation (e.g., following temporary loss of communications links or a temporary outage of the remote system)? To avoid this problem, won’t I need to install additional disk capacity in the target Symmetrix to hold a copy of remote data that I have to create prior to the resynchronization process? If production must run on those target copies, can a changes-only failback operation be supported even though the remote target copies are not integrated into the SRDF/A design? Is it the case that the optional SRDF/A Delta Set Extension (DSE) function only partially addresses the problem? Why doesn’t SRDF/A integrate comprehensive support for this important data integrity protection so I don’t have to handle it on my own? Background. EMC acknowledges this significant SRDF/A data integrity exposure and recommends customers manage it on their own. For example, EMC writes: “It is very important to capture a gold copy of the dependent-write consistent data on the secondary Symmetrix R2 devices prior to any resynchronization. Any resync process compromises the dependent-write consistent image. The gold copy can be captured on a remote set of BCVs or Clones.” (Source: EMC Symmetrix Remote Data Facility (SRDF) Product Guide, March, 2009) Note that TimeFinder/Snap is not listed, because TimeFinder/Snap is not supported for making copies against active SRDF/A target volumes. EMC has known about this SRDF/A issue for years: “The use of BCVs is strongly encouraged to retain a consistent restartable image of the data volumes on the R2 side during periods of resynchronization”. (Source: Using Asynchronous Remote Replication for Business Continuity…A Technical White Paper on SRDF/A…, December, 2004, page 9.) A press article also mentions this SRDF/A issue, indicating many customers appear to not be aware of it:

[A customer attending Storage Networking World] who is using EMC Corp.’s SRDF-A between two sites 160 miles apart, warned that at the secondary DR site regular point-in-time copies are necessary, because a lost WAN link will destroy the write order—something other users in the session said they were glad they learned. --- Source: More tidbits from SNW, published by www.searchstorage.com http://storage.blogs.techtarget.com/2007/04/20/more-tidbits-from-snw/?track=NL-52&ad=585861&asrc=EM_NLN_1327107&uid=84607

It could not be determined whether a failback to the production site can be done by sending only changed data if the customer needs to use the SRDF/A remote site “gold copy” for production and updates those volumes. That is, since support for the gold copy is not integrated into the SRDF/A design, it is unclear whether changes to those volumes are appropriately tracked so that only those changes need to be transmitted to resynchronize the local site source volumes, avoiding the need to do a full volume resynchronization.

In 2007 EMC added a Delta Set Extension (DSE) function to SRDF/A to help address problems with high cache utilization caused by SRDF/A. DSE lets the customer set aside dedicated disk capacity that is used to offload SRDF/A delta set data to help free up some cache under certain conditions. This may allow SRDF/A to ride through some link outages that it could not ride through without DSE, but DSE does not appear to be designed to handle all scenarios (e.g., arbitrarily long link outages). DSE adds local system overhead that can impact production application performance, requires additional disk at both the local and remote sites, and is somewhat complex to manage when you look at the details. (Since DSE buffers need to be implemented in both the local and remote systems, if the buffer were configured large enough to handle the worst-case scenario of a full system resync, the customer would need to double the disk capacity at each site. Implementing an SRDF/A gold copy, required only at the remote site, is less costly, but must be managed manually.)

In summary, SRDF/A lacks a comprehensive, integrated, self-managed solution to protect remote site data integrity during local-remote data resynchronization. In contrast, such a solution has been integrated and automated in IBM’s DS8000 Global Mirror from its inception. Even the late-in-coming SRDF/A DSE add-on function doesn’t provide the protection, integration, or ease-of-management that has always been supported by Global Mirror. SRDF/A users must implement such protection on their own outside of SRDF/A (using a “gold copy” as EMC recommends in its manuals), or risk exposures to remote site data integrity.

SRDF/A and TimeFinder/Snap Restriction

Is TimeFinder/Snap supported to make copies of SRDF/A target volumes?

Background. The use of TimeFinder/Snap to make a copy of SRDF/A target volumes (called “R2” volumes in EMC’s terminology) has been unsupported for a long time. Customers may want to check with EMC on the current status of this support.

SRDF/A Write Folding Inapplicability

Isn’t the SRDF/A design optimized for those applications that can take advantage of what EMC calls “write folding” (which is the ability of SRDF/A to overlay data in cache with more recently written data intended for the same physical disk location)? If an organization’s applications generally don’t write and rewrite data to the same disk locations repeatedly in relatively short intervals of time, wouldn’t SRDF/A be inefficient for those applications? In particular, isn’t a design oriented around write folding of no value for applications doing sequential writes since those applications don’t rewrite data to the same locations? Won’t SRDF/A’s design cause the data that is being artificially held back in the local Symmetrix cache, just in case it may be overwritten, to take up more and more cache which in turn can slow down application performance? Doesn’t holding data back in cache just in case it might be overlaid by other writes also end up increasing remote site data loss following an unplanned local outage?

SRDF/A Link Bandwidth Issues

Doesn’t SRDF/A’s design require relatively high (and thus potentially costly) link bandwidth, for at least the following reasons?

1) SRDF/A transmits data at fixed intervals, so even if there is data to send and link bandwidth is available, that bandwidth cannot be used until the next transmission interval begins.

2) SRDF/A accumulates data to be sent in system cache until the next transmission interval begins, reducing the amount of cache available to help improve application performance; high link bandwidth therefore becomes important so the data accumulated in cache can be sent quickly once the transmission interval begins, freeing cache for use by applications.

Given these aspects of the SRDF/A design, wouldn’t disk systems whose asynchronous remote mirroring features send data immediately when bandwidth is available, and that minimize cache use, need less link bandwidth?

Background. Prospective users of SRDF/A may want to consult with EMC, other disk system vendors, and bandwidth providers (e.g., telecommunications companies) to understand link bandwidth requirements and costs during disk system evaluation, since link bandwidth costs can be significant beyond the costs of the disk systems and remote copy features alone.
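
As a rough illustration of why fixed-interval designs drive bandwidth requirements, the Python sketch below sizes a link so that one delta set can be drained within the next cycle. All inputs are hypothetical, and this is not EMC sizing guidance:

def min_link_mbps(peak_write_mb_per_s, cycle_s, drain_s):
    # A delta set accumulates peak_write_mb_per_s * cycle_s megabytes;
    # the link must move it within drain_s seconds to keep cache free.
    delta_set_mb = peak_write_mb_per_s * cycle_s
    return delta_set_mb * 8 / drain_s   # megabits per second

# 50 MB/s of peak writes, 30 s collection, drained within the next 30 s cycle:
print(round(min_link_mbps(50, 30, 30)), "Mb/s")   # 400 Mb/s

A design that transmits as soon as data and bandwidth are available can, in principle, be sized closer to the average write rate rather than to the burst needed to drain an accumulated delta set.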

SRDF/A Link Outage Management Issues

Does SRDF/A automatically handle link outages in a simple, direct way, or is the process more complex than that?

Background. The SRDF/A facilities for handling link (communications) outage situations are not straightforward. The first line of defense after the system detects that all links are down is a timer interval, sometimes referred to as the “link limbo” value. If the links have not come back online when this interval expires (default 10 seconds), SRDF/A considers the condition a permanent link loss and drops active sessions; in that case the locations of changed tracks are recorded for later data transmission.

In 2007, EMC introduced a “transmit idle” option the user can set that is invoked after the timer interval expires, such that the session remains active. This means that local write data continues to be collected in cache even though it cannot be transmitted. This can result in a “cache full” condition if cache fills up before the links become operational, and that condition can terminate SRDF/A.

To help deal with that situation, in 2007 EMC also introduced a “Delta Set Extension” (DSE) option that allows the user to pre-configure additional local and remote disk space that is used as a buffer to offload data from cache before it causes a “cache full” condition.

The above process is made more complex by the need for the customer to implement and manage their own remote site “gold copy” to protect data integrity during data resynchronization following link outages, because that support is not integrated into SRDF/A.
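
The decision flow described above can be summarized in a short sketch. This is a simplified, speculative rendering of the sequence as described in this section; the option names follow the text, but the control flow is an illustration, not EMC’s logic:

def on_all_links_down(link_limbo_s=10, transmit_idle=False, dse_configured=False):
    # Returns the sequence of actions, per the description above.
    steps = ["wait up to %d s for links to return ('link limbo')" % link_limbo_s]
    if not transmit_idle:
        steps.append("drop active sessions; record changed tracks for later resync")
    else:
        steps.append("keep session active; continue collecting writes in cache")
        if dse_configured:
            steps.append("offload delta set data to the DSE disk pool as cache fills")
        else:
            steps.append("risk a 'cache full' condition, which can terminate SRDF/A")
    steps.append("before resynchronizing, capture a remote 'gold copy' manually")
    return steps

for step in on_all_links_down(transmit_idle=True, dse_configured=False):
    print("-", step)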

SRDF/A DSE Sharing Issue

Is it the case that separate, unshared DSE (Delta Set Extension) pools must be created for different host types: open, IBM System z, and IBM System i? Doesn’t this potentially increase costs and management complexity?

Concurrent SRDF/Star and Concurrent SRDF Performance Impact

Given that concurrent SRDF/Star and concurrent SRDF transmit write requests to two remote systems for each single write I/O received by the local Symmetrix, what is the impact of that overhead on application performance and disk system scalability? Doesn’t significant local system overhead also occur during resynchronization of either remote site (or potentially even both remote sites at the same time) from the local site (e.g., following a temporary communications link outage)?

Background. These kinds of local disk system performance overheads are drawbacks of multi-target designs. In contrast, cascading designs have less overhead while also supporting multiple target locations.

Multi-hop and Cascaded SRDF Issues

What are some of the drawbacks of multi-hop SRDF and cascaded SRDF 3-site configurations?

Background. Symmetrix multi-hop and cascaded SRDF are relatively simple designs that might be called “basic” 3-site or “basic” cascading designs. They lack sophisticated features supported by newer 3-site designs from IBM and EMC.

Multi-hop SRDF: An additional set of volumes managed by TimeFinder in the B system is required to periodically make copies of received data to be sent to the C site. This increases costs and tends to cause target site data currency to lag significantly. Multi-hop does not support an A-to-C link in cases such as when site B is unavailable.

Cascaded SRDF: This is basically an enhancement to multi-hop. It eliminates multi-hop’s TimeFinder requirement at the intermediate site but, like multi-hop, does not support an A-to-C link in cases such as when site B is unavailable.

SRDF/EDP Issues

What issues and restrictions apply to SRDF/EDP?

Background. SRDF/EDP is a 3-site cascaded remote mirroring configuration, A → B → C, where the A-to-B connection is SRDF/S and the B-to-C connection is SRDF/A. The intermediate site B does not use physical disk space to hold copies of the data being transmitted from A through B to C.

The following points apply. This list is not intended to be complete; prospective users should consult with EMC to identify all such considerations.

• The intermediate systems cannot be used for d/r recovery (because they do not contain a usable copy of the data at A or C)

• Hosts cannot access the volumes in the intermediate systems that are used for A-to-C pass-through

• The intermediate systems cannot be used as a target for IBM’s z/OS GDPS/HyperSwap, for IBM’s z/OS Basic HyperSwap, or for EMC’s AutoSwap

• The intermediate systems must be V-Max models

SRDF/EDP intermediate system is not “diskless”

EMC sometimes uses the term “diskless” or “diskless device” when referring to its SRDF/EDP configuration (Extended Distance Protection). Does that mean no disks are required at the intermediate site?

Background. EMC’s use of the term “diskless” may mislead potential users of this capability. SRDF/EDP is a 3-site cascaded remote mirroring configuration, A → B → C, where the A-to-B connection is SRDF/S and the B-to-C connection is SRDF/A. The intermediate site B does not use physical disk space to hold copies of the data being transmitted from A through B to C; it is “diskless” in that sense only. However, the Symmetrix system(s) at B do require physical disk capacity. Consider:

1) The configuration-dependent rules for minimum drive counts appear to still apply. The minimum number of drives depends on the number of V-Max engines and model. Even a single engine V-Max system requires a minimum of 96 drives, and a V-Max SE requires a minimum of 48 drives. (However, one source indicated the requirements may be reduced to spares and vault disks, so customers should check with EMC for details on physical disk requirements.)

2) Because the B-to-C connection is SRDF/A, all the issues around SRDF/A’s extensive use of cache arise. EMC has added the Delta Set Extension (DSE) function to SRDF/A to help deal with this, but DSE requires disk capacity configured at both the B and C sites.

Virtual (Thin) Provisioning Questions

Feature Effectiveness Issues

There seems to be a wide range of potential savings in physical capacity due to thin provisioning cited in the industry. How can a given customer determine what benefit they might receive?

Background. The potential physical capacity savings can vary widely. A vendor may be able to cite a given customer who had substantial savings, but customers shouldn’t make assumptions about potential capacity savings in their environment without evidence. For example, the benefit will be small for customers who are already disciplined about minimizing over-allocation of storage. Some customers may use dynamic volume expansion to add capacity to volumes only when it is needed, rather than pre-allocating capacity that may or may not eventually be used. Some applications format volumes with data in a way that negates the value of thin provisioning.

Functional Limitations

What functional limitations apply to the Symmetrix thin provisioning feature which EMC calls virtual provisioning?

Background. EMC documents multiple functional limitations and restrictions for using virtual provisioning. In addition to a few items identified here, interested customers should ask EMC to identify the full list. Symmetrix virtual provisioning is supported for open systems, but (at least initially) is not supported for IBM System z servers (mainframes). Many restrictions apply when using thin volumes with TimeFinder or SRDF. For example, use of TimeFinder/Clone (same system volume copy) and SRDF (remote copy) is supported, but is limited to thin-provisioned volumes copied to/from other thin-provisioned volumes. It appears that increasing the size of a thin provisioned volume beyond the maximum size of a hypervolume requires the same complex mechanism as is required for standard hypervolumes, i.e., managing metavolumes.

Performance Issues

How might virtual provisioning impact performance?

Background. No measurements of Symmetrix virtual provisioning versus standard configurations could be found. EMC describes the performance characteristics of virtually provisioned volumes in terms like “performance variability”.

Symmetrix virtual provisioning uses a relatively small 768KB “chunk size”. (“Chunk size” is a common term for the amount of physical capacity added to a thin volume as it grows beyond previously assigned capacity.) It is interesting to contrast the Symmetrix thin provisioning design in this regard to what other vendors are doing. The IBM DS8000 uses a 1GB chunk size called an “extent”, the same allocation unit it uses for standard volumes. This means Symmetrix requires more than 1,000 times as many visits to the add-capacity procedure as the DS8000 needs (1GB ÷ 768KB ≈ 1,365). The high-end Hitachi-based systems are also more efficient than Symmetrix in this regard: their 42MB chunk size means Symmetrix requires more than 50 times as many visits to the add-capacity procedure as the Hitachi systems (42MB ÷ 768KB = 56).

It might be argued that a smaller chunk size can potentially increase overall space savings compared to standard (“thick”) volumes. However, considering that users typically allocate volumes in multiples of gigabytes – sometimes over 1000GB (1TB) for one volume – even a 1GB chunk size can provide the bulk of the potential space savings while offering a potential performance advantage.
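
These ratios follow directly from the chunk sizes. A short Python check (chunk sizes as cited above; the 1TB volume is an example):

KB, MB, GB, TB = 1024, 1024**2, 1024**3, 1024**4

chunk_sizes = {
    "Symmetrix virtual provisioning": 768 * KB,
    "Hitachi USP-class": 42 * MB,
    "IBM DS8000 extent": 1 * GB,
}

volume = 1 * TB
for name, chunk in chunk_sizes.items():
    # Number of times the add-capacity procedure runs as the volume fills.
    print(name, "->", volume // chunk, "allocation operations")

# Symmetrix: 1,398,101 operations; Hitachi: 24,966; DS8000: 1,024 --
# ratios of roughly 1,365x and 56x, matching the text.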

Pooling Complexity

Must standard (“thick”) volumes and thin-provisioned volumes be kept in separate capacity pools even if they are related to the same application? Doesn’t this increase management complexity and get in the way of easily sharing physical capacity among these volumes?

Background. Many customers may prefer to pool physical capacity based on criteria other than whether a given volume is a standard (“thick”) volume or a thin volume. (After all, the user does not need to create separate pools of capacity for each RAID type supported by Symmetrix.) Consider an example: it may be desirable to pool physical capacity for all volumes associated with a given application, regardless of the fact that some of the application’s volumes happen to be thick and others happen to be thin. This flexibility would not only help reduce the number of pools to be managed, but would also promote controlling and sharing physical capacity across the entire application, rather than having some capacity dedicated to only thick volumes and other capacity dedicated to only thin volumes, with that capacity not readily sharable across the pools.

TCO Considerations

In 2009 EMC announced that Virtual Provisioning would be offered at no charge on Symmetrix. Is this a significant advantage over competitors?

Background. What generally matters most to a customer is the overall value of a storage system, including Total Cost of Ownership (TCO). Comparing the prices of selected features and components may not be very helpful when assessing TCO, and different storage system vendors have differing pricing policies in many respects. For example, EMC charges for its Symmetrix host-based multipath I/O driver, PowerPath, while IBM includes its host-based multipathing driver, Subsystem Device Driver (SDD), at no extra charge with various IBM disk systems including the DS8000. The vendors selling the high-end Hitachi-based systems charge extra for similar device drivers. The DS8000 1-to-4 year customer-choice warranty covers hardware and licensed (software) features, while EMC’s and the Hitachi-based system vendors’ warranties for licensed software expire after only 90 days.

Total Cost of Ownership Questions

Hardware and Software Warranty Durations

What are the standard hardware and software warranty periods for Symmetrix? How do these compare to other vendors’ warranties?

Background. EMC’s standard Symmetrix V-Max hardware warranty period is 3 years. EMC’s standard warranty period for licensed Symmetrix features (i.e., software), including Enginuity, is only 90 days, and it covers media defects only, not “bugs”.

Hardware Upgrade Warranty Duration Limitation

If a customer adds hardware features, such as additional disk drives, to a Symmetrix at some date after the base Symmetrix system is installed, do those upgrades get their own warranty starting from their installation date with the same duration as the base system warranty? Or does the warranty of the upgrades end on the same date the original warranty for the base Symmetrix ends, even if that is the next day?

Background. Industry analysts have indicated that upgrade warranties are co-terminus with the base system warranty. For example, if a Symmetrix has a 3-year warranty and new disks are added 1 day before the end of that warranty, those disks have only a 1-day warranty. Many customers may not realize this.

Warranty versus Prepaid Maintenance

If a customer requests a longer-term warranty than EMC’s standard warranty (e.g., to match a competitor’s longer warranty), and EMC responds by offering to include prepaid maintenance in its initial charges to cover the additional time period, isn’t that less attractive than an actual warranty? For example, won’t maintenance for any hardware upgrades to the initial system made during the standard warranty period be covered only during that period which started the day the initial system was installed, but not be covered during the prepaid maintenance period? Doesn’t that mean a customer will need to pay separate and additional maintenance charges for those upgrades starting as soon as the base system warranty expires (because the prepaid maintenance applies only to the initial configuration)?

Hardware Upgrade Charges

Is EMC’s list price higher for adding a hardware feature (such as disk drives) to an installed system than for installing the same feature at the factory on the original system? In other words, must customers, in effect, pay a financial surcharge if they upgrade the system after it is installed? If so, how much higher are the upgrade list prices?

Background. Historically, the charges for Symmetrix hardware feature upgrades have been substantially higher than for ordering the same feature with the original system. Even if EMC offers the same discount for upgrades as for features on the base system, the higher upgrade list price results in a higher discounted price.

Rules Requiring you to Buy More Hardware than Needed

In one Symmetrix system, can hardware resources such as the size of cache, the number of disk drives, and the number of host ports be flexibly configured independently of each other to meet an individual customer’s specific needs? If only some combinations of these resources are supported, won’t that potentially limit options and increase costs? For example, are there cases where, in order to add more host connection ports, a customer is required to also buy other hardware components they do not need?

Background. In Symmetrix V-Max, only some combinations of numbers of host ports, cache sizes, and numbers of disks are allowed. These resources cannot be configured or scaled independently of each other. This lack of resource flexibility and granularity can result in unanticipated costs as customers grow over time and need more of one resource but are forced to also buy more of other resources they don’t need. For example, each V-Max engine supports a fixed maximum amount of cache, so adding cache beyond that requires buying additional engines, which include other components such as host ports, disk paths, processors, and a minimum number of disks that must be paid for even if they are not needed. Similarly, if a customer needs more host ports than the maximum number supported by the installed V-Max engine(s), the customer must buy additional engines, not just additional host ports. EMC’s Symmetrix V-Max Product Guide has more information on this topic.

Unusable Disk Capacity can Raise Costs

In V-Max, why are hundreds of gigabytes – up to 1.6 terabytes – of disk capacity ordered by the customer taken away by the system for vaulting data? (Source: Symmetrix V-Max Product Guide.) Does the proposed system configuration include additional physical disks to compensate? For software features priced based on capacity, is the price reduced to compensate for the capacity taken by the system? Since price and post-warranty maintenance charges for software features generally increase for each additional terabyte of physical disk capacity installed or added, do I have to pay more if the reserved capacity I can’t even use pushes these charges into the next terabyte price tier?

Background. Symmetrix data vault capacity is used to preserve data in cache in case of loss of external power. The issue isn’t that such a facility isn’t needed – it is. The issue is that customers may not anticipate that Symmetrix is designed to take the needed capacity out of customer-ordered capacity, so additional disk capacity should be ordered to compensate. Not ordering that additional capacity can make the acquisition price appear lower than it should be. In contrast, the IBM DS8000 provides a similar data vaulting facility but uses built-in dedicated disks for that purpose rather than taking capacity away from disks ordered by customers.
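
A simple sketch of the pricing concern raised above. The 1.6TB vault figure is from the V-Max Product Guide as cited; the ordered capacity is a hypothetical assumption:

ordered_tb = 64.0
vault_tb = 1.6                       # capacity the system takes for vaulting
usable_tb = ordered_tb - vault_tb    # 62.4 TB the customer can actually use

# To restore 64 TB usable, extra physical capacity must be ordered, and
# capacity-priced software is generally billed on the full physical amount.
compensated_tb = ordered_tb + vault_tb
print("usable without compensation:", usable_tb, "TB")
print("physical capacity billed after compensating:", compensated_tb, "TB")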

Disk Mirroring (RAID-1/10) can Raise Hardware and Software Costs

If EMC recommends that disks in Symmetrix be mirrored (i.e., protected by RAID-1 or RAID-10) to provide acceptable performance, won’t my costs be higher than if I use other RAID techniques, even though the usable capacity is the same in either case?

Background. Higher costs due to configuring RAID-1 or RAID-10 accumulate in multiple ways. Initially, 100% more disk capacity must be ordered (not counting spare disks or the capacity Symmetrix takes away from customer-ordered capacity for cache vaulting). This increases initial prices for licensed software features that are typically priced based on physical (raw) disk capacity. When new disk capacity is added, there are “delta” charges for those software licenses. When the hardware warranty expires, users can be billed for post-warranty monthly maintenance based on the physical disk capacity. When licensed software feature warranties expire, users can be billed for post-warranty monthly maintenance based on physical disk capacity.

RAID-5 3D+P (3+1) Costs

Symmetrix supports RAID-5 in 3D+P and 7D+P (a.k.a. 3+1 and 7+1) configurations. What are some cost implications of 3D+P?

Background. RAID-5 3D+P (3 data + 1 parity) means groups of four drives where the equivalent of one drive’s capacity is used for parity data, providing the redundancy to avoid data loss if one drive or one data block fails. 75% (3/4) of the physical capacity in the group is available for application data. In contrast, in 7D+P arrays 87.5% (7/8) of capacity is available for application data. RAID-5 3D+P stripes logical volumes across only 4 drives, while RAID-5 7D+P stripes data across 8 drives for improved performance.

Because RAID-5 3D+P requires more physical disks to support a given usable capacity than RAID-5 7D+P, a 3D+P configuration raises costs in four ways: the initial cost for disks, the post-warranty maintenance cost for disks, the initial cost for licensed features charged based on physical capacity, and the post-warranty maintenance costs for those features. More disks can also mean more frames (a.k.a. bays), and more internal disk adapters may be required, further raising costs.
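
A worked comparison of the RAID overheads discussed in the last two questions, counting physical drives needed for a given usable capacity. The drive size and the 100TB target are hypothetical, and spares and vault capacity are ignored for simplicity:

import math

def drives_needed(usable_tb, drive_tb, data_drives, parity_drives):
    # Round up to whole RAID groups of (data + parity/mirror) drives.
    groups = math.ceil(usable_tb / (drive_tb * data_drives))
    return groups * (data_drives + parity_drives)

usable, drive = 100, 0.45             # 100 TB usable on 450 GB drives
print("RAID-1      :", drives_needed(usable, drive, 1, 1))   # 446 drives
print("RAID-5 3D+P :", drives_needed(usable, drive, 3, 1))   # 300 drives
print("RAID-5 7D+P :", drives_needed(usable, drive, 7, 1))   # 256 drives

Since capacity-based software charges follow the physical drive count, these differences compound into license and maintenance costs as well.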

BIN File Potential Charges

See the BIN File Issues discussion in the Management Questions chapter.

Predicting Software Costs

Will EMC put in writing all software charges – one-time charges and monthly maintenance charges – so we can estimate in advance what we will be asked to pay over the expected life of the system? What additional charges can we expect if we add new disk drives to the system, given that many initial and maintenance software charges are based on the physical disk capacity used by the software (e.g., TimeFinder/Clone)?

Background. This is especially important due to EMC’s standard licensed (software) feature warranty lasting only 90 days.

PowerPath Pricing Concerns

Is it true that the PowerPath multipathing Symmetrix feature is charged for based on the number of processors (i.e., CPUs) in the servers on which it is installed? Doesn’t this mean that, rather than “pay for value”, the charge is the same regardless of how much data is accessed on Symmetrix, or how frequently? Doesn’t this also mean that I can be charged if I upgrade a server by adding CPUs, even if the applications accessing Symmetrix don’t change? Is PowerPath required for some Symmetrix functions (such as cross-Symmetrix TimeFinder volume consistency groups), so that I may have to pay for PowerPath even if my server already has its own multipathing software?

Background. EMC may offer a minimal-function version of PowerPath called PowerPath SE at no extra charge, but it is likely of limited value to most customers. In contrast to this costly EMC feature, the IBM DS8000 includes host-based multipathing software as standard.

System Resale Restrictions

Will EMC’s contract restrict who I can resell a Symmetrix to, even if I purchased the disk system?

Background. EMC will sell Symmetrix hardware and permit its customers to transfer ownership of the hardware. However, EMC will generally not permit its customers to also transfer the internal microcode required to operate Symmetrix, including any optional features paid for by the customer. Microcode and features needed by the new installation must be relicensed from EMC by the new owner. This restriction, uncommon in the rest of the industry, may reduce residual value and resale flexibility.

Fibre Channel / SRDF Cost Issue

If a Fibre Channel port is used for SRDF, another Fibre Channel port is automatically disabled. Doesn't this effectively double the cost of Fibre Channel ports used for SRDF? Doesn't it also reduce the maximum number of usable ports for host connections? Why should I pay for ports I cannot use? Background. The fact that configuring one Fibre Channel port for SRDF use disables another Fibre Channel port is documented in the Symmetrix V-Max Product Guide.

Fibre Channel FCP - FICON Conversion

Since both FCP (SCSI protocol) and System z FICON are based on Fibre Channel technology, can the same Symmetrix channel director be reconfigured in-place for its ports to support either protocol if my needs change?


Background. Symmetrix does not support this flexibility. V-Max engine I/O modules that contain ports for host connection must be ordered as Fibre Channel or FICON. No way to convert an I/O module from one protocol to the other has been identified. (In contrast, the IBM DS8000 has for years supported the flexibility for users to configure and reconfigure the same port for either protocol without having to order new hardware.)

GbE and iSCSI Ports – Total Costs

Symmetrix supports native iSCSI and gigabit Ethernet (GbE) ports. It sounds like this might save money compared to using gateways, but are there other costs to be aware of that could offset any potential savings? Background. Symmetrix V-Max GbE port performance is only 1Gb/s, relatively slow considering available Fibre Channel and Ethernet speeds. It may take multiple 1Gb/s GbE ports to provide performance equivalent to a smaller number of Fibre Channel ports. Whether there are savings in the external network may vary case by case. For example, the native GbE links may need to be connected to an external (WAN) network. Given Symmetrix GbE's 1Gb/s speed, more network equipment ports may be needed than with faster Fibre Channel links. Are ports available on existing network equipment to support this, or are new ports or new equipment needed? Gateways offer the benefit of portability/reuse among different disk systems. The total cost of ownership of the solution matters more than the cost of one aspect of a disk system in isolation.
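A quick way to frame the port-count question is to compare aggregate nominal bandwidth, as in this illustrative sketch; the 4Gb/s Fibre Channel speed is an assumed example value, not a Symmetrix specification.

```python
import math

FC_PORT_GBPS = 4    # assumed Fibre Channel link speed for illustration
GBE_PORT_GBPS = 1   # Symmetrix V-Max native GbE speed cited above

def gbe_ports_to_match(fc_ports):
    """GbE ports needed to equal the aggregate nominal bandwidth of fc_ports."""
    return math.ceil(fc_ports * FC_PORT_GBPS / GBE_PORT_GBPS)

print(gbe_ports_to_match(2))  # -> 8 GbE ports to match two 4 Gb/s FC links
```

Each extra storage-side port generally implies another network-side port, cable, and management point, which is where the comparison with gateways should be made.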

Cache Cost Issues

Since only half of V-Max cache is "usable" (EMC's term), meaning EMC must bid 2n GB of cache where other vendors can generally bid n GB, won't this potentially raise acquisition and/or post-warranty maintenance costs unnecessarily? If I were to upgrade Symmetrix cache over time to increase performance or to support additional disk capacity, won't the inefficient cache design raise the cost of the system even more? Since EMC can likely not afford to simply give away large amounts of cache to compensate for this design inefficiency, are there post-installation charges, whether for cache or other product elements, that will be used to make up the difference?

Background. V-Max, like the DMX-3/4 models before it, doubles the physical cache size needed to hold a given amount of customer data compared to earlier Symmetrix models. For example, it takes about 512GB of cache in V-Max or in DMX-3/4 to provide the equivalent of 256GB of cache in DMX-2. EMC refers to half of the cache configured in a DMX-3/4 system as "usable" or "effective" cache. (EMC has used both terms.) While the DMX-3 and DMX-4 spec sheets at least mentioned this issue (as a footnote), the V-Max spec sheets do not mention it, so it might elude many customers.

What is behind this inefficient cache implementation? Symmetrix mirrors read data, not just write data, in cache. It is common for disk system designs to mirror writes in cache to help protect new/changed data against loss before it is destaged from cache to (RAID-protected) disk; that requires much less additional cache than doubling the entire cache available for data. Duplicating read data in cache is unnecessary for data protection: if data read into cache from disk became unavailable due to a cache failure, the data can readily be read from disk again. Moreover, mirroring reads in cache degrades performance, since there is internal overhead in Symmetrix to make and manage the extra copy of read data. Since no claimed benefit of this extra cache requirement has been identified, even though it has cost and performance drawbacks, it can be speculated that the doubling of cache size is required by unspecified design issues in Symmetrix.
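The following sketch contrasts the two mirroring policies numerically. The assumption that write data occupies 25% of cache contents is hypothetical, used only to show why mirroring writes alone requires far less than double the cache.

```python
def physical_cache_mirror_all(usable_gb):
    """Mirror everything in cache (reads and writes): physical = 2x usable."""
    return 2 * usable_gb

def physical_cache_mirror_writes(usable_gb, write_fraction=0.25):
    """Mirror only the write portion of cache (hypothetical 25% of contents)."""
    return usable_gb * (1 + write_fraction)

print(physical_cache_mirror_all(256))     # -> 512 GB, matching the DMX-2 comparison above
print(physical_cache_mirror_writes(256))  # -> 320.0 GB under the stated assumption
```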


Enterprise Storage Platform Charge

Is it true that EMC charges extra for a feature called Enterprise Storage Platform (ESP) just to let a customer attach both open systems and IBM System z (mainframe) hosts to the same Symmetrix? Can EMC identify any competitor who has a comparable charge?


IBM System i Support Questions

System i Commitment Issues

Hasn’t EMC frequently delayed support for IBM System i environments (a.k.a. IBM i, formerly iSeries) or provided less support than for other servers? Isn’t this evidence that EMC is not seriously committed to the System i environment? Doesn’t this history of delays show that EMC – in practice – considers System i support to be low priority compared to Symmetrix support for other servers? Background. Consider EMC’s history of spotty System i support. This is not intended to be a complete list.

• The initial delay before the DMX family supported System i servers at all (even though they were supported on prior Symmetrix models)
• The additional delay before Symmetrix supported i5/OS (V5R3)
• The dropped support for CopyPoint and SRDF/TimeFinder Manager in initial releases of Enginuity 5671 (Symmetrix microcode) in 2005
• The delay of System i support on DMX-3 (which was announced in July 2005) until 2006
• The delay of support for boot from SAN
• Limitations on which physical disk capacities are supported for System i compared to other servers. For example, support for 1TB disks was announced in 2008 for all hosts except System i.
• Lack of support for System i by Symmetrix Enginuity 5772 when it was delivered in early 2007.
• When Symmetrix SRDF/A Delta Set Extension was first available in 2007, System i was not supported.
• Support for 500GB disks was provided for all other hosts but not for System i on any Symmetrix model (at least through March 2009).
• Support for flash drives was announced for DMX-4 in 2008, but without System i support. (V-Max does support flash drives for System i.)
• Support for disk encryption via PowerPath Encryption with RSA was announced in early 2008, but System i is not supported.

i5/OS V6R1 Functions – Support Issues

Does Symmetrix support new System i storage functions announced in January 2008, including:

• Virtual I/O Server (VIOS)?
• High Availability Solutions Manager (HASM) integrated support for disk system copy services?

Cache Restrictions

Why has EMC recommended to at least some customers that they disable System i Expert Cache because Symmetrix cache and Expert Cache algorithms can conflict with each other? Since the purpose of Expert Cache is to improve performance by keeping more frequently referenced data in processor memory, which a host can access much faster than external cache in a disk system, wouldn't EMC's recommendation degrade application performance?

SAN Port Sharing Restrictions

If a System i host is connected to a Symmetrix Fibre Channel port via a SAN switch, can other heterogeneous hosts on the SAN also be connected to that Symmetrix port? If not, does that mean additional Symmetrix ports at additional cost may be required? Background. Symmetrix supports this kind of port sharing for many hosts, but has only limited port sharing support for System i hosts. Reference: EMC Support Matrix, online at www.emc.com.


Disk Capacity Support Limitations

Are all the disk drives supported in Symmetrix usable in an IBM System i environment? What about 1TB disks in particular? Background. Symmetrix spec sheets are at www.emc.com. 1TB disks are not supported by Symmetrix V-Max for System i.

Copy Services Automation

Does EMC provide products or services equivalent to IBM’s HASM (High Availability Solutions Manager) integrated support for DS8000 copy services? Is IBM’s System i Copy Services Toolkit supported to help automate internal and external copy functions?


IBM System p Support Questions

Remote Copy Integration with HACMP Issue

Does SRDF/Synchronous support HACMP clustering in an integrated way? Is Symmetrix SRDF/S failover automated/coordinated with AIX/HACMP failover? Background. HACMP stands for High Availability Cluster Multiprocessing. HACMP runs "on top of" the AIX operating system and supports a clustered server environment for IBM System p servers. The AIX HACMP/XD (Extended Distance) feature includes integrated support specifically for IBM’s DS8000 Metro Mirror. This support will not work with Symmetrix SRDF/S. This integrated support means HACMP coordinates automatic failover of disk volumes that are Metro Mirror pairs. In this way, HACMP/XD in combination with DS8000 Metro Mirror manages a clustered environment to ensure mirroring of critical data is maintained at all times. By automating the management of Metro Mirror, recovery time is minimized after an outage, regardless of whether the clustered environment is local or geographically dispersed.

System p / AIX Support Limitations

Does Symmetrix V-Max support the I/O performance-oriented System p capabilities that require disk system cooperation: 1) cooperative caching and 2) end-to-end I/O priorities? Does Symmetrix support a function comparable to the long busy wait host tolerance (a.k.a. intelligent I/O retry) support in IBM’s DS8000?


IBM System z Support Questions

History of Inconsistent Mainframe Support

Isn't the Symmetrix history of significantly delayed support for important mainframe functions strong evidence that EMC is not seriously committed to comprehensively supporting the IBM mainframe server environment? Doesn't the history of delays show that EMC, in practice, considers comprehensive mainframe support to be relatively low priority? Background. Examples:

• FICON was not supported at all when DMX first shipped, and was not made available until late in 2003.
• ESCON performance in some Symmetrix models has been degraded because only half the ESCON ports on a given DMX Channel Director could transfer data concurrently.
• When EMC announced DMX-3 in July 2005, ESCON support was announced as delayed until 2006.
• PAV alias support lagged the capabilities supported by z/OS for several years, and is still only partially supported on systems prior to DMX-3.
• Support for larger 3390 volumes generally trails z/OS support by relatively long times.
• Support for XRC was almost a decade behind that of IBM and Hitachi-based disk systems. When Dynamic Cache Partitioning was first delivered, there was a delay before XRC was allowed to run on the same system as this feature.
• Support for the FICON MIDAW performance enhancement was delayed.
• 4Gb/s FICON trailed both IBM DS8000 and Hitachi-based disk system support.
• The DMX-3 950 has never supported System z.
• HyperPAV, supported by the IBM DS8000 in 2006, was not planned to be supported on Symmetrix until sometime in 2008.
• Enginuity 5772, when delivered in early 2007, had multiple restrictions for mainframes, such as no support for ESCON or for 3380 devices, among others.
• When EMC announced its thin provisioning feature (which EMC calls "virtual provisioning"), mainframes were not supported.
• Some Symmetrix functions, such as AutoSwap and GDDR, are not supported in JES3 environments.
• Symmetrix V-Max does not support ESCON.

System z Disk Storage Functions Not Supported by Symmetrix

Can EMC identify for me all the disk system storage related functions supported by System z, mainly z/OS, that are currently not supported by Symmetrix so I can determine the capabilities I would be giving up by deploying a Symmetrix in this environment? How do I know EMC has a comprehensive list of these functions? Background. The following list is not necessarily complete, but is believed to be accurate as of the end of July 2009. Symmetrix does not currently support the following capabilities that IBM DS8000 has supported for time periods ranging from months to years.

1. Control Unit Initiated Reconfiguration (CUIR): This function helps promote system availability by allowing authorized service representatives to vary host channels offline and back online from the disk system without having to be at a z/OS operator console. This is useful during activities such as installing disk system upgrades or performing disk system maintenance (e.g., when selected channel ports or even the entire system may have to be taken offline).

2. High Performance FICON (zHPF) uses advanced protocols to significantly reduce channel utilization. Main values are improved performance, ease-of-management, growth, and TCO.

3. High Performance FICON – multitrack reduces channel utilization for channel programs accessing more than one 3390 track of data. (The first release of zHPF was limited to operations no larger than a single track.) Main values are improved performance, ease-of-management, growth, and TCO.

4. Self-encrypting disks. This protects data against unauthorized access if individual disks are removed, or when an entire disk system is redeployed or retired. Because System z is deployed in so many mission-critical environments, this security enhancement brings a lot of value with it. The main values are ease-of-management, security, and TCO.

5. DFSMS recognition of SSDs helps customers identify high-performance solid state disks for controlling data set allocation, and for reporting purposes. The main value is ease-of-management.

6. Basic HyperSwap allows a single z server to swap I/Os from one local disk system to another local disk system while applications remain online in case a disk system fails. Basic HyperSwap is standard in z/OS. It is not as comprehensive as the GDPS HyperSwap capability, but can satisfy the needs of many customers. Main values are availability and TCO.

7. Remote Pair FlashCopy provides benefits for Metro Mirror (synchronous) remote mirroring. When a FlashCopy command is issued to the local disk system to make a copy of a volume or a data set, the Remote Pair FlashCopy capability will automatically send the command to the remote disk system. This has two main benefits: It reduces link utilization by avoiding sending each local disk system track as it is copied, and it facilitates the recovery process if operations are moved to the remote disk system. Main values are availability, performance, ease-of-management, and TCO.

8. z/OS Global Mirror (formerly XRC) multiple readers. z/OS Global Mirror is a long-distance remote copy feature optimized for System z and used by many customers. It is supported by DS8000 and also by Symmetrix and Hitachi disk systems. However, IBM often enhances z/OS Global Mirror. Multiple readers improve performance by increasing the parallelism of processing writes to be transmitted to the remote disk system. Main values are performance and ease-of-management; it also helps improve remote site data currency.

9. z/OS Metro/Global Mirror incremental resync after HyperSwap. This synergy item applies to a remote disk mirroring configuration that, in its most basic form, has three disk systems. You can imagine a triangle with a horizontal base where the three points represent the three disk systems and the three sides represent communication links. The lower left point of the triangle represents a local disk system handling production I/Os – that disk system is connected to another local disk system at the lower right via Metro Mirror and GDPS/HyperSwap. Both these disk systems are connected via z/OS Global Mirror to the third disk system at the top point of the triangle that represents a remote disaster recovery site. Only the production disk system at the lower left normally transmits I/Os to the recovery site disk system. In the past, if a HyperSwap to the other local disk system at the lower right occurred, synchronizing the HyperSwap target system with the remote z/OS Global Mirror system required transmitting full volumes from scratch. The z/OS Metro/Global Mirror incremental resync after HyperSwap synergy enhancement allows only changes to be transmitted, potentially saving hours of time. Main values are availability, performance, and TCO.

General purpose functions not supported for System z

Can the vendor identify all the general-purpose Symmetrix functions that are supported for open systems but not for System z, so customers can determine the general-purpose capabilities they would be giving up by deploying a Symmetrix system in a mainframe environment? Background. A vendor may claim a disk system supports a given function, but a customer might not realize the function is not supported for System z. The following list of Symmetrix functions supported only for open systems but not for System z is not necessarily complete. Customers should check with the vendor for the status of items of interest.

• Dynamic volume expansion. This function lets authorized users increase the capacity of a 3390 volume dynamically; it can provide an alternative to thin provisioning. Not supported for mainframes.
• Open Replicator for Symmetrix. This function can copy volumes between a Symmetrix and selected other disk systems. Not supported for mainframe volumes.
• Optimizer and Virtual LUN. The Optimizer has restricted support for CKD volumes. Customers should check with EMC for details.
• RAID-10 support. For open systems, RAID-10 arrays can consist of from 2 to 255 mirrored pairs of volumes. For mainframes, however, only a 4+4 (four mirrored pairs) configuration is supported.
• Thin provisioning. Helps reduce physical capacity requirements. Not currently supported for mainframes.
• More than 16 space-efficient snapshots. TimeFinder/Snap supports up to 128 copies of a given source volume, but only for open systems LUNs. For mainframes, only up to 16 copies per source volume are supported.

Compatibility – General Considerations

EMC claims various system features are “compatible” with certain IBM disk system features for the System z environment. What does this really mean? Background. Determining the degree of compatibility of non-IBM vendor offerings with IBM offerings can be a significant challenge to prospective buyers of non-IBM disk systems for the IBM System z environment. IBM sometimes licenses specifications (i.e., documentation, not programming code) for its popular disk system features to other vendors. Such a license agreement was announced with EMC in 2007 - see http://www.emc.com/news/emc_releases/showRelease.jsp?id=5252. However, it is up to those other vendors to find development resources, then design, code, integrate, test, and support their own offerings based on those specifications. “Compatible” is a vague term without a formal industry-standard definition. For example, it could mean “is functionally identical to with no exceptions”, or “is similar to” or “is a subset of” or “is similar to with restrictions”, or something else. Customers should investigate non-IBM vendors’ claims of feature compatibility and in particular consider areas such as exceptions, different results for the same commands, performance, supporting management tools, and services.

FlashCopy, PPRC, XRC Compatibility Questions

EMC claims to provide Symmetrix support for its implementation of the following IBM offerings: Metro Mirror (formerly PPRC), Global Mirror (formerly XRC), and FlashCopy. How compatible are these EMC offerings with the IBM offerings which they attempt to emulate? Can EMC show me a list of all the functions supported by these IBM specifications (and by IBM disk systems) that are and are not supported on Symmetrix? Has IBM certified any of these EMC offerings as being “compatible” with the IBM offerings? How many customer references can EMC provide for each function? Background. While EMC put the word “compatible” in the feature names, these functions are technically complex and it may be difficult for a customer to determine the degree of compatibility provided by the features. Customers should ask EMC for documentation that identifies exceptions to compatibility for these features. For example, EMC’s Feature Specification [for] Compatible Native Flash for Mainframe, June 2008, identifies multiple incompatibilities with the IBM-developed FlashCopy feature.

Mainframe Function Support “Contact” issue

Why does the Symmetrix V-Max Product Guide (April 2009) state for some mainframe functions that you should contact EMC for details about support?


Background. Example: “Note: Contact your local EMC representative for specific details regarding your Symmetrix system's support for sequential data striping.” It is not clear what this means. Is support limited? Is support not available?

GDPS Support Questions

How many z/OS Geographically Dispersed Parallel Sysplex (GDPS) installations are running GDPS using Symmetrix? In those environments, how many are using EMC Compatible Peer (for function IBM used to call PPRC) and how many are using EMC Compatible Extended (for function IBM used to call XRC)? How many customer references can EMC provide that are using the GDPS/PPRC HyperSwap Manager function, a lower-cost subset of the full GDPS offering, on Symmetrix? Has GDPS been qualified by IBM for Symmetrix? Background. GDPS helps manage a powerful mainframe cluster environment including storage. GDPS is available only from IBM. It will work with non-IBM disk systems that correctly support required interfaces. IBM architected this solution and has extensive experience implementing it in customer installations. Customers will want to be confident that the disk systems they install are backed by a vendor with proven experience in successfully operating in the planned GDPS environment. PPRC and XRC are open architectures in that IBM licenses the specifications to non-IBM disk vendors who can then choose to design/implement/test all or portions of the function according to those specifications; a sufficiently compatible implementation should run in a GDPS environment. (Note that specifications are documents, not code, and non-IBM vendors must generally design/develop/test and support their implementation on their own.) Different “levels” of PPRC and XRC specifications may be required for different functions. Vendors can hire IBM to test whether their systems work with GDPS. The list of qualified disk systems can be found at http://www-03.ibm.com/systems/z/gdps/qualification.html. As of July, 2009, no EMC disk systems are on this list. Customers are advised to ask EMC to identify all known incompatibilities. Example:

EMC states: “Remote FlashCopy commands (FlashCopy commands sent over PPRC links, formerly known as ‘inband FlashCopy’) are not supported. Remote FlashCopy commands may be required in GDPS environments.” (Source: Feature Specification, Compatible Native Flash for Mainframe, June 2008.)

AutoSwap Issues

How many references can EMC provide of customers actively using AutoSwap? How does AutoSwap compare to IBM's Basic HyperSwap (standard in z/OS) and to GDPS offerings supporting HyperSwap? What performance measurements can EMC cite, and how do they compare to performance information IBM has published for GDPS HyperSwap? Background. AutoSwap is a function to swap between Symmetrix systems that are connected via SRDF/S and attached to the same host. If one Symmetrix has a system failure, for example, the other Symmetrix can be substituted for it.

Potential advantages of z/OS Basic HyperSwap compared to AutoSwap include:

• Basic HyperSwap is included with z/OS at no extra charge.

Potential advantages of GDPS/PPRC HyperSwap Manager compared to AutoSwap include:

• GDPS support. The GDPS/PPRC HyperSwap Manager is a subset of the full GDPS/PPRC offering, providing the HyperSwap function at a lower price than full GDPS/PPRC. GDPS (Geographically Dispersed Parallel Sysplex) is an IBM offering that provides z/OS cluster failover support as well as the HyperSwap capability. Users running GDPS/PPRC HyperSwap Manager can optionally upgrade to the full IBM GDPS offering. Symmetrix AutoSwap, though similar in concept to HyperSwap, is not supported by GDPS and cannot be upgraded to GDPS. Customers should not be misled by EMC's claim that "[AutoSwap is] a cost-effective subset of GDPS functionality." (Source: EMC AutoSwap Data Sheet, May 2004, the latest found at www.emc.com as of July 2009.) Keep in mind that GDPS/PPRC HyperSwap Manager is lower cost than full GDPS, and Basic HyperSwap is included in z/OS at no extra charge.

• Openness. Like the full GDPS offering, GDPS/PPRC HyperSwap Manager is an open solution. If non-IBM disk systems implement the required level of PPRC function, HyperSwap Manager should also work on those non-IBM systems. (Symmetrix may have this support.) In contrast, EMC's AutoSwap is limited to Symmetrix only.

• Performance. No AutoSwap performance measurements could be found; customers should ask EMC whether it can supply that important information. IBM is upfront in this regard, a sign of its confidence in HyperSwap performance. From the IBM redbook GDPS Family - An Introduction to Concepts and Capabilities, SG24-6374-04, April 2009: "Recent enhancements to both the DS microcode and within GDPS/PPRC have resulted in significant HyperSwap performance improvements over previous versions of the software. In tests at IBM, using a currently supported release of GDPS/PPRC in a controlled environment, a planned HyperSwap involving 46 LSS pairs, with 6,545 volume pairs, resulted in the production systems being paused for 15 seconds. A test of an unplanned HyperSwap of the same configuration took 13 seconds. While results will obviously depend on your configuration, these numbers give you a ballpark figure for what to expect. The elapsed time of the swap is primarily a factor of the largest number of devices in any of the affected LSSs."

• zLinux support. GDPS/PPRC HyperSwap Manager supports zLinux as well as z/OS volumes in the disk system. AutoSwap does not appear to support zLinux.

• Paging data set support. EMC states that, for paging data sets, "Each device must be dedicated to page dataset usage, and only online to one host." (Source: EMC AutoSwap Product Guide, April 2009.) At first sight this may not seem significant, but it means AutoSwap cannot swap paging volumes if they are online to more than one system. The customer must therefore maintain multiple configurations, one per system, with each system's paging pack defined/online to itself and not to any other system. In contrast, GDPS/PPRC HyperSwap can manage all production systems with a single OSConfig, and the BCP Internal Interface is able to reply to any duplicate volume serial number prompts that may occur in some recovery scenarios. So the fact that, using AutoSwap, paging devices cannot be online to all systems in the sysplex translates to a systems management issue, exception handling, increasing the possibility of human error.

• Customer experience. GDPS/PPRC with HyperSwap has been implemented by many customers. It is not known how many Symmetrix users have implemented AutoSwap. The more customers that have implemented a product, the greater the likelihood that the product is stable and that the vendor will continue to invest in it.

Potential advantages that apply to both z/OS Basic HyperSwap and GDPS HyperSwap Manager compared to AutoSwap include:

• Triggers. IBM, with its intimate knowledge of z/OS and z/OS I/O processing in particular, has developed a list of triggers for unplanned HyperSwaps based on many years of problem analysis in actual customer environments. IBM has refined this list over time. This list is proprietary and not available to EMC.

• JES2 and JES3 support. HyperSwap supports both JES2 and JES3 environments. AutoSwap is limited to JES2 only.

• Paging data set support. HyperSwap appears to have more comprehensive support for paging data sets than AutoSwap.

GDDR Issues

What issues should I be concerned about for the EMC Geographically Dispersed Disaster Restart (GDDR) offering? How many GDDR customer references can EMC provide for environments like mine? How many GDDR installations are there worldwide - if not many, why would EMC continue to support and invest in the product?


Background. EMC introduced GDDR in 2007. By that time, IBM already had roughly ten years of experience with its GDPS offering in this kind of complex environment. GDPS is as much about the mainframe server and applications as about storage. IBM designs, develops, and manufactures the mainframe, its architecture, and its operating systems, and IBM has extensive business consulting experience. EMC may be able to help manage the Symmetrix part of a d/r plan, but clearly has less capability to help manage the server part of a d/r plan or broader business considerations.

The stability of the relatively new and complex GDDR offering is certainly open to question. Both customer and EMC experience with GDDR seem to be very limited. Considering the importance and cost of d/r to a business, GDDR arguably appears to be a greater business risk, at least at this time, than more proven offerings such as GDPS.

GDDR is a closed, proprietary offering that supports Symmetrix only. In contrast, GDPS runs with any vendor's disk system that correctly implements the required function based on the appropriately licensed specifications. In this way, GDPS allows customers to flexibly intermix different vendors' disk systems, while EMC's GDDR requires that all participating systems be Symmetrix.

GDDR has multiple prerequisite products, including AutoSwap, all of which can increase total cost. EMC sometimes asserts that GDDR is a product while GDPS is a service offering, but EMC strongly recommends that customers pay for EMC services to help with GDDR implementation. As of 2009, SRDF/Star configurations for mainframes require GDDR, increasing overall cost. An industry analyst paper discussing GDDR is online at http://www.joshkrischer.com/files/ZSO03012USEN.pdf.

Mainframe Snap Usability Issue

The TimeFinder/Clone mainframe snap feature can make copies of data onto SRDF/S and SRDF/A source (R1) volumes in the same Symmetrix. While the copies can be usable almost immediately at the local site, does the function support the copies also being usable at the same time at the recovery site? If not, what are the ramifications?

Background. This capability likely has very limited practical application because its use can lead to serious operational issues. It is invoked by the SRDFS_R1_target and SRDFA_R1_target parameters. A key attribute of remote mirroring in general is that updates to local volumes are reflected in order on associated remote site volumes, preserving data integrity in the remote copy. That promotes a relatively straightforward recovery process. Symmetrix, however, lets users issue snap volume and snap dataset commands that copy data onto SRDF source volumes in a way that compromises data integrity by not reflecting local updates in order at the recovery site.

Consider a scenario. A job makes a SNAP VOLUME or SNAP DATASET copy. The copy appears to be immediately (logically) complete at the local site, and applications start using it. Based on using that data, applications may modify other data (on other volumes) that is transmitted immediately by SRDF. Catalogs that may reside on other volumes may be changing, and those changes are also sent immediately. However, only as the physical copy process moves data in the background to the local site SRDF source volume is that data transmitted to the remote site. If there is a disaster during this time, the original copy may not have been completely received at the recovery site, yet other data may have been changed based on that copy. It isn't clear there is even a way for the recovery site to readily determine the status of the copy at that site, let alone to identify a trail of all changes made based on referencing that copy. The situation is worse if multiple local snap operations were in process at the same time. This kind of data inconsistency and operational confusion is generally intolerable in a production environment. How would recovery site operations handle the situation?

EMC does provide a way to inhibit use of the function, and careful reading of EMC product documentation reveals EMC warns against using this function for data that is important to recovery operations. But by supporting the function, there is the risk that users will use it because it appears convenient, either not realizing the consequences or in spite of them. EMC might claim there is a benefit in offering the function, but perhaps other vendors don't offer an equivalent function because it can be much more trouble than it's worth.

The IBM DS8000 has elegantly addressed this situation for its Metro Mirror (synchronous remote mirroring) feature through a capability called Remote Pair FlashCopy. When a FlashCopy command is issued to the local disk system to make a copy of a volume or a data set, the Remote Pair FlashCopy capability automatically sends the command to the remote disk system. This has two main benefits: it reduces link utilization by avoiding sending each local disk system track of data as it is copied, and it facilitates the recovery process if operations are moved to the remote disk system.


Miscellaneous Product and Vendor Questions

EMC Criticizes its own V-Max Announcement

Why did EMC focus on future, unavailable capabilities in its April 2009 Symmetrix V-Max announcement when EMC itself has disparaged such tactics (at least when used by other vendors)? Background. EMC's V-Max announcement was a typical next-generation storage system announcement, including improvements in performance and other system attributes. Vendors "leap frog" each other this way all the time. What was unusual in EMC's announcement was its focus on future capabilities that are not available (i.e., FAST and the ability to someday scale out a single system by connecting multiple systems together). At least a time frame for the first release of FAST was provided: "later this year [2009]", though it is still not available more than six months after the April 2009 press release. No information on the timing of even one scale-out capability was identified. EMC executive Chuck Hollis states: "Nothing wrong with announcing something that won't be available until some date in the future -- within reason, that is. I usually have a 'three month' rule: if GA is promised within 90 days, I take that as a serious announcement." (Source: http://chucksblog.emc.com/chucks_blog/2008/10/raiding-the-roa.html.)

Inaccurate or Misleading Claims

Why does EMC make claims that Symmetrix is the only system capable of certain functions when that is not the case? Background. Examples:

EMC claims: "Symmetrix systems add additional bits to the data record to ensure that the information transmitted belongs to the record specified. These bytes contain an internal LBA and Cyclical Redundancy Check (CRC)…This second level of protection, available only in Symmetrix systems, further ensures data integrity by preventing incorrect data from being transferred." (Source: EMC Symmetrix V-Max Series Product Guide, P/N 300-008-603 REV A01, April 2009.) Contrary to EMC's claim, the IBM DS8000 has supported this capability for years: "When application data enters the DS8000, special codes or metadata, also known as redundancy checks, are appended to that data. This metadata remains associated with the application data as it is transferred throughout the DS8000. The metadata is checked by various internal components to validate the integrity of the data as it moves throughout the disk system. It is also checked by the DS8000 before the data is sent to the host in response to a read I/O request. Further, the metadata also contains information used as an additional level of verification to confirm that the data returned to the host is coming from the desired location on the disk." (Source: IBM System Storage DS8000: Architecture and Implementation, SG24-6786-06, May 2009.)

EMC claims: "Symmetrix is the only high-end storage solution that delivers truly online, nondisruptive [microcode] upgrades without requiring additional hardware and/or software on the hosts." (Source: Enginuity: The EMC Symmetrix Storage Operating Environment, C1033.7, April 2009.) EMC itself often documents exceptions to the ability to nondisruptively upgrade Symmetrix microcode in the Enginuity Release Notes available to customers. Further, contrary to EMC's claim, the IBM DS8000 also supports concurrent changes to system microcode without requiring additional host hardware or software: "The DS8000 series is designed to help address the needs of dynamic environments requiring the highest levels of availability. It is designed to support dynamic system changes, such as online system microcode updates and online hardware upgrades." (Source: IBM System Storage DS8000 Turbo series, data sheet, TSD00374-USEN-17, February 2009.)

EMC claims: "Unlike competing products, Symmetrix DMX and V-Max systems [microcode] can be upgraded without interruption to most hosts and applications, even if those hosts have only a single connection to the array." (Source: Enginuity: The EMC Symmetrix Storage Operating Environment, C1033.7, April 2009.) Contrary to EMC's claim about competing products, the IBM DS8000 also supports this capability: "One of the final steps in the concurrent code load process is the update of the host adapters. Normally every code bundle contains new host adapter code. For Fibre Channel cards, regardless of whether they are used for open systems attachment, or System z (FICON) attachment, the update process is concurrent to the attached hosts. The Fibre Channel cards use a technique known as adapter fast-load. This allows them to switch to the new firmware in less than two seconds. This fast update means that single path hosts, hosts that are fiber boot, and hosts that do not have multipathing software do not need to be shut down during the update. They can keep operating during the host adapter update, because the update is so fast." (Source: IBM System Storage DS8000: Architecture and Implementation, SG24-6786-06, May 2009.)

Vendor Lock-in / Proprietary Orientation

Has EMC moved away from its history of having a significantly proprietary orientation to its products? Should customers remain concerned that EMC’s products still tend to promote vendor lock-in and limit flexibility? Background. EMC has a history of taking a proprietary, vendor lock-in approach to its products. EMC is certainly not the only vendor with proprietary storage offerings, but it has long had a propensity to focus on such offerings even when competitors have taken more open approaches. Fortunately for customers, EMC’s approach to the market has not been successful in preventing migrations from Symmetrix to other vendors’ disk systems. Customers and industry analysts have frequently criticized EMC for its often proprietary approach to the market place. " ‘EMC storage is too proprietary for our continued use,’ one [customer] survey respondent said.” (Source: HDS sweeps Quality Awards on arrays, http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1113662,00.html ) • Consider EMC's own statements about its proprietary strategy:

Following an EMC meeting with financial analysts in August, 2000, multiple analysts reported that EMC spoke of deploying software and services as "barriers to customer exit". From "Boom", Forbes Magazine, Daniel Lyons, October 2, 2000: "The real key is EMC software...[which runs] only on EMC hardware. So once you start depending on those apps, it is ever tougher to switch to cheaper hardware from IBM or Sun...Some 80% of sales are to existing customers. 'It's virtually impossible for our customers to leave us,' [Mike] Ruettgers [EMC CEO] crowed to Wall Street analysts at a recent conference."

• EMC now supports the industry standard SMI-S open API management specification, following the lead of other vendors. However, EMC continues to support and enhance its proprietary API formerly called WideSky, though the name was changed to Solutions Enabler following significant negative criticism about this proprietary API strategy.

• EMC representatives have often said Symmetrix is open because it supports multiple, heterogeneous servers. Competitive systems also support multiple, heterogeneous servers; that is not what most customers mean these days when they ask about openness and interoperability.

• From "EMC: A Checkered Past in Storage Openness", Enterprise Systems Journal, http://esj.com/articles/2002/01/01/emc-a-checkered-past-in-storage-openness.aspx: "EMC's participation in Storage Area Networks (SANs) has been dotted with doublespeak about open and proprietary solutions. When the notion of SANs caught on in the industry in the late 1990s, EMC responded with its own solution, the Enterprise Storage Network (ESN). It's a subject of considerable debate and revisionism today, but EMC is said to have endeavored to impose a 'warranty lock' to control what equipment customers could and couldn't include in an ESN. EMC also formed an 'open standards' group of its own, The Fibre Alliance, with a goal more than anything else of fostering an EMC-centric Fibre Channel SAN solution. This, plus the statement last year by IETF IP Storage Working Group Chairman (and EMC Senior Engineer) David Black, suggesting that EMC already 'owned' the emerging IP SAN protocol, iSCSI—even before its adoption by IETF as a SAN standard—have given much credence to competitor claims that EMC's dedication to 'openness' is less than sincere." "EMC may be talking the talk of open storage. But plenty of vendors--and customers--are waiting to see whether it will also walk the walk. Until the evidence is compelling, rumors of a kinder, gentler EMC will remain just that."


• www.searchstorage.com, a web site dedicated to storage industry news, published an article titled "EMC walks away from SPC". (SPC is the Storage Performance Council, which develops and promotes open disk storage performance benchmarks; see www.storageperformance.org.) The article quotes Roger Reich, the SPC founder: "We've worked very hard to come up with the benchmark to aid customers by giving them a fundamental selection methodology for storage subsystem products. The goal is to give the customer honest-to-god, quantitative fact about competitors in the market, so they can make an intelligent purchasing decision...The SPC simply wasn't in line with the goals of EMC." (Source: http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci205977,00.html) As of 2009, EMC continues to refuse to join the Storage Performance Council. (IBM and many other storage vendors are not only SPC members, but have published benchmark results for many products so that customers have a standard reference point to compare product performance.)

• While it had some positive things to say about ControlCenter, an InfoWorld analysis says "its reliance on agents and the limited support for competing hardware shoot it down as an integration solution for all but EMC-centric storage networks". (Source: Vendors square off in SAN integration challenge - EMC, HP, IBM, and Fujitsu Softek grapple with synching up three heterogeneous SAN, InfoWorld, http://www.infoworld.com/article/03/09/12/36FEsanint_1.html.)

• In 2007 EMC introduced GDDR (Geographically Dispersed Disaster Restart) to help manage Symmetrix in mainframe environments. Unlike IBM's open GDPS (Geographically Dispersed Parallel Sysplex) offering that can be supported on heterogeneous vendors' disk systems, EMC's offering is proprietary and limited to Symmetrix only. Similarly, EMC's Symmetrix AutoSwap feature is limited to Symmetrix only, in contrast to IBM's GDPS/HyperSwap offering that can be supported on heterogeneous vendors' disk systems.

• "I was disappointed to hear almost nothing about interoperability or standards [at EMC World 2008]…Following Tucci was Howard Elias, who runs EMC's management software division…But particularly as it pertained to Control Center, the talk was almost strictly about how EMC managed EMC products. So much so that at one point Elias used the word heterogeneous to refer to managing two different EMC product lines from the same management tool." (Source: EMC's Own Not-So-Little World, posted by Art Wittmann, May 19, 2008, http://www.informationweek.com/blog/main/archives/2008/05/emcs_own_nosoli.html.)

It is telling to contrast EMC’s proprietary orientation to IBM’s openness in licensing technical specifications for popular IBM disk system functions so that other vendors can choose to implement those functions on their systems. Other vendors, including EMC, have licensed such IBM specifications. Examples include FICON, PAV, HyperPAV, PPRC, XRC, FlashCopy, and more. It isn’t clear whether EMC even offers to license specifications for any Symmetrix functions to other vendors, or, if they do, whether any other vendor has decided there would be value in implementing compatible functions.

Limited Solution Perspective

Won't EMC's narrow focus limited primarily to storage products limit its recommendations for solving my business problems? For example, isn't EMC likely to recommend just disk, and not tape (which it resells) or a disk/tape combination, even when the latter is more appropriate and less expensive? Isn't EMC likely to offer/propose disk system-based solutions for data transfer such as its proprietary Symmetrix InfoMover feature, but not advise customers of the many alternative solutions in the industry, many of which work with any vendor’s disk system? With its relatively limited product portfolio and perspective, how can I be sure EMC is proposing what is best for me rather than what is best for EMC? Background. EMC generally tends to have less total system awareness than a full system vendor such as HP or IBM that has deep understanding and experience with servers, operating systems, networks, databases, and business applications. When a vendor has a limited product portfolio and associated expertise, it can be tempting for its sales representatives to try to characterize every customer problem as a nail only their particular hammer can handle.


"EMC remains a one-trick pony in a circus where the audience wants more. When it comes to enterprise systems, customers tend to seek storage systems in the context of greater technology needs, rather than as a distinct offering." - Source: Some of our favorite technology companies didn't make the cut for this year's list. We have our reasons. http://www.redherring.com/Home/921.

Lack of Total Solution Offering

EMC may say that being able to buy disk storage, SAN switches, and SAN management from EMC simplifies the acquisition. Wouldn't the same argument apply even more so to buying from a full systems vendor that offers a more complete storage product line, and other IT products such as servers, operating systems, databases, and more? Background. By acquiring multiple products from one vendor, overall cost may be lower and the complexity of managing multiple vendor relationships is reduced. EMC sales representatives may try to persuade a customer to not obtain a multi-product packaged solution from an EMC competitor - but is that to the customer's advantage, or to EMC's?

Value for Money?

Why should we pay more for Symmetrix when competing products can provide comparable or superior benefits at lower cost? Background. EMC may focus on the initial Symmetrix acquisition price (which may or may not be lower than the competition's) to keep customers from noticing what can be the relatively high total cost of ownership (TCO) for Symmetrix. A true cost picture, of course, includes initial costs, upgrade costs, post-warranty maintenance costs, and more. If a customer becomes aware of the high total cost of Symmetrix, EMC may try to claim the high costs are justified because Symmetrix provides benefits such as storage consolidation and features such as internal copy and remote copy. However, competitive systems can provide these same benefits at a potentially lower TCO. Reference: ITG Executive Summary: Value proposition for IBM System Storage, International Technology Group (ITG), March 2008, http://www-03.ibm.com/solutions/sap/us/detail/resource/B615608Y55893X08.html.

Inconsistent Messages

Hasn't EMC frequently changed its technology messages and product strategies? Doesn't this make it difficult for customers to look to EMC for guidance when making storage strategy decisions? Background. Examples:

• EMC has said tape storage is not needed, but now resells tape products.
• EMC has said parity RAID (a.k.a. RAID-S/R) is superior to RAID-5, but finally implemented RAID-5 in 2004 on Symmetrix and dropped parity RAID support altogether in DMX-3.
• EMC has said mirrored cache to protect writes against data loss isn't important, but finally introduced mirrored cache to protect writes in DMX-3 in 2005.
• EMC has said external disk system virtualization isn't important, but introduced Invista in 2005.
• EMC promoted its SRDF semi-sync option as offering better performance than synchronous mode in spite of documented data integrity issues, but has apparently dropped support for semi-sync mode as of DMX-3.
• EMC has publicly announced product capabilities which it never delivered.

“In the late '90s, EMC Corporation's Mike Ruettgers was known to say to anyone willing to report the story, ‘Tape is dead!’ Yet tape is still alive and well, in spite of his pronouncements.”

-- Backup and recovery is not dead - Storage Management, Computer Technology Review, Phil Pascarelli, November 2003

“The case for providing block mode virtualization across intelligent storage arrays that already do this has yet to emerge -- and is unlikely to in the near future.”


-- EMC exec details storage management gaps, http://searchstorage.techtarget.com/qna/0,289202,sid5_gci819418,00.html

“Symmetrix DMX-4 [will have] support for 750 GB SATA II disk drives later in 2007.” But, contrary to its promise, EMC never delivered this support.

-- EMC press release from July 2007 at http://www.emc.com/about/news/press/us/2007/07162007-5180.htm
