1001 - EDPCML - Section #1 - StorageCenter Fundamentals

Transcript
  • 1-1

    This section discusses the hardware fundamentals of the Storage Center. At the end of this section, the student will be able to identify the various components that make up the Storage Center.

    Section Objectives:

    Storage Center hardware components include:

    SC8000 controller

    Controller port identification

    Controller FRUs

    Enclosures

    Enclosure FRUs

    Disk Types

    IO Cards

  • 1-2

    What's in it for you?

    At the conclusion of this module, you should be able to:

    Understand the basics of a Storage Center solution.

    Define the core software included in a Storage Center solution.

    Define the application software available in a Storage Center solution.

    Discuss updates included in version 6.3 and 6.4 of the Storage Center software.

  • 1-3

    The Dell Compellent Storage Center™ is a modular product with a build-as-you-go approach that offers a low cost of entry with unlimited growth potential. An open-systems hardware architecture enables simultaneous mixing and matching of any server/host interface or drive technology, providing the flexibility to use the optimum technology for the deployment-specific requirements. Features include:

    Technology independence allows the mix and match of server interfaces and drive technology

    Flexible growth allows for growth of the capacity, connectivity and functionality of the Storage Center

    Configurable performance designed with the speed and bandwidth the deployment requires

    Enterprise level features, reliability and availability

    Online migration allows adding additional capacity, connectivity or functionality while online

    Core software with capabilities that integrate functionality others price separately

    Powerful options that offer additional application software that optimizes utilization and accelerates data recovery

    A single robust interface that streamlines administration of all storage resources

  • 1-4

    Deployment Architecture Options

    Storage Center can be deployed in a variety of configurations:

    Single Controller, Single Enclosure

    Single Controller, Multiple Enclosures

    Dual Controller, Single Enclosure

    Dual Controller, Multiple Enclosures

  • 1-5

    Multi-Site Architecture

    Storage Center supports the use of multiple systems in a local site / remote site architecture in support of data replication. Replication is used to duplicate volume data, possibly in support of a disaster recovery plan or simply for rapid local access to a remote data volume. Typically, data is replicated over some distance to safeguard against local or even regional data threats as part of an overall disaster recovery plan. The source system is referred to as the replicating system and the destination system is referred to as the target system. The connections between the replicating system and the target system can be either Fibre Channel or iSCSI. The replication process can be configured as either synchronous or asynchronous replication.

  • 1-6

    The Storage Center

    Storage Center consists of the following components:

    Controllers
    Performs the overall control of the Storage Center
    Processes the RAID functionality
    Processes all core and application software
    Redundant firmware on industrial flash
    Real-time embedded operating system
    Redundant hot swappable power supplies
    Modular PCI-based IO cards to provide connectivity
    Redundant hot swappable fans

    Enclosures
    SAS: 12 or 24 disk drive capacity
    Fibre Channel: 16 disk drive capacity
    Redundant hot swappable power supplies

    Switches
    Provides server connectivity

    Host Bus Adapters (in the servers)
    Provide server connectivity to FC, FCoE and iSCSI

  • 1-7

    HBA (Host Bus Adapter) Server Configuration

    Host bus adapters may need certain settings modified to operate with the Storage Center. The following references a Qlogic HBA with a Windows OS. Other HBAs will require similar changes.

    Qlogic HBA Server Configuration Recommendations

    Host Adapter Settings:
    Host Adapter BIOS = Enable (if booting from SAN; if booting locally, accept the default of Disable)
    Connection Options = 1 (for point-to-point connections on the Front End)
    Spinup Delay = Disabled

    Advanced Adapter Settings:
    Luns per Target = (as appropriate for your site)
    Enable LIP Reset = Yes
    Login Retry Count = 60
    Port Down Retry Count = 60
    Link Down Timeout = 30
    Execution Throttle = Number of buffers / number of connected servers

    There are additional settings that may need to be configured. It is recommended that you refer to the 6.x User Guide for a complete list of HBA settings. If additional information is needed, check with the HBA manufacturer or Dell Compellent Copilot Services.

    Third-party HBA vendors' tools to view these values:
    Qlogic - SANSurfer
    Emulex - HBAnywhere
    To view the WWN of HBAs in a physical Windows server - fcinfo
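    The Execution Throttle formula above is simple division. Here is a small, hypothetical sketch of that arithmetic (the buffer and server counts are example values only, not recommendations):

def execution_throttle(num_buffers: int, num_connected_servers: int) -> int:
    # Execution Throttle = number of buffers / number of connected servers
    return num_buffers // num_connected_servers

# Example values only (hypothetical): 2048 buffers shared by 8 connected servers.
print(execution_throttle(2048, 8))  # -> 256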

  • 1-8

    Fibre Channel Zoning Fibre Channel Switched Fabrics can be segmented into areas that are known as zones through a process known as zoning. Zoning is the partitioning of a fabric into smaller fabrics to restrict access and/or add security. Zoning applies only to the Fibre Channel - Switched Fabric (FC-SW) topology. Zoning is used to create dedicated paths between Initiators (devices requesting IO) and Targets (devices IO is requested from such as a hard drive). Zoning can be created using either ports on the switches or the WWNs of the end-devices.

    Port zoning restricts communication at the port level This prevents ports from communicating with unauthorized ports

    Name zoning restricts access by World Wide Name This is more flexible, but WWNs can be spoofed, reducing security

    Note: Dell Compellent Best Practice is to use single initiator, multiple target zones.
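    As an illustration of the single-initiator best practice (this is a conceptual sketch, not a switch configuration tool), the check below simply verifies that a proposed zone contains exactly one initiator WWN and at least one target WWN. The WWNs shown are made-up example values:

def is_single_initiator_zone(members):
    """members: list of (wwn, role) pairs, where role is 'initiator' or 'target'."""
    initiators = [wwn for wwn, role in members if role == "initiator"]
    targets = [wwn for wwn, role in members if role == "target"]
    return len(initiators) == 1 and len(targets) >= 1

zone = [
    ("21:00:00:24:ff:00:00:01", "initiator"),  # server HBA port (example WWN)
    ("50:00:d3:10:00:00:00:1a", "target"),     # Storage Center front-end port (example WWN)
    ("50:00:d3:10:00:00:00:1b", "target"),
]
print(is_single_initiator_zone(zone))  # -> True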

  • 1-9

    Storage Center Software Overview

    Storage Center - Core
    The Storage Center core software provides essential storage functions, delivering functionality previously available only in high-end systems.

    Storage Center - Applications
    The Storage Center provides a rich set of licensed applications. These applications have been designed to support the dynamic storage needs of the customer.

    Storage Center - Unified Management
    Storage Center provides a common management and unified control interface to both the controller and the enclosure. Through this interface, the administrator can configure both controller and enclosure resources.

  • 1-10

    Storage Center - Core
    Using Dell Compellent's Fluid Data Architecture, the Storage Center provides essential storage features, delivering functionality previously available only in high-end systems.

    Virtualization

    Efficient utilization of storage resources by managing disks as a single pool and presenting disk resources to any server
    Manages all disks (any capacity, any speed, any interface) as a single disk pool
    Upgrades and modifications can be performed in a non-disruptive manner

    Cache Enhances overall performance and availability through multi-threaded read ahead and mirrored write cache (in a Dynamic Controller environment)

    Boot from SAN Allows servers to boot from the SAN eliminating the need for internal drives in the servers

  • 1-11

    Storage Center - Core (cont.)

    Copy-Mirror-Migrate Copy, mirror and migrate volumes without impacting users across different disk types, RAID levels, or enclosures

    Thin Import

    Convert data from legacy systems into thin provisioned volumes on a Dell Compellent SAN

    Virtual Ports

    Eliminates the requirement for reserved ports in a Cluster Controller configuration, now allowing all front-end ports to act as primary ports

    Charting Viewer

    The Storage Center Manager comes with a complete suite of management tools, including real-time charting capabilities

    Phone Home

    Contacts Dell Compellent technical support regularly, on demand, or upon a system-affecting event

  • 1-12

    Storage Center - Core (cont.) Dynamic Capacity

    Use significantly less storage and improve overall performance by allocating space only when data is physically stored

    Data Instant Replay

    Recover from data hazards in minutes by creating any number of space-efficient Replays that restore data instantaneously

    Enterprise Manager

    Manage multiple Storage Centers from one location, configure and manage Remote Instant Replay and Live Volumes, establish trends using multiple reports and configure user defined threshold alerts for notification about system performance

    Datapage Sizes

    Optimize application performance through variable page sizes

    Dynamic Controllers Add a second Storage Center controller to increase system availability, capacity and performance

  • 1-13

    Storage Center Applications
    The Storage Center provides a rich set of licensed applications that have been designed to support the dynamic storage needs of any customer. Storage Center applications can be added to any configuration, allowing users to scale hardware and software from entry-level to enterprise with one common interface.

    Remote Instant Replay
    Synchronous or asynchronous Remote Instant Replay sends volume replays to remote systems

    Data Progression
    Lower total storage costs by automatically tracking usage and migrating data between storage classes based on user-defined rules. Retain frequently accessed data on high performance storage devices, and infrequently accessed data on lower cost storage devices

    FastTrack
    Dynamically places the most active data on the outer disk tracks

    Replay Manager
    Allows for application-consistent replays for SQL, Exchange and Hyper-V volumes

    Live Volume
    Volumes are replicated between primary and secondary Storage Centers.

  • 1-14

    The page pool is the technical term that refers to all the physical disks that reside in a Storage Center. In the Storage Center interface the page pool is referred to as a Disk Folder. It is a collection of allocated and unallocated disk blocks (pages).

    The default page size is 2MB.

    The page pool takes up a percentage of the total space. That percentage varies based on the overall space in the Storage Center. Some pages are ready to be used as RAID 10, some RAID 5 and some RAID 6.

    The page pool provides automated, sophisticated block management to make it much easier for the user to get his data in the optimum location on the SAN.

    The page pool grows and shrinks based on the demands the users place on it. It is intelligent enough to automatically defragment and tune itself.

    Metadata that is saved for each page (block):
    Creation, access and modification time
    Frequency of access
    Type and tier of disk drive
    RAID level
    Corresponding volume
    Active (RW) or Frozen (Read Only)

    The block's metadata is not stored on the block itself.
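    As a conceptual sketch only (the field names below are illustrative, not the Storage Center's internal structures), the per-page metadata listed above could be modeled like this:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PageMetadata:
    created: datetime        # creation time
    accessed: datetime       # last access time
    modified: datetime       # last modification time
    access_frequency: int    # how often the page is accessed
    disk_type: str           # e.g. "15K SAS", "7.2K SAS", "SSD"
    tier: int                # 1, 2 or 3
    raid_level: str          # e.g. "RAID 10", "RAID 5-9"
    volume: str              # the volume this page belongs to
    frozen: bool             # False = active (R/W), True = frozen (read-only)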

  • 1-15

    Core: Disk Virtualization The Storage Center manages all disks as a single pool of resources from which the administrator can dynamically select and assign disk resources to server resources. Virtualization employs full disk parallelization across heterogeneous servers and disks.

    Overcome conventional problems:
    Volumes span a fixed subset of disks
    Storage restricted to single servers
    Homogeneous drives and interfaces
    Long-term configuration decisions must be made before usage patterns are understood

    Disk Virtualization benefits:
    Manage all disks as a single pool
    Present disk resources to any server
    Mix and match any capacity, speed, or interface
    Changes can be made in a non-disruptive manner
    Reduce costs
    Optimize performance
    Increase availability
    Maximize resource utilization

  • 1-16

    Common RAID Levels
    RAID (Redundant Array of Independent Disks) is the method of using multiple drives acting as one storage pool. There are different methods of storing the data, and each method offers different performance and redundancy levels.

    RAID 0 offers fast performance but no redundancy. All drives are used to store data, but if one drive fails, the information is lost.

    RAID 1 offers fast performance and high availability. All data is written to one drive and then mirrored on a second drive. If either drive fails, the remaining drive maintains access to the data. The trade-off is in cost. While both drives are used, the available storage space is only half the raw disk space. This makes the cost of storage higher.

    RAID 5 offers good performance and efficient storage. Data is striped across multiple drives, but a parity bit is added so that if a drive fails, the missing data can be reconstructed from the remaining drives and the parity. The parity information consumes the equivalent of one drive, so usable storage is the total drive space minus one drive.

    RAID 10 offers fast performance and high availability. Data blocks are mirrored to other drives and then striped. If a drive fails, data is still accessible from the mirror. Rebuild times are shorter since no parity needs to be calculated. The trade-off is again cost. Usable space is half the raw disk space.
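    The usable-space rules above can be shown with a short worked sketch. The drive counts and 1TB drive size are hypothetical example values:

def usable_tb(raid_level, drives, drive_tb=1.0):
    raw = drives * drive_tb
    if raid_level == "RAID 0":                 # striping only, no redundancy
        return raw
    if raid_level in ("RAID 1", "RAID 10"):    # mirroring: half the raw space
        return raw / 2
    if raid_level == "RAID 5":                 # parity consumes one drive's worth of space
        return raw - drive_tb
    raise ValueError(raid_level)

print(usable_tb("RAID 0", 8))    # 8.0 TB usable from 8 x 1TB drives
print(usable_tb("RAID 1", 2))    # 1.0 TB usable from 2 x 1TB drives
print(usable_tb("RAID 5", 8))    # 7.0 TB usable from 8 x 1TB drives
print(usable_tb("RAID 10", 8))   # 4.0 TB usable from 8 x 1TB drives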

  • 1-17

    RAID Consumption Considerations
    The use of the different RAID types that the Storage Center supports has a direct impact on the amount of storage consumed on the disks when a write occurs. Dell Compellent implements RAID 5 in one of two methods: RAID 5-5 and RAID 5-9. RAID 5-5 includes a parity bit every 5th disk and has a 20% overhead, while RAID 5-9 includes a parity bit every 9th disk and has an 11% overhead. Dell Compellent uses RAID 5-9 by default. Below is a comparison of the different RAID types.

    RAID 10

    Fastest performing RAID type Consumes 2 times the RAW disk space

    RAID 5-5

    Medium performance on WRITEs, little or no impact on READs Consumes 20% of storage for overhead Provides better performance in degraded mode because it uses half the I/Os

    RAID 5-9

    Provides the same performance on WRITEs as RAID 5-5, little or no impact on READs Consumes approximately 11% of storage for overhead Need at least 9 disks to use Greater possibility of losing a disk in the RAID set Provides lesser performance in degraded mode due to I/O usage

  • 1-18

    Dual Redundancy
    Dual Redundancy is offered through the use of RAID 6 and RAID 10 Dual Mirror. Each addresses the concern of losing two drives within the same stripe set. RAID 10 DM offers protection by striping the data and then creating two mirrors. This has a higher overhead than normal RAID 10. RAID 6 offers similar protection by striping the data, then calculating a parity bit twice and writing each of the parity bits to a separate disk. While the writes are slower than RAID 10 or RAID 10 DM, RAID 6 offers good read performance while still protecting against a two-drive failure.

    RAID 10 DM overhead is 66%
    RAID 6 is offered in two versions: RAID 6-6 and RAID 6-10
    RAID 6-6 overhead is 34%, while RAID 6-10 overhead is 20%

    Best Practice: If individual drive capacities in a tier are greater than 900GB, set the tier to Dual Redundancy. Tiers may be modified at any time; however, within a tier only R5/R10 or R6/R10DM may be used.
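    The overhead figures quoted on this and the previous slide follow directly from the stripe layouts. The sketch below is a simple restatement of that arithmetic (fractions are approximate, matching the rounded percentages above):

overhead = {
    "RAID 10":    1 / 2,    # one mirror copy: half of raw space is redundancy
    "RAID 10 DM": 2 / 3,    # two mirror copies: ~66% of raw space
    "RAID 5-5":   1 / 5,    # one parity disk per 5: 20%
    "RAID 5-9":   1 / 9,    # one parity disk per 9: ~11%
    "RAID 6-6":   2 / 6,    # two parity disks per 6: roughly one third (quoted as 34% above)
    "RAID 6-10":  2 / 10,   # two parity disks per 10: 20%
}

for level, fraction in overhead.items():
    print(f"{level:10s} {fraction:.0%} of raw capacity used for redundancy")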

  • 1-19

    Core: Boot From SAN
    The boot from SAN function allows servers to use an external SAN volume as the server's boot volume. This removes the requirement that servers contain internal DAS drives.

    Benefits
    Diskless servers reduce cost
    Easier transition to blade servers
    Improves server management and recovery
    Allows multiple boot images to be centrally stored for easy backup, management and restoration
    No license-per-server is required for Dell Compellent's boot from SAN function

    Considerations
    HBA BIOS needs to be configured to boot from storage
    The server BIOS needs to be configured to boot from the HBA
    When loading the O/S, the O/S may need to be configured to boot from a remote server

  • 1-20

    Core: Dynamic Capacity
    Dynamic Capacity allows the system to allocate space only when data is physically written to the volume, significantly reducing the initial storage needs and improving overall performance. In the first example, you will notice how 10 terabytes can be presented to servers from the Storage Center, but because of how Dynamic Capacity works only 1 terabyte of actual space is used. The second scenario shows how a traditional SAN (without Thin Provisioning) allocates disk space and how the Storage Center allocates space. Notice how much space is allocated but not used on a traditional SAN. Both volume 1 and volume 2 have lots of unused allocation. In the Storage Center you don't need to purchase extra space until you actually intend to use it. Be aware - a user can possibly over-allocate beyond physically available storage.
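    A minimal sketch of this allocate-on-write idea, assuming 2MB pages and hypothetical volume sizes (an illustration of the concept, not Storage Center code):

PAGE_MB = 2  # default datapage size

class ThinVolume:
    def __init__(self, presented_gb):
        self.presented_gb = presented_gb        # size the server sees
        self.allocated_pages = set()            # pages actually consuming disk

    def write(self, offset_mb, length_mb):
        first = offset_mb // PAGE_MB
        last = (offset_mb + length_mb - 1) // PAGE_MB
        self.allocated_pages.update(range(first, last + 1))  # allocate only on write

    def consumed_gb(self):
        return len(self.allocated_pages) * PAGE_MB / 1024

vol = ThinVolume(presented_gb=10 * 1024)            # 10 TB presented to servers
vol.write(offset_mb=0, length_mb=1 * 1024 * 1024)   # 1 TB actually written
print(vol.presented_gb, "GB presented,", vol.consumed_gb(), "GB consumed")  # 10240 GB presented, 1024.0 GB consumed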

  • 1-21

    Core: Data Instant Replay Features

    Leverages Fluid Data Architecture

    No pre-allocation needed

    No duplication of data

    Automatic coalescence

    Unrestricted number of replays per volume

    Multiple schedules per volume

    Recovered replays are full read/write volumes

    Can be mapped to any server

  • 1-22

    Core: Data Instant Replay
    A replay is a point-in-time-copy (PITC) of a volume to provide fast recovery from hazards such as viruses, power outages, hardware failure, or human error. Restoring a replay can recover lost data or revert the volume to a previous point in time. Dell Compellent's replays differ from the traditional PITC concept in that the blocks or pages are frozen and not copied. No user data is moved, making the process efficient in both time taken to complete the replay and space used by replays.

    Benefits
    Unlimited, space efficient, time-based replays at any time interval
    Instantaneous recovery from data hazards by restoring a replay
    Minimize system downtime
    Reduce dependence on tape-based backups
    Intuitive profiles and scheduling mechanism
    Replays can also be used to test new applications on actual data without risk of data loss or corruption

  • 1-23

    Replay Recovery
    When a volume is recovered, it is labeled as a view. View volumes share some data with the replay they were recovered from and some data with the source volume.

    Volume 1 (Source Volume - Blue)

    Recovery of Volume 1 is started at Time 2

    Volume 2 (Recovered Volume - Red)
    Data in Volume 2 consists of Volume 1's coalesced data and the deltas present at the time the replay was taken on Volume 1.

    This becomes a new branch. The new branch is dependent on the old branch only while read data is needed; that replay cannot be expired until Volume 2 no longer needs that data.

    DIR is not enabled on Volume 2 by default. Any new writes become part of the newest data set and are considered active on Volume 2.

    If a replay was taken on Volume 2, only the new writes on Volume 2 would be in that replay and would be independent of Volume 1 data.

  • 1-24

    How Data Instant Replay Works:
    A replay is a point-in-time-copy (PITC) of a volume to provide fast recovery from hazards such as viruses, power outages, hardware failure, or human error. Restoring a replay can recover lost data or revert the volume to a previous point in time. Dell Compellent's replays differ from the traditional PITC concept in that the blocks or pages are frozen and not copied. No user data is moved, making the process efficient in both time taken to complete the replay and space used by replays.

    Replay is taken - Once a replay has been scheduled or initiated manually, only the data that has been changed (deltas) between the time marks makes up the size of the replay

    Follow-on Replays - Only the deltas will consume space in successive replays

    A Replay Expires - When a DIR expires the information is coalesced
    Newer written data takes precedence in a conflict
    Space is then released back into the common pool

    Expiration is complete - Data has finished coalescing
    Space has been released back to the common pool
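    A conceptual sketch of this behavior (a simplified model, not the actual implementation): taking a replay freezes the current delta pages in place, new writes always land in active pages, and expiring a replay coalesces its pages forward with newer data taking precedence:

class Volume:
    def __init__(self):
        self.active = {}      # page number -> data written since the last replay
        self.replays = []     # frozen page sets (deltas), oldest first

    def write(self, page, data):
        self.active[page] = data              # new writes are always active

    def take_replay(self):
        # Freeze the current deltas; nothing is copied, the pages simply become read-only.
        self.replays.append(self.active)
        self.active = {}

    def expire_oldest_replay(self):
        # Coalesce the expired replay into the next newer data set; newer data
        # wins any conflict, and superseded pages are released to the pool.
        expired = self.replays.pop(0)
        newer = self.replays[0] if self.replays else self.active
        for page, data in expired.items():
            newer.setdefault(page, data)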

  • 1-25

    Data Instant Replay Data Pages
    Data in various pages has different accessibility characteristics. The interaction of these pages describes the DIR process.

    Active Pages
    Data that is Read/Writable. This is the newest or most current data set

    Accessible Data Populated in R/O Pages
    This data is Read/Only, but is accessible. It is from an earlier time mark, but has not been changed

    Inaccessible Data Populated in R/O Pages
    Data that has had changes made to it. This data is Read/Only and can only be accessed by recovering a view volume

    Replay Data consists of only the data that has been changed (deltas). After a replay is taken on a volume, all active pages become Read/Only

  • 1-27

    Data Progression, a licensed application, leverages cost and performance differences between storage tiers allowing the maximum use of lower cost, higher capacity SAS (7.2K RPM) drives for stored data, while maintaining performance oriented (15K RPM) SAS or SSD drives for frequently-accessed data. Data Progression takes advantage of this by automatically tracking block usage and migrating data between storage classes based on storage profiles. Best Practice: For replay data progression to work properly, there must be at least a daily replay and an hour overlap for replay expiration. If only a single daily replay is being taken, the minimum expiration time should be 25 hours.

    Benefits
    Cost and performance are optimized when volumes use the Recommended profile
    Automate data movement to the proper storage class based on real-time usage
    All applications and operating systems require zero modification
    Minimize data management administrative time

    Note: If Data Progression is not licensed and a system uses RAID 10 and RAID 5, data is migrated within a tier (drive class) but cannot be migrated between tiers.

    How drives are defined into Tier 1, 2 or 3:
    Tier 1 - fastest drives in the Storage Center
    Tier 3 - slowest drives in the Storage Center
    Tier 2 - all other drives that are not in tier 1 or tier 3
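    The tier definitions above can be sketched as a simple rule: the fastest drive class present becomes Tier 1, the slowest becomes Tier 3, and anything in between becomes Tier 2. The speed ranking below is an illustrative assumption:

SPEED_RANK = {"SSD": 4, "15K": 3, "10K": 2, "7K": 1}   # fastest to slowest

def assign_tiers(drive_classes):
    ranked = sorted(drive_classes, key=SPEED_RANK.__getitem__, reverse=True)
    tiers = {}
    for cls in ranked:
        if cls == ranked[0]:
            tiers[cls] = 1                    # fastest drives in the Storage Center
        elif cls == ranked[-1] and len(ranked) > 1:
            tiers[cls] = 3                    # slowest drives in the Storage Center
        else:
            tiers[cls] = 2                    # all other drives
    return tiers

print(assign_tiers({"SSD", "15K", "7K"}))     # {'SSD': 1, '15K': 2, '7K': 3}
print(assign_tiers({"15K", "7K"}))            # {'15K': 1, '7K': 3}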

  • 1-28

    Storage Profiles used in Data Progression

    Here is the interface that shows the four different system defined Storage Profiles that can be used in support of Data Progression.

    The profiles are defined on the previous slide and will be discussed in more detail in a later chapter. We wanted to show you the slide here so you can see where these profiles come into play in the Storage Center interface.

  • 1-29

    Data Progression Volumes: Where's My Data?

    The Data Progression application allows the user to create volumes which will take advantage of using multiple RAID types and multiple classes of disks. This will enable frequently accessed data to be on the highest performance RAID type on the highest performance class of disk, while less frequently accessed data may reside on a lower performance RAID type on a lower performance class of disk.

    All initial writes to the volume are written to the highest performance RAID type and class of disk, providing optimum performance for write operations. The System Manager groups disks by speed: 7K, 10K, 15K and Solid State drives. Data Progression uses Fluid Data Architecture to move data to performance-appropriate and cost-effective disk tiers.

    Data Progression is configurable with storage profiles, although it is strongly suggested to use the Recommended profile for optimal performance and storage usage. Note: Data Progression works with VMware just as it would with any other operating system. All virtual hard disks contained in a single volume will share the same data progression configuration.

  • 1-30

    Here is an example of how Data Progression would work with an Exchange file. A 10GB Exchange .edb file is initially created and placed on Tier 1, 15K RPM SAS storage to benefit from its speed and performance. (It could also be placed on SSD drives.)

    An edb file contains components such as user InBox, Sent items, Contact information and Calendar data. Initially this information is new and current for a given user and requires maximum performance.

    Over time, the user data becomes stale and infrequently accessed. Traditionally, this older data remains on more expensive Tier 1 storage even though it is infrequently or never accessed. Other data migration solutions' granularity is either the entire volume or the entire .edb file. In this example, the full 10GB .edb file would remain in Tier 1, taking up the most expensive disks.

    Dell Compellent's Data Progression software automates this migration process at the block level. The Data Progression software makes the decision to migrate the stale data blocks to a lower, more cost-effective tier of storage and not affect the blocks needing higher performance.

    Because Data Progression works at the block level, the 10GB Exchange .edb file will actually be made up of blocks from all 3 tiers of storage in the Storage Center. The result will typically look something like this:
    500MB of active data (Inbox, calendar, etc.) in Tier 1
    500MB of data that was accessed a couple of weeks ago (last month's calendar, deleted items, etc.)
    9GB of data that has not been accessed in a while sitting on the less expensive drives

    If data that has been migrated becomes active again, the Storage Center will migrate the data, at the block level, back up the tiers to provide the performance needed.

    All this is done without a host agent deployed.

  • 1-31

    Page Lifecycle - Concepts When data is first written, by default, it will be striped in Tier 1 using RAID 10 (blocks A&A,B&B,C&C,D&D). When a Replay is taken, the blocks will be marked as READ ONLY. Then when Data Progression runs, the blocks will be restriped into RAID 5 (5-9 by default) blocks A,B,C,D with Parity (P). When newer data is written (blocks C1&C1) they are striped as a RAID 10. Block C is now older data. The next time Data Progression runs, block C is moved to the lowest Tier-R5/R6 as Inaccessible Data. This means that the only way to view this data is to create a view volume from the Replay. In addition block B has now aged. While there is no newer data, since the block has aged, it is now progressed down to the next defined tier.

    Note: Conceptual model. In reality, data layout is more complex

  • 1-32

    Applications: FastTrack
    FastTrack is a licensed Storage Center feature. With this enabled, data is promoted and demoted from outer to inner tracks of a disk using the same algorithms implemented for Data Progression.

    FastTrack is enabled for all volumes if it is licensed
    Use the Online Storage report to see a system-wide view of used space on FastTrack blocks
    Use the Statistics screen to view FastTrack use on a per-volume basis

    FastTrack Features:
    Reserves the outer tracks for the most active blocks
    Dynamically places the most frequently used data on the fastest tracks of each drive
    Automated FastTrack configuration optimizes storage usage
    Seek time is minimized through intelligent data placement

  • 1-33

    Core: Datapage Sizes
    2 MB is Dell Compellent's default datapage size. This selection is appropriate for most applications. Additional page sizes can be selected; these can be made accessible by changing Advanced User Options under Configure My Volume Defaults.

    512 KB: This datapage size is appropriate for applications with high performance needs or environments in which replays are taken frequently under heavy IO. Selecting this size reduces the amount of space the System Manager can present to servers.

    4 MB: This datapage size is appropriate for systems that use a large amount of disk space with infrequent replays.

    Caution: If you are considering using either the 512 KB or 4 MB datapage setting, it is recommended you contact Copilot Services so that system resources remain balanced and the impact on performance is considered.
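    The trade-off can be illustrated with simple arithmetic: for the same volume, a smaller datapage means more pages (and more page metadata) to track, while a larger datapage means fewer. The 1TB volume below is a hypothetical example:

def pages_needed(volume_gb, page_kb):
    return volume_gb * 1024 * 1024 // page_kb

for page_kb in (512, 2048, 4096):            # 512 KB, 2 MB (default), 4 MB
    print(f"{page_kb:>4} KB pages -> {pages_needed(1024, page_kb):,} pages for a 1 TB volume")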

  • 1-34

    Command Utility (CompCU) Requirements
    The Storage Center Command Utility (CU) is a Java program that is used to run scripted commands against the Storage Center. It can be run from multiple operating systems (such as Windows, Linux, AIX, etc.) and can automate functions such as:

    Creating and/or modifying a volume
    Creating replays on volumes
    Mapping volumes to servers
    Creating and mapping replay view volumes
    Creating, modifying or deleting users and user groups
    Performing alert management and copy/mirror/migrate operations

    Commands can be executed from a command prompt or included in a script file for batch execution. Note:

    To create or modify Storage Center objects, administrative rights are required on the Storage Center

    For a complete list of all CU functions refer to the Command Utility User Guide

  • 1-35

    Command Set for Windows PowerShell
    PowerShell is a command line shell and scripting technology from Microsoft for enterprise automation. PowerShell is a powerful tool to manage Windows servers, workstations, and applications. PowerShell increases administrative control and productivity. PowerShell features include:

    Included with Windows Server 2008
    Download PowerShell 1.0 or 2.0 for Server 2003
    Uses an interactive command shell
    Uses an admin-focused scripting language
    Easy to adopt, learn and use
    Works with existing Microsoft IT infrastructures
    Automates repetitive tasks
    Ensures complex, error-prone routines are performed consistently

  • 1-36

    Storage Center User Interfaces
    There are two interfaces that will be utilized during the configuration and operation of the Storage Center:

    Serial Command Line Interface (CLI)

    The CLI is used primarily for initial serial number and IP address configuration
    The CLI may also be utilized to support engineering-level troubleshooting
    NOTE: CLI is to be used only under the direction of Copilot Support

    Graphical User Interface (GUI)
    After serial number and IP address configuration, the GUI can be accessed via a web browser. This GUI connection will be used for the configuration, operation, administration and management of the Storage Center.

  • 1-37

    Storage Center Graphical User Interface

    The Storage Center Graphical User Interface is utilized after initial serial number and IP address configuration. The GUI interface is accessed via a web browser. This GUI connection will be used for the setup, operation, administration and management of the Storage Center.

    The Storage Center GUI requires the use of the Java plug-in. For optimum performance, use of the latest version of the Java plug-in is recommended. The Java plug-in can be downloaded from http://www.sun.com/java

    The Graphical User Interface also uses the Secure Socket Layer (SSL) technology to provide advanced client-server security. Any pop-up blocker applications should be disabled when utilizing the GUI.

    To log in to the Storage Center, open a web browser and enter the eth0 (Single Controller) or the management (Dual Controller) IP address.

    The default login credentials are as follows:

    Default User ID: Admin (case sensitive)

    Default Password: mmm (case sensitive)

  • 1-38

    Storage Center Version 6.3 Features
    The Storage Center Operating System (SCOS) version 6.3 adds new features including:

    Performance improvements
    Active Directory and OpenLDAP support
    IPv6 support
    New Synchronous Replication options
        High Availability Mode
        High Consistency Mode
        Smart Resynchronization minimizes the amount of data replicated when the remote site becomes available after an interruption
    W2012 support
        Thin provisioning support (TRIM/UNMAP)
        Offload Data Transfer
    Support for 16Gb Fibre Channel
    Volume Advisor
    SNMP Enhancements
    Login Security Enhancements, including support for customer-installed 2048-bit SSL Certificates

    Upgrading: SC040 controllers must be at v5.5.4 or above to update to v6.3.1; SC8000 controllers must be at v6.0 or above

    Requirements:
    Web browser: IE 7, 8, and 9; IE Desktop 10; Mozilla Firefox
    Java v6 or later

  • 1-39

    Storage Center Version 6.4 Features

    Storage Center version 6.4 adds additional features including:

    All Flash systems

    On-Demand Data Progression for flash optimization

    New Flash Optimized Storage Profile

    Supports the new SC280 High Density Enclosures

    Enhancements to Enterprise Manager

  • 1-40

    All Flash Solution
    The all-flash solution is made possible thanks to the introduction of Read Intensive (RI) SSDs - these Read Intensive SSDs are often referred to as MLCs. The MLC SSDs provided the opportunity for Dell Compellent engineers to design a new Storage Profile that will do automatic tiering across write intensive (WI) SSDs - often referred to as SLCs - as well as the MLCs. That same profile can also be used on traditional spinning disks.

    Let's take a look at an SC220 enclosure and what options are available for SSDs. Customers can start with:

    A 6-pack of 400GB, write intensive SSDs, and then add
    A 6-pack of 1.6TB, read intensive SSDs

    The final 12 drives can be a combination of 400GB write intensive SSDs, 1.6TB read intensive SSDs and/or traditional spinning disks (this example shows 1TB 7.2K RPM drives, but they can be any type of spinning disk).

  • 1-41

    Flash Optimized Solution
    The Flash optimized solution released with version 6.4 takes full advantage of SSD technology and allows for Storage Center solutions that include both SLC (Single-Level Cell SSD drives, which are write intensive) and MLC (Multi-Level Cell SSD drives, which are read intensive). The solution uses enterprise-class SAS SSDs that are dual ported for high performance and high availability. They also have built-in wear monitoring and non-volatile write cache to protect data during write operations. The 1.6TB MLC flash drives are new and are the densest 2.5" flash drives available. These flash drives improve performance for data-intensive apps and Online Transaction Processing (OLTP) workloads.

    Dell testing has shown >300K IOPS with low latency, as well as >100K IOPS with sub-millisecond latency running OLTP workloads.
    The cost of the system is optimized because of the new Storage Profiles that are optimized for multiple tiers of flash.

  • 1-42

    There are several types of SSDs on the market, each with different attributes designed to deliver the best performance, cost or endurance. Most of the available flash shared storage solutions use a single type of SSD, usually a write-intensive (WI) SLC [Single Level Cell] SSD which has high endurance but lower capacity and a high cost. This is what Dell Compellent has always offered in the past.

    In Compellent's innovative approach, two types of flash drives are deployed in a single enclosure: flash is tiered across the SLC SSDs and the MLC [Multi Level Cell] SSDs, which have a higher capacity and lower endurance but a considerably lower price, blending the attributes of these SSDs to achieve a superior $/GB.

    Since competitive flash solutions do not separate data reads from data writes, they are not able to use the read-intensive MLC SSDs and rely only on a single SSD type, eMLC or SLC. As these systems address all performance needs with a single flash tier, they waste the expensive capacity on data that is frequently read but not frequently written to, keeping their $/GB pricing very high.

    CML uses enterprise-class SSD drives, SLCs and MLCs, for reliability and performance. These are characteristics of enterprise-class SSD drives:

    Built-in wear monitoring; SSD Gas Gauge

    Dual port SAS

    Over-provisioned for endurance and sustained performance

    Non-volatile write cache on each drive

    Full end-to-end data protection (IOEDC/IOECC)

  • 1-43

    Here are a few different options for configuring a flash optimized solution. The solution on the left shows one SC220 enclosure that combines SLC and RI SSDs for an all-flash option. For Data Progression purposes, the 12 WI drives would be in Tier 1 and the 12 RI drives would be in Tier 2.

    This middle option is showing a similar all SSD solution but with two SC220 enclosures. Again, the customer would purchase the drives in packs of six.

    The system on the right is the hybrid option. With this option we are adding traditional, spinning disks to complete the solution. The 1TB, 7200RPM drives would be in tier 3.

    A few things to be aware of with the flash optimized solutions:

    1. The flash solution is only available with SC8000 controllers. Series 40 controllers do not support the flash solution.

    2. Drives are only sold for the flash solution in bundles of six.

    3. At initial release the flash solution is only available for brand new installations. An enclosure with RI & SLC drives can't be added to an existing system.

    4. The Storage Center must be running a minimum SCOS of 6.4.

  • 1-44

    On-Demand Data Progression
    To take full advantage of the new read intensive MLC SSD drives, Dell Compellent engineers developed a new approach to data progression referred to as On-Demand Data Progression. With On-Demand Data Progression, data can be automatically tiered from write intensive SLC SSDs to lower cost read intensive MLC SSDs. On-Demand Data Progression is effectively scheduled to run following a Replay. Whether the replay is created by a Replay profile, by an application replay (such as Replay Manager) or by a manual replay, pages will be moved from SLC (Tier 1) to MLC (Tier 2) once the replay is completed for that volume. These replays are mountable.

    On-Demand Data Progression introduces a new type of replay called a Space Management Replay. A Space Management Replay is created when the SLC drives (Tier 1) fill to approximately 95% of capacity. The Space Management Replay is taken and the blocks are then moved from Tier 1 to Tier 2 (SLC to MLC). This frees up space on the SLC drives so those drives have more room for new writes. Space Management Replays are NOT mountable.

  • 1-45

    To support On Demand Data Progression a new default Storage Profile has been created. Upon initialization of a new Flash Optimized Storage Center with SCOS 6.4 there will be five options for Storage Profiles. The options are:

    Low Priority (Tier 3)

    Flash Optimized with Progression (Tier 1 to All Tiers)

    Write Intensive (Tier 1)

    Flash Only with Progression (Tier 1 to Tier 2)

    Low Priority with Progression (Tier 3 to Tier 2)

    These will be defined further in a later section.

  • 1-46

    Dell Compellent vSphere Plug-in (VSP)

    The Dell Compellent vSphere Client Plug-in allows an administrator to manage the Storage Center directly from vSphere.

    Tasks include:

    View system info and statistics

    Provision volumes as VMFS datastores or RDMs

    Take VM consistent replays

    Extend/grow VMFS datastores

    Create and manage replications

    Deploy new virtual machines

    The Dell Compellent vSphere Client Plug-in is available for download via Knowledge Center http://kc.compellent.com

  • 1-47

    vStorage APIs for Array Integration (VAAI)

    Uses industry standard T10 spec

    Reduces:

    CPU utilization

    Memory utilization

    IO between Host and Array

    Time to complete tasks

    Complexity when managing thinly provisioned storage

    Four Primitives:

    Block Zeroing

    Copy Offload

    Hardware Assisted Locking

    Thin Provisioning

    Unmap

    Stun

  • 1-48

    vSphere APIs for Storage Awareness (VASA)
    The Dell Compellent VASA provider gathers information about the available storage topologies, capabilities and statuses of the Storage Centers and passes this information to VMware vCenter, making it accessible to vSphere clients. This information helps VMware administrators make intelligent decisions when selecting the datastores on which to place new virtual machines. The VASA provider supports Profile-Driven Storage, which categorizes volumes by performance and stores this information in storage profiles. Benefits include:

    Rapid, intelligent provisioning of applications
    Application service levels that match available storage
    Better visibility into the available storage pool

    Policy-Based Storage Management helps further provision virtual machines by automating datastore placement decisions. Requirements:

    Enterprise Manager v6.1.1 or above
    Storage Center v5.5 or above
    VMware ESX/ESXi v5.0 or above
    VMware vCenter Server v4.1 or above

  • 1-49

    Section Review

    This section discussed the fundamentals of Storage Area Networks as well as the Storage Center.

    Section Objectives:

    Storage Center functional overview

    Storage Center deployment architecture

    Review Storage Center communication links

    Review the Storage Center user interfaces

  • 1-51

    Review Questions

    1. Using the Fluid Data Architecture, what information does the Storage Center track for each block of data?

    2. Which Storage Center core function manages all disks as a single pool of resources from which the administrator can select and assign volumes to a server/host?

    3. Which application provides for the movement of different categories of data to different types of physical disks?

  • 1-52

    Review Questions (cont.) 4. Which core function provides for the ability to allocate space only when data is

    physically stored?

    5. What is the Dell Compellent licensed module that reserves the outer tracks of the disk for the most active blocks?

    6. Which core function provides for the ability to create space-efficient replays that restore data instantaneously?

  • 1-53

    Review Questions (cont.)

    7. What type of SSD drive is considered read intensive SLC or MLC?

    8. What is the feature in 6.3 called that allows movement of existing volumes from one Storage Center to another?

    9. When an expiration of a replay is complete, what happens to that space?

    10. What solution is part of the Storage Center 6.4 release that allows multiple tiers of SSD drives to work with HDD drives?

