Hitachi Adaptable Modular Storage 2000 Family Best Practices with Hyper-V Best Practices Guide

By Rick Andersen and Lisa Pampuch

April 2009


Summary

Increasingly, businesses are turning to virtualization to achieve several important objectives, including increasing return on investment, decreasing total cost of operations, improving operational efficiencies, improving responsiveness and becoming more environmentally friendly.

While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization requires that IT administrators adopt a new way of thinking about storage infrastructure and application deployment. Improper deployment of storage and applications can have catastrophic consequences due to the highly consolidated nature of virtualized environments.

The Hitachi Adaptable Modular Storage 2000 family of storage systems is best-in-class for Windows Server 2008 Hyper-V environments. It is ideal for businesses that are planning new deployments in existing or new environments, and for businesses that are considering virtualizing their servers and need a storage system that increases reliability and performance and reduces total cost of operations.

This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It provides guidance on how to configure both the Hyper-V environment and a 2000 family storage system to achieve the best performance, scalability and availability.


Contributors

The information included in this document represents the expertise, feedback and suggestions of a number of skilled practitioners. The authors recognize and sincerely thank the following contributors and reviewers of this document:

• Mark Adams, Product Marketing

• Robert Burch, Advanced Technical Consultants

• Alan Davey, Storage Platforms Product Management

• Rob Simmons, Application Solutions

• Eric Stephenson, Hardware and Alliances

• Bin Bin Zhang, Global Services Technical Support


Table of Contents

Hitachi Product Family
    Adaptable Modular Storage 2000 Family Features
    Hitachi Storage Navigator Modular 2 Software
    Hitachi Performance Monitor Software
Hyper-V Architecture
    Windows Hypervisor
    Parent and Child Partitions
    Integration Services
    Emulated and Synthetic Devices
Hyper-V Storage Options
    Disk Type
    Disk Interface
    I/O Paths
Basic Hyper-V Host Setup
Basic Storage System Setup
    Fibre Channel Storage Deployment
    iSCSI Storage Deployment
    Storage Provisioning
Hyper-V Protection Strategies
    Backups
    Storage Replication
    Hyper-V Quick Migration
    Hitachi Storage Cluster Solution
Hyper-V Performance Monitoring
    Windows Performance Monitor
    Hitachi Performance Monitor Feature
    Hitachi Tuning Manager Software


Hitachi Adaptable Modular Storage 2000 Family Best Practices with Hyper-V Best Practices Guide

By Rick Andersen and Lisa Pampuch

Increasingly, businesses are turning to virtualization to achieve several important objectives:

• Increase return on investment by eliminating underutilization of hardware and reducing administration overhead

• Decrease total cost of operation by reducing data center space and energy usage

• Improve operational efficiencies by increasing availability and performance of critical applications and simplifying deployment and migration of those applications

In addition, virtualization is a key tool companies use to improve responsiveness to the constantly changing business climate and to become more environmentally friendly.

While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization requires that IT administrators adopt a new way of thinking about storage infrastructure and application deployment. Improper deployment of storage and applications can have catastrophic consequences due to the highly consolidated nature of virtualized environments.

The Hitachi Adaptable Modular Storage 2000 family of storage systems is best-in-class for Windows Server 2008 Hyper-V environments. It offers a robust storage solution that reduces setup and management costs and eliminates performance bottlenecks. This is accomplished through the use of the 2000 family’s advanced point-to-point SAS-based architecture for concurrent back-end I/O capacity and symmetric active-active front-end architecture that dynamically spreads I/O workloads across resources and allows I/O through any path. The 2000 family is ideal for businesses that are planning new deployments in existing or new environments, and for businesses that are considering virtualizing their servers and need a storage system that increases reliability and performance and reduces total cost of operations.

This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It provides guidance on how to configure both the Hyper-V environment and a 2000 family storage system to achieve the best performance, scalability and availability.

Hitachi Product Family

Hitachi Data Systems is the most trusted vendor in delivering complete storage solutions that provide dynamic tiered storage, common management, data protection and archiving, enabling organizations to align their storage infrastructures with their unique business requirements.

Adaptable Modular Storage 2000 Family Features

The Hitachi Adaptable Modular Storage 2000 family provides a reliable, flexible, scalable and cost-effective modular storage system for Hyper-V. The 2000 family of modular storage systems is ideal for demanding applications that require enterprise-class performance, capacity and functionality.


The 2000 family is the only midrange storage product with symmetric active-active front-end and dynamic back-end controller architecture that provides integrated, automated hardware-based front-to-back-end I/O load balancing. This ensures I/O traffic to back-end disk devices is dynamically managed, balanced and shared equally across both controllers, even if the I/O load to specific logical units (LUs) is skewed. Storage administrators are no longer required to manually define specific affinities between LUs and controllers, simplifying overall administration. The 2000 family’s architecture takes full advantage of native OS multipathing capabilities, thereby eliminating mandatory requirements to implement proprietary multipathing software.

No other midrange storage product has an advanced serial-attached SCSI (SAS) drive interface. The new point-to-point back-end design virtually eliminates I/O transfer delays and contention associated with Fibre Channel arbitration. It also provides significantly higher bandwidth and I/O concurrency and isolates any component failures that might occur on back-end I/O paths.

Flexibility

• Choice of Fibre Channel or iSCSI server interfaces, or both

• Resilient performance using LUs that can be configured to span multiple drive trays and back-end paths

• Choice of high-performance SAS and low-cost SATA disk drives

• Lowered costs using SAS or SATA drives that can be intermixed in the same tray

• Support for all major open systems operating systems, host bus adapters (HBAs) and switch models from major vendors

Scalability

• Ability to add capacity, connectivity and performance as needed

• Concurrent support of large heterogeneous open systems environments using up to 2048 virtual ports with host storage domains and 4096 LUs

• Ability to scale capacity to 472TB

• Ability to scale performance to more than 900K IOPS

• Seamless expansion due to data-in-place upgrades from Adaptable Modular Storage 2100 to Adaptable Modular Storage 2300 and to Adaptable Modular Storage 2500

• Large-scale disaster recovery and data migration using integration with Hitachi Universal Storage Platform V and Hitachi Universal Storage Platform VM

• Complete lifecycle management solutions within tiered storage environments

Availability

• Outstanding performance and non-disruptive operations using Hitachi Dynamic Load Balancing Controller

• 99.999% data availability

• No single point of failure

• Hot swappable major components

• Dual-battery backup for cache

• Non-disruptive microcode updates

• Flexible drive sparing with no copy back required after a RAID rebuild

• Host multipathing capability

• In-system SQL Server and Exchange backup and snapshot support through Windows Volume Shadow Copy Service

• Remote site replication

• RAID-5, RAID-1, RAID-1+0 and RAID-0 (SAS drives) support


• RAID-6 dual parity support for enhanced reliability when using large SATA and SAS drives

• Hi-Track® Monitor support

Performance

• No performance bottlenecks in highly utilized controllers due to Hitachi Dynamic Load Balancing Controller

• Point-to-point SAS backplane with a total bandwidth of 96 gigabits per second (Gbps) and no overhead from loop arbitration

• Full duplex 3Gbps SAS drive interface that can simultaneously send and receive commands or data on the same link

• Up to 32 concurrent I/O paths provide up to 9600 megabytes per second of total system bandwidth

• 4Gbps host Fibre Channel connections

• Cache partitioning and cache residency to optimize or isolate unique application workloads

Simplicity

• Simplified RAID group placement using SAS backplane architecture

• Highly intuitive management software that includes easy-to-use configuration and management utilities

• Command line interface and command control interface (CCI) that match GUI functionality

• Seamless integration with Hitachi storage systems, managed with a single set of tools using Hitachi Storage Command Suite software

• Consistency among most Hitachi software products whether run on Hitachi modular storage systems or Hitachi Universal Storage Platform™ family models

Security

• Role-based access to Adaptable Modular Storage management systems

• Ability to track all system changes with audit logging

• Ability to apply system-based write once, read many (WORM) data access protection to logical volumes to provide regulatory-compliant protection

• Encrypted communications between management software and storage system using SSL and TLS

• Internet Protocol version 6 (IPv6) and Internet Protocol Security (IPsec) compliant maintenance ports

Hitachi Storage Navigator Modular 2 Software

Hitachi Storage Navigator Modular 2 software is the integrated interface for Adaptable Modular Storage 2000 family firmware and software features. Use it to take advantage of all of the 2000 family’s features. Storage Navigator Modular 2 software provides both a Web-accessible graphical management interface and a CLI to allow ease of storage management.

Storage Navigator Modular 2 software is used to map security levels for SAN ports and virtual ports and for inter-system path mapping. It is used for RAID-level configurations, for LU creation and expansion, and for online volume migrations. It also configures and manages Hitachi replication products. It enables online microcode updates and other system maintenance functions and contains tools for SNMP integration with enterprise management systems.

Hitachi Performance Monitor Software

Hitachi Performance Monitor software provides detailed, in-depth storage performance monitoring and reporting of Hitachi storage systems, including drives, logical volumes, processors, cache, ports and other resources. It helps organizations ensure that they achieve and maintain their service level objectives for performance and availability, while maximizing the utilization of their storage assets. Performance Monitor software’s in-depth troubleshooting and analysis reduce the time required to resolve storage performance problems. It is an essential tool for planning and analysis of storage resource requirements.


Hyper-V Architecture

Microsoft® Hyper-V is a hypervisor-based virtualization technology that is integrated into the x64 editions of Windows Server 2008. Hyper-V allows a user to run multiple operating systems on a single physical server. To use Hyper-V, enable the Hyper-V role on the Windows Server 2008 server.

Figure 1 illustrates Hyper-V architecture.

Figure 1. Hyper-V Architecture

The Hyper-V role provides the following functions:

• Hypervisor

• Parent and child partitions

• Integration services

• Emulated and synthetic devices


Windows Hypervisor

The Windows Hypervisor is a thin layer of software that allows multiple operating systems to run simultaneously on a single physical server, and is the core component of Hyper-V. The Windows Hypervisor is responsible for the creation and management of partitions that allow for isolated execution environments. As shown in Figure 1, the Windows Hypervisor runs directly on top of the hardware platform, with the operating systems running on top.

Parent and Child Partitions

To run multiple virtual machines with isolated execution environments on a physical server, Hyper-V technology uses a logical entity called a partition. These partitions are where a guest operating system and its applications execute. Hyper-V defines two kinds of partitions: parent and child.

Parent Partition

Each Hyper-V installation consists of one parent partition, which is a virtual machine that has special or privileged access. Some documentation might also refer to parent partitions as host partitions. This document uses the term parent partition.

The parent partition is the only virtual machine with direct access to hardware resources. All of the other virtual machines, which are known as child partitions, go through the parent partition for device access.

To create the parent partition, enable the Hyper-V role in Server Manager and restart the system. After the system restarts, the Windows hypervisor is loaded first, and then the rest of the stack is converted to become the parent partition. The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions that house the guest operating systems.

Child Partition

Hyper-V executes a guest operating system and its associated applications in a virtual machine, or child partition. Microsoft documentation sometimes also refers to child partitions as guest partitions. This document uses the term child partition.

Child partitions do not have direct access to hardware resources, but instead have a virtual view of the resources, which are referred to as virtual devices. Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition. The VMBus is a logical channel that enables inter-partition communication.

The parent partition runs a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects the request to VSPs in the parent partition via the VMBus. This entire process is transparent to the guest OS.

Integration Services

Integration services are a collection of services installed on the guest OS that improve performance while running under Hyper-V: enlightened I/O and integration components. The version of the guest OS deployed determines which of these two services can be installed on the guest OS.

Enlightened I/O

Enlightened I/O is a Hyper-V feature that allows virtual devices in a child partition to make better use of host resources because VSC drivers in these partitions communicate with VSPs directly over the VMBus for storage, networking and graphics subsystem access. Enlightened I/O is a specialized virtualization-aware implementation of high-level communication protocols like SCSI that takes advantage of the VMBus directly, bypassing any device emulation layer. This makes the communication more efficient, but requires the guest OS to support enlightened I/O. At the time of this writing, Windows Server 2008, Windows Vista and SUSE Linux are the only operating systems that support enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than operating systems that must use slower emulated hardware.


Integration Components

Integration components (ICs) are sets of drivers and services that enable guest operating systems to use synthetic devices, thus creating more consistent child partition performance. By default, guest operating systems only support emulated devices. Emulated devices normally require more overhead in the hypervisor to perform the emulation and do not utilize the high-speed VMBus architecture. By installing integration components on a supported guest OS, you can enable the guest to utilize the high-speed VMBus and synthetic SCSI devices.

Emulated and Synthetic Devices

Hardware devices that are presented inside of a child partition are called emulated devices. The emulation of this hardware is handled by the parent partition. The advantage of emulated devices is that most operating systems have built-in device drivers for them. The disadvantage is that emulated devices are not designed for virtualization and thus have lower performance than synthetic devices.

Synthetic devices are optimized for performance in a Hyper-V environment. Hyper-V presents synthetic devices to the child partition. Synthetic devices are high performance because they do not emulate hardware devices. For example, with storage, the SCSI controller only exists as a synthetic device. For a list of guest operating systems that support synthetic SCSI devices, see the Hyper-V Planning and Deployment Guide.

Hyper-V Storage Options

Hyper-V deployment planning requires consideration of three key factors: the type of disk to deploy and present to child partitions, the disk interface and the I/O path.

Disk Type

The Hyper-V parent partition can present two disk types to guest operating systems: virtual hard disks (VHDs) and pass-through disks.

Virtual Hard Disks

Virtual hard disks (VHDs) are files that are stored on disks owned by the parent partition. These disks can be either SAN attached or local to the Hyper-V server. The child partition sees these files as its own hard disk and uses the VHD files to perform storage functions.

Three types of VHD disks are available for presentation to the host:

• Fixed VHD — The size of the VHD is fixed and the LU is fully allocated at the time the VHD is defined. Normally this allows for better performance than dynamic or differencing VHDs because the VHD is pre-allocated, causing less fragmentation, and the parent partition file system does not incur the overhead required to extend the VHD file. A fixed VHD has the potential for wasted or unused disk space. Consider also that after the VHD is full, any further write operations fail even though additional free storage might exist on the storage system.

• Dynamic VHD — The VHD is expanded by Hyper-V as needed. Dynamic VHDs occupy less storage than fixed VHDs, but at the cost of slower throughput. The maximum size that the disk can expand to is set at creation time, and writes fail when the VHD is fully expanded. Note that this dynamic feature only applies to expanding the VHD; the VHD does not automatically decrease in size when data is removed. However, dynamic VHDs can be compacted with the Hyper-V virtual hard disk manager to free any unused space.

• Differencing VHD — A VHD that involves both a parent and a child disk. The parent VHD contains the baseline disk image with the guest operating system and most likely an application and the data associated with that application. After the parent VHD is configured for the guest, a differencing disk is assigned as a child to that partition. As the guest OS executes, any write operations are stored on the child differencing disk. Differencing VHDs are good for test environments, but performance can degrade because the majority of I/O must access the parent VHD as well as the differencing disk.


Because dynamic VHDs have more overhead, best practice is to use fixed VHDs in most circumstances. For heavy application workloads such as Exchange or SQL Server, create multiple fixed VHDs and isolate application files such as databases and logs on their own VHDs.

Pass-through Disks

A Hyper-V pass-through disk is a physical disk or LU that is mapped or presented directly to the guest OS. Hyper-V pass-through disks normally provide better performance than VHD disks.

After the pass-through disk is visible to the parent partition and set offline there, it can be made available to the child partition using the Hyper-V Manager. Pass-through disks have the following characteristics:

• Must be in the offline state from the Hyper-V parent perspective, except in the case of clustered or highly available virtual machines

• Presented as raw disk to the parent partition

• Cannot be dynamically expanded

• Do not provide the ability to take snapshots or to use differencing disks

Disk Interface

Hyper-V supports both IDE and SCSI controllers for both VHD and pass-through disks. The type of controller you select is the disk interface that the guest operating system sees. The disk interface is completely independent of the physical storage system.

Table 1 summarizes disk interface considerations and restrictions.

Table 1. Disk Interface Considerations

IDE:

• Consideration: All child partitions must boot from an IDE device. Restriction: none.

• Consideration: A maximum of four IDE devices are available for each child partition. Restriction: a maximum of two devices per IDE controller, for a maximum of four devices per child partition.

• Consideration: Virtual DVD drives can only be created as an IDE device. Restriction: none.

SCSI:

• Consideration: Best choice for all volumes based on I/O performance. Restriction: none.

• Consideration: Requires that Integration Services be installed on the child partition. Restriction: guest OS specific.

• Consideration: Can define a maximum of four SCSI controllers per child partition. Restriction: a maximum of 64 devices per SCSI controller, for a maximum of 256 devices per child partition.
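Where deployment plans must respect these limits, a small validation step can catch mistakes early. The following Python sketch is illustrative only; the function and its inputs are hypothetical, and the limits encoded are the ones from Table 1.

```python
def validate_child_disks(ide_devices, scsi_controllers, scsi_devices):
    """Check a planned child partition disk layout against the Table 1 limits."""
    problems = []
    if ide_devices < 1:
        problems.append("child partitions must boot from an IDE device")
    if ide_devices > 4:
        problems.append("more than four IDE devices per child partition")
    if scsi_controllers > 4:
        problems.append("more than four SCSI controllers per child partition")
    if scsi_devices > 64 * scsi_controllers:
        problems.append("SCSI device count exceeds 64 devices per controller")
    return problems

# A boot disk on IDE plus ten data disks spread over two SCSI controllers.
print(validate_child_disks(ide_devices=1, scsi_controllers=2, scsi_devices=10))  # []
```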

I/O Paths

The storage I/O path is the path that a disk I/O request generated by an application within a child partition must take to reach a disk on the storage system. Two storage configurations are available, based on the type of disk selected for deployment.

VHD Disk Storage Path

With VHD disks, all I/O goes through two complete storage stacks: once in the child partition and once in the parent partition. The guest application disk I/O request goes through the storage stack within the guest OS and then onto the parent partition file system.


Pass-through Disk Storage Path

When using the pass-through disk feature, the NTFS file system on the parent partition can be bypassed during disk operations, minimizing CPU overhead and maximizing I/O performance. With pass-through disks, the I/O traverses only one file system, the one in the child partition. Pass-through disks offer higher throughput because only one file system is traversed, requiring less code execution. When hosting applications with high storage performance requirements, deploy pass-through disks.

Basic Hyper-V Host Setup

Servers utilized in a Hyper-V environment must meet certain hardware requirements. For more information, see the Hyper-V Planning and Deployment Guide.

Note: Best practice is to install integration components on any child partition to be hosted under Hyper-V. The integration components install enlightened drivers to optimize the overall performance of a child partition. Enlightened drivers provide support for synthetic I/O devices, which significantly reduces CPU overhead for I/O when compared to using emulated I/O devices. In addition, synthetic I/O devices can take advantage of the unique Hyper-V architecture not available to emulated I/O devices, further improving their performance characteristics. For more information, see the Hyper-V Planning and Deployment Guide.

Multipathing

The 2000 family supports active-active multipath connectivity. To obtain maximum availability, design and implement your host-storage connections so that at least two unique paths exist from the host to the storage system. Hitachi Data Systems recommends the use of dual SAN fabrics, multiple HBAs and host-based multipathing software when deploying Hyper-V servers.

Multipathing software such as Hitachi Dynamic Link Manager software and Windows Server 2008 native MPIO are critical components of a highly available system. Multipathing software allows the Windows operating system to see and access multiple paths to the same LU, enabling data to travel any available path so that users experience increased performance and continued access to data in the event of a failed path. While multiple load-balancing settings exist in both Hitachi Dynamic Link Manager software and Windows Server 2008 native MPIO, the symmetrical active-active controller feature of the 2000 family enables either controller to respond to I/O regardless of the originating HBA port, without having to select a host load-balancing option. However, if the workload is large enough to consume more bandwidth than a single HBA port can handle, Hitachi Data Systems recommends using the round robin load-balancing algorithm in both Hitachi Dynamic Link Manager software and Windows Server 2008 native MPIO to distribute load evenly over all available HBAs.

Note: Hitachi Dynamic Link Manager software can only be used on the parent partition.

Queue Depth Settings on the Hyper-V Host

Queue depth settings determine how many command data blocks can be sent to a port at one time. Setting queue depth too low can artificially restrict an application’s performance, while setting it too high might cause a slight reduction in I/O. Setting queue depth correctly allows the controllers on the Hitachi storage system to optimize multiple I/Os to the physical disk. This can provide significant I/O improvement and reduce response time.

Applications that are I/O intensive can have many concurrent, outstanding I/O requests. For that reason, better performance is generally achieved with higher queue depth settings. However, this must be balanced with the available command data blocks on each front-end port of the storage system.

The 2000 family has a maximum of 512 command data blocks available on each front-end port. This means that at any one time, up to 512 active host channel I/O commands can be queued for service on a front-end port. The 512 command data blocks on each front-end port are used by all LUs presented on the port, regardless of the connecting server. When calculating queue depth settings for your Hyper-V HBAs, you must also consider queue depth requirements for other LUs presented on the same front-end ports to all servers. Hitachi recommends setting HBA queue depth on a per-target basis rather than a per-port basis.

To calculate queue depth, use the following formula:

512 ÷ total number of LUs presented through the front-end port = HBA queue depth per host

For example, suppose that four servers share a front-end port on the storage system, and between the four servers, 16 LUs are assigned through the shared front-end port and all LUs are constantly active. The maximum dynamic queue depth per HBA port is 32, that is:

512 command data blocks ÷ 16 LUs presented through the front-end port = 32 HBA queue depth setting
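As a worked illustration, the following Python sketch computes the same per-host HBA queue depth; the function name is hypothetical, and the 512 value is the 2000 family's per-port command data block limit described above.

```python
PORT_COMMAND_BLOCKS = 512  # command data blocks per 2000 family front-end port

def hba_queue_depth(total_lus_on_port):
    """Maximum queue depth per HBA when a front-end port is shared.

    total_lus_on_port counts every LU presented through the port,
    across all connecting servers.
    """
    if total_lus_on_port <= 0:
        raise ValueError("at least one LU must be presented through the port")
    return PORT_COMMAND_BLOCKS // total_lus_on_port

# Example from the text: 16 active LUs shared through one front-end port.
print(hba_queue_depth(16))  # 32
```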

Basic Storage System Setup

The Hitachi Adaptable Modular Storage 2000 family has no system parameters that need to be set specifically for a Hyper-V environment. With the 2000 family’s Dynamic Load Balancing Controllers that feature symmetric active-active controller architecture, the LUN-ownership concept no longer exists, and the associated parameters (for example, LUN ownership change disable mode) that are available on predecessor storage systems are obsolete.

Fibre Channel Storage Deployment

When deploying Fibre Channel storage on a 2000 family system in a Hyper-V environment, it is important to properly configure the Fibre Channel ports and to select the proper type of storage for the child partitions that are to be hosted under Hyper-V.

Fibre Channel Front-end Ports

Provisioning storage on two Fibre Channel front-end ports (one port per controller) is sufficient for redundancy on the Hitachi Adaptable Modular Storage 2000 family. This results in two paths to each LU from the Hyper-V host's point of view. For higher availability, connect the target ports to two separate fabrics so that multiple paths are always available to the Hyper-V server.

Hyper-V servers that access LUs on 2000 family storage systems must be properly zoned so that the appropriate Hyper-V parent and child partitions can access the storage. With the 2000 family, zoning is accomplished at the storage level by using host storage domains (HSDs). Zoning defines which LUs a particular Hyper-V server can access. Hitachi Data Systems recommends creating an HSD group for each Hyper-V server and using the name of the Hyper-V server in the HSD for documentation purposes.

Selecting Child Partition Storage

It is important to correctly select the type of storage deployed for the guest OS that is to be virtualized under Hyper-V. Consider also whether VHD or pass-through disks are appropriate. The following questions can help you make this determination:

• Is the child partition’s I/O workload heavy, medium, or light? If the child partition has a light workload, you might be able to place all the storage requirements on one VHD LU. If the child partition is hosting an application such as SQL or Exchange, allocate files that are accessed heavily, such as log and database files, to individual VHD LUs. Attach each individual LU to its own synthetic controller.

• What is the maximum size LU required to support the child partition? If the maximum LU is greater than 2040GB, you must either split the data or utilize pass-through disks. This is due to the size limitation of 2040GB for a VHD LU. (A decision helper sketch follows this list.)
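The following hypothetical Python helper encodes both questions; the 2040GB limit comes from this paper, while the workload categories and recommendation wording are illustrative assumptions.

```python
VHD_MAX_GB = 2040  # maximum size of a VHD LU, per this paper

def recommend_disk_type(max_lu_size_gb, workload):
    """Suggest a disk type for a child partition.

    workload is 'light', 'medium' or 'heavy' (illustrative categories).
    """
    if max_lu_size_gb > VHD_MAX_GB:
        return "pass-through (or split the data): LU exceeds the 2040GB VHD limit"
    if workload == "heavy":
        return "dedicated fixed VHDs (isolate database and log files), or pass-through"
    return "fixed VHD: a single VHD LU may hold all storage for a light workload"

print(recommend_disk_type(3000, "light"))  # pass-through (or split the data): ...
print(recommend_disk_type(500, "heavy"))   # dedicated fixed VHDs ...
```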


Dedicated VHD Deployment

Figure 2 shows dedicated VHDs for the application files and how the LUs map within the 2000 family storage system, the Hyper-V parent partition and the child partition. Note that this scenario uses the synthetic SCSI controller interface for the application LUs.

Figure 2. Dedicated VHD Connection

Key considerations:

• For better performance and easier management of child partitions, assign each child partition its own set of LUs.

• To enable the use of Hyper-V quick migration of a single child partition, deploy dedicated VHDs.

• To enable multiple child partitions to be moved together using quick migration, deploy shared VHDs.

• To achieve good performance for heavy I/O applications, deploy dedicated VHDs.


Shared VHD Deployment

This scenario utilizes a shared VHD disk, with that single VHD disk hosting multiple child partitions. Figure 3 shows a scenario where Exchange and SQL child partitions share a VHD disk on a 2000 family storage system and SharePoint and BizTalk child partitions also share a VHD disk on the 2000 family storage system.

Figure 3. Shared VHD Connection

Key considerations:

• It is important to understand the workloads of individual child partitions when hosting them on a single shared VHD. It is critical to ensure that the RAID group on the 2000 family storage system that is to host the shared VHD LU can support the aggregate workload of the child partitions. For more information, see the “Number of Child Partitions per VHD, per RAID Group” section of this paper.

• If using quick migration to move a child partition, understand that all child partitions hosted within a shared VHD move together. Whether the outage is due to automated recovery from a problem with the child partition or because of a planned outage, all the child partitions in the group are moved.


Pass-through Deployment

This scenario uses pass-through disks instead of VHD disks. A dedicated VHD LU is still required to host virtual machine configuration files. Do not share this VHD LU with other child partitions on the Hyper-V host. Figure 4 shows a scenario in which virtual machine configuration files, guest OS binaries, the page file and SQL Server application libraries are placed on the VHD LU, and the application files are deployed as pass-through disks.

Figure 4. Pass-through Connection

Key considerations:

• For higher throughput, deploy pass-through disks. Pass-through disks normally provide higher throughput because only the guest partition file system is involved.

• To achieve an easier migration path, deploy pass-through disks. Pass-through disks can provide an easier migration path because the LUs used by a physical machine on a SAN can be moved easily to a Hyper-V environment, allowing a new child partition to access the disks. This scenario is especially appropriate for partially virtualized environments.

• To support multi-terabyte LUs, deploy pass-through disks. Pass-through disks are not limited in size, so a multi-terabyte LU is supported.


• Pass-through disks appear as raw disks and are offline to the parent.

• If snapshots are required, remember that pass-through disks do not support Hyper-V snapshot copies.

iSCSI Storage Deployment

The Hyper-V architecture offers two methods for deploying iSCSI storage disks on 2000 family storage systems: direct connection and dual connection to the child partition. iSCSI ports must be configured for the Hyper-V host to access LUs on 2000 family storage systems.

iSCSI Port Settings

Hyper-V servers that access iSCSI LUs on 2000 family storage systems must be properly zoned so that the appropriate Hyper-V parent and child partitions can access the storage. With the 2000 family, zoning at the storage level is accomplished by using HSDs. Zoning defines which LUs a particular Hyper-V server can access. Hitachi Data Systems recommends creating an HSD group for each Hyper-V server and using the name of the Hyper-V server in the HSD for documentation purposes.

Direct Connection

One iSCSI deployment method is a direct connection to the 2000 family storage system from the child partition. The child partition must support the Microsoft iSCSI software initiator and must have the correct device driver.

Figure 5 shows a direct connection configuration. Notice that the child partition simply boots from the VHD on the Hyper-V parent partition. The storage system has no external LU to contain the configuration files, OS binaries or application libraries.

Figure 5. iSCSI Direct Connection to Child Partition


Key considerations:

• For increased availability due to dynamic addition of LUs to the child partition, choose the direct connection configuration.

• For simpler physical to virtual conversions (because reconfiguration of the iSCSI initiator and targets is not required), choose the iSCSI direct connection configuration.

• Use dedicated NICs for iSCSI traffic and do not share the iSCSI traffic with other network traffic. Dedicated NICs ensure greater performance and throughput because other network traffic does not interfere with the storage traffic on the iSCSI NICs.

• For easier movement of child partitions between Hyper-V host (because no changes are required to the 2000 family storage system), choose the direct connection configuration.

• Snapshots using VSS are not supported by the direct connection configuration.

Dual Connection

Another iSCSI deployment option is to establish iSCSI connections from both the Hyper-V parent partition and the child partition. Implementing this combination of connections provides more flexibility for configuring iSCSI connections to meet the storage needs of the child partition. Figure 6 illustrates the dual connection deployment option.

Figure 6. iSCSI Parent and Child Dual Connection


Key considerations:

• For better network performance when a child partition with iSCSI attached storage is executing in the parent partition, choose the dual connection configuration. This is because the NIC in the parent partition is communicating directly with the iSCSI storage system. This also allows for the leveraging of jumbo frame support on the physical NIC in the Hyper-V parent partition.

• For better server performance due to reduced CPU load, leverage TCP offload engine (TOE) NICs and TCP offload cards if possible.

• Use dedicated NICs for iSCSI traffic and do not share the iSCSI traffic with other network traffic. Dedicated NICs ensure greater performance and throughput because other network traffic does not interfere with the storage traffic on the iSCSI NICs.

• For presentation of multiple LUs to the child partition over a single iSCSI interface, choose the dual connection configuration.

• The dual connection configuration does not support dynamic addition of disks to the child partition.

Storage Provisioning

Capacity and performance cannot be considered independently. Performance always depends on and affects capacity, and vice versa. That is why it is difficult or impossible in real-life scenarios to provide best practices for the best LU size, the number of child partitions that can run on a single VHD and so on without knowing capacity and performance requirements. However, several factors must be considered when planning storage provisioning for a Hyper-V environment.

Size of LU

When determining the right LU size, consider the factors listed in Table 2. These factors are especially important from a storage system perspective. In addition, the individual child partition’s capacity and performance requirements (basic virtual disk requirements, virtual machine page space, spare capacity for virtual machine snapshots, and so on) must also be considered.

Table 2. LU Sizing Factors

• Guest base OS size: The guest OS resides on the boot device of the child partition.

• Guest page file size: Recommended size is 1.5 times the amount of RAM allocated to the child partition.

• Virtual machine files: Define the size as the size of the child partition memory plus 200MB.

• Application data required by the guest machine: Storage required by application files such as databases and logs.

• Modular volume migration: Smaller LUs can be migrated using Hitachi Modular Volume Migration to a broader range of possible target RAID groups.

• Data replication: Using more but smaller LUs offers better flexibility and granularity when using replication within a storage system (Hitachi ShadowImage® Replication software, Hitachi Copy-on-Write Snapshot software) or across storage systems (Hitachi TrueCopy® Synchronous or Extended Distance software).
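As a worked example of combining these factors, the sketch below estimates a rough LU size for one child partition. The function is hypothetical; the 1.5-times page file rule and the memory-plus-200MB virtual machine file rule come from Table 2, and snapshot headroom is deliberately left out.

```python
def estimate_lu_size_gb(guest_os_gb, ram_gb, app_data_gb=0.0):
    """Rough LU size for one child partition, before snapshot headroom."""
    page_file_gb = 1.5 * ram_gb  # recommended guest page file size (Table 2)
    vm_files_gb = ram_gb + 0.2   # virtual machine files: memory size plus 200MB
    return guest_os_gb + page_file_gb + vm_files_gb + app_data_gb

# Example: 20GB guest OS, 4GB of RAM, 50GB of application data.
print(round(estimate_lu_size_gb(20, 4, 50), 1))  # 80.2
```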

Number of Child Partitions per VHD LU, per RAID Group

If you decide to run multiple child partitions on a single VHD LU, understand that the number of child partitions that can run simultaneously on a VHD LU depends on the aggregated capacity and performance requirements of the child partitions. Because all LUs on a particular RAID group share the performance and capacity offered by the RAID group, Hitachi Data Systems recommends dedicating RAID groups to a Hyper-V host or a group of Hyper-V hosts (for example, a Hyper-V failover cluster) and not assigning LUs from the same RAID group to other non-Hyper-V hosts. This prevents the Hyper-V I/O from affecting or being affected by other applications and LUs on the same RAID group and makes management simpler.

Follow these best practices:

• Create and dedicate RAID groups to your Hyper-V hosts.

• Always present LUs with the same H-LUN if they are shared with multiple hosts.

• Create VHDs on the LUs as needed.

• Monitor and measure the capacity and performance usage of the RAID group with Hitachi Tuning Manager software and Hitachi Performance Monitor software.

Monitoring and measuring the capacity and performance usage of the RAID group results in one of the following cases:

• If all of the capacity offered by the RAID group is used but performance of the RAID group is still good, add RAID groups and therefore more capacity. In this case, consider migrating the LUs to a different RAID group with less performance using Hitachi Modular Volume Migration or Hitachi Tiered Storage Manager.

• If all of the performance offered by the RAID group is used but capacity is still available, do not use the remaining capacity by creating more LUs because this leads to even more competition on the RAID group and overall performance for the child partitions residing on this RAID group is affected. In this case, leave the capacity unused and add more RAID groups and therefore more performance resources. Also consider migrating the LUs to a different RAID group with better performance.

In a real environment, it is not possible to use 100 percent of both capacity and performance of a RAID group, but the usage ratio can be optimized by actively monitoring the systems and moving data to the appropriate storage tier if needed using Hitachi Modular Volume Migration or Hitachi Tiered Storage Manager. An automated solution using these applications from the Hitachi Storage Command Suite helps to reduce the administrative overhead and optimize storage utilization.
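A minimal sketch of this monitoring logic follows, assuming utilization percentages gathered from Hitachi Tuning Manager or Hitachi Performance Monitor data; the function, its inputs and the 90 percent threshold are illustrative assumptions, not product behavior.

```python
def raid_group_action(capacity_used_pct, performance_used_pct, full=90.0):
    """Suggest an action for a RAID group based on the two cases above."""
    capacity_full = capacity_used_pct >= full
    performance_full = performance_used_pct >= full
    if capacity_full and not performance_full:
        return ("add RAID groups for capacity; consider migrating LUs to a "
                "RAID group with less performance")
    if performance_full and not capacity_full:
        return ("leave the remaining capacity unused and add RAID groups; "
                "consider migrating LUs to a RAID group with better performance")
    if capacity_full and performance_full:
        return "add RAID groups: both capacity and performance are exhausted"
    return "no action: capacity and performance headroom remain"

print(raid_group_action(95, 40))  # add RAID groups for capacity; ...
```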

Hyper-V Protection Strategies

A successful Hyper-V deployment requires careful consideration of protection strategies for backups, disaster recovery and quick migration.

Backups

Regularly scheduled backups of the Hyper-V servers and the data that resides on the child partitions under Hyper-V are an important part of any Hyper-V protection plan. With Hyper-V, the backup and protection process involves both the Hyper-V parent partition and the child partitions that execute under Hyper-V, along with the applications that reside within the child partition.

When protecting child partitions, two protection strategies are available. You can create application-aware backups of each child partition as if it were hosted on an individual physical server, or you can back up the parent partition at a point in time, which also creates a backup of the child partitions that were executing on the parent partition.

When backing up the parent partition, it’s important to keep the state of the physical server in mind. For example, if a backup of the parent partition is created while two child partitions are executing applications, the backup is a point-in-time copy of the parent and the child partitions. Any applications that are executing in the child partitions are unaware that a backup occurred. This means that applications such as Exchange or SQL cannot freeze writes to the databases, set the appropriate application checkpoints, or flush the transaction logs.

Best practice is to perform application-aware backups in the child partitions.


Storage Replication

Another important part of a protection strategy is storage replication. The 2000 family has built-in storage replication features such as ShadowImage Replication software and Copy-on-Write Snapshot software that can provide rapid recovery and backup in a Hyper-V environment. As more and more child partitions are placed on a physical Hyper-V server, the resources within the Hyper-V server might become constrained, thus affecting the backup window. By using solutions such as ShadowImage Replication software on 2000 family storage systems, backups can be created with little effect on the Hyper-V host. These ShadowImage Replication software copies can also be backed up to tape or disk. This means that child partitions hosted by Hyper-V can be recovered very quickly.

Hyper-V Quick Migration

Hyper-V quick migration provides a solution for both planned and unplanned downtime. For planned downtime, quick migration allows virtualized workloads to be moved quickly so that the underlying physical hardware can be serviced. This is the most common scenario when considering the use of quick migration.

Quick migration requires the use of failover clustering because the storage must be shared between the physical Hyper-V nodes. For a planned migration, quick migration saves the state of a running child partition (writing the memory of the original server to disk on shared storage), moves the storage connectivity from one physical server to another, and then restores the partition on the second server (reading from disk on shared storage into memory on the new server).

Consider the following when configuring disks for quick migration:

• Pass-through disks:

– Require that the virtual machine configuration file be stored on a separate LU from the LUs that host the data files. Normally this is a VHD LU that is presented to the Hyper-V parent partition.

– Do not allow any other child partitions to share the virtual machine configuration file or VHD LU. Sharing the virtual machine configuration file or VHD LU among child partitions can lead to corruption of data.

– For a child partition with a large number of LUs, the 26 drive letter limitation might become an issue. Pass-through disks alleviate this. Pass-through disks do not require a drive letter because they are offline to the parent.

• VHD disks:

– Best practice is to use one child partition per LU.

– It is possible to provision more than one child partition per LU, but remember that all child partitions on the VHD LU fail over as a unit.

• Best practice is to leverage MPIO and Hitachi Dynamic Link Manager software for path availability and improved I/O throughput within the Hyper-V cluster.

Hitachi Storage Cluster Solution

Integrating Hyper-V with Adaptable Modular Storage 2000 family replication solutions provides high availability for disaster recovery scenarios. This solution leverages the Quick Migration feature of Hyper-V to allow for the planned and unplanned recovery of child partitions under Hyper-V.

Disaster recovery solutions consist of remote LU replication between two sites, with automated failover of child partition resources to the secondary site in the event that the main site goes down or is otherwise unavailable.

Data replication and control are handled by the Hitachi Storage Cluster (HSC) software and the storage system controllers. This has little effect on the applications running in the child partition and is fully automated. Consistency groups and time-stamped writes ensure database integrity.

Child partitions run as cluster resources within the Hyper-V cluster. If a node within the cluster that is hosting the child partition fails, the child partition automatically fails over to an available node. The child partitions can be quickly moved between cluster nodes to allow for planned and unplanned outages. With HSC, the replicated LUs and the child partition are automatically brought online.

Figure 7 illustrates how multiple child partitions and their associated applications can be made highly available using HSC.

Figure 7. Hitachi Storage Cluster for Hyper-V Solution

Hyper-V Performance Monitoring

A complete, end-to-end picture of your Hyper-V server environment and continual monitoring of capacity and performance are key components of a sound Hyper-V management strategy. The principles of analyzing the performance of a guest partition installed under Hyper-V are the same as analyzing the performance of an operating system installed on a physical machine. Monitor servers, operating systems, child partition application instances, databases, database applications, storage and IP networks and the 2000 family storage system using tools such as Windows Performance Monitor (PerfMon) and Hitachi Performance Monitor feature.

Note that while PerfMon provides good overall I/O information about the Hyper-V parent and the guests under the Hyper-V parent, it cannot identify all possible bottlenecks in an environment. For a good overall understanding of the I/O profile of a Hyper-V parent and its guest partitions, monitor the storage system’s performance with Hitachi Performance Monitor feature. Combining data from at least two performance-monitoring tools provides a more complete picture of the Hyper-V environment. Remember that PerfMon is a per-server monitoring tool and cannot provide a holistic view of the storage system. For a complete view, use PerfMon to monitor all servers that are sharing a RAID group.

Windows Performance Monitor

PerfMon is a Windows-based application that allows administrators to monitor the performance of a system using counters or graphs, in logs or as alerts on the local or a remote host. The best indicator of disk performance on a Hyper-V parent operating system is obtained by using the \LogicalDisk(*)\Avg. Disk sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write performance monitor counters. These counters measure the latency of read and write operations as seen by the operating system. In general, average disk latency response times greater than 20ms on a disk are cause for concern. For more information about monitoring Hyper-V related counters, see Microsoft® TechNet’s Measuring Performance on Hyper-V article.
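One way to sample these counters from a script is the Windows typeperf utility, which ships with Windows Server 2008. The following Python sketch is a hedged example: it collects ten samples of the read latency counter and flags an average above the 20ms guideline. The counter instance, sample count and simple CSV parsing are assumptions for illustration, not a supported monitoring tool.

```python
import csv
import subprocess

COUNTER = r"\LogicalDisk(_Total)\Avg. Disk sec/Read"
WARN_SECONDS = 0.020  # 20ms latency guideline from the text

# typeperf writes CSV to stdout; -sc 10 requests ten samples.
output = subprocess.run(
    ["typeperf", COUNTER, "-sc", "10"],
    capture_output=True, text=True, check=True,
).stdout

rows = list(csv.reader(output.strip().splitlines()))
samples = [float(row[1]) for row in rows[1:] if len(row) > 1 and row[1].strip()]
if not samples:
    raise RuntimeError("no samples parsed from typeperf output")

average = sum(samples) / len(samples)
print("average read latency: %.1fms" % (average * 1000))
if average > WARN_SECONDS:
    print("warning: average latency exceeds the 20ms guideline")
```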


Hitachi Performance Monitor Feature

Hitachi Performance Monitor feature is a controller-based software application, enabled through Hitachi Storage Navigator Modular 2 software, that monitors the performance of RAID groups, logical units and other elements of the disk subsystem while tracking utilization rates of resources such as hard disk drives and processors. Information is displayed using line graphs in the Performance Monitor windows, as shown in Figure 8, and can also be saved in comma-separated value (.csv) files.

Figure 8. Hitachi Performance Monitor Feature

You can measure utilization rates of disk subsystem resources, such as load on disks and ports, with Hitachi Performance Monitor feature. When a problem such as slow response occurs in a host, an administrator can use Hitachi Performance Monitor feature to quickly determine if the disk subsystem is the source of the problem.

Hitachi Tuning Manager Software

Hitachi Tuning Manager software enables you to proactively monitor, manage and plan the performance and capacity of the Hitachi modular storage that is attached to your Hyper-V servers. Hitachi Tuning Manager software consolidates statistical performance data from the entire storage path. It collects performance and capacity data from the operating system, switch ports, storage ports on the storage system, RAID groups and LUs and provides the administrator a complete performance picture. It provides historical, current and forecast views of these metrics. For more information about Hitachi Tuning Manager software, see the Hitachi Data Systems support portal.


Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA Contact Information: + 1 408 970 1000 www.hds.com / [email protected]

Asia Pacific and Americas 750 Central Expressway, Santa Clara, California 95050-2627 USA Contact Information: + 1 408 970 1000 www.hds.com / [email protected]

Europe Headquarters Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom Contact Information: + 44 (0) 1753 618000 www.hds.com / [email protected]

Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries. ShadowImage and TrueCopy are registered trademarks of Hitachi Data Systems.

All other trademarks, service marks and company names mentioned in this document or website are properties of their respective owners.

Notice: This document is for information purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that may be configuration dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for information on feature and product availability.

© Hitachi Data Systems Corporation 2009. All Rights Reserved.

AS-005-01 April 2009

