VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization

IBM Corporation 2011

Lab report: vSphere 5, vMotion, HA, SDRS, I/O Control, vCenter, VAAI and VASA


Contents

1. Executive summary

2. Introduction

2.1. VMware vSphere 5 features and benefits

2.2. Introduction to new XIV Gen3 features

2.3. Testing goals

2.4. Description of the equipment

3. Test structure

3.1. Hardware setup

3.1.1. Fibre Channel configuration

3.1.2. iSCSI configuration

3.1.3. VMware vSphere

3.2. VMware 5.0 environment software setup and installation

3.2.1. VMware 5.0 Configuration

3.2.2. VM OS software

3.2.3. Testing software

4. Test procedures

4.1. Iometer for performance testing

4.1.1. Disk and network controller performance

4.1.2. Bandwidth and latency capabilities of buses

4.2. vSphere vMotion

4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS)

4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN)

4.3. vSphere High Availability

4.4. vSphere Storage Distributed Resource Scheduler

4.5. Profile-Driven Storage

4.6. vSphere Storage I/O Control

4.7. vCenter

4.8. VMware vSphere Storage API Program

4.8.1. vSphere Storage APIs for Array Integration (VAAI)

• Full copy, Hardware-Assisted Locking, and Block Zeroing

4.8.2. vStorage APIs for Storage Awareness (VASA)

5. Conclusion

Appendix A (Iometer for performance testing)

Appendix B (vSphere vMotion)

Appendix C (Transfer times of VMs to XIV LUNs (SAN))

Appendix D (vSphere High Availability)

Appendix E (vSphere Storage DRS)

Appendix F (Profile-Driven Storage)

Appendix G (Storage I/O Control)

Trademarks and special notices


1. Executive summary

The value of server virtualization is well understood today. Customers implement server virtualization to increase server utilization, handle peak loads efficiently, decrease total cost of ownership (TCO), and streamline server landscapes.

Similarly, storage virtualization helps to address the same challenges as server virtualization. Storage virtualization also expands beyond the boundaries of physical resources and helps to control how IT infrastructures adjust to rapidly changing business demands. Storage virtualization benefits customers through improved physical resource utilization and improved hardware efficiency, as well as reduced power and cooling expenses. In addition, consolidation of resources obtained through virtualization offers measurable returns on investment for today’s businesses. Finally, virtualization serves as one of the key enablers of cloud solutions, which are designed to deliver services economically and on demand.

The features of VMware vSphere 5.0 and IBM XIV® Gen3 storage together build a powerful end-to-end virtualized infrastructure, covering not only servers and storage but also end-to-end infrastructure management, leading to more efficient and higher-performing applications. VMware is a leading provider of virtualization software. VMware vSphere 5 is the first version of VMware vSphere built exclusively on ESXi, a hypervisor purpose-built for virtualization that runs independently of a general-purpose operating system. With an ultra-thin architecture, ESXi delivers industry-leading performance, reliability, and scalability, all within a footprint of less than 100 MB. The result is streamlined deployment and configuration, simplified patching and updating, and better security.

The IBM XIV Storage System Gen3 uses an advanced storage fabric architecture built for today’s dynamic data centers with an eye towards tomorrow. With industry-leading storage software and a high-speed InfiniBand fabric, the XIV Gen3 delivers the storage features and performance demanded in VMware infrastructures, including:

• Automation and simplicity

• Multi-level integration with vSphere

• Centralized management in vCenter

• vStorage APIs for Array Integration (VAAI)

• vStorage APIs for Storage Awareness integration (VASA)

• Storage Replication Adapter (SRA) for Site Recovery Manager (SRM)

• Engineering-level collaboration for vSphere 5, and beyond

A global partnership between IBM and VMware, coupled with the forward-thinking architecture of the IBM XIV Gen3 Storage System, provides a solid foundation for virtual infrastructures today and into the future. On top of this solid foundation, VMware vSphere 5.0 and IBM XIV Gen3 complement each other to create a strong virtualization environment. Evidence of how seamlessly these features work together to provide this powerful virtualized environment is found in the following sections. Testing details can be found in Appendices A through G.

2. Introduction

2.1. VMware vSphere 5 features and benefits

Enhancements and new features in VMware vSphere 5 are designed to help deliver improved application performance and availability for all business-critical applications. VMware vSphere 5 introduces advanced automation capabilities including:

• Four times larger virtual machines (VMs) scale to support any application. With VMware vSphere 5, VMware helps make it easier for customers to virtualize. VMware vSphere 5 is capable of running VMs four times more powerful than VMware vSphere 4, supporting up to 1 terabyte of memory and up to 32 virtual processors. These VMs are able to process in excess of 1 million I/O operations per second, helping surpass current requirements of the most resource-intensive applications. For example, VMware vSphere 5 is able to support a database that processes more than two billion transactions per day.

• Updates to vSphere High Availability (HA) offer reliable protection against unplanned downtime. VMware vSphere 5 features a new HA architecture that is easier to set up than with the previous vSphere 4.1 release (customers can get their applications set up with HA in minutes), is more scalable, and offers availability guarantees.

• Intelligent Policy Management: Three new automation advancements deliver cloud agility. VMware vSphere 5 introduces three new features that automate datacenter resource management to help IT respond to the business faster while reducing operating expenses. These features deliver intelligent policy management: A “set it and forget it” approach to data center resource management. Customers define the policy and establish the operating parameters, and VMware vSphere 5 does the rest. VMware vSphere 5 intelligent policy management features include:

• Auto-Deploy enables automatic server deployment “on the fly” and, for example, reduces the time that it takes to deploy a data center with 40 new servers from 20 hours to 10 minutes. After the servers are up and running, Auto-Deploy also automates the patching process, making it possible to instantly apply patches to many servers at once.

• Profile-Driven Storage reduces the number of steps required to select storage resources by grouping storage according to user-defined policies (for example, gold, silver, bronze, and so on). During the provisioning process, customers simply select a level of service for the VM, and VMware vSphere automatically uses the storage resources that best align with that level of service.

• Storage Distributed Resource Scheduler (DRS) extends the automated load-balancing capabilities that VMware first introduced in 2006 with DRS to include storage characteristics. After a customer has set the storage policy of a VM, Storage DRS automatically manages the placement and balancing of the VM across storage resources. By automating ongoing resource allocation, Storage DRS eliminates the need for IT to monitor or intervene, while ensuring the VM maintains the service level defined by its policy.

2.2. Introduction to new XIV Gen3 features

The XIV Storage System has achieved rapid market success with thousands of installations in diverse industries worldwide, including financial services, healthcare, energy, education, and manufacturing. IBM XIV integrates easily with virtualization, email, database, analytics, and data protection solutions from IBM, SAP, Oracle, SAS, VMware, Symantec, and others. The XIV Gen3 model exemplifies the XIV series’ evolutionary capability: Each hardware component has been upgraded with the latest technologies, while the core of the architecture remains intact. The XIV Gen3 model gives applications a tremendous performance boost, helping customers meet increasing demands with fewer servers and networks. Features common to the XIV Storage System series enable it to:

• Self-tune and deliver consistently high performance with automated balanced data placement across all key system resources, eliminating hot spots

• Provide unprecedented data protection and availability through active-active N+1 redundancy of system components and rapid self-healing (< 60 minutes for 2 TB drives)

• Enable unmatched ease of management through automated tasks and an intuitive user interface

• Help promote low TCO enabled by high-density disks and optimal utilization

• Offer seamless and easy-to-use integrated application solutions with the leading host platforms and business applications

XIV Gen3 adds ultra-performance capabilities to the XIV series compared to its previous generation by providing:

• Up to 4 times the throughput, cutting time and boosting performance for business intelligence, archiving and other extremely demanding applications

• Up to 3 times speedier response time, enabling faster transaction processing and greater scalability with online transaction processing (OLTP), database and email applications

• Power to serve even more applications from a single system with a comprehensive hardware upgrade that includes InfiniBand inter-module connect, larger cache, faster disk controllers, increased processing power, and more Fibre Channel (FC) and iSCSI connectivity.

• Option for future upgradeability to solid-state drive (SSD) caching for breakthrough SSD performance levels at a fraction of typical SSD storage costs, combined with very high-density drives helping achieve even lower TCO.


2.3. Testing goals

The purpose of the following test cases is to show that VMware vSphere 5 and the IBM XIV Storage System Gen3 storage solution seamlessly complement each other as an efficient storage virtualization solution. The testing in this paper is for proof of concept and should not be used as a performance statement.

2.4. Description of the equipment

The test setup utilizes the following IBM equipment:

• (3) IBM System x® 3650 M3 servers

• (2) IBM System Storage® SAN24B-4 Express switches

• (3) Qlogic QLE2562 HBAs

• IBM XIV Storage System Gen3 series hardware, Firmware Version 11.0

3. Test structure

3.1. Hardware setup

Figure 1 shows the vSphere 5.0 System x and XIV reference architecture diagram.



Figure 1. vSphere 5.0 System x and XIV reference architecture diagram

3.1.1. Fibre Channel configuration

• (3) IBM x3650 M3 servers

• (2) SAN24B-4 Express (8 Gbps) (SAN A and SAN B)

• (3) QLogic QLE2562 HBAs (8 Gbps)

3.1.2. iSCSI configuration

• (2) IBM x3650 M3 servers

• (1) 1 Gbps Ethernet switch

3.1.3. VMware vSphere

• (1) VMware vSphere VM (Microsoft® Windows® 2008 R2)

3.2. VMware 5.0 environment software setup and installation

3.2.1. VMware 5.0 Configuration

• VMware 5.0 Enterprise Plus

3.2.2. VM OS software

• Windows 2008 R2

• Red Hat Enterprise Linux (RHEL) 6.0


3.2.3. Testing software

• Iometer for I/O testing

Note: Iometer is downloaded from www.iometer.org and distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel modules, as well as other future independent components, are distributed under the terms of the GNU Public License.

4. Test procedures

4.1. Iometer for performance testing

When implementing storage, whether the storage is directly attached to a server (direct-attach storage or DAS), connected to a file-based network (network-attached storage or NAS), or resides on its own dedicated storage network (storage area network or SAN — Fibre Channel or iSCSI), it is important to understand storage performance. Without this information, managing growth becomes difficult. Iometer can help deliver this critical performance data to help you make better decisions about the storage needed or whether the current storage solution can handle an increased load.

4.1.1. Disk and network controller performance

The following two tests show the possible throughput of a three-VM setup and the IBM XIV Gen3 storage array configuration without any special tuning. See “Appendix A (Iometer for performance testing)” for test procedures.

Test object: Performance of disk and network controllers
Setup: (3) VMs, each with (1) processor and 4 GB memory; (3) 40 GB XIV LUNs for the test
Test steps: Install Windows 2008 R2; install Iometer; set up the test with 40 workers, 8 KB block size, 30% writes and 70% reads; run time 1 hour. See “Appendix A (Iometer for performance testing)”
Results: VM (1) 76,737 IOPS; VM (2) 77,296 IOPS; VM (3) 72,248 IOPS
Test notes: This is not a performance measurement test.

4.1.2. Bandwidth and latency capabilities of buses

Test object: Bandwidth and latency capabilities of buses
Setup: (3) VMs, each with (1) processor and 4 GB memory; (3) 40 GB XIV LUNs for the test
Test steps: Install Windows 2008 R2; install Iometer; set up the test with 40 workers, 8 KB block size, 30% writes and 70% reads; run time 1 hour. See “Appendix A (Iometer for performance testing)”
Results: VM (1) 588 MBps, 0.4641 ms average latency; VM (2) 603 MBps, 0.0257 ms average latency; VM (3) 565 MBps, 0.8856 ms average latency
Test notes: This is not a performance measurement test.

The Iometer testing shows that the IBM XIV Gen3 performed exceptionally well, sustaining more than 70,000 IOPS per VM with average latency well below 1 ms. Figure 2 shows the Iometer-measured performance results for VM1.
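The IOPS and bandwidth results are two views of the same workload: at a fixed block size, throughput is simply IOPS multiplied by block size. A quick sketch using the reported VM1 numbers (the 8 KB block size comes from the test setup; the small gap versus the reported 588 MBps comes from rounding and from whether MB or MiB units are used):

```python
def throughput_mbps(iops, block_size_bytes):
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_bytes / 1_000_000

# VM (1) from the Iometer results: 76,737 IOPS at an 8 KB (8192-byte) block size
mb_per_s = throughput_mbps(76737, 8 * 1024)
print(round(mb_per_s))  # ~629 MB/s, the same order as the reported 588 MBps
```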

Figure 2. Iometer VM1 results for 40 workers

4.2. vSphere vMotion

VMware vSphere vMotion technology enables live migration of VMs from server to server. This test demonstrates the difference in transfer times between moving a VM to local server disks (DAS) and moving it to the IBM XIV Gen3 (SAN). The demonstration also shows that the XIV Gen3 can move data at computer bus speeds.

4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS)

Test object: Transfer time of VMs to local disk
Setup: VM size 14.44 GB
Test steps: See “Appendix B (vSphere vMotion)”
Results: 10 minutes 3 seconds
Test notes: None

4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN)

Test object: Transfer time of VMs to XIV LUN
Setup: VM size 14.44 GB
Test steps: See “Appendix C (Transfer times of VMs to XIV LUNs (SAN))”
Results: 1 minute 31 seconds
Test notes: None

Overall test results: For the tested VMs, transferring the data from the server to the XIV LUN was approximately 6.6 times faster than transferring it to the local disk for the tested configuration, demonstrating the synergy between XIV and vSphere vMotion. See “Appendix B (vSphere vMotion)” and “Appendix C (Transfer times of VMs to XIV LUNs (SAN))” for test details.
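The speedup follows directly from the two measured transfer times (10 min 3 s to DAS versus 1 min 31 s to the XIV LUN, i.e. 603 s versus 91 s):

```python
def to_seconds(minutes, seconds):
    """Convert a minutes-and-seconds reading to total seconds."""
    return minutes * 60 + seconds

das_time = to_seconds(10, 3)   # DAS transfer: 10 min 3 s
san_time = to_seconds(1, 31)   # XIV LUN transfer: 1 min 31 s
speedup = das_time / san_time
print(f"{speedup:.1f}x")  # 6.6x
```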

4.3. vSphere High Availability

The vSphere High Availability (HA) feature delivers the reliability and dependability needed by many applications running on virtual machines, independent of the operating system and applications running within them. vSphere HA provides uniform, cost-effective failover protection against hardware and operating system failures within VMware virtualized IT environments.

Test object: Failover of an ESX server
Setup: See “Appendix D (vSphere High Availability)”
Test steps: See “Appendix D (vSphere High Availability)”
Results: When encountering a test-induced failure, the VM moved to a new ESXi host and the storage seamlessly moved with it.
Test notes: None

This test shows that the High Availability feature works seamlessly with the IBM XIV Gen3: on failure, the VM automatically moves to a new ESXi host and the storage seamlessly moves with it, as shown in Figure 3. See “Appendix D (vSphere High Availability)” for test details.


Figure 3. Demonstrating the HA feature: the VM moves to a new ESXi host along with its storage

4.4. vSphere Storage Distributed Resource Scheduler

The vSphere Storage Distributed Resource Scheduler (SDRS) aggregates storage resources from several storage volumes into a single pool and simplifies storage management. In addition to intelligently placing workloads on storage volumes during provisioning based on the available storage resources, SDRS performs ongoing load balancing between volumes to ensure that space and I/O bottlenecks are avoided, per predefined rules that reflect business needs and changing priorities.

Test object: Testing aggregated storage resources of several storage volumes
Setup / test steps: See “Appendix E (vSphere Storage DRS)”
Results: Passed; storage bottleneck avoided
Test notes: None

When run without SDRS, a storage bottleneck occurs. When SDRS is running, the system performs a task to load balance the disk. An imbalance on the datastore triggers the Storage DRS recommendation to migrate a virtual machine. Storage DRS makes multiple recommendations to solve this datastore imbalance. See “Appendix E (vSphere Storage DRS)” for test details.


Figure 4. Storage DRS recommendations solve a datastore imbalance.

4.5. Profile-Driven Storage

Profile-Driven Storage enables easy and accurate selection of the correct datastore on which to deploy VMs. The selection of the datastore is based on the capabilities of that datastore. Then, throughout the lifecycle of the VM, a database administrator (DBA) can manually check that the underlying storage is still compatible, that is, that it has the correct capabilities. This means that if the VM is cold-migrated or migrated using Storage vMotion, administrators can ensure that the VM moves to storage that meets the same characteristics and requirements as the original source “profile.” If the VM is moved without checking the capabilities of the destination storage, the compliance of the VM’s physical storage characteristics can still be checked from the user interface at any time, and the administrator can take corrective action if the VM is no longer on a datastore that meets its storage requirements.

Test object: Deploying VMs on Profile-Driven Storage
Setup / test steps: See “Appendix F (Profile-Driven Storage)”
Result: This test demonstrates that with Profile-Driven Storage, a user is able to ensure that physical storage characteristics are consistent across migrations of a VM.
Test notes: This test shows that the Profile-Driven Storage feature works with IBM XIV Gen3 to help ensure VM storage profiles meet requirements, as shown in Figures 5 and 6. See “Appendix F (Profile-Driven Storage)” for test details.


Figure 5. The VM storage profile is now compliant.

Figure 6. VM storage profile

4.6. vSphere Storage I/O Control

VMware vSphere 5.0 extends Storage I/O Control to provide cluster-wide I/O

sharing and limits for datastores. This feature helps ensure that no single virtual

machine should be able to create a bottleneck in any IT environment regardless

of the type of shared storage used. Storage I/O Control automatically throttles a

VM that is consuming a disparate amount of I/O bandwidth when the configured

latency threshold has been exceeded. This enables other virtual machines using

the same datastore to receive their fair share of I/O performance. Storage DRS

and Storage I/O Control work together to prevent deprecation of service-level

agreements while providing long- term and short-term I/O distribution balance.

Test object: Test cluster-wide I/O sharing and limits for datastores
Setup / test steps: See “Appendix G (Storage I/O Control)” for test details
Results: Observed a gradual increase in IOPS for the VM with 2000 shares and a gradual decrease in IOPS for the VM with 1000 shares.
Test notes: More resources needed to be allocated to one VM to balance the workload; VMware throttled the I/O of the higher-IOPS VM to give more I/O to the slower VM.

This test shows that the Storage I/O Control feature works within VMware 5.0 with no changes to the IBM XIV Gen3. See “Appendix G (Storage I/O Control)” for test details.
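Under contention, Storage I/O Control divides a datastore's I/O budget in proportion to each VM's configured shares. A rough sketch of the expected split for the 2000-share and 1000-share VMs in this test (the VM names and the 30,000-IOPS total are illustrative assumptions, not measured values):

```python
def fair_share(shares, total_iops):
    """Split a datastore's IOPS budget in proportion to per-VM shares."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

# Shares as configured in the test; total_iops is a hypothetical figure
split = fair_share({"vm_a": 2000, "vm_b": 1000}, total_iops=30000)
print(split)  # vm_a is entitled to 20000.0 IOPS, vm_b to 10000.0
```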

4.7. vCenter

VMware vCenter Server is a tool that manages multiple host servers that run VMs. It enables the provisioning of new server VMs, the migration of VMs between host servers, and the creation of a library of standardized VM templates. You can install plug-ins to add several other features; for example, VASA for discovery of storage topology, capability, event, and alert status, or SRM for disaster-recovery automation exploiting storage business-continuity features.

4.8. VMware vSphere Storage API Program

VMware vSphere provides an API and software development kit (SDK) environment to allow customers and independent software vendors to enhance and extend the functionality and control of vSphere. VMware has created several storage virtualization APIs that help address storage functionality and control.

4.8.1. vSphere Storage APIs for Array Integration (VAAI)

Virtualization administrators look for ways to improve the scalability, performance, and efficiency of their vSphere infrastructure. One way is by utilizing storage integration with VMware vStorage APIs for Array Integration (VAAI). VAAI is a set of APIs, or primitives, that allow vSphere infrastructures to offload the processing of data-related tasks that can burden a VMware ESX server. Utilizing a storage platform like XIV with VAAI enabled can provide significant improvements in vSphere performance, scalability, and availability. This capability was initially a private API requiring a plug-in in vSphere 4.1, but with vSphere 5.0 it is now a T10 SCSI standard. The VAAI driver for XIV enables the following primitives:

• Full copy (also known as hardware copy offload). Benefit: Considerable boost in system performance and fast completion of copy operations; minimizes host processing and network traffic.

• Hardware-assisted locking (also known as atomic test and set): Replacement of the SCSI-2 lock/reservation in Virtual Machine File System (VMFS). Benefit: Significantly improves scalability and performance.

• Block zeroing (also known as write same). Benefit: Reduces the amount of processor effort and input/output operations per second (IOPS) required to write zeroes across an entire EagerZeroedThick (EZT) Virtual Machine Disk (VMDK).

The XIV Storage System now provides full support for VAAI. The following sections describe each of these primitives.

Full copy

Tasks such as VM provisioning and VM migration are part of the everyday activities of most VMware administrators. As the virtual environment continues to scale, it is important to monitor the overall impact that these activities have on the VMware infrastructure. Toggle hardware-assisted copy by changing the DataMover.HardwareAcceleratedMove parameter in the Advanced Settings tab in vCenter (set to 1 to enable, 0 to disable). When the value for hardware acceleration is 1, the data path changes for tasks such as Storage vMotion, as illustrated in Figure 7.


Figure 7: VAAI Full copy primitive

In this instance, the ESX server is removed from the data path of the data copy when hardware copy is enabled. Removing copy transactions from the server workload greatly increases the speed of these copy functions while reducing the impact to the ESX server. How effective is the VAAI full copy offload process?

During IBM lab testing, data retrieved from the VMware monitoring tool esxtop showed that commands per second on the ESX host were reduced by a factor of 10. Copy-time reduction varies depending on the VM but is usually significant (over 50% for most profiles).


A few examples of this performance boost at customer data centers are shown in Table 1.

Customer            Test     Before VAAI   After VAAI   Time reduction
Major financial     2 VMs    433 sec       180 sec      59%
Electric company    2 VMs    944 sec       517 sec      45%
Petroleum company   40 VMs   1 hour        20 min       67%

Table 1: Field results for VAAI full copy

Full copy effect: Thousands of commands and IOPS on the ESX server are freed up for other tasks, promoting greater scalability.
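The time-reduction column in Table 1 follows directly from the before/after times as (before - after) / before; checking the three field results (the first row lands at 58% with plain rounding, which the report rounds to 59%):

```python
def reduction_pct(before_s, after_s):
    """Percentage time reduction from 'before' to 'after', rounded."""
    return round(100 * (before_s - after_s) / before_s)

print(reduction_pct(433, 180))       # 58 (reported as 59%)
print(reduction_pct(944, 517))       # 45
print(reduction_pct(3600, 20 * 60))  # 67 (1 hour down to 20 minutes)
```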

Hardware-assisted locking (atomic test and set)

Just as important as the demonstrated effect of hardware-assisted copy, the hardware-assisted locking primitive also greatly enhances VMware cluster scalability and disk operations for the clustered file system (VMFS), with tighter granularity and efficiency. It is important to understand why locking occurs in the first place. For block storage environments, VMware datastores are formatted with VMFS. VMFS is a clustered file system that uses Small Computer System Interface (SCSI) reservations to handle distributed lock management. When an ESX server changes the metadata of the file system, the SCSI reservation process obtains exclusive access to the logical unit number (LUN), ensuring that shared resources do not overlap with other connected ESX hosts.

A SCSI reservation is created on VMFS when (not a complete list):

• Virtual Machine Disk (VMDK) is first created

• VMDK is deleted

• VMDK is migrated

• VMDK is created via a template

• A template is created from a VMDK

• Creating or deleting VM snapshots

• VM is switched on or off

Although normal I/O operations do not require this mechanism, these boundary conditions have become more common as features such as vMotion with Distributed Resource Scheduler (DRS) are used more frequently. This SCSI reservation design led early storage area network (SAN) best practices for vSphere to dictate a limit on cluster size for block storage (about 8 to 10 ESX hosts). With hardware-assisted locking, as shown in Figure 8, LUN lock processing is transferred to the storage system. This reduces the number of commands required to acquire a lock, allows locks to be more granular, and leads to better scalability of the virtual infrastructure.


Figure 8: VAAI Atomic test and set primitive

Hardware-assisted locking effect: Hardware-assisted locking increases the number of VMs per datastore, the number of ESX servers per datastore, and overall performance. Coupled with the 60 processors and 360 GB of cache memory of the XIV Storage System Gen3, this functionality helps provide better consolidation, density, and performance capabilities for the most demanding virtual infrastructures.
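The granularity difference can be sketched abstractly. In the toy model below (an illustrative assumption, not VMware or XIV code), a SCSI reservation is modeled as a single lock covering the whole LUN, while ATS is modeled as independent locks on individual metadata regions, so hosts touching different regions no longer block one another.

```python
import threading

class Lun:
    """Toy model of VMFS locking on one LUN (illustrative only)."""
    def __init__(self, n_regions):
        # SCSI reservation: one lock serializes all metadata updates.
        self.whole_lun = threading.Lock()
        # ATS: one fine-grained lock per metadata region.
        self.regions = [threading.Lock() for _ in range(n_regions)]

lun = Lun(4)

# With ATS, two hosts updating different regions proceed in parallel:
assert lun.regions[0].acquire(blocking=False)
assert lun.regions[1].acquire(blocking=False)

# With a whole-LUN reservation, the second host must wait:
assert lun.whole_lun.acquire(blocking=False)
assert not lun.whole_lun.acquire(blocking=False)
```

The point of the sketch is only that contention shrinks from one lock per LUN to one lock per touched resource, which is why larger clusters per datastore become practical.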

Block zeroing (write same)

Block zeroing, as shown in Figure 9, is designed to reduce the amount of processor and storage I/O utilization required to write zeroes across an entire EZT VMDK when it is created. With the block zeroing primitive, zeroing operations for EZT VMDK files are offloaded to the XIV Storage System without the host having to issue numerous individual write commands.



Figure 9. The VAAI write same or block zeroing primitive

Block zeroing effect: Block zeroing reduces overhead and provides better performance when creating EZT virtual disks. With XIV, EZT volumes are available immediately through fast write caching and de-staging. VAAI support on XIV storage systems frees valuable compute resources in the virtual infrastructure: offloading processor- and disk-intensive activities from the ESX server to the storage system provides significant improvements in vSphere performance, scalability, and availability.

Note: Before installing the VAAI driver for the XIV storage system, ensure that microcode 10.2.4a or later is installed. For vSphere 5.x and later, the VAAI driver is no longer required for IBM storage.
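The scale of the offload can be illustrated with a rough command count. Assuming, for the sake of the example, a 40 GB EZT disk zeroed by the host in 1 MB writes (both numbers are assumptions for illustration, not measured values), the host-side path ships tens of thousands of data-carrying writes over the fabric, whereas with the primitive a WRITE SAME command per extent range carries a single zero pattern that the array repeats internally.

```python
# Rough command count for zeroing an EZT VMDK (illustrative assumptions).
disk_bytes = 40 * 1024**3   # a 40 GB virtual disk
write_chunk = 1 * 1024**2   # host zeroing in 1 MB writes (assumed)

host_writes = disk_bytes // write_chunk  # zeroes shipped over the fabric
print(host_writes, "host write commands without the offload")
# With block zeroing, each WRITE SAME covers a whole extent range, so the
# zero data itself crosses the fabric only once per command.
```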

4.8.2. vStorage APIs for Storage Awareness (VASA)

The IBM Storage provider for VMware VASA, illustrated in Figure 10, provides even more real-time information about the XIV Storage System. VMware vStorage APIs for Storage Awareness (VASA) enable vCenter to see the capabilities of storage array LUNs and their corresponding datastores. With visibility into the capabilities underlying a datastore, it is much easier to select the appropriate disk for virtual machine placement. The IBM XIV Storage System VASA provider for VMware vCenter adds:

• Real-time disk status

• Real-time alerts and events from the XIV Storage System to vCenter

• Support for multiple vCenter consoles and multiple XIV Storage Systems

• Continuous monitoring through the storage monitoring service (SMS) for vSphere

• A foundation for future functions such as SDRS and policy-driven storage deployment


Figure 10. VASA block diagram

Adding VASA support, available in vSphere 5, gives VMware and cloud administrators insights that lead to improved availability, performance, and management of the storage infrastructure.

In addition to VASA, the XIV Storage System also provides a vCenter plug-in for vSphere 4 and vSphere 5, which extends management of the storage to provisioning, mapping, and monitoring of replication, snapshots, and capacity.

5. Conclusion

As demonstrated through this set of IBM functional tests, VMware vSphere 5 and the IBM XIV Storage System Gen3 seamlessly complement each other as an efficient storage virtualization solution. Evaluation testing verified that VMware vSphere 5 and the IBM XIV Storage System Gen3 consistently performed as expected. The test setup and results can be further evaluated by exploring Appendices A through G.

The release of VMware vSphere 5 is accompanied by many new and improved features. VMware vSphere Storage Distributed Resource Scheduler (SDRS) aggregates storage resources from several storage volumes into a single pool and simplifies storage management. Profile-Driven Storage enables easy and accurate selection of the correct datastore on which to deploy virtual machines. Storage I/O Control provides cluster-wide I/O shares and limits for datastores. VAAI, integrated into vSphere 5, provides enhanced performance through storage array exploitation without the need for a plug-in. VASA delivers real-time VMware administrator discovery of storage capacity, capabilities, events, and alerts. With these new features, IT professionals can realize more efficient utilization of storage resources to help achieve higher productivity at reduced costs.


For more information regarding VMware vSphere 5 and the IBM XIV Storage System Gen3, see the following links:

VMware: www.vmware.com/products/vsphere/overview.html
IBM XIV Storage System Gen3: ibm.com/systems/storage/disk/xiv/resources.html

Iometer can be downloaded from www.iometer.org/ and is distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel module, as well as other future independent components, is distributed under the terms of the GNU Public License.


Appendix A (Iometer for performance testing)

1. Test objective: Performance of VMware vSphere 5.0 using XIV disk and network controllers.

2. Setup Steps: Create 3 New Virtual Machines on vSphere

2.1. Download Windows 2008 R2 from the Microsoft website: www.microsoft.com/en-us/server-cloud/windows-server/2008-r2-trial.aspx. 2.2. Download the Windows Server 2008 R2 ISO to the vSphere machine. 2.3. On the vSphere 5.0 machine, open vSphere.

2.4. Right Click on ESX server and Select “New Virtual Machine.”

2.5. Select “Name:” Type a name for Virtual Machine; for the tested configuration, the name used was “New Virtual Machine.”


2.6. Select “Next.” 2.7. Select VM Storage.

2.8. Select “Next.” 2.9. Select Guest Operating System “Windows” Version type.


2.10. Select “Next.” 2.11. Select Create Network Connections. 2.12. Set “How many NICs do you want to connect” to “1.” 2.13. Select NIC 1. 2.14. Select the adapter; for this test, “E1000.”

2.15. Select “Next.” 2.16. Select “Virtual disk size:”


2.17. Select “Next.” 2.18. Select “Finish” to finish the VM creation. 2.19. Select the Virtual Machine just created.

2.20. Right Click on VM.

2.21. Select “Open Console.” 2.22. Select “Power on” (Green Arrow).


2.23. Select “CD tool.”

2.24. Select “Connect to ISO image on local disk.”

2.25. Select WS 2008 R2 ISO.


2.26. Select “Open.” 2.27. After the Windows Server installation completes, assign an IP address.

2.28. Right Click on VM. 2.29. Select “Open Console.”

2.30. Run Windows updates and Windows activation. 2.31. Shut down the Windows server. 2.32. Install the test hard drives (XIV Gen3). 2.33. Right click on the VM.


2.34. Select “Edit Settings”

2.35. Select “Add”


2.36. Select “Hard Disk”

2.37. Select “Next” and Select “Next” 2.38. Select “Disk Size” 40 GB 2.39. Select “Specify a datastore or datastore cluster:” 2.40. Select “Browse”


2.41. Select appropriate disk volume - In this case is “XIV-ISVX8_X9”

2.42. Select “OK”

2.43. Select “Next”


2.44. Select “Next”

2.45. Select “Finish” 2.46. Start the VM Select “Power on” (Green Arrow)


2.47. Select “VM.”

2.48. Select “Guest.”


2.49. Select “Send Ctrl+Alt+del.”

2.50. Enter password 2.51. Select VM.

2.52. Select “Guest.”


2.53. Select “Install/Upgrade VMware Tools.”

2.54. To add newly created disk to Windows server, select “Start.”

2.55. Right Click “My Computer.”

2.56. Select “Manage.” 2.57. Select “Offline disk.”


2.58. Right Click, and select “Online.”

2.59. Right click on the volume and select “New Simple Volume.” Log in to the VM.

2.60. Select “Next.” 2.61. Select “Assign Drive” and select “Next.”


2.62. Select “Volume label,” in this case disk 3, and select “Next.”

2.63. Select “Finish.”


2.64. Finished. 2.65. Repeat the above procedure a total of three times to create disk 1, disk 2, and disk 3. 2.66. Connect to the VM with remote desktop (RDP). 2.67. Download Iometer from http://www.iometer.org/doc/downloads.html. 2.68. Download version 2006.07.27 (or the latest version), the “Windows i386 Installer and prebuilt binaries” package.

2.69. Download Iometer to the desktop.

2.70. Double click on Iometer-2006.07.27.win32.i386-setup.

2.71. Select “Run.”

2.72. Select “Next.” 2.73. Read License Agreement.


2.74. Select “I Agree” and select “Next” to choose the components to install. 2.75. Select “Install.”

2.76. Select “Finish” to finish installing Iometer.

3. Test Steps to create 3VMs and test performance via Iometer

3.1. To Run Iometer, select windows “Start.”

3.2. Select “All Programs.”


3.3. Select “Iometer 2006.07.27” or the latest version available.

3.4. Select “Iometer”


3.5. Select “+” under “All Managers.” 3.6. Create a worker; select “Worker 1.”

3.7. Select desired drive to use, in this case, E: disk 1.

3.8 Add Network Targets; select to add Network Targets.


3.9. Select “Worker 2.”

3.10. Select Network from the Network targets tab.

3.11. Select “Access Specifications.”


3.12. Select “New.”

3.13. Select “Name.” 3.14. Create a test name. 3.15. Select “Transfer Request Size” and set it to “2 KB.” 3.16. Change it to 8 KB to mimic an SQL server workload. 3.17. Select “Percent Read/Write Distribution.” 3.18. Change the specification to 30% write and 70% read.


3.19. After Changes, Select “OK.” 3.20. Scroll down to find test name.

3.21. Select test name. 3.22. Select “Add.”


3.23. Select “Test Setup.”

3.24. Select “Test Description.” 3.25. Type test name. 3.26. Select “Run Time.” 3.27. Set to 1 hour. 3.28. Select “Results Display.” 3.29. Select “Update Frequency (seconds).” 3.30. Set Update Frequency to 1 second to view results.


3.31. Select Start (Green Flag). 3.32. Select “File name.”

3.33. Select “Save.”

The test will run for 1 hour.


3.34. Start Results.

4. Iometer performance results

Three VMs, each with 1 CPU, 4 GB of memory, and three 40 GB XIV LUNs, were used for this test. The results screen shows the achieved IOPS, throughput, and CPU utilization for VM1; the tests were repeated for VM2 and VM3. These tests showed the throughput possible with 3 VMs and the IBM XIV Gen3 storage array configuration without any special tuning. The 3 VMs averaged approximately 75,000 IOPS with <0.5 ms latency.
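For context, the aggregate bandwidth implied by that result can be derived from the 8 KB transfer size set in the Iometer access specification. This is a back-of-envelope calculation from the figures above, not an additional measurement:

```python
# Convert the measured ~75,000 IOPS at 8 KB per I/O into MB/s.
iops = 75_000
block_kb = 8

throughput_mb_s = iops * block_kb / 1024
print(round(throughput_mb_s), "MB/s aggregate")  # roughly 586 MB/s
```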


Appendix B (vSphere vMotion)

1. Test objective: vSphere vMotion - Transfer time of VMs to local disk (VMware vSphere 5.0)

2. Setup steps: This section demonstrates vMotion using local disk 2.1. Download a stop watch from http://download.cnet.com/Stop-Watch/3000-2350_4-10773544.html?tag=mncol;5 and install

Screen Setup for test:

3. Test Steps: Test transfer time to migrate data to local disk 3.1. Select Virtual Machine (VM).


3.2. Right click on VM. 3.3. Select “Migrate.”

3.4. Select “Change datastore” and select “Next.”


3.5. Select a Local Datastore “ISVX8-local-0” and select “Next.”

Start of test 3.6. Start the Stopwatch; Select “Restart.”


3.7. At the Completion of the test, select “Pause.”

End of the test

4. Results: The recorded transfer time migrating VMs to local disk (VMware vSphere 5.0) was 10 minutes 3 seconds.


Appendix C (Transfer times of VMs to XIV LUNs (SAN))

1. Test objective: Transfer times of VMs to XIV LUNs (SAN)

2. Setup steps: This section demonstrates vMotion using XIV

2.1. Download a stop watch from http://download.cnet.com/Stop-Watch/3000-2350_4-10773544.html?tag=mncol;5 and install. Screen Setup for test

3. Test steps: Test transfer time to migrate data to XIV disk. 3.1. Select Virtual Machine (VM).

3.2. Right click on VM. 3.3. Select “Migrate.”


3.4. Select “Change datastore” and select “Next.”

3.5. Select the XIV LUN ”XIV_ISVX8_X9” and select “Next.”


Start of test 3.6. Start the Stopwatch 3.7. Select “Finish”


3.8. At the Completion of the test, select “Pause” and record the total migration time.

End of test

4. Results:

The recorded transfer time migrating VMs to XIV Gen3 (VMware vSphere 5.0) was 1 minute 31 seconds. For the two tested VMs, transferring all data from the server to XIV was approximately 6.7 times faster than from the server to the local disk for the tested configuration, demonstrating the efficiency and synergy of using XIV with vSphere vMotion.
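The quoted speedup follows directly from the two recorded times (10 min 3 s to local disk in Appendix B; 1 min 31 s to XIV here). Recomputing it as a check gives roughly 6.6x, consistent with the approximately 6.7x stated above:

```python
# Ratio of the two recorded migration times.
local_s = 10 * 60 + 3  # 10 min 3 s to local disk (Appendix B)
xiv_s = 1 * 60 + 31    # 1 min 31 s to XIV Gen3 (Appendix C)

speedup = local_s / xiv_s
print(round(speedup, 1), "x faster to XIV")
```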


Appendix D (vSphere High Availability)

1. Test objective: vSphere High Availability - Failover of an ESX server

2. Setup steps: Create a VMware vSphere 5.0 cluster environment

2.1. In the VMware cluster environment, select a VM that is not Fault Tolerant. 2.2. Right Click on the VM.

2.3. Select “Edit Settings.” 2.4. Ensure that the VM uses XIV Gen3 hard disk as in the example below; select “OK.”


2.5. Right click on VM. 2.6. Select “Fault Tolerance.”

2.7. Select “Turn On Fault Tolerance.”


2.8. Select “Yes”

2.9. Results

Fault Tolerance is now active.


2.10. Right click on VM. 2.11. Select “Power” and “Power On.”

2.12. Set up complete.

3. Test steps:

3.1. Right Click on VM. 3.2. Select “Fault Tolerance.”


3.3. Make note of the “Host and Storage.“ 3.4. Select “Test Failover.”

Observe that “VM and Storage” has moved to a new host.

4. Results: The VM moved to a new ESXi host and the storage seamlessly moved with it.

Appendix E (vSphere Storage DRS)

1. Test objective: vSphere Storage DRS

2. Setup steps: Demonstrate SDRS using the VMware vSphere 5.0 startup screen

2.1. Select “Inventory.” 2.2. Select “Datastore and Datastore Cluster.”


2.3. Right click on “Datacenter” and select “New Datastore Cluster.”


2.4. Create the Datastore Cluster Name.

2.5. Select “Turn on Storage DRS,” and select “Next.” 2.6. Select “Fully Automated” and select “Next.”


2.7. Select “Show Advance Options.” 2.8. Review Settings (Use Defaults), and select “Next.”

2.9. Select “Cluster,” and select “Next.”


2.10. Select the datastore to use, then select “Next.”

2.11. Review results under “Ready to Complete.”


2.12. Select “Finish.” The new cluster datastore shows all operations were completed successfully.


2.13. Build a new virtual machine. 2.14. Right Click on “Cluster.”

2.15. Select “New Virtual Machine,” then select “Next.”


2.16. Name the virtual machine and select “Next.”

2.17. Select host and then select “Next.”


2.18. Select datastore cluster, then select “Next.”

2.19. Select Guest Operating System, and select “Next.”

2.20. Select “Create Network Connections,” and select “Next.”


2.21. Specify the virtual disk size, and select “Next.”

2.22. Select “Show all storage recommendations.”


2.23. Select “Continue.” 2.24. Select “Apply Recommendations.”


2.25. Observe that “Apply Storage DRS recommendations” has completed. Exploring the Datastore Cluster

2.26. Select “Datastore and Datastore Cluster” from vSphere Home Screen. 2.27. Select datastore.


2.28. Right click.

2.29. Right click on new VM created. 2.30. Select “Migrate.”

2.31. Select “Change datastore,” and select “Next.”


2.32. Select “XIV_ISVX8_X9” and select “Next.”

2.33. Select “Finish.”


SDRS set up completed.

3. Test Steps: 3.1. Select Datastore cluster. 3.2. Select “Run Storage DRS.”


“Relocate virtual machine” shows test status of “Completed.”

4. Results: Storage DRS (SDRS)

When an imbalance occurs on the datastore, Storage DRS recommends migrating a virtual machine. Storage DRS will make multiple recommendations to resolve datastore imbalances.


Appendix F (Profile-Driven Storage)

1. Test objective: Profile-Driven Storage

2. Setup steps: This test demonstrates Profile-Driven Storage

2.1. Select “VM Storage Profile” from the Home vSphere window.

2.2. Select “Enable VM Storage Profiles.”

2.3. Select “Enable.”


2.4. Note that VM Storage Profile Status is enabled and select “Close.”

2.5. Select “Manage Storage Capabilities.”


2.6. Select “Add.”

2.7. Select “Name,” type “Gold.” 2.8. Select “Description,” type “Gold Storage Capability.”

2.9. Select “Ok” and “Close.”


2.10. Note the Recent Tasks pane, and select “Home.”

2.11. Select “Datastores and Datastores Cluster” from the Home vSphere window.

2.12. Select disk choice for User-Defined Storage Capability: 2.13. Select disk and right click.

2.14. Select “Assign User-Defined Storage Capability.”


2.15. Select “Name” pull down, select “Gold,” and select “Ok.”

2.16. Select “Summary.”


2.17. Select “Home.” 2.18. Select “VM Storage Profiles” from the vSphere Home screen.

2.19. Select “Create VM Storage Profile.”

2.20. Select “Name” type: Gold Profile. 2.21. Select “Description” type: Storage Profile for VMs that should reside on Gold storage, and select “Next.”


2.22. Select “Gold,” and select “Next.”


2.23. Select “Finish.” 2.24. Select “Gold Profile.” 2.25. Select “Summary.”

2.26. Observe the settings for later comparison.


3. Test Steps: 3.1. Assign a VM storage profile to a VM. 3.2. Select “Home.”


3.3. Select “Hosts and Clusters.”

3.4. Select a VM.

3.5. Right click the VM. 3.6. Select “VM Storage Profile.” 3.7. Select “Manage Profiles.”


3.8. Select “Home VM Storage Profile.” 3.9. Select “Gold Profile” from pull down menu. 3.10. Select “Propagate to disks,” and select “Ok.”


3.11. Observe the setting in “VM Storage Profiles for virtual disks” for future use and select “Ok.”

3.12. Observe in the VM Storage Profiles section that the profile is “Noncompliant,” because the storage characteristics of the current datastore do not meet the profile's requirements.

3.13. Right click on VM.


3.14. Select “Migrate.”

3.15. Select “Change datastore,” and select “Next.”


3.16. Select “VM Storage Profile.” 3.17. Select “Gold Profile.” 3.18. Select Compatible disk, and select “Next.”

Note: the VM is being migrated.

3.19. Select “Refresh.”

4. Results: Profile-Driven Storage. Note that the VM Storage Profile is now Compliant.


Gold VM Storage Profile:

This test demonstrates that with Profile-Driven Storage, a user can ensure that physical storage characteristics remain consistent across VM migrations.


Appendix G (Storage I/O Control)

1. Test objective: Storage I/O Control

2. Setup steps: Create a VM with 2 hard drives to demonstrate Storage I/O Control

2.1. Start VM. 2.2. Use remote desktop (RDP) to go to the VM. 2.3. Install Iometer from http://www.Iometer.org/doc/downloads.html

2.4. Once installed, run Iometer.


2.5. Select “Worker 1.”

2.6. Select “E: disk1.” 2.7. Select “Access Specifications,” and select “New.”


2.8. Set “Transfer Request” Size to “10 Megabytes, 2 Kilobytes, 0 Bytes.” 2.9. Set “Percent Read/Write Distribution” to 75% Write / 25% Read and select “Ok” (these settings provide a heavier load on the VM).

2.10. Select “Untitled 1” under Global Access Specifications.


2.11. Select “Results Display.” 2.12. Select “Update Frequency” to “1.” 2.13. Select “Green flag” to start.

2.14. Select “Save” to save results.


2.15. Return to vSphere. 2.16. Select “Home.” 2.17. Select “Datastores and Datastore Clusters.”

2.18. Select the host running the VM. Note that Storage I/O Control is “Disabled.”


2.19. Select “Properties.” 2.20. Set “Storage I/O Control” to “Enabled.” 2.21. Select “Advanced.”

2.22. Select “OK.”


2.23. Select “OK.”

2.24. Select “Close.”

2.25. Go to the VM used for testing and “Edit Settings.” 2.26. Select “Resources.”


2.27. Select “Disk,” and select “OK.”

Note: Storage I/O Control (SIOC) is set on Disk 2.


2.28. Set Hard disk 2 “Share” to High and “Limit – IOPS” to 100.

3. Test Steps: Demonstrate Storage I/O Control


3.1. Now look at how SIOC enforces the IOPS limit. Go back to the vSphere Client Performance tab or the virtual machine's Iometer results to see the number of IOPS currently being generated. The value for this exercise is approximately 500–600 IOPS. 3.2. Go to the VM running Iometer. 3.3. Stop Iometer. 3.4. Change “# of Outstanding I/Os” to 65. 3.5. Restart Iometer.

3.6. Go to Results Display.


4. Results: Storage I/O Control

With Storage I/O Control enabled, the workloads gradually shift toward the share-based priorities. The test demonstrates a gradual increase in IOPS for the virtual machine with 2000 shares and a gradual decrease in IOPS for the virtual machine with 1000 shares. This completes the evaluation of Storage I/O Control.
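The share-based behavior observed above can be sketched as a simple proportional allocation: under contention, each VM's fraction of the datastore's IOPS tracks its fraction of the total shares. This is a conceptual model of SIOC's share policy, not its actual scheduler, and the total of 3,000 IOPS below is an arbitrary example value.

```python
# Proportional-share split of a contended datastore's IOPS (conceptual).
def share_of_iops(total_iops, shares):
    """Divide total_iops among VMs in proportion to their share values."""
    pool = sum(shares.values())
    return {vm: total_iops * s / pool for vm, s in shares.items()}

alloc = share_of_iops(3000, {"vm_2000_shares": 2000, "vm_1000_shares": 1000})
print(alloc)  # the 2000-share VM receives twice the IOPS of the other
```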


Trademarks and special notices

© Copyright IBM Corporation 2011. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.


Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

