© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

White Paper

Deploy 200 VMware Horizon View 5.3 Pooled Desktops On Cisco UCS C240-M3 Rack Server with LSI Nytro MegaRAID and Onboard SAS Drives

Page 2: Deploy 200 VMware Horizon View 5.3 Pooled …...systems (DAS) using up to 128 SATA and/or SAS hard drives with data transfer rates of up to 6Gb/s per port. The The card complies with

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 2 of 22

Contents

Overview

Cisco UCS C-Series Rack Servers

Cisco UCS C240 M3 Rack Server

Cisco VIC 1225 – 10GE Option

LSI Nytro MegaRAID Controller

Controller Cache

Read Policies

Caching Policies

Write Policies

VMware vSphere 5.5

VMware ESXi 5.5 Hypervisor

VMware Horizon View 5.3

Server Storage Volume Configuration

Load Generation

User Workload Simulation – Login VSI

Test Run Protocol

Success Criteria

Performance Results

Login VSImax

Recommended Workload

Conclusion

References


Overview

One of the biggest barriers to entry for desktop virtualization (DV) is the capital expense of deploying proof-of-concept (PoC) and pilot environments for mid-size organizations. For smaller customers, deploying a DV system for fewer than 200 users is currently cost-prohibitive.

To overcome the entry point barrier, Cisco has developed a self-contained DV solution that can host up to 200

VMware View 5.3 floating assignment linked-clones on a single Cisco UCS® C240 M3 Rack Server and host the

following required infrastructure:

● VMware vSphere 5.5 Hypervisor

● Microsoft Active Directory Domain Controller (Optional)

● Microsoft SQL Server 2012

● Microsoft File Server for User Data and User Profiles (Optional)

● VMware vCenter 5.5 (Optional)

● VMware Horizon View 5.3 Composer

● VMware Horizon View 5.3 Connection Server/Administration Console

The Cisco® UCS C240 M3 Rack Server configuration used to validate the solution is:

● Intel® Xeon® processor E5-2697 v2 12-core 2.7 GHz (2)

● 384 GB 1866 MHz DIMMs (24 x 16GB)

● Cisco UCS Virtual Interface Card (VIC) 1225 Converged Network Adapter (Optional for 10GE)

● LSI® Nytro™ MegaRAID® 200GB Controller

● Cisco 600GB 10K RPM Hot Swap SAS Drives (12)

● Cisco 650 Watt Power Supply (2)

Note: Virtual machines and hardware components marked “(Optional)” can be hosted on existing infrastructure, deployed separately for management or performance enhancement, or hosted on the C240 M3. The tests performed for this white paper included all optional components installed on the same UCS C240 M3 server to validate system capability.
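As a rough sanity check on the memory configuration (the infrastructure allowance is an illustrative assumption, not a measured value): 200 desktops at 1.5 GB each consume 300 GB, leaving roughly 84 GB of the 384 GB installed for the hypervisor and the infrastructure virtual machines listed above, before any memory oversubscription techniques are applied.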

The testing reported in this document utilized the unique capabilities of the Nytro MegaRAID 200GB controller cache to support our Horizon View linked clone disposable disks. These disposable disks incur high IOPS during the lifecycle of the linked clone virtual desktop.

Configuration of the controller’s flash memory and SAS drives is accomplished through the LSI Nytro MegaRAID BIOS Config Utility configuration wizard, which is accessed during the Cisco UCS C240 M3 Rack Server’s boot sequence by pressing the CTRL+H key sequence when the controller BIOS loads. (See the “Test Configuration” section later in this document for details of the test configuration.)
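If a scripted view of the controller is preferred, the adapter and its drives can also be inspected from the host with LSI’s MegaCLI utility before any volumes are created. This is a minimal sketch; the utility name (MegaCli64) and adapter index (-a0) are assumptions that depend on how the tool is installed:

    # Confirm the controller model, firmware, and cache configuration
    MegaCli64 -AdpAllInfo -a0
    # List physical drives and slots to identify the twelve SAS disks and the backplane flash
    MegaCli64 -PDList -a0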

The tested configuration provides excellent virtual desktop end-user experience for 200 users as measured by our

test tool, Login Virtual Session Indexer (Login VSI), at a breakthrough price point.

Fault tolerance can be achieved by deploying a second server configured identically with redundant infrastructure

and Horizon View virtual machines and by deploying a distributed file system.

Page 4: Deploy 200 VMware Horizon View 5.3 Pooled …...systems (DAS) using up to 128 SATA and/or SAS hard drives with data transfer rates of up to 6Gb/s per port. The The card complies with

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 4 of 22

With options to use lower-bin processors, such as the Intel Xeon processor E5-2680 v2, a lower entry-point price can be achieved at a slightly lower VMware Horizon View virtual machine density.

As with any solution deployed to users with data storage requirements, a backup solution must be deployed to

ensure the continuity of the user data. Such a solution is outside the scope of this paper.

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack Servers keep pace with Intel Xeon processor innovation by offering the latest processors with increased processor frequency and improved security and availability features. With the increased performance provided by the Intel Xeon processor E5-2600 and E5-2600 v2 product families, Cisco UCS C-Series servers offer an improved price-to-performance ratio and extend Cisco Unified Computing System innovations to an industry-standard rack-mount form factor, including a standards-based unified network fabric, Cisco VN-Link virtualization support, and Cisco Extended Memory Technology.

Designed to operate both in standalone environments and as part of the Cisco Unified Computing System, these

servers enable organizations to deploy systems incrementally—using as many or as few servers as needed—on a

schedule that best meets the organization’s timing and budget. Cisco UCS C-Series servers offer investment

protection through the capability to deploy them either as standalone servers or as part of the Cisco Unified

Computing System.

One compelling reason that many organizations prefer rack servers is the wide range of I/O options available in the form of PCI Express (PCIe) adapters. Cisco UCS C-Series servers support a wide spectrum of I/O options, including interfaces supported by Cisco as well as adapters from third parties.


Figure 1. Cisco UCS Components

Cisco UCS C240 M3 Rack Server

The Cisco UCS C240 M3 Rack Server (Figure 2) is designed for both performance and expandability over a wide

range of storage-intensive infrastructure workloads, from big data to collaboration. The enterprise-class Cisco UCS

C240 M3 server further extends the capabilities of the Cisco UCS portfolio in a 2 rack unit (RU) form factor with the

addition of the Intel® Xeon processor E5-2600 and E5-2600 v2 product families, which deliver an outstanding

combination of performance, flexibility, and efficiency gains. The Cisco UCS C240 M3 offers up to two Intel Xeon

processor E5-2600 or E5-2600 v2 processors, 24 DIMM slots, 24 disk drives, and four 1 Gigabit Ethernet LAN-on-

motherboard (LOM) ports to provide exceptional levels of internal memory and storage expandability and

exceptional performance.

The Cisco UCS C240 M3 interfaces with the Cisco UCS Virtual Interface Card. The Cisco UCS Virtual Interface Card is a virtualization-optimized Fibre Channel over Ethernet (FCoE) PCI Express (PCIe) 2.0 x8 10-Gbps adapter designed for use with Cisco UCS C-Series Rack Servers. The VIC is a dual-port 10 Gigabit Ethernet PCIe adapter that can support up to 256 PCIe standards-compliant virtual interfaces, which can be dynamically configured so that both their interface type (network interface card [NIC] or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWN]) are established using just-in-time provisioning. In addition, the Cisco UCS VIC 1225 can support network interface virtualization and Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology. An additional five PCIe slots are available for third-party PCIe cards certified by Cisco. The server is equipped to handle 24 onboard SAS drives or SSDs along with shared storage solutions offered by our partners.


The Cisco UCS C240 M3 server's disk configuration delivers balanced performance and expandability to best meet

individual workload requirements. With up to 12 LFF (Large Form Factor) or 24 SFF (Small Form Factor) internal

drives, the Cisco UCS C240 M3 optionally offers 10,000-RPM and 15,000-RPM SAS drives to deliver a high

number of I/O operations per second for transactional workloads such as database management systems. In

addition, high-capacity SATA drives provide an economical, large-capacity solution. Superfast SSDs are a third

option for workloads that demand extremely fast access to smaller amounts of data. A choice of RAID controller

options also helps increase disk performance and reliability.

The Cisco UCS C240 M3 further increases performance and customer choice for many types of storage-intensive applications, such as:

● Collaboration

● Small and medium-sized business (SMB) databases

● Big data infrastructure

● Virtualization and consolidation

● Storage servers

● High-performance appliances

This server caters to businesses that demand a large local storage capacity without compromising the user

experience. A fast processor and large memory and storage footprints help meet these business needs.

For more information, see the Cisco UCS C240 M3 data sheet: http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/ps12370/data_sheet_c78-700629.html

Figure 2. Cisco UCS C240 M3 Rack Server

Cisco VIC 1225 – 10GE Option

The Cisco UCS Virtual Interface Card (VIC) 1225 (Figure 3) is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) card designed exclusively for Cisco UCS C-Series Rack Servers. With its half-height design, the card preserves full-height slots in

servers for third-party adapters certified by Cisco. It incorporates next-generation converged network adapter

(CNA) technology. The card enables a policy-based, stateless, agile server infrastructure that can present up to

256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface

cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1225 supports Cisco Data Center Virtual

Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual

machines, simplifying server virtualization deployment.


Figure 3. Cisco UCS VIC 1225 CNA

Figure 4. Cisco UCS VIC 1225 CNA Architecture

LSI Nytro MegaRAID Controller

Cisco UCS C-Series Rack Servers offer various RAID and caching configuration options with LSI Nytro MegaRAID controllers. Base UCS C-Series servers come with an onboard controller, but customers can purchase an LSI Nytro MegaRAID controller as an option. The LSI Nytro MegaRAID Controller 8110-4i was used for this study.

The LSI Nytro MegaRAID NMR 8110-4i combines a PCIe RAID controller and onboard flash to optimize SAS/SATA storage into a single low-latency, high-performance caching and data-protection solution for internal direct-attached storage (DAS) systems using up to 128 SATA and/or SAS hard drives with data transfer rates of up to 6 Gb/s per port. The card complies with the PCI Express 3.0 x8 specification for high-bandwidth applications. All RAID levels are supported on this LSI controller.

Figure 5. LSI Nytro MegaRAID Controller 8110-4i


Controller Cache

The LSI Nytro MegaRAID Controller 8110-4i carries 200GB of eMLC flash and 1GB of 1333MHz DDR3 SDRAM for RAID cache assist. It provides a controller cache with configurable read, write, and caching behavior, with optional extra protection against power failure through a battery backup unit (BBU). The controller cache is used to increase write and read performance and can be influenced by the following three configuration parameters.

Read Policies

● Adaptive Read Ahead: (Recommended for volumes NOT associated with Nytro caching) This specifies that the controller uses read-ahead if the two most recent disk accesses occurred in sequential sectors. If all read requests are random, the algorithm reverts to No Read Ahead; however, all requests are still evaluated for possible sequential operation. Data read ahead of the current request is kept in the controller cache.

● Read Ahead: The controller reads ahead all the data until the end of the stripe from the disk.

● Normal: (Recommended for volumes associated with Nytro caching) Only the requested data is read, and the controller does not read ahead any data.

Caching Policies

● Direct IO: (Recommended for all volumes) All read data is transferred directly to host memory, bypassing the RAID controller cache; any read-ahead data is cached. All write data is transferred directly from host memory, bypassing the RAID controller cache, if Write-Through cache mode is set.

● Cached IO: All read and write data passes through controller cache memory on its way to or from host memory, including write data in Write-Through mode.

Note: It is recommended to leave Cached IO disabled and use Direct IO.

Write Policies

Write-Through: A caching strategy in which data is written to disk before a completion status is returned to the host operating system. This is considered more secure, since a power failure is less likely to cause undetected write data loss when no battery-backed cache is present. Write-Through is recommended for RAID 0, 1, and 10 to provide optimum performance for streaming/sequential-access workloads: because data is moved directly from the host to the disks, the controller avoids the intermediate copy into cache, which can improve overall performance for streaming workloads if Direct IO mode is set.

Note: Recommended for volumes associated with Nytro caching.

Write-Back: A caching strategy in which a completion status is sent to the host operating system as soon as data is written to the RAID cache. Data is written to disk when it is forced out of controller cache memory. Write-Back is more efficient if the temporal and/or spatial locality of the requests is smaller than the controller cache size, and in environments with “bursty” write activity. Battery-backed cache can be used to protect against data loss as a result of a power failure or system crash. Write-Back is recommended for RAID 0, 1, and 10 because it provides optimum performance for transactional (random, real-world) benchmarks, and for RAID 5 and 6 because it improves the performance of RAID 5 and 6 data-redundancy generation. The controller defaults to Write-Through if no BBU is available.

Note: Recommended for volumes NOT associated with Nytro caching.


Disk Cache: Enabling the disk cache increases throughput/performance for write operations. It is recommended that the hard disks be supplied with continuous power by an upstream UPS. If the system is UPS-protected, enabling the disk cache for performance reasons is recommended. If there is no UPS for these disks, important data that has not yet been written from the disk cache to the hard disk may be lost.
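For reference, the read, caching, write, and disk cache policies above can also be applied from the host with LSI’s MegaCLI utility instead of WebBIOS. The sketch below applies the recommendations to the virtual drive layout used later in this paper; the utility name (MegaCli64) and adapter index (-a0) are assumptions that depend on the installation:

    # Non-Nytro-cached volumes (VD0 boot, VD1 infrastructure): Write-Back, Adaptive Read Ahead, Direct IO
    MegaCli64 -LDSetProp WB -L0 -a0
    MegaCli64 -LDSetProp ADRA -L0 -a0
    MegaCli64 -LDSetProp Direct -L0 -a0
    MegaCli64 -LDSetProp WB -L1 -a0
    MegaCli64 -LDSetProp ADRA -L1 -a0
    MegaCli64 -LDSetProp Direct -L1 -a0
    # Nytro-cached linked-clone volume (VD2): Write-Through, Normal (no read ahead), Direct IO
    MegaCli64 -LDSetProp WT -L2 -a0
    MegaCli64 -LDSetProp NORA -L2 -a0
    MegaCli64 -LDSetProp Direct -L2 -a0
    # Enable the drives' own cache only if the disks are UPS-protected
    MegaCli64 -LDSetProp EnDskCache -LAll -a0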

VMware vSphere 5.5

VMware, Inc. provides virtualization software. VMware’s enterprise hypervisors for servers, VMware vSphere ESX and VMware vSphere ESXi, are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system. VMware vCenter Server for vSphere provides central management, complete control, and visibility into clusters, hosts, virtual machines, storage, networking, and other critical elements of your virtual infrastructure.

VMware ESXi 5.5 Hypervisor

VMware ESXi 5.5 is a bare-metal hypervisor, so it installs directly on top of the physical server and partitions it into

multiple virtual machines that can run simultaneously, sharing the physical resources of the underlying server.

VMware introduced ESXi in 2007 to deliver industry-leading performance and scalability while setting a new bar for

reliability, security and hypervisor management efficiency.

Due to its ultra-thin architecture with less than 100MB of code-base disk footprint, ESXi delivers industry-leading

performance and scalability plus:

● Improved Reliability and Security: With fewer lines of code and independence from general purpose OS,

ESXi drastically reduces the risk of bugs or security vulnerabilities and makes it easier to secure your

hypervisor layer.

● Streamlined Deployment and Configuration: ESXi has far fewer configuration items than ESX, greatly

simplifying deployment and configuration and making it easier to maintain consistency.

● Higher Management Efficiency: The API-based partner integration model of ESXi eliminates the need to install and manage third-party management agents. You can automate routine tasks by leveraging remote command-line scripting environments such as vCLI or PowerCLI (see the sketch after this list).

● Simplified Hypervisor Patching and Updating: Due to its smaller size and fewer components, ESXi

requires far fewer patches than ESX, shortening service windows and reducing security vulnerabilities.
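As a minimal illustration of the remote scripting model mentioned above, the following PowerCLI commands connect to a vCenter (or directly to a host) and act on desktop virtual machines. The server name and VM naming pattern are placeholders, not values from this configuration:

    # Connect to vCenter (prompts for credentials)
    Connect-VIServer -Server vcenter.example.local
    # Report the power state of the linked-clone desktops
    Get-VM -Name "View-LC-*" | Select-Object Name, PowerState
    # Hard-reset a single unresponsive desktop without a confirmation prompt
    Restart-VM -VM "View-LC-042" -Confirm:$false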

VMware Horizon View 5.3

VMware Horizon View delivers rich, personalized virtual desktops as a managed service from a virtualization

platform built to deliver the entire desktop, including the operating system, applications and data. With VMware

Horizon View, desktop administrators virtualize the operating system, applications, and user data and deliver

modern desktops to end-users. Get centralized automated management of these components for increased control

and cost savings. Improve business agility while providing a flexible high performance desktop experience for end-

users, across a variety of network conditions.

VMware Horizon View 5.3 delivers important features and enhancements over the previous release. TCO is further reduced by optimized storage reads, improved desktop migration and large-scale management, and a further enhanced user experience with lower bandwidth requirements and greater client diversity.

This release of VMware Horizon View adds the following new features and support:


● Microsoft Windows Server 2008 R2 Desktop Operating System Support: Windows Server 2008 R2

(Datacenter edition) is now supported as a desktop operating system. For installation instructions and

limitations, see KB 2057605: Using Windows Server 2008 R2 as a desktop operating system in VMware

Horizon View.

● Microsoft Windows 8.1 Desktop Operating System Support: Windows 8.1 is now supported as a

desktop operating system.

● VMware Horizon Mirage Support: You can now use VMware Horizon Mirage 4.3 to manage View

desktops.

● VMware Virtual SAN Datastore Support: When you create a desktop pool in View Administrator, you can

now select a Virtual SAN datastore to store desktop virtual machines. Because Virtual SAN is in Beta, this

feature is being released as a Tech Preview, which means that it is available for you to try, but it is not

recommended for production use and no technical support is provided. The space-efficient virtual disk

format is not available on Virtual SAN datastores. If you use Virtual SAN datastores to host virtual desktops,

you will not be able to reclaim allocated unused space on the virtual machines.

● View Connection Server Memory Recommendation Messages: If you install View Connection Server

with less than 10GB of memory, VMware Horizon View provides memory recommendations by generating

warning messages after the installation is complete.

● vDGA Support: vDGA (virtual Dedicated Graphics Acceleration) is now supported for View desktops. For

linked-clone desktops, vDGA settings are preserved after refresh, recompose, and rebalance operations.

See the VMware white paper Graphics Acceleration in Horizon View Virtual Machines Deployment Guide.

● Linked-Clone Desktop Pool Storage Overcommit Feature Enhancements: The linked-clone desktop

pool storage overcommit feature includes a new storage overcommit level called Unbounded. When you

select Unbounded, View Manager does not limit the number of linked-clone desktops that it creates based

on the physical capacity of the datastore. You select the storage overcommit level for a linked-clone

desktop pool on the Select Datastores page when you add or edit a linked-clone pool. Select Unbounded

only if you are certain that the datastore has enough storage capacity to accommodate all of the desktops

and their future growth.

● View Persona Management Supportability Improvements: Supportability improvements include new log

messages and profile size and file and folder count tracking. View Persona Management uses the file and

folder counts to suggest folders for redirection in the Windows event log and provides statistics for these

folders.

● Support to Grant Domain Administrators Access to Redirected Folders in View Persona

Management: A new group policy setting, Add the Administrators group to redirected folders, has been

added to make redirected folders accessible to domain administrators. For information about the new group

policy setting, see KB 2058932: Granting domain administrators access to redirected folders for View

Persona Management.


● VMware Horizon View Agent Direct-Connection Plug-in: You can use VMware Horizon View Agent

Direct-Connection Plug-in to connect directly to a virtual desktop. This plug-in is an installable extension to

View Agent that allows a View client to directly connect to a View desktop without using View Connection

Server. For more information, see VMware Horizon View Agent Direct-Connection Plug-in Administration.

● View Composer Array Integration (VCAI) Support: The Tech Preview designation has been removed

from VCAI. VCAI appears as an option during pool creation when you select an NFS datastore on an array

that supports VAAI (vStorage API for Array Integration) native snapshots. The VCAI feature is now

supported with select NAS vendors. For a list of supported NAS vendors, see KB 2061611: View Composer

API for Array Integration (VCAI) support in VMware Horizon View.

● Blast Secure Gateway Maximum Connections: The Blast Secure Gateway (BSG) now supports up to

350 connections to Horizon View desktops from clients using HTML Access. This connection limit applies to

a BSG on one View Connection Server instance or security server.

Figure 6. VMware Horizon View Logical Architecture

Test Configuration:

The test configuration is represented by the hybrid logical/physical diagram shown below.


Figure 7. Reference Architecture

Hardware Components:

● Cisco UCS C240-M3 Rack Server (2 x Intel Xeon processor E5-2697 v2 @ 2.70 GHz) with 384GB of memory (16 GB x 24 DIMMs @ 1866 MHz), hypervisor host

● Cisco UCS VIC 1225 Converged Network Adapter/Rack Server (Optional for 10GE connectivity)

● 2 x Cisco Nexus 5548UP Access Switches

● 12 x 600GB SAS disks @ 10000 RPM

● LSI Nytro MegaRAID Controller 8110-4i

● Cisco Nexus 5548UP 1/10G Unified Port Switch (Optional)

Software Components:

● Cisco UCS firmware 2.2(1b)

● VMware ESXi 5.5 for virtual desktop infrastructure (VDI) Hosts

● VMware Horizon View 5.3

● Microsoft Windows 7 SP1 32-bit, 1 virtual CPU, and 1.5 GB of memory (for the VSImax test we used 1 GB of memory)

● Login VSI 3.7 End User Experience test tool

Server Storage Volume Configuration

The key differentiator that allows this compact solution to perform so well is the strategic use of the flash on the LSI Nytro MegaRAID Controller 8110-4i together with the twelve 600GB 10K RPM SAS drives.

There are three high-level operations that need to be performed to configure the controller card flash and the SAS drives for use in the solution:

● Create Drive Groups

● Add Drive Groups to Spans

● Create Virtual Drives, setting RAID Level, controller settings and size


Note: Multiple virtual drives can be configured on a single drive group.

For this paper, Drive Groups were configured in the following way to support all infrastructure, user data, and linked

clone disks needed for 200 floating assignment linked clones:

Table 1. Physical Drive Group Configuration

Drive Group   RAID Configuration   Physical Drives   Purpose
N/A           0                    Backplane         Nytro MegaRAID caching
0             5                    0-3               Boot/Infrastructure Volumes
1             0                    4-7               1st Group for RAID 10 Volume
2             0                    8-11              2nd Group for RAID 10 Volume

Drive Groups were added to Spans as follows:

Table 2. Spans

Drive Group   Span
0             Boot/Infrastructure Span
1             Floating Assignment Linked Clone Span 0
2             Floating Assignment Linked Clone Span 1

Three Virtual Drives were created from the Drive Groups and Spans created above. Only Virtual Drive 2 utilizes the

LSI Nytro MegaRAID cache for VMware Horizon View 5.3 Linked Clones.

Table 3. Virtual Drives

Drive Groups   RAID Configuration   Virtual Drive   Capacity   Purpose
0              5                    0               20 GB      Boot
0              5                    1               1.6 TB     Infrastructure/User Files
1,2            10                   2               2.16 TB    View 5.3 Linked Clones

The summary configuration follows in the table below:

Table 4. Configuration Summary

Physical Drive(s)         Drive Group   Group RAID   Span RAID   Virtual Drive(s)   Capacity      Purpose
Backplane Flash Devices   Nytro Cache   RAID-0       NA          Nytro Cache        180.7GB       Flash Caching
0-3                       0             RAID-5       NA          VD0, VD1           20GB, 1.6TB   Boot, Infra
4-7                       1             RAID-0       RAID-10     VD2                2.16TB        View Clones
8-11                      2             RAID-0       RAID-10     VD2                2.16TB        View Clones

The final configuration looks like the following figure in the Nytro MegaRAID WebBIOS Configuration Utility:


Figure 8. LSI Nytro MegaRAID WebBIOS Configuration Utility
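For readers who prefer scripted provisioning, the same drive group, span, and virtual drive layout can be approximated from the host with MegaCLI rather than WebBIOS. This is a sketch under assumptions: enclosure ID 252 is a placeholder (verify with MegaCli64 -EncInfo -a0), sizes are in MB, and associating VD2 with the Nytro flash cache is still done in the Nytro configuration utility, which is not shown here:

    # Drive group 0: RAID-5 across slots 0-3, carved into VD0 (20 GB boot, expressed in MB)
    MegaCli64 -CfgLdAdd -r5 [252:0,252:1,252:2,252:3] WB ADRA Direct -sz20480 -a0
    # A second virtual drive (VD1) in the same drive group's free space;
    # some firmware versions require WebBIOS for this step
    MegaCli64 -CfgLdAdd -r5 [252:0,252:1,252:2,252:3] WB ADRA Direct -a0
    # Drive groups 1 and 2: two RAID-0 arrays spanned as RAID-10 to form VD2 for the linked clones
    MegaCli64 -CfgSpanAdd -r10 -Array0[252:4,252:5,252:6,252:7] -Array1[252:8,252:9,252:10,252:11] WT NORA Direct -a0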

The testing results focused on the entire process of the virtual desktop lifecycle by capturing metrics during the desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the Hosted VDI model under test.

Test metrics were gathered from the hypervisor, virtual desktop, storage, and load generation software to assess

the overall success of an individual test cycle. Each test cycle was not considered passing unless all of the planned

test users completed the ramp-up and steady state phases (described later in this document) and unless all

metrics were within the permissible thresholds noted in the success criteria. Three successfully completed test

cycles were conducted for each hardware configuration and results were found to be relatively consistent from one

test to the next.

Load Generation

Within each test environment, load generators were utilized to put demand on the system to simulate multiple

users accessing the VMware Horizon View 5.3 environment and executing a typical end-user workflow. To

generate load within the environment, an auxiliary software application was required to generate the end user

connection to the VMware Horizon View environment, to provide unique user credentials, to initiate the workload,

and to evaluate the end user experience.

In the Hosted VDI test environment, session launchers were used to simulate multiple users making a direct connection to Horizon View 5.3 via a VMware Horizon View PCoIP protocol connection.


User Workload Simulation – Login VSI

One of the most critical factors in validating a VMware Horizon View deployment is identifying a real-world user workload that is easy for customers to replicate and standardized across platforms, allowing customers to realistically test the impact of a variety of worker tasks. To accurately represent a real-world user workload, a third-party tool from Login VSI was used throughout the Hosted VDI testing.

The tool has the benefit of taking measurements of the in-session response time, providing an objective way to measure the expected user experience for individual desktops throughout large-scale testing, including login storms.

The Virtual Session Indexer (Login VSI 3.6) methodology, designed for benchmarking Server Based Computing (SBC) and Virtual Desktop Infrastructure (VDI) environments, is completely platform- and protocol-independent and hence allows customers to easily replicate the testing results in their own environment.

Note: In this testing, we utilized the tool to benchmark our VDI environment only.

Login VSI calculates an index based on the number of simultaneous sessions that can be run on a single machine. Login VSI simulates a medium-workload user (also known as a knowledge worker) running generic applications such as Microsoft Office 2007 or 2010, Internet Explorer 8 including a Flash video applet, and Adobe Acrobat Reader. (Note: For the purposes of this test, applications were installed locally, not streamed or hosted via ThinApp.)

Like real users, the scripted Login VSI session leaves multiple applications open at the same time. The medium workload is the default workload in Login VSI and was used for this testing. This workload emulates a medium knowledge worker using Office, Internet Explorer, printing, and PDF viewing.

● Once a session has been started the medium workload will repeat every 12 minutes.

● During each loop the response time is measured every 2 minutes.

● The medium workload opens up to 5 apps simultaneously.

● The typing rate is 160 ms per character.

● Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop will open and use:

● Outlook 2007/2010, browse 10 messages.

● Internet Explorer, one instance left open (BBC.co.uk), one instance browsed to Wired.com, Lonelyplanet.com, and the heavy 480p Flash application gettheglass.com

● Word 2007/2010, one instance to measure response time, one instance to review and edit a document.

● BullZip PDF Printer & Acrobat Reader; the Word document is printed to PDF and reviewed.

● Excel 2007/2010, a very large randomized sheet is opened.

● PowerPoint 2007/2010, a presentation is reviewed and edited.

● 7-Zip: using the command-line version, the output of the session is zipped.

You can obtain additional information on Login VSI from http://www.loginvsi.com.


Test Run Protocol

To simulate severe, real-world environments, Cisco requires the logon and start-work sequence, known as Ramp Up, to complete in 30 minutes. Additionally, we require all sessions started, whether 170 single-server users or 2000 full-scale test users, to become active within two minutes after the session is launched.

For each of the three consecutive runs on single blade (170 User) and 14-blade (2000 User) tests, the same

process was followed:

1. Time 0:00:00 Started ESXTOP Logging or Perfmon logging on the following systems:

VDI Host Blades used in test run

Profile VMs used in test run

SQL Server VM used in test run

2. Time 0:05 Take 200 desktops out of maintenance mode on Horizon View Admin Console

3. Time 0:06 First machines boot

4. Time 0:33 200 desktops booted on C240 M3.

5. Time 0:35 200 desktops available on C240 M3.

6. Time 0:50 Start Login VSI 3.6 Test with 200 desktops utilizing 10 Launchers

7. Time 1:20 200 desktops launched.

8. Time 1:22 200 desktops active.

9. Time 1:35 Login VSI Test Ends.

10. Time 1:50 200 desktops logged off.

11. Time 2:00 All logging terminated.
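For reference, the ESXTOP logging started in step 1 can be captured in batch mode from the ESXi shell. The sampling interval, iteration count, and output path below are illustrative values sized to cover the two-hour protocol, not the exact settings used in these tests:

    # 15-second samples for 2 hours (480 iterations), all counters, written to CSV for offline analysis
    esxtop -b -a -d 15 -n 480 > /vmfs/volumes/datastore1/esxtop-run1.csv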

Success Criteria

There were multiple metrics captured during each test run, but the success criteria for considering a single test run a pass or a fail were based on one key metric: Login VSImax. Login VSImax evaluates user response time under increasing user load and assesses the successful start-to-finish execution of all the initiated virtual desktop sessions.

Performance Results

The objective of the test case was to determine whether a UCS C240 M3 server equipped with 12 hot-swappable 10K RPM SAS drives and an LSI Nytro MegaRAID controller could provide an excellent end-user experience for up to 200 VMware Horizon View 5.3 automated-pool, floating-assignment linked-clone Microsoft Windows 7 SP1 virtual desktops running on ESXi 5.5, alongside the infrastructure required to support them.

In order to determine the end-user experience with the hypervisor, infrastructure and virtual desktop users

exercising the system simultaneously, we generated a Login VSImax score by loading the system with 250 virtual

desktops and users.

Login VSI does not take into account the stress on the physical servers during the test (CPU, memory, network, or storage).


For that reason, we re-ran the end-user experience tests taking those physical server factors into account to determine the maximum recommended virtual desktop workload for the system.

Login VSImax

Once we determined the Login VSImax, we performed three consecutive test runs generating the same result to ensure the integrity of the result.

To reach the Login VSImax, we ran 250 Medium Workload (with Flash) Windows 7 SP1 sessions on a single server. The Login VSI score was confirmed by three consecutive runs and is shown in Figure 9 below.

Figure 9. 250 User Horizon View 5.3 Desktop Sessions on VMware ESXi 5.5: VSImax 242

Recommended Workload

To establish our recommended maximum workload, we ran the single server test at approximately 20% lower user

density than prescribed by the Login VSImax to achieve a successful pass of the test with server hardware

performance in a realistic range. Although a Login VSImax is not achieved at this load, the Login VSI Analyzer

chart for the run shows very low response times over the entire run, confirming excellent end-user experience.
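Concretely, running 200 sessions against the measured Login VSImax of 242 leaves 42 sessions, or roughly 17 percent, of headroom, in line with the approximately 20 percent derating described above.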

Graphs detailing CPU utilization, memory utilization, and network throughput during peak session load are also included. Given adequate storage capability, CPU utilization determined the maximum VM density per server.

Disk performance metrics were captured for adapter queue depth, throughput rate (IOPS), and read/write rate in Mbps using the ESXTOP data collector.

The charts below present our recommended maximum Login VSI Medium workload on a single server, with average and index response times below 2000 ms and maximum response times under 3000 ms.


Figure 10. 200 User Horizon View 5.3 Desktop Sessions on VMware ESXi 5.5 below 3000ms

Figure 11. 200 User Cisco UCS C240 M3 Memory Utilization - Test Phase


Figure 12. 200 User Cisco UCS C240 M3 VIC 1225 Network Utilization - Test Phase

Figure 13. 200 User Cisco UCS C240 M3 CPU Core Utilization - Test Phase


Figure 14. 200 User Cisco UCS C240 M3 CPU Utilization - Test Phase

Figure 15. 200 User Cisco UCS C240 M3 Disk Throughput - Test Phase


Figure 16. 200 User Cisco UCS C240 M3 Read and Write Rate - Test Phase

Figure 17. 200 User Cisco UCS C240 M3 Adapter Queue Depth - Test Phase

Conclusion

With the Cisco and VMware solution, the proof-of-concept cost barrier for mid-size to large organizations has been shattered. For smaller organizations, there is now an affordable entry-point solution for desktop virtualization well within their reach for the first time.

The performance results discussed in this document demonstrate that the Cisco UCS C240 M3 Rack Server running VMware vSphere 5.5 and VMware Horizon View 5.3 with an LSI Nytro MegaRAID NMR 8110-4i controller and 12 600GB 10K RPM SAS drives provides impressive infrastructure and virtual desktop hosting density. The Cisco UCS solution delivered 200 concurrent virtual desktops with acceptable user response times and low bandwidth usage with an all direct-attached storage (DAS) configuration.


References

1. Cisco UCS C-Series Rack Servers

http://www.cisco.com/en/US/products/ps10265/

http://www.cisco.com/en/US/partner/products/ps12370/index.html

http://www.cisco.com/en/US/products/ps12571/index.html

2. LSI MegaRAID Controllers

http://www.lsi.com/downloads/Public/Nytro/docs/DB07-000134-06_LSI_Nytro_MegaRAID %28NMR r1.7%29_Application_Acceleration_RelNotes.pdf

http://www.lsi.com/downloads/Public/Nytro/downloads/Nytro XM/Tech Pubs/LSI_Nytro_MegaRAID_Application_Acceleration_Card_QIG.pdf

3. VMware Horizon View 5.3 Reference Documents

http://www.vmware.com/files/pdf/view/vmware-horizon-view-best-practices-performance-study.pdf

https://www.vmware.com/support/view53/doc/horizon-view-53-release-notes.html

https://www.vmware.com/support/pubs/view_pubs.html

4. View 5 with PCoIP Network Optimization Guide

http://www.vmware.com/files/pdf/view/VMware-View-5-PCoIP-Network-Optimization-Guide.pdf

5. Virtual Desktop - Windows 7 Optimization Guide:

http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf

6. VMware vSphere ESXi and vCenter Server 5 Documentation:

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

LSI, the LSI & Design logo, the Storage. Networking. Accelerated. tagline, Nytro and MegaRAID are trademarks or registered trademarks of LSI Corporation in the United

States and/or other countries. All other brand or product names may be trademarks or registered trademarks of their respective companies.

LSI Corporation reserves the right to make changes to any products and services herein at any time without notice. LSI does not assume any responsibility or liability arising

out of the application or use of any product or service described herein, except as expressly agreed to in writing by LSI; nor does the purchase, lease, or use of a product or

service from LSI convey a license under any patent rights, copyrights, trademark rights, or any other of the intellectual property rights of LSI or of third parties.

Printed in USA C11-731563-00 04/14

