
A Dell EMC Reference Architecture

Dell EMC XC Series Hyper-Converged Appliances for VMware Horizon – Reference Architecture Integration of VMware Horizon with XC Series appliance clusters.

Dell Engineering July 2017


Revisions

Date – Description

March 2015 – Initial release

September 2015 – Document overhaul, XC730 + vGPU Addition

April 2016 – New platforms, new artwork, new GRID architecture

September 2016 – Updates to Endpoints section

December 2016 – Updates to disk configurations and other minor updates

January 2017 – Minor updates

April 2017 – Added test results to document and information for all-flash configurations

July 2017 – Added details for the Dell EMC XC430 Xpress Hyper-Converged platform

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this

publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2015 – 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its

subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA [8/3/2017] [Reference Architecture]

Dell believes the information in this document is accurate as of its publication date. The information is subject to change without notice.


Table of contents

Revisions
Executive summary

1 Introduction
1.1 Objective
1.2 What’s new

2 Solution architecture overview
2.1 Introduction
2.2 Nutanix cloud platform overview
2.3 Distributed Storage Fabric
2.4 App Mobility Fabric
2.5 Nutanix Acropolis architecture
2.6 Nutanix Hyper-Converged infrastructure
2.7 Nutanix all-flash
2.8 Dell EMC XC VDI solution architecture
    Networking
    XC Series – Enterprise solution pods
    XC Xpress – SMB solution

3 Hardware components
3.1 Network
    Dell Networking S3048 (1Gb ToR switch)
    Dell Networking S4048 (10Gb ToR switch)
3.2 Dell EMC XC Series Hyper-Converged appliances
    Dell EMC XC630
    Dell EMC XC730XD (high capacity)
    Dell EMC XC730 (graphics)
    Dell EMC XC430 (ROBO)
    Dell EMC XC6320 (high density)
3.3 Dell EMC XC430 Xpress Hyper-Converged Appliance
    Dell EMC XC430 Xpress (SMB)
3.4 GPUs
    NVIDIA Tesla GPUs
3.5 Dell Wyse Endpoints
    Wyse 3030 LT Thin Client (ThinOS, ThinLinux)
    Wyse 3040 Thin Client (ThinOS, ThinLinux)
    Wyse 5040 AIO Thin Client (ThinOS)
    Wyse 5060 Thin Client (ThinOS, ThinLinux, WES7P, WIE10)
    Wyse 7020 Thin Client (WES 7/7P/8, WIE10, ThinLinux)
    Wyse 7040 Thin Client (WES7P, WIE10)

4 Software components
4.1 VMware
    VMware Horizon 7
    VMware vSphere 6
4.2 Microsoft RDSH
4.3 NVIDIA GRID vGPU
    vGPU profiles

5 Solution architecture for Horizon
5.1 Management role configuration
    VMware Horizon management role requirements
    RDSH on vSphere
    NVIDIA GRID license server requirements
    SQL databases
    DNS
5.2 Storage architecture overview
    Nutanix containers
5.3 Virtual networking
    vSphere
5.4 Scaling guidance
5.5 Solution high availability
5.6 Dell Wyse Datacenter for Horizon communication flow

6 Solution performance and testing
6.1 Summary
6.2 Test and performance analysis methodology
    Testing process
    Resource monitoring
    Resource utilization
6.3 Test configuration details
    Compute VM configurations
    Platform configurations
6.4 Test results and analysis
    XC430 B5
    XC430 Xpress

7 Related resources

Acknowledgements

About the authors


Executive summary

This document provides the reference architecture for integrating Dell EMC XC Series Hyper-Converged

Appliances and VMware Horizon software to create virtual application and virtual desktop environments. The

available appliance choices include the Dell EMC XC Series and the Dell EMC XC Xpress.

The Dell EMC XC Series is a hyper-converged solution that combines storage, compute, networking, and

virtualization using industry-proven Dell EMC PowerEdge™ server technology and Nutanix software. By

combining the hardware resources from each appliance into a shared-everything model for simplified

operations, improved agility, and greater flexibility, Dell EMC and Nutanix together deliver simple, cost-

effective solutions for enterprise workloads.

Dell EMC XC Xpress brings the power and simplicity of the XC Series to smaller IT organizations with a simple, reliable, and affordable “out-of-the-box” infrastructure solution. This solution combines industry-proven PowerEdge server technology and Nutanix Xpress software from each appliance into a shared-everything model for simplified operations, improved agility, and greater flexibility. XC Xpress is a cost-effective solution for the SMB market yet includes enterprise-level features such as deduplication, compression, cloning, and tiering.

VMware Horizon provides a complete end-to-end virtualization solution delivering Microsoft Windows

virtual desktops or server-based hosted shared sessions to users on a wide variety of endpoint devices.


1 Introduction This document addresses the architecture design, configuration and implementation considerations for the

key components required to deliver virtual desktops or shared sessions via VMware Horizon® on VMware

vSphere® 6 running on the Dell EMC XC Series Hyper-Converged infrastructure platform.

For deployment guides (Dell EMC XC Xpress only), manuals, support info, tools, and videos, please visit:

www.Dell.com/xcseriesmanuals.

1.1 Objective Relative to delivering the virtual desktop environment, the objectives of this document are to:

Define the detailed technical design for the solution.

Define the hardware requirements to support the design.

Define the constraints which are relevant to the design.

Define relevant risks, issues, assumptions and concessions – referencing existing ones where

possible.

Provide a breakdown of the design into key elements such that the reader receives an incremental or

modular explanation of the design.

Provide solution scaling and component selection guidance.

1.2 What’s new Added details for the Dell EMC XC430 Xpress Hyper-Converged platform.


2 Solution architecture overview

2.1 Introduction The XC Series delivers an out-of-the-box infrastructure solution for virtual desktops that eliminates the high

cost, variable performance, and extensive risk of conventional solutions. The Nutanix™ hyper-converged

infrastructure is a turnkey solution that comes ready to run your VDI solution of choice. The Nutanix platform’s

unique architecture allows enterprises to scale up to tens of thousands of virtual desktops in a linear fashion,

providing customers with a simple path to enterprise deployment with the agility of public cloud providers.

Additionally, the XC Xpress platform brings many of the Nutanix enterprise features to SMB customers in an

affordable and reliable solution.

2.2 Nutanix cloud platform overview Nutanix delivers a hyper-converged infrastructure solution purpose-built for virtualization and cloud

environments. This solution brings the performance and economic benefits of hyper-converged architecture to

the enterprise through the Nutanix enterprise cloud platform, which is composed of two product families—

Nutanix Acropolis and Nutanix Prism.

Attributes of this solution include:

Storage and compute resources hyper-converged on x86 servers.

System intelligence located in software.

Data, metadata, and operations fully distributed across entire cluster of x86 servers.

Self-healing to tolerate and adjust to component failures.

API-based automation and rich analytics.

Simplified one-click upgrade.

Native file services for hosting user profiles.

Native backup and disaster recovery solutions.

Nutanix Acropolis can be broken down into three foundational components: the Distributed Storage Fabric

(DSF), the App Mobility Fabric (AMF), and AHV. Prism provides one-click infrastructure management for

virtual environments running on Acropolis. Acropolis is hypervisor agnostic, supporting two third-party

hypervisors—ESXi and Hyper-V—in addition to the native Nutanix hypervisor, AHV.


2.3 Distributed Storage Fabric The Distributed Storage Fabric (DSF) delivers enterprise data storage as an on-demand service by employing

a highly distributed software architecture. Nutanix eliminates the need for traditional SAN and NAS solutions

while delivering a rich set of VM-centric software-defined services. Specifically, the DSF handles the data

path of such features as snapshots, clones, high availability, disaster recovery, deduplication, compression,

and erasure coding.

The DSF operates via an interconnected network of Controller VMs (CVMs) that form a Nutanix cluster, and

every node in the cluster has access to data from shared SSD, HDD, and cloud resources. The hypervisors

and the DSF communicate using the industry-standard NFS, iSCSI, and SMB3 protocols.

NOTE: Erasure coding is not available with the XC Xpress platform. Refer to the XC Xpress vs XC Series

section for more details.

2.4 App Mobility Fabric The Acropolis App Mobility Fabric (AMF) is the Nutanix virtualization solution that allows apps and data to

move across different supported hypervisors and from Nutanix systems to public clouds. When virtual

machines can move between hypervisors (for example, between VMware ESXi and AHV), administrators can

host production and dev/test environments concurrently on different hypervisors and shift workloads between

them as needed. AMF is implemented via a distributed, scale-out service that runs inside the CVM on every

node within a Nutanix cluster.

2.5 Nutanix Acropolis architecture Acropolis does not rely on traditional SAN or NAS storage or expensive storage network interconnects. It

combines highly dense storage and server compute (CPU and RAM) into a single platform building block.

Each building block is based on industry-standard Intel processor technology and delivers a unified, scale-out,

shared-nothing architecture with no single points of failure.

The Nutanix solution has no LUNs to manage, no RAID groups to configure, and no complicated storage

multipathing to set up. All storage management is VM-centric, and the DSF optimizes I/O at the VM virtual

disk level. There is one shared pool of storage that includes flash-based SSDs for high performance and low latency, and HDDs for affordable capacity. The file system automatically tiers data across different types of storage

devices using intelligent data placement algorithms. These algorithms make sure that the most frequently

used data is available in memory or in flash for optimal performance. Organizations can also choose flash-

only storage (not available with XC Xpress) for the fastest possible storage performance. The following figure

illustrates the data I/O path for a write in a hybrid model with a mix of SSD and HDD disks.


With the DSF, a CVM writes data to local flash memory for fast acknowledgment; the CVM also handles read

operations locally for reduced latency and fast data delivery.

The figure below shows an overview of the Nutanix architecture including user VMs, the Nutanix storage

CVM, and its local disk devices. Each CVM connects directly to the local storage controller and its associated

disks. Using local storage controllers on each host localizes access to data through the DSF, thereby

reducing storage I/O latency. The DSF replicates writes synchronously to at least one other Nutanix node in

the system, distributing data throughout the cluster for resiliency and availability. Replication factor 2 (RF2)

creates two identical data copies in the cluster, and replication factor 3 (RF3) creates three identical data

copies (RF3 not available with XC Xpress). Having a local storage controller on each node ensures that

storage performance as well as storage capacity increase linearly with each node addition.
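The effect of the replication factor on usable capacity can be approximated with a short calculation. The sketch below is a simplified model only: it ignores metadata, CVM reservations and capacity optimizations such as compression and deduplication, and the node count and per-node raw capacity are example values, not a recommendation.

```python
def usable_capacity_tib(nodes, raw_tib_per_node, replication_factor):
    """Rough usable capacity for a Nutanix cluster.

    Every write is kept replication_factor times (RF2 = 2 copies,
    RF3 = 3 copies), so logical capacity is roughly raw / RF.
    Real clusters also reserve space for metadata, the CVM "home"
    partitions and rebuild headroom, which this model ignores.
    """
    raw = nodes * raw_tib_per_node
    return raw / replication_factor


if __name__ == "__main__":
    # Example only: a 4-node hybrid cluster with ~8 TiB raw per node.
    for rf in (2, 3):  # RF3 is not available on XC Xpress
        cap = usable_capacity_tib(nodes=4, raw_tib_per_node=8.0, replication_factor=rf)
        print(f"RF{rf}: ~{cap:.1f} TiB logically usable")
```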

NOTE: The XC430 Xpress platform currently only allows for one SSD and three HDDs for a total of four disks. Refer to the XC Xpress – SMB Solution section for more details.


Local storage for each Nutanix node in the architecture appears to the hypervisor as one large pool of shared

storage. This allows the DSF to support all key virtualization features. Data localization maintains

performance and quality of service (QoS) on each host, minimizing the effect noisy VMs have on their

neighbors’ performance. This functionality allows for large, mixed-workload clusters that are more efficient

and more resilient to failure when compared to traditional architectures with standalone, shared, and dual-

controller storage arrays.

When VMs move from one hypervisor node to another, such as during live migration and high availability, the now

local CVM serves a newly migrated VM’s data. When reading old data (stored on the now remote CVM) the

local CVM forwards the I/O request to the remote CVM. All write I/O occurs locally. The DSF detects that I/O

is occurring from a different node and migrates the data to the local node in the background, allowing for all

read I/O to now be served locally. The data only migrates when there have been enough reads and writes

from the remote node to minimize network utilization.

The next figure shows how data follows the VM as it moves between hypervisor nodes.

Nutanix Shadow Clones delivers distributed, localized caching of virtual disks to improve performance in multi-reader scenarios, such as desktop virtualization using VMware Horizon or Microsoft Remote Desktop Session Host (RDSH). With Shadow Clones, the CVM actively monitors virtual disk access trends. If requests originate from more than two remote CVMs, as well as the local CVM, and all of the requests are read I/O, the virtual disk is marked as immutable. Once the disk has been marked immutable, it is cached locally by each CVM, so read operations are then satisfied by local storage.
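The Shadow Clone trigger described above can be expressed as a simple rule. The snippet below is an illustrative model of that documented behavior, not Nutanix code; the data structure and field names are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class VDiskAccess:
    """Observed access pattern for one virtual disk, as seen by a CVM."""
    remote_reader_cvms: int       # distinct remote CVMs issuing requests
    local_cvm_reading: bool       # local CVM also reading the disk
    all_requests_are_reads: bool  # no write I/O observed


def should_mark_immutable(access: VDiskAccess) -> bool:
    """Return True when the disk qualifies for Shadow Clone caching.

    Per the behavior described above: requests from more than two
    remote CVMs as well as the local CVM, all of them read I/O.
    Once immutable, each CVM can cache the disk locally.
    """
    return (
        access.remote_reader_cvms > 2
        and access.local_cvm_reading
        and access.all_requests_are_reads
    )


print(should_mark_immutable(VDiskAccess(3, True, True)))   # True -> cache locally
print(should_mark_immutable(VDiskAccess(3, True, False)))  # False -> writes present
```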


2.6 Nutanix Hyper-Converged infrastructure The Nutanix hyper-converged infrastructure provides an ideal combination of high-performance compute with localized storage to meet any demand. True to this capability, this reference architecture has been validated as optimized for the VDI use case.

The next figure shows a high-level example of the relationship between an XC node, storage pool, container,

pod and relative scale out:

NOTE: The XC Xpress platform is limited to a maximum of four nodes per cluster and a maximum of two

clusters per customer. Refer to the XC Xpress vs XC Series section for more details.

This solution allows organizations to deliver virtualized or remote desktops and applications through a single

platform and support end users with access to all of their desktops and applications in a single place.


2.7 Nutanix all-flash Nutanix supports an all-flash configuration where all local disks are SSDs; the storage pool is therefore composed entirely of SSDs for both capacity and performance. The previously described features and

functionality for management, data optimization and protection, and disaster recovery are still present. With

all-flash, hot data is stored on SSDs local to each VM. If capacity needs exceed the local SSD storage,

capacity on other nodes is automatically and transparently utilized. Compared to traditional all-flash shared

storage arrays, XC Series all-flash clusters won’t have the typical performance limitations due to network and

storage controller bottlenecks. Benefits for VDI include faster provisioning times, low latency, ability to handle

extremely high application I/O needs, and accommodating bursts of activity such as boot storms and anti-

virus scans.

Note: All-Flash is not available for the XC Xpress platform.

2.8 Dell EMC XC VDI solution architecture

Networking The networking layer consists of the 10Gb Dell Networking S4048 utilized to build a leaf/spine architecture

with robust 1Gb switching in the S3048 for iDRAC connectivity.

Designed for true linear scaling, XC Series leverages a Leaf-Spine network architecture. A Leaf-Spine

architecture consists of two network tiers: an L2 Leaf and an L3 Spine based on 40GbE and non-blocking

switches. This architecture maintains consistent performance without any throughput reduction due to a static

maximum of three hops from any node in the network.

The following figure shows a design of a scale-out Leaf-Spine network architecture that provides 20Gb active

throughput from each node to its Leaf and scalable 80Gb active throughput from each Leaf to Spine switch

providing scale from 3 XC nodes to thousands without any impact to available bandwidth:


NOTE: The XC Xpress platform is limited to a maximum of four nodes per cluster and a maximum of two

clusters per customer. Refer to the Dell EMC XC Xpress Hyper-Converged Appliance Deployment Guide for

cabling details.
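The throughput figures above (20Gb active from each node to its Leaf and 80Gb active from each Leaf to the Spine) can be sanity-checked with a quick oversubscription calculation. The sketch below is illustrative only; the node counts are examples, and real designs must also account for CVM replication traffic and failure scenarios.

```python
def leaf_oversubscription(nodes_per_leaf, node_uplink_gbps=20, leaf_spine_gbps=80):
    """Ratio of downstream node bandwidth to upstream spine bandwidth per leaf.

    Values <= 1.0 mean the leaf is non-oversubscribed for traffic leaving
    the rack; higher values mean contention is possible. Defaults reflect
    the 20Gb-per-node and 80Gb-per-leaf figures in this design.
    """
    downstream = nodes_per_leaf * node_uplink_gbps
    return downstream / leaf_spine_gbps


# Example only: how the ratio grows as XC nodes are added under one leaf.
for nodes in (3, 4, 8, 16):
    print(f"{nodes:>2} nodes per leaf -> {leaf_oversubscription(nodes):.1f}:1")
```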

XC Series – Enterprise solution pods The compute, management and storage layers are converged into each XC Series appliance server in the cluster, hosting VMware vSphere. The recommended boundaries of an individual pod are based on the number of nodes supported within a given hypervisor cluster (64 nodes for vSphere 6), although the Nutanix DSF cluster can scale much larger.
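Because vSphere 6 caps a pod's hypervisor cluster at 64 nodes, the number of compute nodes and clusters needed for a target desktop count is straightforward to estimate. The sketch below assumes an illustrative per-node desktop density; actual densities depend on the platform configuration and workload (see the test results section).

```python
import math


def pods_required(total_desktops, desktops_per_node, max_nodes_per_cluster=64):
    """Estimate compute nodes and vSphere clusters (pod boundaries) needed.

    desktops_per_node is workload-dependent; max_nodes_per_cluster reflects
    the vSphere 6 cluster limit referenced above.
    """
    nodes = math.ceil(total_desktops / desktops_per_node)
    clusters = math.ceil(nodes / max_nodes_per_cluster)
    return nodes, clusters


# Example only: 10,000 desktops at an assumed 130 desktops per node.
nodes, clusters = pods_required(10_000, desktops_per_node=130)
print(f"~{nodes} compute nodes across {clusters} vSphere cluster(s)")
```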

Dell recommends that the VDI management infrastructure nodes be separated from the compute resources

onto their own appliance cluster with a common storage namespace shared between them based on NFS for

vSphere. At a minimum, one node for VDI management is required, and this is expanded based on the size of the pod. The

designations ds_rdsh, ds_compute, ds_vgpu and ds_mgmt as seen below are logical DSF containers used to

group VMs of a particular type.

Using distinct containers allows features and attributes, such as compression and deduplication, to be applied

to groups of VMs that share similar characteristics. Compute hosts can be used interchangeably for Horizon

or RDSH as required. Distinct clusters should be built for management and compute hosts for HA,

respectively, to plan predictable failover, scale and load across the pod. The NFS namespace can be shared

across multiple hypervisor clusters adding disk capacity and performance for each distinct cluster.
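Containers such as ds_compute are typically created through Prism, but the same operation can be scripted. The sketch below is a minimal example against the Prism REST API; the endpoint path, payload field names and port are assumptions to verify against the API Explorer for your AOS release, and the address and credentials are placeholders.

```python
# Minimal sketch: create a DSF container (e.g. ds_compute) with compression
# enabled via the Prism REST API. Endpoint path, field names and port are
# assumptions; cluster address and credentials are placeholders.
import requests

PRISM = "https://prism-cluster.example.local:9440"  # placeholder address
AUTH = ("admin", "password")                         # placeholder credentials

payload = {
    "name": "ds_compute",             # container used to group compute VMs
    "compression_enabled": True,      # attributes are applied per container
    "compression_delay_in_secs": 0,
}

resp = requests.post(
    f"{PRISM}/PrismGateway/services/rest/v2.0/storage_containers",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Container request accepted:", resp.status_code)
```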


High-performance graphics capabilities complement the solution and can be added at any time to any new or

existing XC Series vSphere deployment. Simply add the appropriate number of XC730 appliances to your

DSF cluster and provide a superior user experience with vSphere 6 and NVIDIA GRID vGPU technology. Any

XC Series appliance can be utilized for the non-graphics compute or management portions of this solution.

NOTE: Hybrid storage shown. It is possible to configure the XC730 for all-flash storage but the chassis

must be fully populated requiring 16 x SSDs.


XC Xpress – SMB solution Ideally suited for SMB customers, the Dell EMC XC430 Xpress Hyper-Converged appliance couples the Dell

PowerEdge server technology with the Nutanix Xpress Software edition to deliver many of the XC Series

enterprise features in a simple, reliable, and affordable solution.

As with the XC Series solution, the compute, management and storage layers are converged into each XC

Xpress appliance server in the cluster, hosting VMware vSphere, Microsoft Hyper-V, or Nutanix AHV

hypervisor. A minimum of 3 nodes is required for an XC Xpress cluster to function properly with a maximum

of 4 nodes allowed. In order to maintain high availability within a cluster, 4 nodes must be used. Additionally,

there is a 2 cluster limit per customer.
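These XC Xpress sizing rules (three nodes minimum, four maximum, four required for high availability within a cluster, and a limit of two clusters per customer) can be encoded as a quick design check; the function below simply restates those rules.

```python
def check_xpress_design(nodes_per_cluster, clusters, require_ha=True):
    """Validate an XC Xpress design against the limits described above."""
    issues = []
    if not 3 <= nodes_per_cluster <= 4:
        issues.append("XC Xpress clusters must have 3 or 4 nodes")
    if require_ha and nodes_per_cluster < 4:
        issues.append("4 nodes are required to maintain HA within a cluster")
    if clusters > 2:
        issues.append("XC Xpress is limited to 2 clusters per customer")
    return issues or ["design fits XC Xpress limits"]


print(check_xpress_design(nodes_per_cluster=3, clusters=1))
print(check_xpress_design(nodes_per_cluster=4, clusters=3, require_ha=True))
```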

For this solution, Dell recommends that the VDI compute and management infrastructure be installed on the

same appliance cluster with a common storage namespace shared between them based on NFS. The

designations ds_rdsh, ds_compute, and ds_mgmt as seen below are logical DSF containers used to group

VMs of a particular type. Using distinct containers allows features and attributes, such as compression and

deduplication, to be applied to groups of VMs that share similar characteristics.

Compute nodes can be used for VDI desktops or RDSH as required. Distinct clusters can be used to provide

VDI for different company departments or for single-site disaster recovery. Additionally, Microsoft Azure backup is pre-bundled with XC Xpress as a flexible, pay-as-you-grow cloud backup solution offered only by Dell EMC. Customers can also opt for backups to Amazon Web Services (AWS) or Microsoft Azure via the Cloud Connect functionality of the Nutanix platform.

NOTE: The XC430 Xpress does not support GPU cards. Graphics intensive workloads should not be used

with this solution.


2.8.3.1 XC Xpress vs XC Series The table below highlights the similarities and differences between the XC Xpress and XC Series platforms.

With XC Xpress, customers can choose to self-deploy the cluster using deployment guides, videos, and tools

or have Dell EMC Services perform the deployment.

NOTE: There is no upgrade path from XC Xpress to XC Series. You cannot mix XC Xpress and XC Series

nodes within the same cluster.


3 Hardware components

3.1 Network The following sections contain the core network components for the Dell Wyse Datacenter solutions. General uplink cabling guidance to consider in all cases: TwinAx is very cost-effective for short 10Gb runs; for longer runs, use fiber with SFPs.

Dell Networking S3048 (1Gb ToR switch) Accelerate applications in high-performance environments with a low-latency top-of-rack (ToR) switch that

features 48 x 1GbE and 4 x 10GbE ports, a dense 1U design and up to 260Gbps performance. The S3048-

ON also supports Open Network Installation Environment (ONIE) for zero-touch installation of alternate

network operating systems.

Model Features Options Uses

Dell Networking S3048-ON

48 x 1000BaseT 4 x 10Gb SFP+

Non-blocking, line-rate

performance

260Gbps full-duplex bandwidth

131 Mpps forwarding rate

Redundant hot-swap PSUs & fans

1Gb connectivity

VRF-lite, Routed VLT, VLT Proxy Gateway

User port stacking (up to 6 switches)

Open Networking Install Environment (ONIE)


Dell Networking S4048 (10Gb ToR switch) Optimize your network for virtualization with a high-density, ultra-low-latency ToR switch that features 48 x

10GbE SFP+ and 6 x 40GbE ports (or 72 x 10GbE ports in breakout mode) and up to 720Gbps performance.

The S4048-ON also supports ONIE for zero-touch installation of alternate network operating systems.

Model Features Options Uses

Dell Networking S4048-ON

48 x 10Gb SFP+ 6 x 40Gb QSFP+

Non-blocking, line-rate

performance

1.44Tbps bandwidth

720 Gbps forwarding rate

VXLAN gateway support

Redundant hot-swap PSUs & fans

10Gb connectivity

72 x 10Gb SFP+ ports with breakout cables

User port stacking (up to 6 switches)

Open Networking Install Environment (ONIE)

For more information on the S3048, S4048 switches and Dell Networking, please visit: LINK

3.2 Dell EMC XC Series Hyper-Converged appliances Consolidate compute and storage into a single chassis with XC Series Hyper-converged appliances, powered

by Nutanix software. XC Series appliances install quickly, integrate easily into any data center, and can be

deployed for multiple virtualized workloads including desktop virtualization, test and development, and private

cloud projects. For general purpose virtual desktop and virtual application solutions, Dell recommends the

XC630 or XC730XD. For workloads requiring graphics the XC730 with NVIDIA GRID can be integrated into

any environment running any other XC Series appliance. For small Remote Office – Branch Office scenarios


we offer the XC430 and for high density requirements the 4-node in 2U XC6320. For more information on the

Dell EMC XC Series, please visit: Link

The XC Series portfolio, optimized for VDI, has been designed and arranged in six top-level overarching

configurations which apply to the available physical platforms showcased below.

A3 configuration is perfect for small-scale, POC or low-density cost-conscious environments.

Available in the XC630, XC730XD, XC430 and XC6320.

B5 configuration is geared toward larger scale general purpose workloads, balancing performance

and cost-effectiveness. Available in the XC630, XC730XD, XC430 and XC6320.

B5-AF configuration is the same as B5 with all-flash storage instead of a hybrid mix of HDD and SSD.

Available in the XC430, XC6320, and XC630.

C7 is the premium configuration offering an abundance of high performance and tiered capacity

where user density is maximized. Available in the XC630, XC730XD, XC430 and XC6320.

C7-AF is the same as the C7 configuration with all-flash storage instead of hybrid. Available with the

XC630.

C7-GFX for high-performance graphical workloads is available in the XC730.
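The platform availability listed above can be captured as a simple lookup table, which is convenient when scripting configuration or bill-of-materials checks; the mapping below only restates the combinations from this list.

```python
# Which XC platforms each VDI-optimized configuration is available on,
# as listed above.
CONFIG_PLATFORMS = {
    "A3":     {"XC630", "XC730XD", "XC430", "XC6320"},
    "B5":     {"XC630", "XC730XD", "XC430", "XC6320"},
    "B5-AF":  {"XC430", "XC6320", "XC630"},
    "C7":     {"XC630", "XC730XD", "XC430", "XC6320"},
    "C7-AF":  {"XC630"},
    "C7-GFX": {"XC730"},
}


def is_valid_combination(config: str, platform: str) -> bool:
    """True if the requested configuration is offered on that platform."""
    return platform in CONFIG_PLATFORMS.get(config, set())


print(is_valid_combination("C7-GFX", "XC730"))   # True
print(is_valid_combination("B5-AF", "XC730XD"))  # False (hybrid only)
```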


Dell EMC XC630 The Dell EMC XC630 is a 1U platform with a broad range of configuration options. Each appliance comes

equipped with dual CPUs, 10 to 20 cores, and up to 512GB of high-performance RAM by default. For the

hybrid disk configuration, a minimum of six disks is required in each host, 2 x SSD for the hot tier (Tier1) and

4 x HDD for the cold tier (Tier2) which can be expanded up to eight HDDs as required. For the all-flash disk

configuration, the chassis must be fully populated with 10 x SSDs. The 64GB SATADOM boots the

hypervisor and Nutanix Controller VM while the PERC H330 is configured in pass-through mode connecting

to the SSDs and HDDs. 64GB is consumed on each of the first two SSDs for the Nutanix “home”. All

HDD/SSD disks are presented to the Nutanix CVM running locally on each host which contributes to the

clustered DSF pool. Each platform can be outfitted with SFP+ or BaseT NICs.
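The hybrid disk rules above (2 x SSD for Tier1, 4 to 8 x HDD for Tier2, and 64GB consumed on each of the first two SSDs for the Nutanix home) translate into a simple per-node raw capacity estimate. The drive sizes in the sketch below are examples only, and replication overhead (RF2/RF3) is not included.

```python
def xc_node_raw_tiers_gb(ssd_count, ssd_gb, hdd_count, hdd_gb, home_gb=64):
    """Raw Tier1/Tier2 capacity per node before replication and formatting.

    64GB is consumed on each of the first two SSDs for the Nutanix "home",
    so that amount is subtracted from the hot tier. Cluster-level usable
    capacity is further reduced by the replication factor (RF2/RF3).
    """
    tier1 = ssd_count * ssd_gb - min(ssd_count, 2) * home_gb
    tier2 = hdd_count * hdd_gb
    return tier1, tier2


# Example only: an XC630 hybrid node with 2 x 480GB SSD and 6 x 2TB HDD.
hot, cold = xc_node_raw_tiers_gb(ssd_count=2, ssd_gb=480, hdd_count=6, hdd_gb=2000)
print(f"Tier1 (hot): ~{hot} GB raw, Tier2 (cold): ~{cold} GB raw")
```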

3.2.1.1 XC630 hybrid disk storage


3.2.1.2 XC630 all-flash disk storage

Dell EMC XC730XD (high capacity) The Dell EMC XC730XD is a 2U platform that can be configured with 24 x 2.5” disks or 12 x 3.5” disks to

serve a broad range of capacity requirements. Each appliance comes equipped with dual CPUs, 10 to 20

cores, and up to 512GB of high-performance RAM by default. A minimum of six disks is required in each host,

2 x SSD for the hot tier (Tier1) and 4 x HDD for the cold tier (Tier2) which can be expanded as required up to

a possible 45TB per node raw. The 64GB SATADOM boots the hypervisor and Nutanix Controller VM while

the PERC H330 is configured in pass-through mode connecting to the SSDs and HDDs. 64GB is consumed

on each of the first two SSDs for the Nutanix “home”. All HDD/SSD disks are presented to the Nutanix CVM

running locally on each host which contributes to the clustered DSF pool. Each platform can be outfitted with

SFP+ or BaseT NICs.


Dell EMC XC730 (graphics) The Dell EMC XC730 is a 2U platform that can be configured with dual NVIDIA GRID cards using vGPU to

supply high-performance virtualized graphics. Each appliance comes equipped with dual 18-core CPUs and

256GB of high-performance RAM by default supporting up to 64 users per node. A minimum of six disks is

required in each host, 2 x SSD for the hot tier (Tier1) and 4 x HDD for the cold tier (Tier2) which can be

expanded as required. The 64GB SATADOM boots the hypervisor and Nutanix Controller VM while the PERC

H330 is configured in pass-through mode connecting to the SSDs and HDDs. 64GB is consumed on each of

the first two SSDs for the Nutanix “home”. All HDD/SSD disks are presented to the Nutanix CVM running

locally on each host which contributes to the clustered DSF pool. Each platform can be outfitted with SFP+ or

BaseT NICs. Solutions can be designed around the XC730 entirely which can be purchased with or without

GRID cards. Additionally, the XC730 can be used to augment other non-graphics enabled deployments based

on a differing XC platform such as with the XC630 or XC730XD.


NOTE: Hybrid storage shown. It is possible to configure the XC730 for all-flash storage but the chassis

must be fully populated requiring 16 x SSDs.


Dell EMC XC430 (ROBO) The Dell EMC XC430 is a 1U platform that offers short depth (24”) space savings perfect for the Remote

Office/ Branch Office use case. Each appliance comes equipped with single or dual CPUs, 10 to 14 cores,

and up to 384GB of high-performance RAM by default. Four disks are required in each host regardless of whether it is configured for hybrid or all-flash storage. For a hybrid storage configuration, 2 x SSD is used for the hot tier

(Tier1) and 2 x HDD is used for the cold tier (Tier2). For an all-flash configuration, all four disks per node will

be SSDs. The 64GB SATADOM boots the hypervisor and Nutanix Controller VM while the PERC H330 is

configured in pass-through mode connecting to the SSDs and HDDs. 64GB is consumed on each of the first

two SSDs for the Nutanix “home”. All HDD/SSD disks are presented to the Nutanix CVM running locally on

each host which contributes to the clustered DSF pool. Each platform can be outfitted with SFP+ or BaseT

NICs.

3.2.4.1 XC430 hybrid disk storage


3.2.4.2 XC430 all-flash disk storage


Dell EMC XC6320 (high density) The Dell EMC XC6320 is a 4-node in 2U platform offering maximum user density per rack unit. Each of the

four nodes within a single 2U appliance comes equipped with dual CPUs, 10 to 14 cores, and up to 512GB of

high-performance RAM by default. Each node is equipped with six disks. For a hybrid storage configuration,

2 x SSD disks are used for the hot tier (Tier1) and 4 x HDD disks are used for the cold tier (Tier2). For an all-

flash configuration, all six disks per node will be SSDs. The 64GB SATADOM boots the hypervisor and

Nutanix Controller VM while the LSI2008 HBA connects the SSDs and HDDs. 64GB is consumed on each of

the first two SSDs for the Nutanix “home”. All HDD/SSD disks are presented to the Nutanix CVM running

locally on each host which contributes to the clustered DSF pool. Each platform is outfitted with SFP+ NICs.


3.2.5.1 XC6320 hybrid disk storage


3.2.5.2 XC6320 all-flash disk storage


3.3 Dell EMC XC430 Xpress Hyper-Converged Appliance The XC430 Xpress combines industry-leading PowerEdge server technology with the proven Nutanix Xpress hyper-converged platform to natively consolidate compute and storage into a single all-in-one infrastructure solution. Optimized for smaller environments, the Dell EMC XC430 Xpress is deployed in a space-saving 3- to 4-node cluster to radically simplify onsite infrastructure. It is powerful enough for general-purpose virtual desktop and virtual application solutions, yet simple enough to be managed by a single person. The Dell EMC XC430

Xpress combines many of the enterprise features found on the XC Series, but has been optimized for smaller

environments to make it consumer friendly (refer to the XC Xpress vs XC Series section for details). For

more information on the Dell EMC XC430 Xpress, please visit: Link

The following configuration has been validated to be optimized for the VDI use case on the Dell EMC XC430

Xpress platform.

Dell EMC XC430 Xpress (SMB) The Dell EMC XC430 Xpress Hyper-Converged appliance is a 1U platform that offers short depth (24”) space

savings perfect for SMB customers and smaller environments. Each appliance comes equipped with dual 10-core CPUs and up to 384GB of high-performance RAM by default. Four disks are required in each host with 1

x SSD used for the hot tier (Tier1) and 3 x HDD used for the cold tier (Tier2).

The 64GB SATADOM boots the hypervisor and Nutanix Controller VM while the PERC H330 is configured in

pass-through mode connecting to the SSDs and HDDs. 64GB is consumed on each of the first two SSDs for

the Nutanix “home”. All HDD/SSD disks are presented to the Nutanix CVM running locally on each host which

contributes to the clustered DSF pool. The XC430 Xpress platform has built-in (LOM) quad-port 1GbE

networking but can also be outfitted with additional 10Gb SFP+ or BaseT NICs. Dell recommends 10Gb

networking for the cluster traffic.



3.4 GPUs

NVIDIA Tesla GPUs Accelerate your most demanding enterprise data center workloads with NVIDIA® Tesla® GPU accelerators.

Scientists can now crunch through petabytes of data up to 10x faster than with CPUs in applications ranging

from energy exploration to deep learning. Plus, Tesla accelerators deliver the horsepower needed to run

bigger simulations faster than ever before. For enterprises deploying VDI, Tesla accelerators are perfect for

accelerating virtual desktops. GPUs can only be used with the Dell EMC XC730 platform.

3.4.1.1 NVIDIA Tesla M10 The NVIDIA® Tesla® M10 is a dual-slot 10.5 inch PCI Express Gen3

graphics card featuring four mid-range NVIDIA Maxwell™ GPUs and

a total of 32GB GDDR5 memory per card (8GB per GPU). The

Tesla® M10 doubles the number of H.264 encoders over the

NVIDIA® Kepler™ GPUs and improves encoding quality, which

enables richer colors, preserves more details after video encoding,

and results in a high-quality user experience.

The NVIDIA® Tesla® M10 GPU accelerator works with NVIDIA

GRID™ software to deliver the industry’s highest user density for virtualized desktops and applications. It

supports up to 64 desktops per GPU card (up to 128 desktops per server) and gives businesses the power to

deliver great graphics experiences to all of their employees at an affordable cost.
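The 64-desktops-per-card figure follows from dividing the card's 32GB of framebuffer by the per-VM vGPU profile size. The sketch below does that arithmetic; the 0.5GB profile is used purely as an example (profiles are covered in the NVIDIA GRID vGPU section), and real limits also depend on licensing and host resources.

```python
def desktops_per_card(card_fb_gb, profile_fb_gb, cards_per_server=2):
    """Maximum vGPU-enabled desktops supported by framebuffer alone.

    Real limits also depend on the GRID software edition, scheduler and
    host CPU/RAM, so treat this purely as the framebuffer-bound ceiling.
    """
    per_card = int(card_fb_gb // profile_fb_gb)
    return per_card, per_card * cards_per_server


# Tesla M10: 32GB per card; a 0.5GB profile yields the 64-per-card figure.
per_card, per_server = desktops_per_card(card_fb_gb=32, profile_fb_gb=0.5)
print(f"{per_card} desktops per card, {per_server} per server with two cards")
```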

Specs – Tesla M10

Number of GPUs: 4 x NVIDIA Maxwell™ GPUs
Total CUDA cores: 2560 (640 per GPU)
GPU clock: Idle: 405MHz / Base: 1033MHz
Total memory size: 32GB GDDR5 (8GB per GPU)
Max power: 225W
Form factor: Dual slot (4.4” x 10.5”)
Aux power: 8-pin connector
PCIe x16 (Gen3)
Cooling solution: Passive


3.4.1.2 NVIDIA Tesla M60 The NVIDIA® Tesla® M60 is a dual-slot 10.5 inch PCI Express Gen3

graphics card featuring two high-end NVIDIA Maxwell™ GPUs and a

total of 16GB GDDR5 memory per card. This card utilizes NVIDIA

GPU Boost™ technology which dynamically adjusts the GPU clock

to achieve maximum performance. Additionally, the Tesla® M60

doubles the number of H.264 encoders over the NVIDIA® Kepler™

GPUs.

The NVIDIA® Tesla® M60 GPU accelerator works with NVIDIA

GRID™ software to provide the industry’s highest user performance for virtualized workstations, desktops,

and applications. It allows enterprises to virtualize almost any application (including professional graphics

applications) and deliver them to any device, anywhere.

Specs – Tesla M60

Number of GPUs: 2 x NVIDIA Maxwell™ GPUs
Total CUDA cores: 4096 (2048 per GPU)
Base clock: 899 MHz (Max: 1178 MHz)
Total memory size: 16GB GDDR5 (8GB per GPU)
Max power: 300W
Form factor: Dual slot (4.4” x 10.5”)
Aux power: 8-pin connector
PCIe x16 (Gen3)
Cooling solution: Passive/ Active


3.5 Dell Wyse Endpoints The following Dell Wyse clients will deliver a superior VMware Horizon user experience and are the

recommended choices for this solution.

Wyse 3030 LT Thin Client (ThinOS, ThinLinux) The Wyse 3030 LT thin client offers an excellent user experience within a cost-effective

offering, and features the virus-resistant and extremely efficient Wyse ThinOS (with or

without PCoIP), for environments in which security is critical—there’s no attack surface to

put your data at risk. The 3030 LT delivers outstanding performance based on its dual core

Intel Celeron 1.58GHz processor, and delivers smooth multimedia, bi-directional audio and

flash playback. Boot up in just seconds and log in securely to almost any network. In

addition, the Wyse 3030 LT is designed for smooth playback of high bit-rate HD video and

graphics within a very compact form factor, with very efficient energy consumption and low

heat emissions. Using less than 7 watts of electricity, the Wyse 3030 LT’s small size enables

discrete mounting options: under desks, to walls, and behind monitors, creating cool workspaces in every

respect. For more information, please visit: Link

Wyse 3040 Thin Client (ThinOS, ThinLinux) The Wyse 3040 is the industry’s first entry-level Intel x86 quad-core thin

client, powered by a quad-core Intel Atom 1.44GHz processor,

delivering robust connectivity options with a choice of Wyse ThinOS or

ThinLinux operating systems. The Wyse 3040 is Dell’s lightest, smallest

and most power-efficient thin client – it consumes 3.3 Watts in idle state

– and offers superb performance and manageability for task and basic

productivity users. Despite its small size, the 3040 includes all typical interfaces such as four USB ports

including USB 3.1, two DisplayPort interfaces and wired and wireless options. It is highly manageable as it

can be monitored, maintained, and serviced remotely via Wyse Device Manager (WDM) or Wyse

Management Suite. For more information, please visit: Link

Wyse 5040 AIO Thin Client (ThinOS) The Dell Wyse 5040 AIO all-in-one (AIO) thin client runs ThinOS

(with or without PCoIP), has a 21.5" Full HD display and offers

versatile connectivity options for use in a wide range of industries.

With four USB 2.0 ports, Gigabit Ethernet and integrated dual band

Wi-Fi options, users can link to their peripherals and quickly connect

to the network while working with processing-intensive, graphics-

rich applications. Built-in speakers, a camera and a microphone

make video conferencing and desktop communication simple and

easy. It even supports a second attached display for those who

need a dual monitor configuration. A simple one-cord design and

out-of-box automatic setup makes deployment effortless while

remote management from a simple file server, Wyse Device Manager (WDM), or Wyse Management Suite

can help lower your total cost of ownership as you grow from just a few thin clients to tens of thousands. For

more information, please visit: Link


Wyse 5060 Thin Client (ThinOS, ThinLinux, WES7P, WIE10) The Wyse 5060 offers high performance and reliability, featuring all the security

and management benefits of Dell thin clients. It comes with flexible OS options:

ThinOS (with or without PCoIP), ThinLinux, Windows Embedded Standard 7P

(WES7P) or Windows 10 IoT Enterprise (WIE10). Designed for knowledge workers

demanding powerful virtual desktop performance, and support for unified

communications solutions like Skype for Business, the Wyse 5060 thin client

delivers the flexibility, efficiency and security organizations require for their cloud

environments. It is powered by a quad-core AMD 2.4GHz processor, supports dual

4K (3840x2160) monitors and provides multiple connectivity options with six USB

ports, two of which are USB 3.0 for high-speed peripherals, as well as two

DisplayPort connectors, wired networking or wireless 802.11 a/b/g/n/ac. The Wyse

5060 can be monitored, maintained, and serviced remotely via Wyse Device

Manager (WDM), cloud-based Wyse Management Suite or Microsoft SCCM (5060

with Windows versions). For more information, please visit: Link

Wyse 7020 Thin Client (WES 7/7P/8, WIE10, ThinLinux) The versatile Dell Wyse 7020 thin client is a powerful endpoint platform for virtual

desktop environments. It is available with Windows Embedded Standard 7/7P/8

(WES), Windows 10 IoT Enterprise (WIE10), Wyse ThinLinux operating systems

and it supports a broad range of fast, flexible connectivity options so that users can

connect their favorite peripherals while working with processing-intensive, graphics-

rich applications. This 64-bit thin client delivers a great user experience and support

for local applications while ensuring security. Designed to provide a superior user

experience, ThinLinux features broad broker support including Citrix Receiver,

VMware Horizon and Amazon WorkSpaces, and support for unified communication

platforms including Skype for Business, Lync 2013 and Lync 2010. For additional security, ThinLinux also

supports single sign-on and VPN. With a powerful quad core AMD G Series APU in a compact chassis with

dual-HD monitor support, the Wyse 7020 thin client delivers stunning performance and display capabilities

across 2D, 3D and HD video applications. Its silent, diskless and fanless design helps reduce power usage to

just a fraction (it only consumes about 15 watts) of that used in traditional desktops. Wyse Device Manager

(WDM) helps lower the total cost of ownership for large deployments and offers remote enterprise-wide

management that scales from just a few to tens of thousands of cloud clients. For more information, please

visit Link

Wyse 7040 Thin Client (WES7P, WIE10) The Wyse 7040 is a high-powered, ultra-secure thin client

running Windows Embedded Standard 7P (WES7P) or Windows

10 IoT Enterprise (WIE10) operating systems. Equipped with

Intel i5/i7 processors, it delivers extremely high graphical display

performance (up to three displays via display-port daisy-chaining,

with 4K resolution available on a single monitor) for seamless access to the most demanding applications.

The Wyse 7040 is compatible with both data center hosted and client-side virtual desktop environments and

is compliant with all relevant U.S. Federal security certifications including OPAL compliant hard-drive options,


VPAT/Section 508, NIST BIOS, Energy-Star and EPEAT. Wyse enhanced WES7P OS provides additional

security features such as BitLocker. The Wyse 7040 offers a high level of connectivity including dual NIC, 6 x

USB3.0 ports and an optional second network port, with either copper or fiber SFP interface. Wyse 7040

devices are highly manageable through Intel vPRO, Wyse Device Manager (WDM), Microsoft System Center

Configuration Manager (SCCM) and Dell Command Configure (DCC). For more information, please visit:

Link

Enhanced Security

Note that all the above thin clients running Windows Embedded Standard 7 or Windows 10 IoT can be

protected against viruses, ransomware and zero-day threats by installing Dell Threat Defense, a revolutionary anti-malware software solution that uses artificial intelligence and mathematical modeling and is not signature-based. Threat Defense prevents 99% of executable malware, far above the average 50% of threats identified by the top anti-virus solutions. It doesn’t need a constant internet connection or frequent updates (only about twice a year), uses only 1-3% of CPU and has only a ~40MB memory footprint, making it an ideal choice to protect thin clients without impacting end-user productivity.

If you also want to protect virtual desktops against such malware and threats with similar success, Dell recommends using Dell Endpoint Security Suite Enterprise, a full suite featuring advanced threat prevention and data-centric encryption using an on-premises management console. This suite can also be used to protect physical PCs, Mac OS X systems and Windows Server.


4 Software components

4.1 VMware

VMware Horizon 7 The solution is based on VMware Horizon which provides a complete end-to-end solution delivering Microsoft

Windows virtual desktops to users on a wide variety of endpoint devices. Virtual desktops are dynamically

assembled on demand, providing users with pristine, yet personalized, desktops each time they log on.

VMware Horizon provides a complete virtual desktop delivery system by integrating several distributed

components with advanced configuration tools that simplify the creation and real-time management of the

virtual desktop infrastructure. For the complete set of details, please see the Horizon View resources page at

http://www.vmware.com/products/horizon-view/resources.html

The core Horizon components include:

Connection Server (VCS) – Installed on servers in the data center, this component brokers client connections. The VCS authenticates users, entitles users by mapping them to desktops and/or pools, establishes secure connections from clients to desktops, supports single sign-on, sets and applies policies, acts as a DMZ security server for connections from outside the corporate firewall, and more.

Client – Installed on endpoints. This is the software used to create connections to View desktops and can be run from tablets; Windows, Linux or Mac PCs and laptops; thin clients; and other devices.

Portal – A web portal for accessing links to download the full View clients. With the HTML Access feature enabled, a View desktop can also be run inside a supported browser.

Agent – Installed on all VMs, physical machines and Terminal Service servers that are used as a

source for View desktops. On VMs the agent is used to communicate with the View client to provide

services such as USB redirection, printer support and more.

Horizon Administrator – A web portal that provides administrative functions such as deploying and managing View desktops and pools, setting and controlling user authentication, and more.

Composer – This software service can be installed standalone or on the vCenter server and enables the deployment and creation of linked-clone desktop pools (also called non-persistent desktops).

vCenter Server – This server provides centralized management and configuration of the entire virtual desktop and host infrastructure. It facilitates configuration, provisioning and management services, and is installed on a Windows Server host (which can be a VM).

Transfer Server – Manages data transfers between the data center and View desktops that are checked out on end users' devices in offline mode. This server is required to support desktops that run the View client with Local Mode options, and performs replication and syncing of offline images.


VMware vSphere 6 The vSphere hypervisor also known as ESXi is a bare-metal

hypervisor that installs directly on top of your physical server

and partitions it into multiple virtual machines. Each virtual

machine shares the same physical resources as the other

virtual machines and they can all run at the same time. Unlike

other hypervisors, all management functionality of vSphere is

done through remote management tools. There is no

underlying operating system, reducing the install footprint to

less than 150MB.

VMware vSphere 6 includes three major layers: Virtualization,

Management and Interface. The Virtualization layer includes

infrastructure and application services. The Management layer is central for configuring, provisioning and

managing virtualized environments. The Interface layer includes the vSphere web client.

Throughout the solution, all VMware and Microsoft best practices and prerequisites for core services are

adhered to (NTP, DNS, Active Directory, etc.). The vCenter 6 VM used in the solution is a single Windows

Server 2012 R2 VM or vCenter 6 virtual appliance, residing on a host in the management layer. SQL server is

a core component of the Windows version of vCenter and is hosted on another VM also residing in the

management layer. It is recommended that all additional Horizon components be installed in a distributed

architecture, one role per server VM.

4.2 Microsoft RDSH The RDSH servers can exist as physical or virtualized instances of Windows Server 2012 R2. A minimum of one and a maximum of ten virtual servers are installed per physical compute host. Since RDSH instances are easily added to an existing Horizon stack, the only additional components required are one or more Windows Server OS instances added to the Horizon site.

The total number of required virtual RDSH servers is dependent on application type, quantity and user load.

Deploying RDSH virtually and in a multi-server farm configuration increases overall farm performance,

application load balancing as well as farm redundancy and resiliency.

4.2.1.1 NUMA architecture considerations Best practices and testing have shown that aligning the RDSH design to the physical Non-Uniform Memory Access (NUMA) architecture of the server CPUs results in optimal performance. NUMA alignment ensures that a CPU can access its own directly connected RAM banks faster than the banks of the adjacent processor, which are accessed via the Quick Path Interconnect (QPI). The same is true of VMs with large vCPU assignments: best performance is achieved when VMs receive their vCPU allotment from a single physical NUMA node. Ensuring that your virtual RDSH servers do not span physical NUMA nodes delivers the greatest possible performance benefit.

The general guidance for RDSH NUMA-alignment on the Dell EMC XC appliance is as follows:


4.2.1.2 A3 and XC430 Xpress NUMA alignment The A3 configuration has 10 physical cores per CPU, or 20 logical cores with Hyper-Threading active, giving a total of 40 consumable cores per appliance. The Nutanix CVM receives its vCPU allotment from the first physical CPU, so configuring the RDSH VMs as shown below ensures that no NUMA spanning occurs, which could lower performance. In the example below, three RDSH VMs are configured along with the Nutanix CVM, all receiving 8 vCPUs, which results in a total oversubscription rate of 1.25x per host.

4.2.1.3 B5 NUMA alignment The B5 configuration has 14 physical cores per CPU, or 28 logical cores with Hyper-Threading active, giving a total of 56 consumable cores per appliance. The Nutanix CVM receives its vCPU allotment from the first physical CPU, so configuring the RDSH VMs as shown below ensures that no NUMA spanning occurs, which could lower performance. In the example below, five RDSH VMs are configured along with the Nutanix CVM, all receiving 8 vCPUs, which results in a total oversubscription rate of 1.17x per host.


4.2.1.4 C7 NUMA alignment The C7 configuration has 20 physical cores per CPU, or 40 logical cores with Hyper-Threading active, giving a total of 80 consumable cores per appliance. The Nutanix CVM receives its vCPU allotment from the first physical CPU, so configuring the RDSH VMs as shown below ensures that no NUMA spanning occurs, which could lower performance. In the example below, nine RDSH VMs are configured along with the Nutanix CVM, all receiving 8 vCPUs, which results in a total oversubscription rate of 2x per host.
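To make the alignment rule above easy to sanity-check before deployment, the short sketch below tallies the per-host 8-vCPU allocations for the A3/XC430 Xpress, B5 and C7 examples against the physical cores available on a single NUMA node (one socket). The core counts mirror the configurations described above; the function and variable names are illustrative only and are not part of any Dell EMC or Nutanix tooling.

```python
# Minimal sketch: verify that no VM's vCPU allocation exceeds a single NUMA
# node (one physical CPU), per the RDSH alignment guidance above.
# Core counts reflect the A3, B5 and C7 examples; adjust for your hardware.

CONFIGS = {
    "A3/XC430 Xpress": {"cores_per_socket": 10, "rdsh_vms": 3},
    "B5":              {"cores_per_socket": 14, "rdsh_vms": 5},
    "C7":              {"cores_per_socket": 20, "rdsh_vms": 9},
}

VCPUS_PER_VM = 8   # both the Nutanix CVM and each RDSH VM receive 8 vCPUs


def check_numa_alignment(cores_per_socket: int, rdsh_vms: int) -> None:
    """Print whether each VM fits inside one NUMA node, plus the vCPU total."""
    vms = ["Nutanix CVM"] + [f"RDSH VM {i + 1}" for i in range(rdsh_vms)]
    for vm in vms:
        fits = VCPUS_PER_VM <= cores_per_socket
        status = "OK (fits in one NUMA node)" if fits else "WARNING: spans NUMA nodes"
        print(f"  {vm}: {VCPUS_PER_VM} vCPUs -> {status}")
    total_vcpus = len(vms) * VCPUS_PER_VM
    print(f"  Total vCPUs allocated: {total_vcpus} "
          f"({cores_per_socket * 2} physical / {cores_per_socket * 4} logical cores per host)\n")


for name, cfg in CONFIGS.items():
    print(name)
    check_numa_alignment(cfg["cores_per_socket"], cfg["rdsh_vms"])
```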

4.3 NVIDIA GRID vGPU NVIDIA GRID™ vGPU™ brings the full benefit of NVIDIA hardware-accelerated graphics to virtualized solutions. This technology provides exceptional graphics performance for virtual desktops equivalent to local PCs when sharing a GPU among multiple users.

GRID vGPU is the industry's most advanced technology for sharing true GPU hardware acceleration between multiple virtual desktops—without compromising the graphics experience. Application features and compatibility are exactly the same as they would be at the user's desk.

With GRID vGPU technology, the graphics commands of each virtual machine are passed directly to the GPU, without translation by the hypervisor. This allows the GPU hardware to be time-sliced to deliver outstanding shared virtualized graphics performance.

NOTE: GPU cards are not supported with the XC430 Xpress platform.


Image provided courtesy of NVIDIA Corporation, Copyright NVIDIA Corporation

vGPU profiles Virtual Graphics Processing Unit, or GRID vGPU™, is technology developed by NVIDIA® that enables

hardware sharing of graphics processing for virtual desktops. This solution provides a hybrid shared mode

allowing the GPU to be virtualized while the virtual machines run the native NVIDIA video drivers for better

performance. Thanks to OpenGL support, VMs have access to more graphics applications. When utilizing

vGPU, the graphics commands from virtual machines are passed directly to the GPU without any hypervisor

translation. Every virtual desktop has dedicated graphics memory so they always have the resources they

need to launch and run their applications at full performance. All this is done without sacrificing server

performance and so is truly cutting edge.

The combination of Dell servers, NVIDIA GRID vGPU™ technology and NVIDIA GRID™ cards enable high-end graphics users to experience high fidelity graphics quality and performance, for their favorite applications at a reasonable cost.

For more information about NVIDIA GRID vGPU, please visit: LINK

The number of users per appliance is determined by the number of GPU cards in the system (maximum of two), the vGPU profile used for each GPU in a card (the Tesla M10 has four GPUs per card and the Tesla M60 has two), and the GRID license type. The same profile must be used for all desktops on a single GPU, but profiles can differ across GPUs within a single card.


NVIDIA® Tesla® M10 GRID vGPU Profiles:

| Card | vGPU Profile | Graphics Memory (Frame Buffer) | Virtual Display Heads | Maximum Resolution | Max VMs per GPU | Max VMs per Card | Max VMs per Server (2 cards) |
|-----------|--------|-------|---|-----------|----|----|-----|
| Tesla M10 | M10-8Q | 8GB   | 4 | 4096x2160 | 1  | 4  | 8   |
| Tesla M10 | M10-4Q | 4GB   | 4 | 4096x2160 | 2  | 8  | 16  |
| Tesla M10 | M10-2Q | 2GB   | 4 | 4096x2160 | 4  | 16 | 32  |
| Tesla M10 | M10-1Q | 1GB   | 2 | 4096x2160 | 8  | 32 | 64  |
| Tesla M10 | M10-0Q | 512MB | 2 | 2560x1600 | 16 | 64 | 128 |
| Tesla M10 | M10-1B | 1GB   | 4 | 2560x1600 | 8  | 32 | 64  |
| Tesla M10 | M10-0B | 512MB | 2 | 2560x1600 | 16 | 64 | 128 |
| Tesla M10 | M10-8A | 8GB   | 1 | 1280x1024 | 1  | 4  | 8   |
| Tesla M10 | M10-4A | 4GB   | 1 | 1280x1024 | 2  | 8  | 16  |
| Tesla M10 | M10-2A | 2GB   | 1 | 1280x1024 | 4  | 16 | 32  |
| Tesla M10 | M10-1A | 1GB   | 1 | 1280x1024 | 8  | 32 | 64  |

(64-bit Linux guest support per profile is listed in the guest OS and licensing table that follows.)
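To translate the table above into a per-server sizing estimate, the sketch below multiplies the per-GPU VM limit for a chosen profile by the GPUs per card and the cards per server. The per-GPU values come from the M10 table above; the dictionary, constants and function names are illustrative only.

```python
# Minimal sketch: estimate vGPU-enabled VM capacity per server for a chosen
# Tesla M10 profile, based on the per-GPU limits in the table above.

M10_VMS_PER_GPU = {
    "M10-8Q": 1, "M10-4Q": 2, "M10-2Q": 4, "M10-1Q": 8, "M10-0Q": 16,
    "M10-1B": 8, "M10-0B": 16, "M10-8A": 1, "M10-4A": 2, "M10-2A": 4, "M10-1A": 8,
}

GPUS_PER_M10_CARD = 4      # the Tesla M10 is a quad-GPU board
MAX_CARDS_PER_SERVER = 2   # per the platform guidance above


def vms_per_server(profile: str, cards: int = MAX_CARDS_PER_SERVER) -> int:
    """Return the maximum graphics-enabled VMs for one server."""
    return M10_VMS_PER_GPU[profile] * GPUS_PER_M10_CARD * cards


# Example: two M10 cards with the M10-2Q profile on every GPU.
print(vms_per_server("M10-2Q"))   # 4 VMs/GPU x 4 GPUs/card x 2 cards = 32
```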


*NOTE: Supported guest operating systems listed as of the time of this writing. Please refer to NVIDIA’s

documentation for latest supported operating systems.

| Card | vGPU Profile | Guest VM OS: Win 64-bit* | Guest VM OS: Linux* | GRID License Required |
|-----------|--------|---|---|--------------------------|
| Tesla M10 | M10-8Q | ● | ● | GRID Virtual Workstation |
| Tesla M10 | M10-4Q | ● | ● | GRID Virtual Workstation |
| Tesla M10 | M10-2Q | ● | ● | GRID Virtual Workstation |
| Tesla M10 | M10-1Q | ● | ● | GRID Virtual Workstation |
| Tesla M10 | M10-0Q | ● | ● | GRID Virtual Workstation |
| Tesla M10 | M10-1B | ● |   | GRID Virtual PC          |
| Tesla M10 | M10-0B | ● |   | GRID Virtual PC          |
| Tesla M10 | M10-8A | ● |   | GRID Virtual Application |
| Tesla M10 | M10-4A | ● |   | GRID Virtual Application |
| Tesla M10 | M10-2A | ● |   | GRID Virtual Application |
| Tesla M10 | M10-1A | ● |   | GRID Virtual Application |

Supported Guest VM Operating Systems*

| Windows | Linux |
|-------------------------|--------------------------|
| Windows 7 (32/64-bit)   | RHEL 6.6 & 7             |
| Windows 8.x (32/64-bit) | CentOS 6.6 & 7           |
| Windows 10 (32/64-bit)  | Ubuntu 12.04 & 14.04 LTS |
| Windows Server 2008 R2  |                          |
| Windows Server 2012 R2  |                          |
| Windows Server 2016     |                          |


NVIDIA® Tesla® M60 GRID vGPU Profiles:

| Card | vGPU Profile | Graphics Memory (Frame Buffer) | Virtual Display Heads | Maximum Resolution | Max VMs per GPU | Max VMs per Card | Max VMs per Server (2 cards) |
|-----------|--------|-------|---|-----------|----|----|----|
| Tesla M60 | M60-8Q | 8GB   | 4 | 4096x2160 | 1  | 2  | 4  |
| Tesla M60 | M60-4Q | 4GB   | 4 | 4096x2160 | 2  | 4  | 8  |
| Tesla M60 | M60-2Q | 2GB   | 4 | 4096x2160 | 4  | 8  | 16 |
| Tesla M60 | M60-1Q | 1GB   | 2 | 4096x2160 | 8  | 16 | 32 |
| Tesla M60 | M60-0Q | 512MB | 2 | 2560x1600 | 16 | 32 | 64 |
| Tesla M60 | M60-1B | 1GB   | 4 | 2560x1600 | 8  | 16 | 32 |
| Tesla M60 | M60-0B | 512MB | 2 | 2560x1600 | 16 | 32 | 64 |
| Tesla M60 | M60-8A | 8GB   | 1 | 1280x1024 | 1  | 2  | 4  |
| Tesla M60 | M60-4A | 4GB   | 1 | 1280x1024 | 2  | 4  | 8  |
| Tesla M60 | M60-2A | 2GB   | 1 | 1280x1024 | 4  | 8  | 16 |
| Tesla M60 | M60-1A | 1GB   | 1 | 1280x1024 | 8  | 16 | 32 |

(64-bit Linux guest support per profile is listed in the guest OS and licensing table that follows.)


*NOTE: Supported guest operating systems listed as of the time of this writing. Please refer to NVIDIA’s

documentation for latest supported operating systems.

| Card | vGPU Profile | Guest VM OS: Win 64-bit* | Guest VM OS: Linux* | GRID License Required |
|-----------|--------|---|---|--------------------------|
| Tesla M60 | M60-8Q | ● | ● | GRID Virtual Workstation |
| Tesla M60 | M60-4Q | ● | ● | GRID Virtual Workstation |
| Tesla M60 | M60-2Q | ● | ● | GRID Virtual Workstation |
| Tesla M60 | M60-1Q | ● | ● | GRID Virtual Workstation |
| Tesla M60 | M60-0Q | ● | ● | GRID Virtual Workstation |
| Tesla M60 | M60-1B | ● |   | GRID Virtual PC          |
| Tesla M60 | M60-0B | ● |   | GRID Virtual PC          |
| Tesla M60 | M60-8A | ● |   | GRID Virtual Application |
| Tesla M60 | M60-4A | ● |   | GRID Virtual Application |
| Tesla M60 | M60-2A | ● |   | GRID Virtual Application |
| Tesla M60 | M60-1A | ● |   | GRID Virtual Application |

Supported Guest VM Operating Systems*

| Windows | Linux |
|-------------------------|--------------------------|
| Windows 7 (32/64-bit)   | RHEL 6.6 & 7             |
| Windows 8.x (32/64-bit) | CentOS 6.6 & 7           |
| Windows 10 (32/64-bit)  | Ubuntu 12.04 & 14.04 LTS |
| Windows Server 2008 R2  |                          |
| Windows Server 2012 R2  |                          |
| Windows Server 2016     |                          |


4.3.1.1 GRID vGPU licensing and architecture NVIDIA® GRID vGPU™ is offered as a licensable feature on Tesla® GPUs. vGPU can be licensed and

entitled using one of the three following software editions.

| NVIDIA GRID Virtual Applications | NVIDIA GRID Virtual PC | NVIDIA GRID Virtual Workstation |
|---|---|---|
| For organizations deploying RDSH solutions. Designed to deliver Windows applications at full performance. | For users who want a virtual desktop, but also need a great user experience leveraging PC applications, browsers, and high-definition video. | For users who need to use professional graphics applications with full performance on any device, anywhere. |
| Up to 2 displays @ 1280x1024 resolution supporting virtualized Windows applications | Up to 4 displays @ 2560x1600 resolution supporting Windows desktops and NVIDIA Quadro features | Up to 4 displays @ 4096x2160* resolution supporting Windows or Linux desktops, NVIDIA Quadro, CUDA**, OpenCL** & GPU pass-through |

*0Q profiles only support up to 2560x1600 resolution

**CUDA and OpenCL are only supported with the M10-8Q, M10-8A, M60-8Q, or M60-8A profiles


The GRID vGPU Manager, installed on the hypervisor via a VIB, controls the vGPUs that can be assigned to guest VMs. A properly configured VM obtains a license from the GRID license server during the boot operation for the specified license level. The NVIDIA graphics driver running on the guest VM provides direct access to the assigned GPU. When the VM is shut down, it releases the license back to the server. If a vGPU-enabled VM is unable to obtain a license, it will run at full capability without the license, but users will be warned each time it tries and fails to obtain one.

(Image provided courtesy of NVIDIA Corporation, Copyright NVIDIA Corporation)


5 Solution architecture for Horizon

5.1 Management role configuration The Management role recommendations for the base solution are summarized below. Use data disks for role-

specific application files such as data, logs and IIS web files in the Management volume.

VMware Horizon management role requirements

| Role | vCPU | vRAM (GB) | NIC | OS vDisk Size (GB) | Location |
|-------------------|----------|-------|---------|--------|--------------|
| Nutanix CVM       | 8        | 16    | 2       | -      | (SATADOM)    |
| Connection Server | 4        | 8     | 1       | 40     | DSF: ds_mgmt |
| Primary SQL       | 4        | 8     | 1       | 40 + 200 | DSF: ds_mgmt |
| vCenter Appliance | 2        | 8     | 1       | 125    | DSF: ds_mgmt |
| Total             | 18 vCPUs | 40 GB | 5 vNICs | 405 GB | -            |

RDSH on vSphere The recommended number of RDSH VMs and their configurations on vSphere are summarized below based

on applicable hardware platform.

| Role | HW Config | VMs per host | vCPUs per VM | RAM (GB) | NIC | OS vDisk Size (GB) | Location |
|---------|-----|---|---|----|---|----|--------------|
| RDSH VM | A3* | 3 | 8 | 32 | 1 | 80 | DSF: ds_rdsh |
| RDSH VM | B5  | 5 | 8 | 32 | 1 | 80 | DSF: ds_rdsh |
| RDSH VM | C7  | 9 | 8 | 32 | 1 | 80 | DSF: ds_rdsh |

*Includes the XC430 Xpress platform.


NVIDIA GRID license server requirements When using NVIDIA Tesla cards, graphics-enabled VMs must obtain a license from a GRID license server on

your network to be entitled for vGPU. To configure, a virtual machine with the following specifications must

be added to a management host in addition to the management role VMs.

| Role | vCPU | vRAM (GB) | NIC | OS vDisk Size (GB) | Location |
|----------------------------|---|---|---|--------|--------------|
| NVIDIA GRID License Server | 2 | 4 | 1 | 40 + 5 | DSF: ds_mgmt |

GRID License server software can be installed on a system running the following operating systems:

Windows 7 (x32/x64)

Windows 8.x (x32/x64)

Windows 10 x64

Windows Server 2008 R2

Windows Server 2012 R2

Red Hat Enterprise 7.1 x64

CentOS 7.1 x64

Additional license server requirements:

A fixed (unchanging) IP address. The IP address may be assigned dynamically via DHCP or statically

configured, but must be constant.

At least one unchanging Ethernet MAC address, to be used as a unique identifier when registering

the server and generating licenses in NVIDIA’s licensing portal.

The date/time must be set accurately (all hosts on the same network should be time synchronized).

SQL databases The VMware databases are hosted by a single dedicated SQL Server 2012 VM in the Management layer.

Use caution during database setup to ensure that SQL data, logs, and TempDB are properly separated onto

their respective volumes. Create all Databases that are required for:

VMware Horizon

vCenter (if using Windows version)

Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue, in which case the databases should be separated into named instances. Enable auto-growth for each DB. Adhere to the best practices defined by Microsoft and VMware to ensure optimal database performance.


Align all disks to be used by SQL Server with a 1024K offset and format them with a 64K file allocation unit size (data, logs, and TempDB).

DNS DNS plays a crucial role in the environment, not only as the basis for Active Directory but also as the means of controlling access to the various VMware and Microsoft software components. All hosts, VMs, and consumable software components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace. Microsoft best practices and organizational requirements are to be adhered to.

During the initial deployment, consider eventual scaling and access to components that may live on one or more servers (SQL databases, Horizon services). Use CNAMEs and the round-robin DNS mechanism to provide a front-end "mask" for the back-end server actually hosting the service or data source.

5.1.5.1 DNS for SQL To access SQL data sources, either directly or via ODBC, a connection to the server name\instance name must be used. To simplify this process, as well as to protect against future scaling (HA), alias these connections in the form of DNS CNAMEs instead of connecting to server names directly. So instead of connecting to SQLServer1\<instance name> for every device that needs access to SQL, the preferred approach is to connect to <CNAME>\<instance name>.

For example, the CNAME "VDISQL" is created to point to SQLServer1. If a failure scenario were to occur and SQLServer2 needed to start serving data, we would simply change the CNAME in DNS to point to SQLServer2; no SQL client connections in the infrastructure would need to be touched.
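As an illustration of the pattern, the sketch below builds a SQL connection string against the CNAME alias rather than the physical server name, so a DNS change is all that is required after a failover. The alias, instance name, database name and ODBC driver name are hypothetical values chosen for the example (match the driver to whatever is installed), and the pyodbc module is assumed to be available.

```python
# Minimal sketch: connect to SQL through the DNS CNAME ("VDISQL") rather than
# the physical host name, so a failover only requires repointing the CNAME.
# Alias, instance, database and driver names below are illustrative only.
import pyodbc

SQL_ALIAS = r"VDISQL\HORIZONINSTANCE"   # CNAME\instance, not SQLServer1\instance
DATABASE = "HorizonEvents"

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"   # use the driver installed locally
    f"SERVER={SQL_ALIAS};"
    f"DATABASE={DATABASE};"
    "Trusted_Connection=yes;"
)
# Show which physical server is actually answering behind the alias.
print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])
conn.close()
```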

5.2 Storage architecture overview All Dell EMC XC Web Scale appliances come with two tiers of storage by default, SSD for performance and

HDD for capacity. Additionally, all-flash configurations are available utilizing only SSD disks. A single

common Software Defined Storage namespace is created across the Nutanix cluster and presented as either

NFS or SMB to the hypervisor of each host. This constitutes a storage pool and one should be sufficient per

cluster. Within this common namespace, logical containers are created to group VM files as well as control

the specific storage-related features that are desired to be enabled such as deduplication and compression.


Nutanix containers The following table outlines the recommended containers, their purpose, and their settings for this use case. Best practice is to use as few features as possible; only enable what is absolutely required. For example, if you are not experiencing disk capacity pressure then there is no need to enable Capacity Tier Deduplication. Enabling unnecessary services increases the resource demands of the Controller VMs. Capacity tier deduplication requires that CVMs be configured with 32GB RAM. Erasure Coding (EC-X) is recommended to increase the usable capacity of the cluster.

| Container | Purpose | Replication Factor | EC-X* | Perf Tier Deduplication | Capacity Tier Deduplication | Compression |
|------------|------------------|---|---------|---------|----------|----------|
| ds_compute | Desktop VMs      | 2 | Enabled | Enabled | Disabled | Disabled |
| ds_mgmt    | Mgmt Infra VMs   | 2 | Enabled | Enabled | Disabled | Disabled |
| ds_rdsh    | RDSH Server VMs  | 2 | Enabled | Enabled | Disabled | Disabled |
| ds_vgpu    | vGPU-enabled VMs | 2 | Enabled | Enabled | Disabled | Disabled |

*Minimum node requirement for Erasure Coding (EC-X): RF2 – 4 nodes; RF3 – 6 nodes.

NOTE: Erasure coding, RF3, and vGPU-enabled VMs are not available with XC430 Xpress.


5.3 Virtual networking The network configuration for the Dell EMC XC appliances utilizes a 10Gb converged infrastructure model.

All required VLANs will traverse 2 x 10Gb NICs configured in an active/active team. For larger scaling it is

recommended to separate the infrastructure management VMs from the compute VMs to aid in predictable

compute host scaling. The following outlines the VLAN requirements for the Compute and Management hosts

in this solution model:

Compute hosts

o Management VLAN: Configured for hypervisor infrastructure traffic – L3 routed via spine layer

o vMotion VLAN: Configured for vMotion traffic – L2 switched via leaf layer

o VDI VLAN: Configured for VDI session traffic – L3 routed via spine layer

Management hosts

o Management VLAN: Configured for hypervisor Management traffic – L3 routed via spine layer

o vMotion VLAN: Configured for vMotion traffic – L2 switched via leaf layer

o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via spine layer

An iDRAC VLAN is configured for all hardware management traffic – L3 routed via spine layer

vSphere The Management host network configuration consists of a standard vSwitch teamed with 2 x 10Gb physical

adapters assigned. The CVM connects to a private internal vSwitch as well as the standard external vSwitch.

All VMkernel service ports connect to the standard external vSwitch. All VDI infrastructure VMs connect

through the primary port group on the external vSwitch.

The Compute hosts are configured in the same basic manner with the desktop VMs connecting to the primary

port group on the external vSwitch.


5.4 Scaling guidance Each component of the solution architecture scales independently according to the desired number of

supported users. Additional appliance nodes can be added at any time to expand the Nutanix SDS pool in a

modular fashion. While there is no scaling limit of the Nutanix architecture itself, practicality might suggest

scaling pods based on the limits of hypervisor clusters (64 nodes for vSphere). Isolating management and

compute to their own HA clusters provides more flexibility with regard to scaling and functional layer

protection while stretching the DSF cluster namespace between them.

Another option is to design a large single contiguous NDFS namespace with multiple hypervisor clusters

within to provide single pane of glass management. For example, portrayed below is a 30,000 professional

user environment segmented by vSphere HA cluster and broker farm. Each farm compute instance is

segmented into an HA cluster with a hot standby node providing N+1, served by a dedicated pair of


management nodes in a separate HA cluster. This provides multiple broker farms with separated HA

protection while maintaining a single NDFS cluster across all nodes.
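As a rough illustration of how such a pod could be sized, the sketch below works out node and cluster counts for a target user population from a per-host density figure, adds an N+1 hot-standby node per compute cluster, and respects the 64-node vSphere cluster guidance mentioned above. The user count, density and hosts-per-cluster values are placeholders; substitute validated densities from the testing section and remember that dedicated management nodes are additional.

```python
# Minimal sketch: estimate compute nodes and vSphere clusters for a target
# user count, using N+1 per cluster and the 64-node vSphere cluster guidance.
import math

MAX_NODES_PER_VSPHERE_CLUSTER = 64


def size_pod(total_users: int, users_per_host: int,
             hosts_per_cluster: int = 32) -> dict:
    """Return a rough compute node/cluster estimate for one broker farm pod."""
    compute_nodes = math.ceil(total_users / users_per_host)
    cluster_size = min(hosts_per_cluster, MAX_NODES_PER_VSPHERE_CLUSTER)
    clusters = math.ceil(compute_nodes / cluster_size)
    spares = clusters                      # one hot-standby (N+1) per cluster
    return {
        "compute_nodes": compute_nodes,
        "spare_nodes": spares,
        "total_compute_nodes": compute_nodes + spares,
        "vsphere_clusters": clusters,
    }


# Example: 30,000 knowledge workers at the 110 users/host validated on B5.
print(size_pod(30_000, 110))
```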

NOTE: The XC Xpress platform can only be scaled to four nodes maximum per cluster and only two clusters

per customer are allowed.

The components are scaled either horizontally (by adding additional physical and virtual servers to

the server pools) or vertically (by adding virtual resources to the infrastructure)

Eliminate bandwidth and performance bottlenecks as much as possible

Allow future horizontal and vertical scaling with the objective of reducing the future cost of ownership

of the infrastructure.


| Component | Metric | Horizontal scalability | Vertical scalability |
|---|---|---|---|
| Virtual Desktop Host/Compute Servers | VMs per physical host | Additional hosts and clusters added as necessary | Additional RAM or CPU compute power |
| View Composer | Desktops per instance | Additional physical servers added to the Management cluster to deal with additional management VMs | Additional network and I/O capacity added to the servers |
| View Connection Servers | Desktops per instance | Additional physical servers added to the Management cluster to deal with additional management VMs | Additional VCS Management VMs |
| RDSH Servers | Desktops per instance | Additional virtual servers added as necessary | Additional physical servers to host virtual RDSH servers |
| VMware vCenter | VMs per physical host and/or ESX hosts per vCenter instance | Deploy additional servers and use linked mode to optimize management | Additional vCenter Management VMs |
| Database Services | Concurrent connections, responsiveness of reads/writes | Migrate databases to a dedicated SQL server and increase the number of management nodes | Additional RAM and CPU for the management nodes |
| File Services | Concurrent connections, responsiveness of reads/writes | Split user profiles and home directories between multiple file servers in the cluster. File services can also be migrated to the optional NAS device to provide high availability. | Additional RAM and CPU for the management nodes |


5.5 Solution high availability High availability (HA) is offered to protect each architecture solution layer, individually if desired. Following the

N+1 model, additional ToR switches are added to the Network

layer and stacked to provide redundancy as required, additional

compute and management hosts are added to their respective

layers, vSphere clustering is introduced in both the

management and compute layers, SQL is configured for

AlwaysOn or clustered and F5 is leveraged for load balancing.

The HA options provide redundancy for all critical components

in the stack while improving the performance and efficiency of

the solution as a whole.

Additional switches are added to the existing stack, thereby equally spreading each host's network connections across multiple switches.

Additional ESXi hosts added in the compute or management layers to provide N+1 protection.

Applicable VMware Horizon infrastructure server roles are duplicated and spread amongst

management host instances where connections to each are load balanced via the addition of F5

appliances.

SQL Server databases also are protected through the addition and configuration of an "AlwaysOn" Failover Cluster Instance or Availability Group.

Please refer to these links for more information: LINK1 and LINK2

NOTE: For the XC Xpress platform, four nodes must be used in a cluster to ensure high availability. If a cluster contains only the minimum three nodes and one fails, the cluster will be flagged in critical status with lost redundancy, requiring the failed node to be restored as quickly as possible for the cluster to function properly.


5.6 Dell Wyse Datacenter for Horizon communication flow


6 Solution performance and testing

6.1 Summary At the time of publication, these are the available density recommendations per appliance/node. Please refer

to the Platform Configurations section for hardware specifications.

NOTE: All-flash configurations yield the same user densities with our test methodology since processor and

memory resources are exhausted before storage resources are impacted.

User density summary

| Host Config* | Hypervisor | Broker & Provisioning | Workload | Template | User Density |
|--------------|------------|---------------------------|------------------|-------------------------------|-----|
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clones   | Task Worker      | Windows 10 x32 & Office 2016 | 150 |
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clones   | Knowledge Worker | Windows 10 x32 & Office 2016 | 110 |
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clones   | Power Worker     | Windows 10 x64 & Office 2016 | 85  |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clones | Task Worker      | Windows 10 x64 & Office 2016 | 125 |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clones | Knowledge Worker | Windows 10 x64 & Office 2016 | 85  |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clones | Power Worker     | Windows 10 x64 & Office 2016 | 65  |

* For B5 and C7 results, user densities are supported on the following platforms: XC430, XC630, XC730xd, and XC6320.

The detailed validation results and analysis of these reference designs are in the following sections.

6.2 Test and performance analysis methodology

Testing process In order to ensure the optimal combination of end-user experience (EUE) and cost-per-user, performance

analysis and characterization (PAAC) on Dell Wyse Datacenter solutions is carried out using a carefully

designed, holistic methodology that monitors both hardware resource utilization parameters and EUE during

load-testing.


Login VSI is currently the load-generation tool used during PAAC of Dell Wyse Datacenter solutions. Each

user load is tested against multiple runs. First, a pilot run to validate that the infrastructure is functioning and

valid data can be captured, and then, subsequent runs allowing correlation of data.

At different times during testing, the testing team will complete some manual “User Experience” Testing while

the environment is under load. This will involve a team member logging into a session during the run and

completing tasks similar to the User Workload description. While this experience will be subjective, it will help

provide a better understanding of the end user experience of the desktop sessions, particularly under high

load, and ensure that the data gathered is reliable.

6.2.1.1 Load generation Login VSI by Login Consultants is the de-facto industry standard tool for testing VDI environments and server-

based computing (RDSH environments). It installs a standard collection of desktop application software (e.g.

Microsoft Office, Adobe Acrobat Reader) on each VDI desktop; it then uses launcher systems to connect a

specified number of users to available desktops within the environment. Once the user is connected, a logon script configures the user environment and starts the test workload. Each launcher system can launch connections to a number of 'target' machines (i.e. VDI

desktops). The launchers and Login VSI environment are configured and managed by a centralized

management console.

Additionally, the following login and boot paradigm is used:

Users are logged in within a login timeframe of 1 hour. Exception to this login timeframe occurs when

testing low density solutions such as GPU/graphics based configurations. With those configurations,

users are logged on every 10-15 seconds.

All desktops are pre-booted in advance of logins commencing.

All desktops run an industry-standard anti-virus solution. Windows Defender is used for Windows 10

due to issues implementing McAfee.

6.2.1.2 Profiles and workloads It’s important to understand user workloads and profiles when designing a desktop virtualization solution in

order to understand the density numbers that the solution can support. At Dell, we use five workload / profile

levels, each of which is bound by specific metrics and capabilities with two targeted at graphics-intensive use

cases. We will present more detailed information in relation to these workloads and profiles below but first it is

useful to define the terms “profile” and “workload” as they are used in this document.

Profile: This is the configuration of the virtual desktop - number of vCPUs and amount of RAM

configured on the desktop (i.e. available to the user).

Workload: This is the set of applications used by performance analysis and characterization (PAAC)

of Dell Wyse Datacenter solutions (e.g. Microsoft Office applications, PDF Reader, Internet Explorer

etc.)

Load-testing on each profile is carried out using an appropriate workload that is representative of the relevant

use case and summarized in the table below:


Profile to workload mapping

| Profile Name | Workload |
|----------------------------------|-----------------------------------------------------|
| Task Worker                      | Login VSI Task Worker                               |
| Knowledge Worker                 | Login VSI Knowledge Worker                          |
| Power Worker                     | Login VSI Power Worker                              |
| Graphics LVSI Power + ProLibrary | Graphics – Login VSI Power Worker with ProLibrary   |
| Graphics LVSI Custom             | Graphics – LVSI Custom                              |

Login VSI workloads are summarized in the sections below. Further information for each workload can be

found on Login VSI’s website.

Login VSI Task Worker Workload

The Task Worker workload runs fewer applications than the other workloads (mainly Excel and Internet

Explorer with some minimal Word activity, Outlook, Adobe, copy and zip actions) and starts/stops the

applications less frequently. This results in lower CPU, memory and disk IO usage.

Login VSI Knowledge Worker Workload

The Knowledge Worker workload is designed for virtual machines with 2 vCPUs. This workload contains the following activities:

Outlook, browse messages.

Internet Explorer, browse different webpages and a YouTube style video (480p movie trailer) is

opened three times in every loop.

Word, one instance to measure response time, one instance to review and edit a document.

Doro PDF Printer & Acrobat Reader, the Word document is printed and exported to PDF.

Excel, a very large randomized sheet is opened.

PowerPoint, a presentation is reviewed and edited.

FreeMind, a Java based Mind Mapping application.

Various copy and zip actions.

Login VSI Power Worker Workload

The Power Worker workload is the most intensive of the standard workloads. The following activities are

performed with this workload:

Begins by opening four instances of Internet Explorer which remain open throughout the workload.

Begins by opening two instances of Adobe Reader which remain open throughout the workload.


There are more PDF printer actions in the workload as compared to the other workloads.

Instead of 480p videos a 720p and a 1080p video are watched.

The idle time is reduced to two minutes.

Various copy and zip actions.

Graphics - Login VSI Power Worker with ProLibrary workload

For lower performance graphics testing where lower amounts of graphics memory are allocated to each VM,

the Power worker + Pro Library workload is used. The Login VSI Pro Library is an add-on for the Power

worker workload which contains extra content and data files. The extra videos and web content of the Pro

Library utilizes the GPU capabilities without overwhelming the lower frame buffer assigned to the desktops.

This type of workload is typically used with high density vGPU and sVGA or other shared graphics

configurations.

Graphics – LVSI Custom workload

This is a custom Login VSI workload specifically for higher performance, intensive graphics testing. For this

workload, SPECwpc benchmark application is installed to the client VMs. During testing, a script is started

that launches SPECwpc which executes the Maya and sw-03 modules for high performance tests and module

sw-03 only for high density tests. The usual activities such as Office application execution are not performed

with this workload. This type of workload is typically used for lower density/high performance pass-through,

vGPU, and other dedicated, multi-user GPU configurations.

Resource monitoring The following sections explain respective component monitoring used across all Dell Wyse Datacenter

solutions where applicable.

6.2.2.1 GPU resources ESXi hosts

For gathering of GPU related resource usage, a script is executed on the ESXi host before starting the test

run and stopped when the test is completed. The script contains NVIDIA System Management Interface

commands to query each GPU and log GPU utilization and GPU memory utilization into a .csv file.
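A minimal version of such a collection script is sketched below; it shells out to nvidia-smi (which ships with the NVIDIA driver/vGPU manager) and appends the queried utilization values to a CSV file. The query fields used are standard nvidia-smi options, but the sampling interval and output path are arbitrary choices for illustration, not the exact script used in this testing.

```python
# Minimal sketch: poll nvidia-smi and log GPU / GPU-memory utilization to CSV.
# Run on a host with the NVIDIA driver installed; stop with Ctrl+C.
import csv
import subprocess
import time
from datetime import datetime

QUERY = "index,utilization.gpu,utilization.memory,memory.used,memory.total"
INTERVAL_SECONDS = 5
OUTPUT = "gpu_metrics.csv"

with open(OUTPUT, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp"] + QUERY.split(","))
    while True:
        # One CSV row per GPU, without headers or units, for easy parsing.
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        stamp = datetime.now().isoformat(timespec="seconds")
        for line in result.stdout.strip().splitlines():
            writer.writerow([stamp] + [v.strip() for v in line.split(",")])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```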

ESXi 6.5 and above includes the collection of this data in the vSphere Client/Monitor section. GPU processor utilization, GPU temperature, and GPU memory utilization can be collected the same way as host CPU, host memory, host network, etc.

6.2.2.2 VMware vCenter VMware vCenter is used for VMware vSphere-based solutions to gather key data (CPU, Memory, Disk and

Network usage) from each of the compute hosts during each test run. This data is exported to .csv files for

single hosts and then consolidated to show data from all hosts (when multiple are tested). While the report

does not include specific performance metrics for the Management host servers, these servers are monitored

during testing to ensure they are performing at an expected performance level with no bottlenecks.
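The per-host exports can be consolidated with a short script like the one below, which averages a chosen metric column across each host's CSV file. The file-name pattern and column header are placeholders only; vCenter performance export headers vary by metric and locale, so match them to the actual files.

```python
# Minimal sketch: consolidate per-host vCenter CSV exports and report the
# average of one metric column per host. Header and file pattern are
# placeholders; adjust them to match the actual exports.
import csv
import glob
from statistics import mean

METRIC_COLUMN = "Usage for CPU (%)"   # placeholder - check the exported header

for path in sorted(glob.glob("host-*.csv")):
    with open(path, newline="") as f:
        values = [float(row[METRIC_COLUMN]) for row in csv.DictReader(f)
                  if row.get(METRIC_COLUMN)]
    if values:
        print(f"{path}: avg {mean(values):.1f} over {len(values)} samples")
```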


Resource utilization Poor end-user experience is one of the main risk factors when implementing desktop virtualization, and a common root cause is resource contention: hardware resources at some point in the solution have been exhausted. To ensure that this does not happen, PAAC on Dell Wyse Datacenter solutions monitors the relevant resource utilization parameters and

applies relatively conservative thresholds as shown in the table below. Thresholds are carefully selected to

deliver an optimal combination of good end-user experience and cost-per-user, while also providing burst

capacity for seasonal / intermittent spikes in usage. Utilization within these thresholds is used to determine

the number of virtual applications or desktops (density) that are hosted by a specific hardware environment

(i.e. combination of server, storage and networking) that forms the basis for a Dell Wyse Datacenter RA.

Resource utilization thresholds

| Parameter | Pass/Fail Threshold |
|----------------------------------------------------------|------|
| Physical Host CPU Utilization (AHV & ESXi hypervisors)*  | 100% |
| Physical Host Memory Utilization                         | 85%  |
| Network Throughput                                       | 85%  |
| Storage IO Latency                                       | 20ms |

*Turbo mode is enabled; therefore, the CPU threshold is increased as it will be reported as over 100%

utilization when running with turbo.
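As an illustration, the sketch below applies the thresholds from the table to a set of steady-state averages; the measured values are made up for the example, and the CPU limit is evaluated against 100% as noted above for Turbo-enabled hosts.

```python
# Minimal sketch: evaluate measured steady-state averages against the
# pass/fail thresholds in the table above. Measured values are illustrative.

THRESHOLDS = {
    "cpu_pct": 100.0,             # physical host CPU utilization (Turbo enabled)
    "memory_pct": 85.0,           # physical host memory utilization
    "network_pct": 85.0,          # network throughput
    "storage_latency_ms": 20.0,   # storage IO latency
}

measured = {"cpu_pct": 97.0, "memory_pct": 84.0,
            "network_pct": 12.0, "storage_latency_ms": 8.3}

for metric, limit in THRESHOLDS.items():
    value = measured[metric]
    verdict = "PASS" if value <= limit else "FAIL"
    print(f"{metric}: {value} (limit {limit}) -> {verdict}")
```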


6.3 Test configuration details The following components were used to complete the validation testing for the solution:

XC Series hardware and software test components

| Component | Description/Version |
|--------------------------|-------------------------------------------------------|
| Hardware platform(s)     | XC430 B5                                              |
| Hypervisor(s)            | ESXi 6.0 U2                                           |
| Broker technology        | Horizon 7.0                                           |
| Broker database          | MS SQL embedded                                       |
| Management VM OS         | Windows Server 2012 R2 (Connection Server & Database) |
| Virtual desktop OS       | Windows 10 Enterprise                                 |
| Office application suite | Office Professional 2016                              |
| Login VSI test suite     | Version 4.1                                           |

XC430 Xpress hardware and software test components

| Component | Description/Version |
|--------------------------|--------------------------|
| Hardware platform(s)     | XC430 Xpress             |
| Hypervisor(s)            | ESXi 6.0 U2              |
| Nutanix Acropolis        | 5.1.0.2                  |
| Broker technology        | Horizon View 7.1         |
| Broker database          | Microsoft SQL 2014       |
| Management VM OS         | Windows Server 2012 R2   |
| Virtual desktop OS       | Windows 10 Enterprise    |
| Office application suite | Office Professional 2016 |
| Login VSI test suite     | Version 4.1.25           |


Compute VM configurations The following table summarizes the compute VM configurations for the various profiles/workloads tested.

Desktop VM specifications

| User Profile | vCPUs | ESXi Memory Configured | ESXi Memory Reservation | Screen Resolution | Operating System |
|---------------------------------------|---|------|-------|-------------|-------------------------------|
| Task Worker                           | 1 | 2GB  | 1GB   | 1280 x 720  | Windows 10 Enterprise 64-bit  |
| Knowledge Worker                      | 2 | 3GB  | 1.5GB | 1920 x 1080 | Windows 10 Enterprise 64-bit  |
| Power Worker                          | 2 | 4GB  | 2GB   | 1920 x 1080 | Windows 10 Enterprise 64-bit  |
| Graphics LVSI Power + ProLibrary      | 2 | 4GB  | 4GB   | 1920 x 1080 | Windows 10 Enterprise 64-bit  |
| Graphics LVSI Custom – Density        | 2 | 4GB  | 4GB   | 1920 x 1080 | Windows 10 Enterprise 64-bit  |
| Graphics LVSI Custom – Performance    | 4 | 8GB  | 8GB   | 1920 x 1080 | Windows 10 Enterprise 64-bit  |
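A quick way to cross-check these profiles against host memory is to multiply the configured per-VM memory by the validated per-host density and add the CVM memory, as sketched below. The 16GB CVM figure, the 384GB of installed host memory and the B5 densities come from the configurations and results described in this document; the helper itself is only illustrative and ignores hypervisor and per-VM overheads.

```python
# Minimal sketch: estimate host memory demand for each profile at its
# validated B5 density, compared against the 384GB installed per node.

HOST_MEMORY_GB = 384
CVM_MEMORY_GB = 16   # Nutanix CVM memory used in this testing

PROFILES_GB = {"Task Worker": 2, "Knowledge Worker": 3, "Power Worker": 4}
DENSITIES = {"Task Worker": 150, "Knowledge Worker": 110, "Power Worker": 85}

for profile, per_vm_gb in PROFILES_GB.items():
    demand = per_vm_gb * DENSITIES[profile] + CVM_MEMORY_GB
    headroom = HOST_MEMORY_GB - demand
    print(f"{profile}: {demand} GB configured for VMs + CVM "
          f"({headroom} GB headroom on a {HOST_MEMORY_GB} GB host, "
          f"before hypervisor overhead)")
```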

Platform configurations The hardware configurations that were tested are summarized in the table(s) below.

XC430 B5 hardware configuration

| Enterprise Platform | Platform Config | CPU | Memory | RAID Ctlr | SATADOM | HD Config | Network |
|-------|----|-----------------------------|-------------------|-------------------|-----------------------------|----------------------------------------------------------------------------------|----------------------------------------------|
| XC430 | B5 | E5-2660v4 (14 Core, 2.0GHz) | 384GB @ 2400 MT/s | Dell HBA 330 Mini | 1 x 64GB (CVM / Hypervisor) | 2 x 400GB Intel S3710 6Gb/s SATA SSDs, 2.5" (T1); 2 x Seagate 2TB SAS, 3.5" (T2) | 2 x 10Gb Intel 2P X520; 1 x Broadcom BCM5720 |

Compute and Management resources were split out with the following configuration across a three node

Nutanix cluster and all test runs were completed with this configuration.


Node 1 – Dedicated Management: vCenter Appliance, SQL Server, View Connection Server, View

Composer, and Nutanix CVM

Node 2 – Dedicated Compute, Nutanix CVM and User VMs only.

Node 3 – Dedicated Compute, Nutanix CVM and User VMs only.

1Gb networking was used for the deployment of the XC appliances while 10Gb networking was used for all PAAC testing.

XC430 Xpress hardware configuration

| Enterprise Platform | Platform Config | CPU | Memory | RAID Ctlr | SATADOM | HD Config | Network |
|--------------|--------------|-----------------------------|-------------------|-------------------|-----------------------------|------------------------------------------------------------------------|-----------------------------------------------|
| XC430 Xpress | XC-Xpress-A3 | E5-2640v4 (10 Core, 2.4GHz) | 384GB @ 2400 MT/s | Dell HBA 330 Mini | 1 x 64GB (CVM / Hypervisor) | 1 x 800GB Intel S3710 6Gb/s SATA SSD, 2.5" (T1); 3 x 4TB SAS, 3.5" (T2) | 2 x 10Gb Intel 2P X520; 2 x 1Gb Intel i350 BT |

Instead of dedicated nodes, Nutanix CVMs and VDI management roles were deployed on the cluster with

desktops, which reduced maximum density somewhat.

1Gb networking was used for the deployment of the XC appliances while 10Gb networking was used for all PAAC testing.


6.4 Test results and analysis The following table summarizes the test results for the compute hosts using the various workloads and

configurations. Refer to the prior section for platform configuration details.

Test result summary

| Platform Config | Hypervisor | Broker & Provisioning | Login VSI Workload | Density per Host | Avg CPU | Avg Mem Consumed | Avg Mem Active | Avg IOPS / User | Avg Net Mbps / User |
|--------------|-------------|--------------------------|------------------|-----|-----|--------|--------|-------|-------|
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clone   | Task Worker      | 150 | 98% | 324 GB | 109 GB | 13.1  | 0.777 |
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clone   | Knowledge Worker | 110 | 97% | 287 GB | 92 GB  | 13.2  | 1.177 |
| B5           | ESXi 6.0 U2 | Horizon 7 Linked Clone   | Power Worker     | 85  | 96% | 322 GB | 100 GB | 13.9  | 1.733 |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clone | Task Worker      | 125 | 96% | 289 GB | 101 GB | 8.59  | 5.86  |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clone | Knowledge Worker | 85  | 97% | 278 GB | 98 GB  | 10.48 | 10.39 |
| XC430 Xpress | ESXi 6.0 U2 | Horizon 7.1 Linked Clone | Power Worker     | 65  | 95% | 262 GB | 96 GB  | 12.35 | 15.08 |

Density per Host: Density reflects number of users per compute host that successfully completed the

workload test within the acceptable resource limits for the host. For clusters, this reflects the average of the

density achieved for all compute hosts in the cluster.

Avg CPU: This is the average CPU usage over the steady state period. For clusters, this represents the combined average CPU usage of all compute hosts. On the latest Intel series processors, the ESXi host CPU metrics will exceed the rated 100% for the host if Turbo Boost is enabled (by default). An additional 35% of CPU is available from the Turbo Boost feature but this additional CPU headroom is not reflected in the VMware vSphere metrics where the performance data is gathered. Therefore, CPU usage for ESXi hosts is adjusted and a line indicating the potential performance headroom provided by Turbo boost is included in each CPU graph.

Avg Consumed Memory: Consumed memory is the amount of host physical memory consumed by a virtual

machine, host, or cluster. For clusters, this is the average consumed memory across all compute hosts over

the steady state period.

Avg Mem Active: For ESXi hosts, active memory is the amount of memory that is actively used, as estimated

by VMkernel based on recently touched memory pages. For clusters, this is the average amount of guest

“physical” memory actively used across all compute hosts over the steady state period.

Avg IOPS/User: IOPS calculated from the average Disk IOPS figure over the steady state period divided by

the number of users.

Avg Net Mbps/User: Amount of network usage over the steady state period divided by the number of users.

For clusters, this is the combined average of all compute hosts over the steady state period divided by the

number of users on a host.
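For example, the per-user IOPS figures in the summary table can be reproduced from the cluster-wide steady-state averages reported in the B5 test runs below (3,942 / 2,899 / 2,366 IOPS) divided by the combined user count of the two compute hosts (300 / 220 / 170 users), giving 13.1 / 13.2 / 13.9 IOPS per user. A trivial sketch of that calculation:

```python
# Minimal sketch: reproduce the per-user IOPS figures from the cluster-wide
# steady-state averages reported in the B5 test runs (two compute hosts).

runs = {
    "Task Worker":      {"steady_state_iops": 3942, "users_per_host": 150},
    "Knowledge Worker": {"steady_state_iops": 2899, "users_per_host": 110},
    "Power Worker":     {"steady_state_iops": 2366, "users_per_host": 85},
}

COMPUTE_HOSTS = 2

for workload, r in runs.items():
    total_users = r["users_per_host"] * COMPUTE_HOSTS
    per_user = r["steady_state_iops"] / total_users
    print(f"{workload}: {per_user:.1f} IOPS/user")   # 13.1, 13.2, 13.9
```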


A Nutanix Controller VM (CVM) is located on each host in the cluster and each CVM has 10,000 MHz of CPU

reserved, 16GB of memory and 8 vCPUs allocated. The Nutanix CVMs use approximately 8% of the

available CPU MHz on each host when no tests are running and all user VMs are powered off.

Without the inclusion of CPU turbo boost there is a total of 55,916 MHz available for Desktops and with turbo

boost the total available MHz value is 75,486 MHz.


XC430 B5 Refer to the Platform Configurations section for hardware configuration details.

6.4.1.1 Task Worker, 150 Users, ESXi 6.0 U2, Horizon 7 Linked Clones Each of the compute hosts was populated with 150 virtual machines and one Nutanix CVM per host. With all

user virtual machines powered on and before starting the test, the CPU usage was approximately 24%.

The below graph shows the performance data for 150 user sessions per host on a pair of compute hosts and

one management host. The CPU reaches a steady state average of 98% across both compute hosts during

the test cycle when 150 users are logged on to each compute host.

The Management host in the cluster runs the vSphere management virtual machines and a Nutanix CVM

virtual machine. Its CPU utilization is significantly lower than the compute hosts in the cluster. The CPU

utilization for the management host does not exceed 16% at any point in the test cycle.

[Figure: CPU utilization (%) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases, with the 95% CPU threshold and the 35% Turbo performance increase indicated.]


Regarding memory consumption for the cluster, out of a total of 384 GB of available memory per node there

were no constraints for any of the hosts. The compute hosts reached a maximum memory consumption of

328 GB with active memory usage reaching a max of 282GB. There was no memory ballooning or swapping

on any of the hosts.

[Figure: Consumed memory (GB) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]

[Figure: Active memory (GB) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]


Network bandwidth is not an issue on this test run with a steady state peak of approximately 181Mbps.

The cluster reached a maximum of 29,559 Disk IOPS during the boot storm before test start and the IOPS

peak during the login phase was 4,666 IOPS and 4,425 IOPS at the start of steady state. The average Disk

IOPS figure over the steady state period was 3,942.

[Figure: Network usage (Mbps) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]


The Nutanix Cluster IOPS graphs and IOPS numbers are taken from the Nutanix Prism web console and the

graphs clearly show the initial reboot of all the desktops followed by the settle period, logon, steady state and

logoff phases. The blue slider indicates the start of the steady state period. The top graph is the Disk IOPS for

the whole cluster and the bottom graph shows the Disk IOPS for each individual server in the cluster.

Disk I/O Latency was not an issue. The maximum latency reached was approximately 7 ms during a spike in

the user logon period of the test run. This was a single spike but was still below the 20ms threshold that is

regarded as becoming potentially troublesome. There were no latency issues during the testing period.


The Login VSI Max user experience score shown below for this test was not reached indicating there was

little deterioration of user experience during testing and manually interacting with the test sessions confirmed

this. Mouse and window responses were fast and video play back was of good quality.

Notes:

As indicated above, the CPU graphs do not take into account the extra 35% of CPU resources

available through the E5-2660v4's turbo feature.

User login times increased slowly and consistently over the test period, with very few outlier sessions taking extra time to log on and no extra-long session login times.

384 GB of memory installed on each node was just about enough for this configuration as the

memory approached maximum usage and caused a small amount of ballooning.

The Management host in this test setup is under-utilized and can easily host user VMs if required.


6.4.1.2 Knowledge Worker, 110 Users, ESXi 6.0 U2, Horizon 7 Linked Clones Each of the compute hosts was populated with 110 virtual machines and one Nutanix CVM per host. With all

user virtual machines powered on and before starting the test, the CPU usage was approximately 20%.

The below graph shows the performance data for 110 user sessions per host on a pair of compute hosts and

one management host. The CPU reaches a steady state average of 97% across both compute hosts during

the test cycle when 110 users are logged on to each compute host.

The Management host in the cluster runs the vSphere management virtual machines and a Nutanix CVM

virtual machine. Its CPU utilization is significantly lower than the compute hosts in the cluster. The CPU

utilization for the management host does not exceed 27% at any point in the test cycle.

[Figure: CPU usage (%) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases, with the 95% CPU threshold and the 35% Turbo performance increase indicated.]


Regarding memory consumption for the cluster, out of a total of 384 GB of available memory per node there

were no constraints for any of the hosts. The compute hosts reached a maximum memory consumption of

296 GB with active memory usage reaching a max of 226 GB during the reboot of the user desktops. There

was no memory ballooning or swapping on any of the hosts.

[Figure: Consumed memory (GB) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]

[Figure: Active memory (GB) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]


Network bandwidth is not an issue on this test run with a steady state peak of approximately 163 Mbps.

The cluster reached a maximum of 21,769 Disk IOPS during the boot storm before test start and the IOPS

peak during the login phase was 3,546 IOPS and 3,411 IOPS at the start of steady state. The average Disk

IOPS figure over the steady state period was 2,899.

[Figure: Network usage (Mbps) for the Management, Compute A and Compute B hosts through the boot storm, logon, steady state and logoff phases.]


The Nutanix Cluster IOPS graphs and IOPS numbers are taken from the Nutanix Prism web console and the

graphs clearly show the initial reboot of all the desktops followed by the settle period, logon, steady state and

logoff phases. The blue slider indicates the start of the steady state period. The top graph is the Disk IOPS for

the whole cluster and the bottom graph shows the Disk IOPS for each individual server in the cluster.

Disk I/O Latency was not an issue. The maximum latency reached was approximately 8.3 ms during a spike

in the steady state period of test. This was well below the 20ms threshold that is regarded as becoming

potentially troublesome.


The Login VSI Max user experience score shown below for this test was not reached indicating there was

little deterioration of user experience during testing and manually interacting with the test sessions confirmed

this. Mouse and window responses were fast and video play back was of good quality.

Notes:

As indicated above, the CPU graphs do not take into account the extra 35% of CPU resources

available through the E5-2660v4’s turbo feature.

User login times were consistent up until the final few sessions were launching, when a few sessions took some extra time to log in.

384 GB of memory installed on each node is more than enough for this configuration and should run

equally well with less memory installed.

The Management host in this test setup is underutilized and can easily host user VMs if required.


6.4.1.3 Power Worker, 85 Users, ESXi 6.0 U2, Horizon 7 Linked Clones Each of the compute hosts was populated with 85 virtual machines and one Nutanix CVM per host. With all

user virtual machines powered on and before starting the test, the CPU usage was approximately 21%.

The graph below shows the performance data for 85 user sessions per host on a pair of compute hosts and one management host. The CPU reached a steady state average of 96% across both compute hosts during the test cycle when 85 users were logged on to each compute host.

The Management host in the cluster runs the vSphere management virtual machines and a Nutanix CVM virtual machine. Its CPU utilization is significantly lower than that of the compute hosts and does not exceed 13% at any point in the test cycle.

[Chart: CPU Usage % during the boot storm, logon, steady state, and logoff phases for the Management, Compute A, and Compute B hosts, with the 95% CPU threshold and 35% turbo performance increase marked]


Regarding memory consumption for the cluster, out of a total of 384 GB of available memory per node there were no constraints on any of the hosts. The compute hosts reached a maximum memory consumption of 352 GB, with active memory usage reaching a maximum of 281 GB during the reboot of the user desktops. There was no memory ballooning or swapping on any of the hosts.

[Chart: Consumed Memory GB during the boot storm, logon, steady state, and logoff phases for the Management, Compute A, and Compute B hosts]

[Chart: Active Memory GB during the boot storm, logon, steady state, and logoff phases for the Management, Compute A, and Compute B hosts]


Network bandwidth was not an issue on this test run, with a steady state peak of approximately 178 Mbps.

The cluster reached a maximum of 9,536 disk IOPS during the boot storm before the test started. IOPS peaked at 3,287 during the login phase and at 2,990 at the start of steady state, and the average over the steady state period was 2,366 IOPS.
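As a rough sizing aid, the steady state figure above can be expressed per user. The minimal sketch below assumes the 170 sessions described for this test (85 per compute host across the two compute hosts); it is an approximation, since the measured cluster figure also includes I/O generated by the CVMs and management VMs.

# Rough per-user steady state IOPS for this test run (approximation only;
# the cluster figure includes CVM and management VM I/O).
users_per_compute_host = 85
compute_hosts = 2
total_users = users_per_compute_host * compute_hosts    # 170 sessions

steady_state_avg_iops = 2366       # cluster average over steady state
iops_per_user = steady_state_avg_iops / total_users

print(f"~{iops_per_user:.1f} IOPS per user at steady state")   # ~13.9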

[Chart: Network Usage Mbps during the boot storm, logon, steady state, and logoff phases for the Management, Compute A, and Compute B hosts]


The Nutanix cluster IOPS graphs and IOPS figures are taken from the Nutanix Prism web console. The graphs clearly show the initial reboot of all the desktops, followed by the settle period, logon, steady state, and logoff phases; the blue slider indicates the start of the steady state period. The top graph shows the disk IOPS for the whole cluster and the bottom graph shows the disk IOPS for each individual server in the cluster.

Disk I/O latency was not an issue. The maximum latency reached was approximately 6.2 ms during a spike in the reboot period of the test, well below the 20 ms threshold that is generally regarded as potentially troublesome.


The Login VSI Max user experience score shown below was not reached for this test, indicating little deterioration of the user experience during testing; manually interacting with the test sessions confirmed this. Mouse and window responses were fast and video playback was of good quality.

Notes:

As indicated above, the CPU graphs do not take into account the extra 35% of CPU resources available through the E5-2660 v4's turbo feature.

User login times increased slowly over the test period, with a small number of sessions taking some extra time to log in.

384 GB of memory installed on each node is close to the limit for this workload and number of user VMs.

The Management host in this test setup is underutilized and can easily host user VMs if required.


6.4.2 XC430 Xpress

Refer to the Platform Configurations section for hardware configuration details.

6.4.2.1 Task Worker, 125 Users, ESXi 6.0 U2, Horizon 7.1 Linked Clones

In this test run each host had 125 desktops, plus the Nutanix CVM and various management VMs. The CPU usage peaked at 100% for one host during the boot storm, and the steady state average was 96%.

The memory consumption was within the 85% tolerance, with a steady state max of 289 GB.
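As a quick sanity check of the 85% tolerance mentioned above, the sketch below compares the observed peak against the threshold. It assumes 384 GB of memory per node, which is consistent with the observed 289 GB being within tolerance but is our assumption; refer to the Platform Configurations section for the validated XC430 Xpress configuration.

# Sanity check of the 85% memory tolerance (assumes 384 GB per node, which is
# an assumption here; see the Platform Configurations section for specifics).
installed_memory_gb = 384          # assumed per-node memory
tolerance = 0.85                   # consumption threshold used in this RA

threshold_gb = installed_memory_gb * tolerance           # 326.4 GB
steady_state_max_gb = 289                                # measured in this test

print(f"Threshold: {threshold_gb:.0f} GB, observed peak: {steady_state_max_gb} GB")
print("Within tolerance" if steady_state_max_gb <= threshold_gb else "Over tolerance")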


Active memory averaged 101 GB over steady state, with a peak of 119 GB after steady state.

Network usage peaked at 1,291 Mbps during the boot storm and averaged 732 Mbps in steady state.


Disk IOPS peaked at 8,224 during the boot storm and averaged 3,223 in steady state. At the beginning of steady state the value was 2,195 IOPS.

Disk I/O latency for the cluster stayed below the 20 ms threshold. The peak was 11.7 ms during the boot storm and the steady state peak was 7.5 ms. Average I/O latency during steady state was 3.6 ms.


The VSI baseline of 1178 is in the Fair range, although VSIMax was not reached. The index average came close to the VSI Threshold of 2179, so there is little room for any additional sessions.

Login VSI Baseline | VSIMax Reached | VSI Threshold
1178               | No             | 2179


6.4.2.2 Knowledge Worker, 85 Users, ESXi 6.0 U2, Horizon 7.1 Linked Clones

In this test run each host had 85 user sessions. The CPU usage peaked at 100% during the boot storm, which was typical of Horizon View test runs. The steady state average CPU usage was 97%.

Memory consumption was below the 85% threshold, with a steady state average of 278 GB.


Active memory averaged 98 GB during steady state, with a peak of 227 GB during the boot storm and a smaller peak of 121 GB during the logoff phase.

Network usage peaked at 1,030 Mbps during steady state and averaged 883 Mbps.
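For network sizing purposes, the steady state average above can be broken down per session. The minimal sketch below assumes the three-node cluster and 85 sessions per host described in this subsection, and the measured figure includes CVM and management traffic, so treat the result as a rough planning estimate rather than a measured per-user value.

# Rough steady state network bandwidth per session (approximation only;
# the measured cluster figure includes CVM and management traffic).
sessions_per_host = 85
hosts = 3                          # three-node XC430 Xpress cluster
total_sessions = sessions_per_host * hosts               # 255 sessions

steady_state_avg_mbps = 883        # cluster average over steady state
mbps_per_session = steady_state_avg_mbps / total_sessions

print(f"~{mbps_per_session:.1f} Mbps per session at steady state")   # ~3.5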


In this test run we were able to capture additional data series for the individual hosts. As expected, the graph shows that the cluster series is the sum of the three hosts' series. The peak IOPS during the test run occurred in the boot storm phase at 7,953 IOPS. At the beginning of steady state the IOPS were 2,104, and the steady state average was 2,672 IOPS.

We were also able to capture disk I/O latency per host; the graph below shows that the cluster series is the average of the three hosts' series. The peak I/O latency on any host was 10.7 ms during the boot storm, and the cluster latency average during steady state was 5.1 ms.


A VSI baseline of 825 places this test run in the Good user experience range. VSIMax was not reached, and the index average reached only 1616, well below the threshold.

Login VSI Baseline | VSIMax Reached | VSI Threshold
825                | No             | 1825
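The relationship between the baseline and the threshold in the table above follows the Login VSI 4.x convention, where the VSImax threshold is derived from the baseline (approximately baseline + 1000). The sketch below shows that calculation and the headroom left at the reported peak index average; it is an illustration of the scoring model, not output from the Login VSI tooling.

# Illustrative Login VSI 4.x headroom calculation (not Login VSI tool output).
# The threshold is derived from the baseline (approximately baseline + 1000).
vsi_baseline = 825
vsi_threshold = vsi_baseline + 1000        # 1825, matching the table above
index_average_peak = 1616                  # reported peak index average

headroom = vsi_threshold - index_average_peak
print(f"Threshold: {vsi_threshold}, headroom at peak: {headroom}")   # 209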


6.4.2.3 Power Worker, 65 Users, ESXi 6.0 U2, Horizon 7.1 Linked Clones

Each host in this test run had 65 user sessions, plus the Nutanix CVM and various management VM roles. The CPU usage peaked at 100% during the boot storm phase and averaged 95% during steady state.

Memory consumption was below the 85% threshold and peaked at 284 GB during steady state. The steady state average was 265 GB.


Active memory peaked at 232 GB during the boot storm and averaged 96 GB in steady state.

Network usage averaged 980 Mbps during steady state, and the peak usage for any host was 1,118 Mbps.


The Nutanix cluster IOPS peaked at 7,174 during the boot storm phase and at 3,296 during the logon phase. At the beginning of steady state the IOPS were 2,357, and the steady state average was 2,407 IOPS.

The peak I/O latency on any host was 9.8 ms during the boot storm, and the cluster latency average during steady state was 5.3 ms.


The VSI baseline of 815 places this test run in the Good user experience range. The index average peaked at

1360, well below the threshold of 1815.

Login VSI Baseline | VSIMax Reached | VSI Threshold
815                | No             | 1815


7 Related resources

See the following referenced or recommended resources:

- The Dell EMC Cloud-Client Computing Solutions for VMware Tech Center page, which includes this RA and other VMware Horizon based RAs.
- Dell EMC Tech Center for XC Series: http://en.community.dell.com/techcenter/storage/w/wiki/11454.dell-emc-xc-series-hyper-converged-solution
- http://www.dell.com/XCSeriesSolutions for Dell EMC XC Series white papers.
- www.Dell.com/xcseriesmanuals for deployment guides (XC Xpress only), manuals, support info, tools, and videos.


Acknowledgements

Thank you to the Dell EMC XC Series engineering and marketing teams for their input and feedback for this

document.

Thank you to the Nutanix Technical Marketing and Solution Engineering teams for the detail presented in sections 2.2-2.4 of this document.

Information for the Nutanix All-Flash section was sourced from The Definitive Guide to Application Performance with Hyperconverged All-Flash Solutions by Nutanix and the Nutanix All-Flash Solutions white paper.

Thank you to David Hulama of the Dell Wyse Technical Marketing team for his support and assistance with

datacenter EUC programs at Dell.


About the authors

Peter Fine is the Chief Architect and CTO of EUC Enterprise Engineering at Dell. Peter owns the strategy and architecture and leads the engineering of the datacenter EUC product and solutions portfolio. Follow Peter @ExitTheFastLane or www.ExitTheFastLane.com.

Jerry Van Blaricom is a Lead Architect in the Cloud Client Solutions Engineering Group at Dell. Jerry has

extensive experience with the design and implementation of a broad range of enterprise systems and is

focused on making Dell’s virtualization offerings consistently best in class.

Geoff Dillon is a Solutions Engineer in the Cloud Client Solutions Engineering Group at Dell. Geoff is

experienced in enterprise system deployment and administration and is working on making Dell’s VDI

solutions stand out in the market.

