Worldwide Consulting Solutions | WHITE PAPER | Citrix XenDesktop
www.citrix.com
XenDesktop Planning Guide
Integration with Microsoft Hyper-V 2008 R2
Table of Contents

Introduction
Reference Architecture
   Conceptual Architecture
   Design Decision Summary
Architectural Guidelines
   Hardware
      1. Hardware Compatibility
      2. Hyper-V Edition
      3. Processor Specification
      4. Memory Capacity
      5. Storage Capacity
      6. Direct-attached Storage
      7. Shared Storage Connectivity
      8. Network
      9. Scale Up/Out
      10. Hyper-V Host Scalability
      11. Dynamic Memory
      12. Component Redundancy
   Networking
      1. Physical Network Connections
      2. NIC Teaming
      3. Virtual Adapter
      4. Provisioning Services
      5. IP Addressing
      6. VLANs
      7. Security
   Storage
      1. Direct-attached Storage/Storage-attached Network
      2. Database Storage
      3. Tiered Storage
      4. Performance
      5. Thin Provisioning
      6. Data De-Duplication
   Hyper-V Failover Cluster
      1. Virtual Machines to Cluster
      2. Number of Failover Clusters
      3. Nodes per Failover Cluster
      4. Cluster Shared Volumes
   System Center 2012 Virtual Machine Manager
      1. VMM Architecture
      2. VMM Design and Scalability
Operating System Delivery
   1. Templates
   2. Machine Creation Services
   3. Provisioning Services
   4. Desktop Customization
Management
   Monitoring
      1. Management Tool
      2. Management Tool Access
      3. Hyper-V Hardware
      4. Hyper-V Performance
      5. XenDesktop Infrastructure
   Backup and Recovery
      1. Backup Software
      2. Backup Type
      3. Components to Backup
   High Availability
      1. VMM Server
      2. VMM Library
      3. SQL Server
      4. XenDesktop Infrastructure VMs
      5. Virtual Desktops
Appendix
   Appendix A: Optimizations
   Appendix B: High-Level Implementation Steps
Product Versions
Revision History
Introduction
This document provides design guidance for Citrix XenDesktop 5.x deployments that leverage
Microsoft Hyper-V 2008 R2. It should not be considered as a replacement for other Hyper-V or
XenDesktop design guidance, but rather an addendum that will assist in design decisions specifically
related to integrating XenDesktop with Microsoft Hyper-V.
It is assumed that the reader is already familiar with the architecture of XenDesktop. For an
overview of the XenDesktop architecture, please refer to the Citrix Knowledgebase Article
CTX133162 – Modular Reference Architecture.
This document is divided into four sections:
1. Reference Architecture. Provides a high-level conceptual architecture and default
recommendations for key design decisions that are discussed in detail within the
Architectural Guidelines section of the document.
2. Architectural Guidelines. Provides guidance on the core components of the Hyper-V
architecture including host hardware, networking configuration, storage, clustering and the
VMM server.
3. Operating System Delivery. Examines the key Hyper-V design decisions relating to
operating system delivery; options include Virtual Hard Disks, Machine Creation Services
and Provisioning Services.
4. Management. Discusses the key Hyper-V design decisions that need to be considered so
that the XenDesktop environment is stable, highly available and continues to provide users
with a high level of performance.
After reading this document, the reader will be able to effectively design a XenDesktop deployment
on Microsoft’s hypervisor and have a high-level understanding of the key implementation steps. For
further planning guides, please refer to the XenDesktop Design Handbook.
Reference Architecture
Conceptual Architecture
The following diagram depicts a typical Hyper-V architecture used to support XenDesktop where
separate Hyper-V clusters have been created to host the infrastructure servers and dedicated virtual
desktops. The Hyper-V hosts supporting the pooled desktops and XenApp servers have not been
clustered because these workloads are not user persistent: if a host fails, identical virtual desktops
will still be available on another host. This also allows costs to be reduced by using inexpensive
local storage for the non-persistent desktops.
Design Decision Summary
The following table provides default recommendations for the key Hyper-V design decisions. These
recommendations are intended to provide a starting point that is valid in the majority of situations.
Additional information on each design decision is available within the Architectural Guidelines
section of this document.
Decision Default Recommendation Justification
HARDWARE
Hyper-V Edition for
virtual desktops
Microsoft Hyper-V Server 2008 R2
Microsoft Hyper-V Server 2008 R2 is a
good choice for virtual desktops because
it has a reduced attack surface and
requires less maintenance than Windows
Server 2008 R2.
Hyper-V Edition for
XenApp servers and
infrastructure servers
Microsoft Windows Server 2008 R2
Datacenter
Microsoft Windows Server 2008 R2
Datacenter is a good choice for the
XenApp servers and infrastructure
servers because it includes unlimited
virtual server image use rights.
Processor
Specification
1. 64-bit Intel VT or AMD-V
processors
2. Hardware Data Execution
Protection (DEP)
3. Second Level Address
Translation (SLAT) capable
processors
Processors that support this
specification will support Hyper-V and
offer an improved level of scalability.
RAID Configuration RAID-1 / RAID-10 Hyper-V hosts supporting the write
intensive Provisioning Services write-
cache (80% write) and Machine Creation
Services difference disks (50% write)
should utilize either RAID-1 or RAID-
10 to help minimize write penalties.
Network Card
Specification
1. TCP chimney offload
2. Virtual Machine Queue (VMQ)
3. Jumbo Frames
4. 1Gbps or greater
Network cards that meet this
specification will help to ensure that the
network does not become a bottleneck
and also improve host scalability.
Scaling Up/Out Scale Up Scaling up typically results in a less
expensive solution due to a reduction in
power, networking, management,
support and hosting costs.
Dynamic Memory Enabled Provides additional VM density per host
when memory is the limiting factor.
Redundancy Redundant storage and network
connections as well as power
supplies, fans and disks.
Prevents a single component failure
from impacting a large number of users.
NETWORKING
Physical Networks 4 x Networks:
2 x NICs (teamed): Live Migration Network, Cluster Shared Volume Network, Virtual Machine Network, Provisioning Network
1 x NIC: Management Network
1 x NIC: Cluster Private Network
It is assumed that HBA connections are used for shared storage.
Virtual Network
Adapter Type
Legacy The legacy network connection will be
suitable for all but the most network
intensive environments.
LAN / VLANs 1. Desktop Network
2. Server Network
3. Provisioning Network
For reasons of security and
performance, the XenDesktop
infrastructure should be hosted on a
separate VLAN from the virtual desktops.
Separate physical networks are typically
created for management, cluster and
storage networks due to security and
bandwidth requirements.
STORAGE
Direct-attached
Storage
1. Host partition install
2. XenApp servers
3. Pooled desktops
The XenApp servers and pooled
desktops do not typically contain user
specific data or applications which
require the use of a failover cluster or
live migration.
Storage-attached
Network
1. Dedicated Desktops
2. Infrastructure servers
Components that require a high level of availability should be stored on the SAN so that live migration and failover clustering can be used.
Virtual machines per
LUN
30 Limiting the number of virtual machines per LUN helps to ensure that the iSCSI queue does not become congested.
Thin Provisioning Disabled Microsoft does not recommend that
system volumes or page files are hosted
on thin provisioned LUNs.
De-Duplication Enabled Reduces storage requirements.
Monitoring should be implemented to
ensure that de-duplication does not
impact performance.
HYPER-V FAILOVER CLUSTERS
Virtual Machines to
Cluster
1. Infrastructure servers
2. Dedicated desktops
Typically require a high level of
availability.
Number of Clusters Minimum of 2 –
1. Server infrastructure
2. Dedicated virtual desktops
Ensure that dedicated virtual desktop
resources are isolated from
infrastructure resources for optimal
performance and availability.
Resilience per cluster N+1 Ensures that there is sufficient capacity within the cluster to accommodate a single host becoming unavailable or being taken offline for maintenance.
OPERATING SYSTEM DELIVERY
Operating System
Delivery Technology
Provisioning Services Provisioning Services is recommended
because IntelliCache is not supported on
Hyper-V; without IntelliCache, Machine
Creation Services generates approximately
50% more IOPS than Provisioning Services.
Personal vDisk Enabled for power users and
knowledge workers
These user groups typically require the
ability to customize their desktop
(beyond profile changes).
BACKUP & RECOVERY
Backup Type Backup from Hyper-V host Performing the backup from the Hyper-V
host is preferred because it captures more
data than performing the backup from
within the guest operating system.
HIGH AVAILABILITY
VMM Server Hosted on Hyper-V Failover Cluster Required so that XenDesktop can
control the power state of the virtual
desktops.
SQL High
Availability
Hosted on Hyper-V Failover Cluster Required so that support staff can
administer the Hyper-V environment.
VMM Library No High Availability The VMM Library is not essential for
the availability of existing virtual
desktops or XenApp servers.
Infrastructure Virtual
Machines
Hosted on Hyper-V Failover Cluster Ensures that a single host failure does
not affect the availability or
performance of the XenDesktop
environment.
Virtual Desktop
Hosts
Dedicated desktops hosted on
Microsoft Failover Cluster.
Dedicated desktops are unique, unlike
pooled desktops.
Architectural Guidelines
The sections below include architectural guidance and scalability considerations that should be
addressed when planning a XenDesktop deployment hosted on Microsoft Hyper-V.
Hardware
The hardware selected for the Hyper-V hosts has a direct impact on the performance, scalability,
stability and resilience of the XenDesktop solution. As such, the following key design decisions
must be considered during the hardware selection process:
1. Hardware Compatibility
For reasons of stability, it is imperative that the hardware specification selected in the
design is compatible with Hyper-V. Microsoft maintains a list of servers and hardware
components capable of supporting Hyper-V through the Windows Server Catalog website.
Servers and hardware components carrying the “Certified for Windows Server 2008 R2” or
“Certified for Windows Server 2008” logo have passed the Microsoft standards for supporting
x64 and virtualization technologies.
2. Hyper-V Edition
The design must determine which variation of Hyper-V will be used:
Hyper-V Server 2008 R2. Standalone version of Hyper-V that does not include a
Graphical User Interface (GUI). Administration is either performed remotely, or via the
command line. Hyper-V Server does not require a Windows 2008 R2 license, however
licenses are required for each guest virtual machine. For more information, please refer
to the Microsoft Hyper-V Server 2008 R2 website.
Windows Server 2008 R2. Hyper-V is also available as an installable role for Windows
Server 2008 R2 Standard, Enterprise and Datacenter editions, each of which includes
different levels of “virtual server image use rights” which range from one Microsoft
server operating system license to unlimited. For more information, please refer to the
Microsoft Windows Server 2008 R2 website.
The following table highlights the key differences between Hyper-V Server 2008 R2 and the
different editions of Windows Server 2008 R2 when it comes to supporting XenDesktop:
Capability | Hyper-V Server 2008 R2 | Standard | Enterprise | Datacenter
Virtual Server Image Use Rights | 0 | 1 | 4 | Unlimited
Sockets | 8 | 4 | 8 | 64
Memory | 1TB | 32GB | 1TB | 1TB
Cluster Shared Volumes | Yes | No | Yes | Yes

(Standard, Enterprise and Datacenter refer to editions of Windows Server 2008 R2.)
Hyper-V Server 2008 R2 is typically preferred for XenDesktop deployments because it has a
decreased attack surface, improved stability and reduced maintenance, and does not require a
Windows Server 2008 R2 license. Windows Server 2008 R2 Enterprise or Datacenter is
recommended when familiarity with the command line is limited or when third-party software
that is not supported on Hyper-V Server 2008 R2 must be installed on the server.
The virtual server image use rights included with Windows Server 2008 R2 make it an excellent
choice for hosting XenApp servers.
Note: The Standard edition of Windows Server 2008 R2 is rarely used due to the limited memory support. In
addition, the Standard edition lacks Cluster Shared Volumes support which is a recommended feature for hosting
infrastructure servers and dedicated virtual desktops.
3. Processor Specification
The Hyper-V design must ensure that the processor specification selected is capable of
supporting Hyper-V:
64-bit Intel VT or AMD-V processors with a clock speed of at least 1.4 GHz, however
2GHz or faster is recommended.
Hardware Data Execution Protection (DEP), specifically Intel XD bit (execution disable
bit) or AMD NX bit (no execute bit), must be enabled and available. For more
information, please refer to Microsoft TechNet Article 912923 - How to determine that
hardware DEP is available and configured on your computer.
For improved performance and scalability, ensure that Second Level Address Translation
(SLAT) capable processors are selected. Intel calls this technology Extended Page Tables (EPT)
and AMD calls it Nested Page Tables (NPT). The technology provides hardware assistance for
memory virtualization, memory allocation, and address translation between the physical and
virtual memory. The reduction in processor and memory overhead improves scalability allowing
a single Hyper-V host to run more virtual machines.
Note: Live Migration allows administrators to move virtual machines between physical hosts. In order to use this feature, all physical host servers in a cluster must have processors from the same family.
4. Memory Capacity
The hardware selected for the Hyper-V hosts must have sufficient memory to support the
parent partition as well as the guest virtual machines that they support:
At a minimum, the parent partition requires 512MB of RAM, however 2GB or greater is
recommended for VDI environments.
Virtual desktops typically require between 768MB and 4GB of RAM depending on
workload and operating system used. For more information, please refer to the Hosted
VDI Scalability section in the Citrix Knowledgebase Article CTX133162 – XenDesktop
Modular Reference Architecture.
Each Citrix XenApp user typically requires between 200MB and 1GB of RAM
depending on their workload. For more information, please refer to the Hosted Shared
Scalability section in the Citrix Knowledgebase Article CTX133162 – XenDesktop
Modular Reference Architecture.
Note: With the exception of Microsoft Windows Server 2008 R2 Standard edition which is limited to 32GB, Hyper-V can support up to a total of 1TB of physical memory.
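The memory guidance above can be turned into a quick back-of-the-envelope host sizing calculation. The following is a hypothetical sketch, not a Citrix or Microsoft tool; the 2GB parent partition figure comes from the recommendation above, while the per-desktop RAM value is an assumed example from the quoted 768MB–4GB range.

```python
def host_memory_gb(num_vms, ram_per_vm_gb, parent_partition_gb=2.0):
    """Estimate the physical RAM required for one Hyper-V host:
    the parent partition allocation plus the sum of guest VM memory."""
    return parent_partition_gb + num_vms * ram_per_vm_gb

# Example: 75 Windows 7 desktops at an assumed 1.5GB each
print(host_memory_gb(75, 1.5))  # 114.5
```

In practice the result should be rounded up to the nearest supported DIMM configuration, leaving headroom if Dynamic Memory is enabled.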
5. Storage Capacity
Sufficient storage must be specified for the parent partition and any guest VMs that are to be
hosted on local storage (typically pooled desktops and XenApp servers):
At a minimum, Hyper-V Server requires 8GB of storage, however, 20GB or greater is
recommended. For more information, please refer to the Microsoft Website – Hyper-V
Server 2008 R2 System Requirements.
Microsoft Windows Server 2008 R2 requires a minimum of 32GB of storage, however,
40GB or greater is recommended. For more information, please refer to the Microsoft
TechNet Article – Installing Windows Server 2008 R2.
Virtual desktops typically require between 10 and 20GB of storage depending on
application set and operating system used. Provisioning Services helps to reduce virtual
desktop storage requirements to between 1 and 3GB.
Virtual XenApp servers typically require between 40 and 60GB of storage depending on
the application set. Provisioning Services typically helps to reduce the XenApp storage
requirements to between 10 and 15GB.
The first time a virtual machine is started Hyper-V creates a .BIN file that is equal in size
to the RAM configured for the virtual machine. This file is used to store the contents of
the RAM when the virtual machine is moved into a saved state. The file is created
automatically in the same location where the virtual machine resides and cannot be
disabled.
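The storage figures above combine in a similar way, with one easily missed term: the .BIN file adds each virtual machine's configured RAM to its storage footprint. The sketch below is illustrative only; the parent OS and per-VM values are assumptions drawn from the ranges quoted above.

```python
def host_storage_gb(num_vms, disk_per_vm_gb, ram_per_vm_gb, parent_os_gb=40):
    """Estimate local storage for one Hyper-V host; each VM consumes its
    virtual disk plus a .BIN file equal in size to its configured RAM."""
    return parent_os_gb + num_vms * (disk_per_vm_gb + ram_per_vm_gb)

# Example: 75 Provisioning Services desktops with an assumed 3GB
# write-cache disk and 1.5GB of RAM (.BIN file) each
print(host_storage_gb(75, 3, 1.5))  # 377.5
```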
6. Direct-attached Storage
The Direct-attached Storage (DAS) specification selected during the design has a direct impact
on both performance and availability of the XenDesktop environment, particularly when it is
used to host guest VMs (typically pooled desktops and XenApp servers):
For optimal XenDesktop performance, it is imperative that the DAS solution can handle the IO generated by the virtual desktops on the host:
FlexCast Model | Performance Category | Steady State IOPS per User
Hosted VM | Light | 8
Hosted VM | Normal | 15
Hosted VM | Heavy | 50
Hosted Shared | Light | 2
Hosted Shared | Normal | 4
Hosted Shared | Heavy | 8
IOPS performance can be improved by adding disks to the RAID array and/or using high performance disks (e.g. SAS 15k / SSD). In addition it is highly recommended that the array controllers are specified with a battery backed cache and a minimum of 512MB of memory. For more information, please refer to the Citrix Knowledgebase Article CTX130632 – Storage Best Practices.
Using SSDs is particularly critical in environments with blade servers because most vendors only provide two slots for local storage per blade.
Note: Hyper-V supports the following Direct-attached Storage (DAS) technologies - Serial Advanced Technology Attachment (SATA), external Serial Advanced Technology Attachment (eSATA), Parallel Advanced Technology Attachment (PATA), Serial Attached SCSI (SAS), SCSI, USB, and Firewire.
While some RAID levels are optimized for writes (e.g. RAID 0, 1, 10), others such as
RAID 5 or 6 are not:

RAID Level | Capacity | Read Performance (Random) | Write Performance (Random)
RAID-0 | 100% | Very Good | Very Good (No Write Penalty)
RAID-1 | 50% | Very Good | Good (Write Penalty 2)
RAID-5 | Disk Size * (Number of Disks - 1) | Very Good | Bad (Write Penalty 4)
RAID-10 | 50% | Very Good | Good (Write Penalty 2)
Therefore, Hyper-V hosts supporting the write intensive Provisioning Services write-
cache (80% write) and Machine Creation Services difference disks (50% write) should
utilize either RAID-1 or RAID-10. RAID-0 is not recommended because it does not
offer redundancy.
7. Shared Storage Connectivity
Hyper-V hosts that support virtual machines on shared storage (typically dedicated desktops and
infrastructure servers) must have an appropriate storage connection. Hyper-V supports the use
of either Host Bus Adapters (HBA) or Ethernet connections to Storage Area Networks.
Note: Hyper-V supports the following Storage Area Network (SAN) technologies - Internet SCSI (iSCSI), Fibre Channel, and SAS technologies.
Note: Microsoft does not support Network Attached Storage (SMB, CIFS or NFS protocols) with Hyper-V. For more information, please refer to Microsoft Knowledgebase Article KB2698995 – Microsoft Support Policy for Hyper-V Environments Utilizing Network Attached Storage.
8. Network
For reasons of performance and resiliency, the following should be taken into
consideration when selecting the Hyper-V host network cards:
Sharing network cards between virtual desktops can lead to a network bottleneck. The
use of fast network adapters/switches (1Gbps or greater) will help prevent the network
from becoming a bottleneck.
If sufficient infrastructure exists, network throughput can be increased by separating
different types of network traffic across multiple physical NICs, for example
management, virtual machine, cluster heartbeat, live migration, provisioning, backup and
storage traffic can all be isolated from each other. Please refer to the Networking
section of this document for further details.
The implementation of NIC teaming provides increased resiliency by allowing multiple
network cards to function as a single entity. In the event of one NIC failing, traffic will
be automatically routed to another NIC in the team.
Note: Microsoft does not support NIC teaming with Hyper-V; however, third party solutions are
available. For more information, please refer to Microsoft Knowledgebase Article KB968703 -
Microsoft Support Policy for NIC Teaming with Hyper-V.
Virtual Machine Queue (VMQ) capable network cards should be selected to improve
network performance. VMQ delivers network traffic from the external network
directly to the virtual machines running on the host, thus reducing the overhead of
routing packets.
TCP chimney offload capable network adapters should be selected so that network stack
processing can be offloaded to the network adapter. For more information, please refer
to the Microsoft Knowledgebase Article KB951037 – Information about the TCP
Chimney Offload.
Jumbo Frames compatible network cards should be selected for dedicated iSCSI
connections to the SAN. Jumbo frames are Ethernet frames containing more than 1500
bytes of payload. Jumbo frames allow higher throughput to be achieved, reducing the
load on system bus memory, and reducing the CPU overhead. For more information,
please refer to the Microsoft Blog post – Hyper-V Network Optimizations.
Note: All NICs and switches between the Hyper-V hosts and the storage solution must support jumbo
frames.
For details on the number of network cards required, please refer to the Physical Network
Connections recommendations within the Networking section of this document.
9. Scale Up/Out
There are a number of environmental factors that need to be considered when determining
whether the Hyper-V host hardware specification should be “scaled up” (reduced number of
high-performance hosts) or “scaled out” (increased number of less powerful hosts), including:
Data Center Capacity. The data center may have limited space, power and/or cooling
available. In this situation, consider scaling up.
Infrastructure and Maintenance Costs. When determining the overall cost of the
Hyper-V hosts, take care to include costs such as rack space, support contracts and
the network infrastructure required.
Hosting Costs. There may be hosting and/or maintenance costs based on the number
of physical servers used. If so, consider “scaling up” to reduce the long-term costs of
these overheads.
Redundancy. Spreading user load across additional less-powerful servers helps reduce
the number of users affected by a hardware or software failure on a single host. If the
business is unable to accept the loss of a single high-specification server, consider
“scaling out”.
10. Hyper-V Host Scalability
During the design, it’s necessary to estimate the number of physical host servers that will be
required to support the XenDesktop implementation. Ideally, scalability testing should be
performed prior to the hardware being ordered, however this is not always possible. In these
situations, consider the following guidelines when determining single host scalability:
The parent partition typically utilizes between 8 and 12% of the processor resources
available leaving between 88 and 92% available for the child partitions.
At a minimum, the parent partition requires 512MB of RAM, however 2GB or greater is
recommended for VDI environments.
Microsoft supports up to 12 virtual processors per logical processor for Hyper-V hosts
supporting Windows 7. A maximum of 8 virtual processors per logical processor is
supported for all other operating systems.
Physical     Cores per   Threads per Core    Max Virtual         Max Virtual Processors –
Processors   Processor   (Hyper-Threading)   Processors – Win7   All Other OS
2            2           2                   96                  64
2            4           2                   192                 128
2            6           2                   288                 192
2            8           2                   384                 256
2            10          2                   480                 320
4            2           2                   192                 128
4            4           2                   384                 256
4            6           2                   512 (Maximum)       384
4            8           2                   512 (Maximum)       512 (Maximum)
4            10          2                   512 (Maximum)       512 (Maximum)
It is important to realize that these are maximum values. Testing from Cisco shows that a Cisco
UCS B230 M2 Blade Server (2 processor / 10 core / 2 threads) supports 145 Windows 7
desktops running the Login VSI medium workload (rather than the 480 maximum supported).
Processor was identified as the primary bottleneck. For more information, please refer to the
Cisco Whitepaper – Citrix XenDesktop on FlexPod with Microsoft Private Cloud.
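As a sketch, the ceiling shown in the table above can be computed directly. The 12:1 and 8:1 virtual-processor ratios and the 512 virtual processor cap are the Hyper-V 2008 R2 limits cited earlier; the function name is illustrative:

```python
# Estimate the Hyper-V 2008 R2 virtual processor ceiling for a host.
# Ratios from the table above: 12 vCPUs per logical processor for
# Windows 7 guests, 8 per logical processor for all other operating
# systems, with an absolute cap of 512 virtual processors per host.

def max_virtual_processors(sockets, cores_per_socket, threads_per_core,
                           windows7=True):
    logical_processors = sockets * cores_per_socket * threads_per_core
    ratio = 12 if windows7 else 8
    return min(logical_processors * ratio, 512)

# A 2-socket, 10-core, Hyper-Threaded host (40 logical processors):
print(max_virtual_processors(2, 10, 2))                  # 480 for Windows 7
print(max_virtual_processors(2, 10, 2, windows7=False))  # 320 for other OS
```

Remember that these are supportability maximums, not sizing targets; as the Cisco testing above shows, real-world density is usually constrained by processor or storage long before the ratio limit is reached.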
11. Dynamic Memory
Citrix has conducted many scalability tests with Dynamic Memory enabled and found it to be a
very useful and recommended tool. However, there are a few areas to consider during planning:
Dynamic Memory does not oversubscribe the memory pool. Once all of the physical
memory has been assigned, desktops will not be able to request additional memory.
The CPU cost of managing Dynamic Memory is real, but very reasonable (less than 1%
additional CPU overhead).
An increase in IOPS is normal and should be planned for. When a virtual machine
requests additional memory, it will also start to utilize its page file in preparation for its
request being denied by Hyper-V and the operating system running out of physical
memory. Therefore, it should be expected that a virtual machine will increase its I/O
characteristics during this time, which can result in an increase in IOPS of up to 10%.
This is a short-term increase, and one noticed more in mass scalability testing than in
real-life situations. However, on systems where RAM is more plentiful than IOPS,
Dynamic Memory settings should be conservative. For example, if a Windows 7 virtual
machine with a subset of applications has been tested and determined to require a
minimum of 700MB to run, then the Dynamic Memory default setting of 512MB should
be increased to 700MB. Accepting the default setting will increase IOPS consumption
due to the additional paging that occurs within the guest.
For more information, including recommended default values for Startup RAM by operating
system, please refer to the Microsoft TechNet Article – Hyper-V Dynamic Memory
Configuration Guide.
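Because Dynamic Memory will not oversubscribe the pool, a simple planning check is to confirm that the desktops' combined maximum memory fits within the host's physical RAM. The sketch below assumes the 2GB parent partition reservation recommended above; the helper name and example figures are illustrative:

```python
# Sanity-check Dynamic Memory planning for a Hyper-V host: since the
# memory pool is not oversubscribed, the desktops' combined maximum
# memory must fit in physical RAM after the parent partition's share.

def fits_in_host(host_ram_gb, vm_count, max_memory_per_vm_gb,
                 parent_partition_gb=2):  # 2GB recommended for VDI
    available_gb = host_ram_gb - parent_partition_gb
    return vm_count * max_memory_per_vm_gb <= available_gb

# 96GB host, 60 desktops allowed to balloon up to 1.5GB each:
print(fits_in_host(96, 60, 1.5))  # True  (90GB needed, 94GB available)
print(fits_in_host(96, 70, 1.5))  # False (105GB needed)
```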
12. Component Redundancy
The design should ensure that the hardware specifications selected for the infrastructure and
desktops hosts incorporate a sufficient level of redundancy to meet the needs of the business:
Infrastructure Hosts. A component failure on a host supporting the XenDesktop
infrastructure servers could prevent users from connecting to their desktop. Even if
Hyper-V High Availability is enabled, there could be a brief outage while the servers are
restarted on another host.
Desktop Hosts. Component redundancy adds cost to the solution. Therefore, it may
be necessary to tailor the level of redundancy provided according to the requirements of
the user groups. For example, the Executive User group is likely to require a higher level
of redundancy than the HR user group.
For scenarios where a high level of redundancy is required, use redundant storage and network
connections as well as power supplies, fans and disks (typically RAID-1 or RAID-10).
Networking
If unfamiliar with the concepts in this section, please refer to the Microsoft Whitepaper -
Understanding Networking with Hyper-V.
When integrating Hyper-V with XenDesktop it is important to consider the following key
networking topics:
1. Physical Network Connections
If sufficient infrastructure exists, performance may be improved by separating different types of
network traffic across multiple physical Network Interface Cards (NICs), for example
management, cluster, virtual machine, storage, provisioning and backup traffic can all be isolated
from each other. A Hyper-V host used to support a XenDesktop environment typically utilizes
between four and eight physical network connections:
2 x NICs (teamed): Live Migration Network, Cluster Shared Volume Network, Virtual
Machine Network
1 x NIC: Management Network
1 x NIC: Cluster Private Network
If standard network adapters, rather than HBAs, are used to access shared storage, separate out
the storage traffic onto a dedicated network:
2 x NICs (teamed): Storage Network
In addition, if the servers are network-constrained and Provisioning Services is utilized it may be beneficial to separate out the provisioning traffic:
2 x NICs (teamed): Provisioning Network
For more information, please refer to CTX120955 - How to Setup a Multi-Homed VM
Separating Large Scale Traffic from ICA Traffic for XenDesktop.
Note: All hosts within a cluster must have the same number of network interfaces and the interfaces must be connected to the same networks.
2. NIC Teaming
The implementation of NIC teaming provides increased resiliency by allowing two or more
network cards to function as a single entity. In the event of one NIC failing, traffic is
automatically routed to the second NIC. The Hyper-V design should determine which network
connections should be teamed for resiliency. Ideally, all network connections should be teamed
apart from the Management and Cluster Networks because they can be failed over to alternative
networks in the event of a failure.
Many servers today are supplied with NICs offering two or more ports. As such, it is important
that any teams created consist of connections from two separate physical NICs so that a single
card failure does not bring down the team.
Redundancy should also encompass the external network. Teamed NICs should be diversely
connected to external switches to help reduce the risk from a single switch failure.
Note: Microsoft does not support NIC teaming with Hyper-V; however, third party solutions are available. All support queries regarding teaming with Hyper-V should be directed to the NIC OEM. In addition, some OEMs may not support TCP Chimney Offload and VMQ functionality with NIC teaming enabled. Please refer to your OEM’s teaming documentation for more information. For more information, please refer to Microsoft Knowledgebase Article KB968703 - Microsoft Support Policy for NIC Teaming with Hyper-V.
3. Virtual Adapter
Hyper-V supports two types of network adapters: Legacy and Synthetic. The synthetic adapter
offers better performance and lower host CPU utilization, and is installed by default when a VM is created.
In XenDesktop environments using Provisioning Services the legacy network adapter must be
used in order to PXE boot the desktops. As such, two options must be considered during the
Hyper-V design:
Only use the legacy network adapter, which does not perform as well as the synthetic
adapter.
Configure the Provisioning Services targets with dual network adapters: one synthetic
and one legacy adapter. Once the virtual desktop PXE boots, the synthetic adapter has
precedence over the legacy adapter, but PVS traffic is bound to the legacy adapter and is
not able to take advantage of the better performing synthetic adapter.
Note: If the dual NIC configuration is used, each desktop will require two IP addresses from the DHCP server.
The legacy network connection will be suitable for all but the most network intensive
environments. Therefore, for simplicity, utilize the legacy network adapter unless an associated
network bottleneck is identified.
4. Provisioning Services
Provisioning Services is a bandwidth intensive application that can occasionally impact network
performance, particularly if 100Mbps/1Gbps connections are used. Depending on the impact
Provisioning Services will have on the underlying network, it may be necessary to implement a
high-capacity network (10Gbps) or create a separate physical network dedicated to Provisioning
Services traffic. For more information, please refer to the Citrix Blog post – PVS Stream Traffic
Isolation.
For details on how to create a separate Provisioning Services network with XenDesktop, please
refer to CTX120955. For details on the average data transfers during boot-up by operating
system, please refer to CTX125744.
5. IP Addressing
IP addresses need to be assigned to the Hyper-V network interfaces and individual virtual
machines. As such, the design must consider the IP addressing requirements for these
components. If DHCP is used to provide the IP configuration for the Hyper-V hosts, ensure
that reservations are created for the appropriate MAC addresses to prevent configuration issues
when IP addresses change.
The Hyper-V network design should ensure that the Hyper-V traffic is routed via the
appropriate virtual and physical networks. For example, shared storage traffic is routed via the
parent partition and not directly from the virtual machine. Depending on the networking
architecture used, static routes may need to be added to the parent partition’s routing table to
ensure that the correct interface is used.
6. VLANs
Many network environments utilize VLAN technologies to reduce broadcast traffic, enhance
security and to enable complex virtual network configurations which would otherwise not be
possible. The Hyper-V network design should determine which VLANs will be required.
Typical VLANs used in a XenDesktop environment include:
Desktop VLAN
Server VLAN
Provisioning VLAN
Separate physical networks are typically created for the Hyper-V Live Migration, Cluster and
Management networks.
Note: Hyper-V supports the configuration and use of 802.1Q tagged VLANs. Each Hyper-V host will need to have its physical NICs connected to specific VLAN trunk ports to allow for the correct routing of VLAN tagged traffic. For more information, please refer to the TechNet Article – Hyper-V: Configure VLANs and VLAN Tagging.
7. Security
For security, firewalls should be used to control traffic flow between the following components:
Virtual desktops and the infrastructure servers
End users and the virtual desktops
End users and the infrastructure servers
The Hyper-V network design should ensure that port 8100 (WCF) is open between the
XenDesktop Controllers and the VMM server to facilitate machine state queries and power
management operations. For a full list of ports that need to be open for XenDesktop, please
refer to the Citrix Knowledgebase Article CTX101810 – Communication Ports Used By Citrix
Technologies.
Note: The Windows Communication Foundation (WCF) utilizes Kerberos for authentication and is encrypted
by default. For more information, please refer to the Microsoft TechNet Article – Hardening the VMM server.
Storage
For an introduction to storage, please refer to Citrix Knowledgebase Article CTX118397 –
Introduction to Storage Technologies. For detailed XenDesktop storage recommendations, please
refer to Citrix Knowledgebase Article CTX130632 – Storage Best Practices.
Storage has a major impact on the performance, scalability and availability of the XenDesktop
implementation. As such, the storage design should focus on the following key areas:
1. Direct-attached Storage/Storage Area Network
The Hyper-V storage design must determine whether the virtual machines should be hosted on a
Storage Area Network (SAN), Direct-attached Storage (DAS) or a combination of the two.
There are a number of factors that need to be considered, including:
Unlike DAS, SAN supports Live Migration, Failover Clustering and Performance and
Resource Optimization (PRO). Although these features are less critical when supporting pooled
virtual desktops and XenApp servers, they are still very important for dedicated desktops
and infrastructure servers. A SAN must be used if these features are required.
Another disadvantage of local storage is a reduced level of scalability due to the hardware
limit on the number of disks supported, particularly for blade systems. As the number
of virtual desktops per host increases, additional disks are required to accommodate the
number of IOPS generated.
Although using local storage may require additional disks and array controllers to be
purchased per Hyper-V host, the overall cost is likely to be significantly less than that of
an enterprise storage solution.
In many cases, the optimal solution is to use SAN storage for the infrastructure servers and
dedicated desktops so that they can benefit from Failover Clustering and Live Migration while
using less expensive DAS storage for pooled desktops and XenApp servers.
2. Database Storage
If SQL Server is to be virtualized, the Hyper-V storage design must allocate DAS or SAN
storage for each of the following databases:
XenDesktop farm
XenApp farm
EdgeSight for NetScaler
XenApp Power and Capacity Management
Provisioning Services farm
EdgeSight for XenApp / Endpoints
SmartAuditor
Command Center
XenApp Configuration Logging
Virtual Machine Manager
App-V Data Store
3. Tiered Storage
A one-size-fits-all storage solution is unlikely to meet the requirements of most virtual desktop
implementations. The use of storage tiers provides an effective mechanism for offering a range
of different storage options differentiated by performance, scalability, redundancy and cost. In
this way, different virtual workloads with similar storage requirements can be grouped together
and a similar cost model applied. For example, the storage requirements for the Executive user
group are likely to be very different to those of the HR user group.
4. Performance
The performance requirements for each desktop group must be considered during the design of
the storage solution:
Storage Controllers. High performance solutions include Fibre Channel SAN and
hardware iSCSI, while lower throughput can be achieved using standard network
adapters and configuring software iSCSI. The performance of Fibre Channel and iSCSI
can be improved by implementing multiple storage adapters and configuring them for
multipathing, where supported.
Page 22
RAID. Virtual desktops are typically write-intensive:
PVS - 80% write / 20% read
MCS - 50% write / 50% read
Therefore, the Provisioning Services write-cache and MCS difference disks should be
hosted on RAID-1 or RAID-10 storage in preference to RAID-5 due to the associated
write penalties:
RAID Level Write Penalty
RAID-0 1
RAID-1 & RAID-10 2
RAID-5 (3 data & 1 parity) 4
RAID-5 (4 data & 1 parity | 3 data & 2 parity) 5
RAID-5 (5 data & 1 parity | 4 data & 2 parity) 6
Table 8: IOPS Write Penalty for RAID Configuration
The disk activity for the Provisioning Services vDisk Store will primarily consist of reads,
provided that it’s not used for private vDisks or server side caching. Therefore, RAID-5
offers an acceptable solution at a reduced cost compared to RAID-10.
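The effect of these write penalties on usable IOPS can be sketched with the standard functional IOPS formula. The disk count and per-disk IOPS in the example are illustrative assumptions, not figures from this document:

```python
# Functional (usable) IOPS for an array, given the raw IOPS of the
# disks and the RAID write penalty from Table 8. Each write costs
# `write_penalty` backend operations; each read costs one.

def functional_iops(raw_iops, write_fraction, write_penalty):
    read_fraction = 1.0 - write_fraction
    return raw_iops / (read_fraction + write_fraction * write_penalty)

# Ten 15k SAS disks at an assumed ~175 raw IOPS each, with the PVS
# write-cache workload profile above (80% write / 20% read):
raw = 10 * 175
print(round(functional_iops(raw, 0.8, 2)))  # RAID-1/RAID-10 (penalty 2)
print(round(functional_iops(raw, 0.8, 4)))  # RAID-5 (penalty 4)
```

The same array delivers nearly twice the usable write-heavy IOPS on RAID-10 as on RAID-5, which is why the write cache and difference disks belong on RAID-1/RAID-10.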
Number of VMs per LUN. Storage LUNs that are connected to Hyper-V using Fibre
Channel or iSCSI may be represented as a single iSCSI queue on the storage side (storage
dependent). When a single LUN is shared among multiple virtual machines' disks, all I/O
has to serialize through the iSCSI queue and only one virtual disk's traffic can traverse
the queue at any point in time. This leaves all other virtual disks' traffic waiting in line.
Although it is possible to parallelize iSCSI commands by implementing active/active
multipathing or optimizing the command traversal using techniques such as command
batching, LUNs and their respective iSCSI queue may still become congested and the
performance of the virtual machines may decrease.
Therefore, the number of virtual desktops allocated per LUN should typically be
restricted to between 20 and 30 virtual machines. The actual virtual machine per LUN ratio
is environment and workload specific and needs to be determined by means of
performance monitoring (i.e. VM specific read/write latency) and with reference to the
best practices of the storage vendor.
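As a first-pass sketch of the resulting LUN count (25 VMs per LUN is an illustrative midpoint of the 20-30 guidance above; the final ratio must still come from performance monitoring and vendor best practices):

```python
import math

# Estimate the number of LUNs needed for a desktop group, using the
# 20-30 VMs-per-LUN guidance (25 chosen here as an illustrative
# midpoint; validate against storage vendor best practices).

def luns_required(total_desktops, vms_per_lun=25):
    return math.ceil(total_desktops / vms_per_lun)

print(luns_required(1000))      # 40 LUNs at 25 VMs per LUN
print(luns_required(1000, 20))  # 50 LUNs at the conservative end
```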
IOPS. The number of IOPS generated will vary based on application set, user behavior,
time of day and operating system used. Scalability testing should be performed to
determine the IOPS required during boot-up, logon, steady state and log-off phases.
The following table provides an estimate of the number of IOPS that will be generated
based on FlexCast model and user performance category:
FlexCast Model   Performance Category   Steady State IOPS per User
Hosted VM        Light                  8
Hosted VM        Normal                 15
Hosted VM        Heavy                  50
Hosted Shared    Light                  2
Hosted Shared    Normal                 4
Hosted Shared    Heavy                  8
IOPS performance can be improved by selecting an appropriate RAID type, adding disks to the RAID array and/or using high performance disks (e.g. SAS 15k / SSD). For more information, please refer to the Citrix Knowledgebase Article CTX130632 – Storage Best Practices.
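For a first-pass capacity estimate, the per-user figures above can be multiplied out across a mixed user population. This is a steady-state sketch only; boot, logon and logoff peaks must still be measured, and the example mix is illustrative:

```python
# First-pass steady-state IOPS estimate from the per-user table above.
# Keys are (FlexCast model, performance category).
STEADY_STATE_IOPS = {
    ("Hosted VM", "Light"): 8,      ("Hosted VM", "Normal"): 15,
    ("Hosted VM", "Heavy"): 50,     ("Hosted Shared", "Light"): 2,
    ("Hosted Shared", "Normal"): 4, ("Hosted Shared", "Heavy"): 8,
}

def estimate_steady_state_iops(population):
    """population: list of (model, category, user_count) tuples."""
    return sum(STEADY_STATE_IOPS[(m, c)] * n for m, c, n in population)

# Example mix: 400 normal and 100 heavy Hosted VM users, plus 500
# light Hosted Shared users:
mix = [("Hosted VM", "Normal", 400), ("Hosted VM", "Heavy", 100),
       ("Hosted Shared", "Light", 500)]
print(estimate_steady_state_iops(mix))  # 12000
```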
Note: The boot process typically generates the highest level of disk I/O activity. As such, virtual
desktops should be started in batches prior to the beginning of the business day to help reduce the load on
the storage subsystem. In addition, disabling the automatic restart of virtual desktops following logoff
will also help to reduce storage load.
Current and future requirements for multiple staggered shifts must be considered, as
there is likely to be a significant impact on performance due to the increased logon and
logoff activity.
Note: The number of IOPS generated by the virtual desktops can be reduced through operating system
optimizations. For more information, please refer to CTX124239 – Optimizing Windows XP for
XenDesktop and CTX127050 – Windows 7 Optimization Guide.
5. Thin Provisioning
Thin provisioning allows more storage space to be presented to the virtual machines than is
actually available on the storage repository.
NTFS has not been optimized for thin provisioned LUNs, specifically when an “over-commit”
state may occur, where the amount of virtual capacity reported exceeds the actual physical
capacity. Microsoft provides a number of recommendations to improve the reliability of NTFS
with thin provisioning that should be considered during the design of Hyper-V storage:
Do not create multiple volumes on a single thinly provisioned LUN
Do not store a system volume on a thinly provisioned LUN
Do not store paging files on a thinly provisioned LUN
Contact your storage vendor for guidance on whether they support thin provisioning with
Windows Server 2008 R2. For more information, please refer to Microsoft Knowledgebase
Article KB959613 – Guidelines for Thinly Provisioned LUNs in Windows Server 2008 R2.
Warning: If using thin provisioning in production environments, take appropriate measures to ensure that
sufficient storage exists and there is a system in place to alert administrators when storage is running low. If
available disk space is exhausted, virtual machines will fail to write to disk, and in some cases may fail to read
from disk, possibly rendering the virtual machine unusable.
6. Data De-Duplication
Storage requirements may be reduced through the use of data de-duplication, whereby duplicate
data is replaced with pointers to a single copy of the original item. There are two
implementations of de-duplication available:
Post-Process De-Duplication. The de-duplication is performed after the data has
been written to disk. Post-process de-duplication should be scheduled outside business
hours to ensure that it does not impact system performance. Post-process de-duplication
offers minimal advantages for pooled desktops as the write cache / difference disk is
typically reset on a daily basis.
In-Line De-Duplication. Examines data before it is written to disk so that duplicate
blocks are not stored. The additional checks performed before the data is written to disk
can sometimes cause slow performance. If enabled, in-line de-duplication should be
carefully monitored to ensure that it is not affecting the performance of the XenDesktop
environment.
Hyper-V Failover Cluster
A Hyper-V failover cluster is a group of Hyper-V hosts (cluster nodes) that work together to
increase the availability of hosted virtual machines. Virtual machines on a failed host are
automatically restarted on another host within the same cluster, provided that sufficient resources
are available.
The following areas should be considered when designing the Hyper-V failover cluster:
1. Virtual Machines to Cluster
The following virtual machines are critical to Hyper-V and XenDesktop and should be hosted on
a failover cluster:
VMM servers
Web Interface servers
License servers
SQL servers
Desktop Controllers
Provisioning Services
XML Brokers
XenApp Data Collectors
Dedicated Desktops
Pooled Desktops (with personal vDisk)
Pooled desktops and XenApp servers do not typically need to be hosted on a cluster because
they shouldn’t have user-specific data or applications.
Note: Cluster Shared Volume services depend on Active Directory for authentication and will not start unless
they can successfully contact a Domain Controller. Therefore, always ensure that there is at least one Domain
Controller that is not hosted on a Cluster Shared Volume.
2. Number of Failover Clusters
The number of Hyper-V clusters required to support an implementation of XenDesktop varies
depending on the following key factors:
Infrastructure Nodes. When designing Hyper-V clusters, consider separating the host
server components such as AD, SQL and XenDesktop Controllers from the dedicated
virtual desktops by placing them on different Hyper-V clusters. This will ensure that
dedicated virtual desktop resources are isolated from infrastructure resources for optimal
performance and availability.
Performance. Many businesses will have dedicated desktops that require guaranteed
levels of performance. As such, it is sometimes necessary to create dedicated clusters to
meet the service level agreements associated with these desktops.
Application Set. It may be beneficial to dedicate clusters for large dedicated desktop
groups as they share a common, predictable resource footprint and application behavior.
Alternatively, grouping dedicated desktop groups together based on differing resource
footprints could help to improve desktop density per host. For example, splitting
processor intensive dedicated desktops across several clusters will help to distribute the
impact from processor saturation. If XenApp is used to provide virtualized application
access, it is advisable to separate the XenApp servers onto a separate Hyper-V cluster to
balance resource utilization.
Physical Network. Some environments may have complex network configurations
which require multiple clusters to be deployed. For example, some dedicated desktop
groups may need to be isolated onto specific subnets for reasons of security, whilst
others may have requirements for network connectivity which can only be satisfied
within specific data centers or network locations.
Virtual Network. Depending on the environment, it may be overly complex to trunk
every VLAN to every Hyper-V cluster. As such, it may be necessary to define clusters
based on the VLANs to which they are connected.
Processor Architecture. Hyper-V requires that nodes within the same cluster have
processors from the same family. Therefore, the Hyper-V design will need to consider
the processor architecture available when determining the number of clusters required.
3. Nodes per Failover Cluster
Failover Clustering supports up to 16 nodes per cluster; however, consider reserving at least one
node in the cluster for failover and maintenance.
A single cluster can support up to a maximum of 1,000 virtual machines. It is unlikely that a
cluster hosting dedicated virtual desktops would consist of 16 nodes as this would limit each
node to 63 dedicated virtual machines (66 if one host is dedicated to HA). It is far more likely
that a Hyper-V cluster supporting XenDesktop dedicated virtual desktops will consist of
between 7 and 14 nodes:
7 nodes per cluster - 167 VMs per node with one dedicated to HA
14 nodes per cluster - 76 VMs per node with one dedicated to HA
For more information, please see the Microsoft TechNet article Requirements and Limits for
Virtual Machines and Hyper-V in Windows Server 2008 R2.
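The per-node arithmetic above can be sketched as follows. It assumes the 1,000 VM per-cluster limit and one node reserved for HA; note that integer division gives 166 VMs per node in the 7-node case, which the text above rounds to 167:

```python
# VMs per active node in a Hyper-V failover cluster, given the
# 1,000-VM-per-cluster limit and nodes reserved for failover (HA).
CLUSTER_VM_LIMIT = 1000

def vms_per_active_node(nodes, ha_reserve=1):
    active_nodes = nodes - ha_reserve
    return CLUSTER_VM_LIMIT // active_nodes

print(vms_per_active_node(7))   # 166 VMs per node across 6 active nodes
print(vms_per_active_node(14))  # 76 VMs per node across 13 active nodes
print(vms_per_active_node(16))  # 66 VMs per node across 15 active nodes
```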
Note: When multiple virtual machines exist for each server role, ensure that they are not all hosted on the same
physical host. This will help to ensure that the failure of a single virtualization host does not result in a service
outage. In addition, the physical hosts supporting the core infrastructure components should ideally be located in
different chassis/racks.
4. Cluster Shared Volumes
A Cluster Shared Volume (CSV) allows virtual machines that are distributed across multiple
cluster nodes to access their Virtual Hard Disk (VHD) files at the same time. The clustered
virtual machines can all fail over independently of one another. CSVs are required for Failover
Clustering and Live Migration functionality. Therefore infrastructure servers and dedicated
desktops are typically hosted on CSVs.
The following recommendations should be considered during the Cluster Shared Volume
design:
Microsoft recommends that the CSV communications take place over a different
network to the virtual machine and management traffic.
The network between cluster nodes needs to be low latency to avoid any lag in disk
operations but doesn’t need to be high bandwidth due to the minimal size of metadata
traffic generated under normal circumstances.
Note: Since Clustered Shared Volume communication occurs over the Server Message Block (SMB)
protocol, the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks
services must be enabled on the network adapters used for the cluster network. Disabling NetBIOS over
TCP/IP is recommended.
For more information, please see the Microsoft TechNet article Requirements for Using Cluster
Shared Volumes in a Failover Cluster in Windows Server 2008 R2.
System Center 2012 Virtual Machine Manager
The use of System Center 2012 Virtual Machine Manager (VMM) provides a number of key benefits
to a Hyper-V implementation supporting XenDesktop, including:
1. Simplified integration with XenDesktop
2. Integrated Performance and Resource Optimization (PRO) of Virtual Machines
3. Intelligent Placement of Virtual Machines
4. Central Library of Infrastructure Components
5. Delegated management and access of Virtual Machines
For more benefits, please refer to the Microsoft Article – Virtual Machine Manager Top Ten
Benefits. For more information on VMM, please refer to the Microsoft Whitepaper – Virtual
Machine Manager Technical Documentation.
1. VMM Architecture
There are four key components which make up the VMM role:
1. Management server
2. Database
3. Administrative console
4. VMM library
The VMM components can be installed on either physical or virtual servers. Within
XenDesktop, a single desktop group can be associated with only one VMM server; however one
VMM server can manage multiple desktop groups.
Note: VMM 2012 does not support SQL Server Express as a database option. Only SQL Server is
supported, either Standard or Enterprise edition.
Note: The VMM Administrator Console must be installed on each XenDesktop Controller within the site. In
addition, when the latest Provisioning Services 6.1 hotfixes have been applied, the VMM Administrator Console
must also be installed on the Provisioning Services server so that the Setup Wizard functions correctly.
2. VMM Design and Scalability
Design
When designing a VMM 2012 environment there are two design choices that should be
considered. The two designs below are the tested and verified designs according to the
Microsoft Fast Track Reference Architecture Guide.
Design 1 (Recommended): A single VMM virtual machine exists within a host cluster, or a series
of VMM servers each supporting no more than 2,000 virtual desktops. These VMM virtual
machines have no VMM application-level HA; however, they do have VM-based high availability
enabled within the Hyper-V cluster. The VM operating system and the VMM application
components can easily migrate between hosts in the cluster in the event of a hardware failure.
This design offers a simpler setup and is the most commonly used. An SCVMM failure will not
cause user connection errors or user disconnects; however, it will prevent XenDesktop from
starting or provisioning new desktops. Therefore, this design is appropriate where short
windows of service disruption are acceptable.
Design 2: Two physical servers are used to support VMM 2012 with the High Availability
feature enabled. These two servers should not be virtualized. This design offers VMM
application-layer HA; however, it can also increase complexity during the initial setup of the
environment. This design is also more expensive, as each SCVMM cluster instance can still
only support up to 2,000 virtual desktops.
Refer to the Microsoft Fast Track Reference Architecture Guide for more information on VMM
2012 architecture guidance.
Scalability
The following should be taken into consideration when determining the number of VMM
servers required:
Citrix has found that the best performance is achieved when each VMM server is limited
to managing 2000 virtual desktops.
While it is possible to run other applications on the VMM server, it is not recommended,
especially other System Center 2012 applications because they tend to have heavy
resource demands and could significantly impact VMM performance.
The following table provides recommended specifications for the key VMM
infrastructure components:
Component                   Recommended                                           Recommended (>150 Hosts)

VMM Management Server
  CPU                       Dual-processor, dual-core, 2.8 GHz (x64) or greater   Dual-processor, dual-core, 3.6 GHz (x64) or greater
  Memory                    4 GB                                                  8 GB
  Disk space (no local DB)  40 GB                                                 50 GB
  Disk space (local DB)     150 GB                                                Use a dedicated SQL server

Database Server
  CPU                       Dual-processor, dual-core, 2 GHz (x64) or greater     Dual-processor, dual-core, 2.8 GHz (x64) or greater
  Memory                    4 GB                                                  8 GB
  Disk space                150 GB                                                200 GB

VMM Library Server
  CPU                       Dual-processor, dual-core, 3.2 GHz (x64) or greater   Dual-processor, dual-core, 3.2 GHz (x64) or greater
  Memory                    2 GB                                                  2 GB
  Disk space                Size will depend on what will be stored               Size will depend on what will be stored
Operating System Delivery
This section compares and contrasts the three main methods of operating system delivery available
with Hyper-V, including Template Installs, Provisioning Services and Machine Creation Services.
1. Templates
Templates are created using a previously installed and configured virtual machine. Templates are
stored within the VMM Library to simplify deployment across all managed hosts. New virtual
machines created from a template receive a unique identity including MAC address, IP address
and computer name. Templates are typically used for ensuring that infrastructure servers (Web
Interface, XenDesktop Controller, Data Collectors, etc.) are created quickly and consistently.
The following recommendations should be followed when designing the templates required for
Hyper-V.
1. Provision space in VMM for a Windows 2008 R2 template to use for building servers in
the infrastructure. This helps ensure server consistency and speeds up the server build
process. Microsoft recommends a disk size of 40GB or greater for a Windows Server
2008 R2 system. Actual disk space required will vary according to the applications and
features that are installed. If the server will have more than 16 GB of RAM, more disk
space will be needed for paging and dump files.
2. Templates will also need to be created for Provisioning Services. Ensure that sufficient
space is allocated for each desktop type; separate templates are required if desktops need
different local write-cache sizes or numbers of NICs. The Streamed Virtual Machine Setup
Wizard allows a different number of vCPUs and amount of RAM to be specified from a
single template.
3. Use fixed disks instead of dynamic for optimal performance and to ensure that the
Cluster Shared Volume will not run out of disk space as the VHDs expand.
4. It is easy to expand but very difficult to shrink the size of a virtual disk once it has been
created. Therefore, start with a small disk and manually expand it later if necessary.
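As a rough illustration of the template disk-sizing guidance above (the 40 GB baseline is Microsoft's recommendation; sizing the paging allowance equal to RAM for servers with more than 16 GB is an assumption made purely for illustration):

```python
def template_disk_gb(ram_gb, app_overhead_gb=0):
    """Rough starting size (GB) for a Windows Server 2008 R2 template disk.

    40 GB is Microsoft's recommended baseline. Servers with more than
    16 GB of RAM get extra room for the page file and dump files; sizing
    that extra room equal to RAM is an assumption, not official guidance.
    """
    base_gb = 40
    paging_gb = ram_gb if ram_gb > 16 else 0
    return base_gb + paging_gb + app_overhead_gb

print(template_disk_gb(8))       # 40
print(template_disk_gb(32, 20))  # 92
```

Actual requirements will vary with the installed applications and features, so treat this only as a starting estimate to refine during testing.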
2. Machine Creation Services
Machine Creation Services (MCS) does not require additional servers; it utilizes integrated
functionality built into Microsoft Hyper-V. Each pooled desktop has one difference disk and
one identity disk. The difference disk is used to capture any changes made to the master image.
The identity disk is used to store information such as machine name and password. When a
pooled desktop reboots, the difference disk is deleted and the user starts with a brand new
virtual desktop.
IntelliCache is not supported with Hyper-V. As such, MCS generates approximately 50% more
IOPS than Provisioning Services because there is no read or write caching performed.
Therefore, MCS is not typically used for large XenDesktop environments hosted on Hyper-V.
However, MCS is still a good solution for small and medium sized XenDesktop deployments
due to the reduced number of infrastructure servers required and a simplified implementation
process. Unlike Provisioning Services, MCS cannot be used to provision XenApp servers. If
XenApp is to be provisioned in addition to virtual desktops, consider standardizing on
Provisioning Services.
If MCS is selected, the Hyper-V design must ensure that sufficient storage space is available:
Snapshot. Each storage repository used for MCS must include space for a full snapshot
of the master image.
Difference Disk. Each virtual machine has a difference disk, which is used to store all
writes made to the virtual machine. The difference disk is the same size as the base virtual
machine. Thin provisioning can be used (if supported by the storage solution) to reduce
the total amount of storage required.
Identity Disk. Each virtual machine has an identity disk (16MB).
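The storage requirements above can be sketched as a simple worst-case estimate. The full snapshot, full-size difference disk and 16 MB identity disk come from the text; the thin_provisioned_pct knob is a hypothetical parameter for modeling expected difference-disk growth when thin provisioning is available, not a Citrix setting:

```python
def mcs_storage_gb(master_image_gb, vms_per_repository, thin_provisioned_pct=100):
    """Estimated storage (GB) needed per MCS storage repository.

    - one full snapshot of the master image per repository
    - one difference disk per VM, up to the size of the base image
      (thin_provisioned_pct scales this down when thin provisioning
      is available -- an illustrative assumption)
    - one 16 MB identity disk per VM
    """
    snapshot_gb = master_image_gb
    diff_gb = vms_per_repository * master_image_gb * thin_provisioned_pct / 100
    identity_gb = vms_per_repository * 16 / 1024
    return snapshot_gb + diff_gb + identity_gb

# 24 GB master image, 50 pooled desktops, fully allocated difference disks:
print(round(mcs_storage_gb(24, 50), 1))  # 1224.8
```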
3. Provisioning Services
Citrix Provisioning Services allows a single virtual hard disk to be streamed simultaneously to
numerous virtual desktops or XenApp servers helping to reduce storage requirements and
simplify image management.
Provisioning Services is significantly more complicated to implement than MCS; however, it
generates fewer IOPS than MCS when implemented with Hyper-V. Therefore, Provisioning
Services is typically used for large enterprise deployments and MCS for small to medium-sized
businesses.
The following recommendations should be followed when designing Hyper-V to support
Provisioning Services:
Sufficient storage must exist for the Provisioning Services store to support the estimated
number of vDisks required, including backups and future updates. The vDisk store
should be configured so that Provisioning Services can leverage the Windows System
Cache. This can either be accomplished by hosting the vDisk store on a block-level
storage device with the default Provisioning Services configurations or on CIFS storage
through modifications to the Provisioning Services registry. For more information,
please refer to the Citrix Blog Post – Provisioning Services and CIFS Stores.
Each target provisioned from a shared vDisk must have sufficient storage available to
host its write-cache, which can either be hosted on the target itself (RAM/local
storage/shared storage) or a Provisioning Server (local storage/shared storage). In most
situations, consider using either local or shared storage on the target device due to the
following concerns:
o Device-RAM. This can be an expensive use of memory and targets will fail if
memory is exhausted.
o Provisioning Services-Storage. Introduces additional latency because requests
to/from the cache must cross the network.
Note: Live Migration, Failover Clustering and Performance & Resource Optimization (PRO) will
not be available when the write-cache is hosted on local disk.
Use fixed-size virtual hard disks for the Provisioning Services write cache because
dynamic disks suffer from a performance hit associated with disk expansion. The virtual
hard disks stored in the Provisioning Services vDisk Store can be either fixed or dynamic
because they rarely change in size and also leverage the Windows Server Cache. For
more information, please refer to the Citrix Blog Post – Fixed or Dynamic vDisks.
The Windows operating system used to host Provisioning Services is able to partially
cache vDisks in memory (system cache). In order to maximize the effectiveness of this
caching process, a Provisioning Server should have sufficient memory available. The
following formula outlines how the minimum amount of memory for a Provisioning
Server can be determined:
System Cache = 512MB + (# of active vDisks * Avg. data read from vDisk)
Total Server RAM = Committed Bytes under load + System Cache
If the amount of data read from a vDisk is unknown and cannot be determined, it is a
common practice to plan for a minimum of 2GB per active Desktop vDisk and 10GB
per active Server vDisk. For more information, please refer to the Citrix Knowledgebase
Article CTX125126 – Advanced Memory and Storage Considerations for Provisioning
Services.
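The memory formula above can be expressed as a quick calculation, using the common planning values of 2 GB per active desktop vDisk and 10 GB per active server vDisk as defaults when actual read volumes are unknown (the function name and parameters are illustrative only):

```python
def pvs_server_ram_gb(committed_gb, desktop_vdisks, server_vdisks,
                      desktop_read_gb=2, server_read_gb=10):
    """Minimum RAM (GB) for a Provisioning Server, per the formula:

    System Cache = 512 MB + (# of active vDisks * avg. data read from vDisk)
    Total Server RAM = Committed Bytes under load + System Cache
    """
    system_cache_gb = (0.5  # 512 MB baseline
                       + desktop_vdisks * desktop_read_gb
                       + server_vdisks * server_read_gb)
    return committed_gb + system_cache_gb

# e.g. 4 GB committed under load, 3 active desktop vDisks, 1 server vDisk:
print(pvs_server_ram_gb(4, 3, 1))  # 20.5
```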
There are five options for storing the cache file of provisioned virtual machines:
1. Cache on Device HD
2. Cache on Device Hard Drive Persisted
3. Cache in Device RAM
4. Cache on a Server Disk
5. Cache on Server Persisted
The use of Cache on Device HD is recommended for the majority of XenDesktop and
XenApp implementations because it offers fast performance without consuming
expensive RAM.
Note: Live Migration and Failover Clustering are not supported when using Cache on Device HD with
local storage. As such, consider using shared storage for dedicated virtual desktops, XenApp servers and
pooled desktops with Personal vDisks.
4. Desktop Customization
Personal vDisks are available in all versions of XenDesktop 5.6. Generally, the use of a Personal
vDisk is appropriate when there is a strong desire for personalization of the virtual desktop.
This could include a need to use a variety of departmental applications within small, distinct user
groups, or general personalization that is beyond what is available in the user profile. For more
information, please refer to the Citrix Knowledgebase Article CTX133227 – Citrix Personal
vDisk Technology Planning Guide.
The Hyper-V design should consider the following key Personal vDisk design decisions:
The Personal vDisks must utilize one of the storage solutions supported by Hyper-V.
For more information, please refer to the Storage section of this document.
Note: Microsoft does not support Network Attached Storage (SMB, CIFS or NFS protocols) with Hyper-V. For more information, please refer to Microsoft Knowledgebase Article KB2698995 – Microsoft Support Policy for Hyper-V Environments Utilizing Network Attached Storage.
The Personal vDisk consists of two drives: a persistent user disk (PUD) attached to each
VM, which contains user profile information (typically the contents of the C:\Users
folder in Windows), and a hidden VHD drive, which contains information about installed
applications. By default, each is sized at 50% of the total volume size, so when a default-
sized 10 GB Personal vDisk is created, 5 GB is dedicated to the PUD and the remaining
5 GB is assigned to the VHD. If a profile management solution is in place, consider
increasing the size of the application space to 70% or 80% of the total volume size. This
change has to be made in the master image before deploying desktops.
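A minimal sketch of the sizing split described above (the function name and parameters are illustrative, not part of any Citrix tooling):

```python
def pvd_split_gb(total_gb, app_pct=50):
    """Split a Personal vDisk volume between the hidden application VHD
    and the persistent user disk (PUD). The default matches the 50/50
    split; raise app_pct to 70-80 when a profile management solution
    handles user data, as suggested in the text."""
    app_gb = total_gb * app_pct / 100
    user_gb = total_gb - app_gb
    return app_gb, user_gb

print(pvd_split_gb(10))      # (5.0, 5.0) -- default 10 GB PvD
print(pvd_split_gb(10, 70))  # (7.0, 3.0) -- with profile management
```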
The application deployment strategy chosen can significantly alter the expected
hypervisor scalability. It has been observed that performance and scalability degrade
when more applications are installed into a Personal vDisk. The larger the application
and the more intensive the registry operations it performs, the greater the impact will be
in terms of hypervisor resource utilization. As such, it is important to adopt an
application deployment strategy that is consistent with the personalization needs of end
users. Consider the use of published applications and application streaming where
possible to minimize the impact on scalability. For more information, please refer to the
Citrix Knowledgebase Article CTX126773 – Application Integration for Virtual
Desktops.
Note: Scalability testing should be performed to determine the performance impact of Personal vDisks
with the anticipated application set.
Management
Monitoring
Designing and implementing an appropriate monitoring solution for Microsoft Hyper-V will help to
maintain the consistency, reliability and performance of the XenDesktop infrastructure. As such, the
Hyper-V design should include the following key monitoring decisions:
1. Management Tool
It is imperative that the monitoring tool selected includes support for Hyper-V. Microsoft
System Center 2012 Operations Manager with the Hyper-V Management Pack fully supports
Hyper-V 2008 R2 and includes Performance and Resource Optimization (PRO), which allows
virtual machines to be automatically load balanced between nodes within a cluster. For more
information, please refer to the Microsoft TechNet Article – Operations Manager.
2. Management Tool Access
Selected tools should only be made available to relevant personnel. The ability to download,
launch and connect to servers using unauthorized copies of support tools should be minimized
wherever possible. Restricting each administrative tier to authorized personnel helps prevent
unauthorized changes, intentional or otherwise, which could affect the stability of both physical
and virtual hosts and incur the cost of a resulting system outage.
3. Hyper-V Hardware
Vendor-supplied tools should be used where possible to capture and analyze vendor-specific
hardware events.
The high availability feature of Hyper-V can be used to mitigate the impact from a single host
failure. However, there will still be an element of downtime while the virtual machines are
restarted on alternative hosts. Effective proactive monitoring can help to avoid the unnecessary
invocation of high availability and the restart of virtual workloads.
4. Hyper-V Performance
The overall performance of each Hyper-V host will affect the performance of the virtual
machines that it supports. It is critical that the Hyper-V hosts are monitored for bottlenecks
within the processor, memory, disk and network subsystems.
For more information and detailed recommendations, please refer to the Microsoft Blog Post –
Monitoring Hyper-V Performance.
5. XenDesktop Infrastructure
It is imperative that the XenDesktop environment is monitored in addition to the Hyper-V
infrastructure. For more information and detailed recommendations, please refer to the Citrix
Knowledgebase Article CTX133540 – Monitoring Citrix Desktop and Datacenter.
Backup and Recovery
The Hyper-V design should consider the backup requirements of the XenDesktop infrastructure.
For detailed information on the backup capabilities of Hyper-V, please refer to the Microsoft
TechNet article – Planning for Backup.
1. Backup Software
To back up virtual machines from within the parent partition, the backup application must
be compatible with both Hyper-V and the Hyper-V Volume Shadow Copy Service (VSS) writer.
For more information, please refer to the Microsoft Knowledgebase Article KB958662 – How
to Backup Hyper-V Virtual Machines from the Parent Partition.
Note: If using Windows Server Backup to back up virtual machines from the parent partition, it is not possible
to back up individual virtual machines. Also, Windows Server Backup does not support the backup of virtual
machines hosted on a Cluster Shared Volume.
2. Backup Type
The Hyper-V design must determine whether the backups will be performed on the host or
within the VM:
Backup from Hyper-V host. Performing the backup from the Hyper-V host is
preferred because it captures more data than performing the backup from within the
guest operating system including the configuration of virtual machines and snapshots
associated with the virtual machines.
Backup within virtual machine. The backup application runs within the virtual
machine. Use this method when backing up data from storage that is not supported by
the Hyper-V VSS writer.
3. Components to Backup
The following Hyper-V and XenDesktop components should be backed up so that it is possible
to recover from a complete failure:
XenDesktop database
XenApp database
Provisioning Services database
Provisioning Services vDisks (virtual desktops and XenApp servers)
Hyper-V VMM database
Hyper-V VMM Library
Hyper-V host system state
Hyper-V host local disks
Hyper-V Cluster quorum
Dedicated virtual desktops
Web Interface configuration
License files
User profiles / home folders
Note: It is assumed that there is a fast automated rebuild process in place for the servers (XenDesktop
Controller, XenApp server, Web Interface server, Provisioning Server, etc.). If this assumption is not true, all
infrastructure servers must also be backed up. To simplify the backup and restore process, user data should not be
stored on the virtual desktops or XenApp servers.
It is imperative that the peripheral infrastructure is also backed up – including the streamed
application profiles, Active Directory, file servers (profile and home folder share, etc.) and
application servers.
Virtual networks are not included in a full server backup. Therefore, the Hyper-V design should
document the virtual network configuration so that it can be manually recreated if necessary.
High Availability
Hyper-V Failover Clustering functionality allows specific virtual machines to be restarted on a new
physical host within the same cluster should the current host system fail. For information on how
to configure Hyper-V Failover Clustering, please refer to the Microsoft TechNet Article – Using
Hyper-V and Failover Clustering.
It is important that all components of the infrastructure are considered when designing a highly
available XenDesktop solution:
1. VMM server
If the VMM server is unavailable, XenDesktop will not be able to manage the power state of its
virtual desktops or create additional ones. Therefore, Microsoft
Failover Clustering should be included in the design to ensure that the VMM server is highly
available. For more information, please refer to the Microsoft Blog Post – Creating a Highly
Available VMM server.
2. VMM Library
The VMM Library is typically only required for the creation of new desktops, XenApp servers
and infrastructure servers, so its loss is unlikely to impact the day-to-day activities of a
standard XenDesktop deployment. However, if required, VMM supports the
use of highly available library shares on a Failover Cluster.
3. SQL Server
The VMM database contains all VMM configuration information. If the VMM database is
unavailable, virtual machines will continue to run, but the VMM Console will be unavailable.
Therefore, Hyper-V Failover Clustering should be used to ensure that the SQL Server is highly
available.
4. XenDesktop Infrastructure VMs
If sufficient physical capacity exists within the Hyper-V Cluster, key XenDesktop Infrastructure
virtual machines should be configured for Hyper-V failover clustering to ensure that a single
host failure does not affect the availability or performance of the XenDesktop environment,
including:
Web Interface
XenDesktop Controllers
XenApp Controllers
License Server
Provisioning Services
5. Virtual Desktops
The decision as to whether the host servers for the virtual desktops should be made highly
available will depend on the type of desktops that are deployed. It is not typically necessary to
implement Microsoft Failover Clustering for XenApp servers or pooled desktops without
personal vDisks as they do not contain any user specific data or applications.
Hyper-V Failover Clustering is recommended for important dedicated desktops.
Appendix
Appendix A: Optimizations
Performance tuning of Hyper-V should occur at both the hypervisor host level and at the virtual
machine level. Optimizations that can be applied to improve performance include:
Hypervisor Host Optimizations
By default, Windows Server 2008 R2 uses a balanced power plan in Power Options, which
enables energy conservation by scaling the processor performance based on current CPU
utilization. On some systems this can degrade server performance and cause issues with
CPU-intensive applications. To avoid this issue, Hyper-V hosts should be set to use the
High Performance power plan.
Power saving settings in the system BIOS should be set to maximum performance, or they
should be disabled to prevent the server from going into a low power state.
Windows Server 2008 R2 and Hyper-V include support for CPU Core Parking, which
consolidates processing onto the fewest number of processor cores and suspends inactive
processor cores. Hyper-V supports core parking by allowing VM threads to be moved
between cores, and putting cores into a low power state when not in use. The only hardware
requirement to take advantage of CPU Core Parking is a system with more than one core. Be
aware that CPU Core Parking can have a negative effect on Hyper-V performance and
should be used with caution in environments where high performance is important. For
more information and details on how to disable CPU Core Parking, please refer to the
Microsoft Blog Post – Tweaking CPU Core Parking.
The Windows XP setup and diskpart utilities create misaligned boot partitions resulting in
additional disk I/O activity. The diskpart utility included with Windows Server 2003 SP1
and Windows 7 addresses this issue. For more information on calculating the correct offset,
please refer to Microsoft Knowledgebase Article KB929491 – Disk performance may be
slower than expected.
Close any virtual machine connection windows that aren’t being used. Leaving connection
windows open when not in use consumes resources that could be utilized elsewhere.
Pooled virtual desktops should not be configured to automatically restart in VMM as this
will conflict with the power operation commands issued by the XenDesktop Controllers.
Close or minimize Hyper-V Manager when not in use. Hyper-V Manager continuously polls
running virtual machines for CPU utilization and uptime information which uses resources.
Enable offload capabilities for the physical network adapter driver in the root partition to
reduce CPU utilization.
If multiple physical network cards are installed on the Hyper-V host, bind device interrupts
for each network card to a single logical processor to improve Hyper-V performance.
Avoid storing system files on drives used for Hyper-V storage.
Virtual Machine Optimizations
Disable any unnecessary services and remove any unnecessary applications. This will reduce
the VM resource consumption.
Do not allocate more than one CPU core to VMs unless they will be running multi-threaded
applications. Single threaded applications will not see a performance increase from multiple
cores unless the application is CPU-intensive.
For best performance, configure a 1:1 ratio of virtual processors in the guest VM to available logical processors on the host.
Limit the number of snapshots taken for VMs. When there is a large hierarchy of snapshots,
Hyper-V has to perform a lot of read operations. Hyper-V must check multiple physical disk
files to find the latest version of data which creates a lot of I/O overhead.
Citrix Consulting has published whitepapers specifically dedicated to desktop optimizations for
Windows 7 and Windows XP. Please refer to both guides for more information.
Appendix B: High-Level Implementation Steps
The following are the steps required to perform a XenDesktop 5.5 deployment on a Hyper-V
infrastructure running Windows Server 2008 R2. The steps provided do not go into specific details
and should not be used as a step-by-step handbook. This table is meant to aid when planning an
implementation using either Provisioning Services or Machine Creation Services.
Step Task Reference
1. Verify infrastructure pre-requisites. Verify that the following infrastructure components
are in place before building the environment.
Active Directory domain
DNS services
DHCP services
Physical networks and VLANs
Microsoft and Citrix licenses
Microsoft License server
File servers
2. Assess your hardware requirements. Assess the processing, storage, and networking
requirements of all the workloads that will run on one
physical server to determine the hardware
requirements for the physical server. For more
information, please see the Microsoft TechNet article
Hardware Considerations for Hyper-V in Windows
Server 2008.
3. Assess your software requirements. Determine the applications that will be required on
the desktops, and which methods will be used to
deliver them. For more information, please see the
Citrix eDocs article Evaluating Application Delivery
Methods.
4. Determine which edition of Windows
Server to use for building Hyper-V, and the
type of installation to perform.
For more information, please see the Microsoft
TechNet article Virtualization Platform Comparison.
5. Disable power conservation settings in the
system BIOS and set the Power State in the
Windows Control Panel to High
Performance.
Power state settings in the system BIOS should be set
to maximum performance or disabled to prevent the
server from going into a low power state which will
affect performance. For more information, please see
the Citrix whitepaper XenDesktop and XenApp Best
Practices, and the Microsoft Support article Degraded
overall performance on Windows Server 2008 R2.
6. Install the Hyper-V role on the physical
servers that will run the role.
If the Hyper-V Server 2008 R2 media is used, the
Hyper-V role will be installed automatically. For more
information, please see the Microsoft TechNet article
Hyper-V Planning and Deployment Guide.
7. Set the Hyper-V and Cluster Shared
Volume anti-virus exclusions in your anti-
virus solution.
For more information, please see the Microsoft
article Windows Anti-Virus Exclusion List.
8. Add firewall exclusions for the services
required by Microsoft and Citrix
technologies that will be used to support the
XenDesktop environment.
For more information, please see Citrix whitepaper
Communication ports used by Citrix Technologies
and Microsoft article Network Ports Used by Key
Microsoft Server Products.
9. Join the Hyper-V host servers to the Active
Directory domain.
All Hyper-V hosts that will be part of a cluster must
be in the same Active Directory domain. For more
information, please see the Microsoft article Hyper-V:
Using Hyper-V and Failover Clustering.
10. Attach and configure SAN unless direct
attached storage will be used.
For best performance, dedicate a private network for
storage traffic. Administrators should check their
routing tables and verify that the storage traffic is not
able to route along networks dedicated to Hyper-V
management or VM traffic. Please see the Storage
section earlier in this document for the advantages
and disadvantages of local and shared storage. Shared
storage must be used for Cluster Shared Volumes.
11. Configure Hyper-V networks. Create separate external networks for VM and Hyper-
V management traffic. For more information, please
see the Networking section earlier in the document
for information on the networks required.
12. Install and configure Failover Clustering
and enable Cluster Shared Volumes for
Hyper-V management servers and dedicated
desktops (if they will be used).
For more information, please see the Microsoft
TechNet article Hyper-V: Using Hyper-V and
Failover Clustering, and see Deploying Cluster Shared
Volumes (CSV) in Windows Server 2008 R2 Failover
Clustering.
13. Verify that the Failover Cluster is working
correctly.
Test a planned and unplanned failover to verify that
the cluster is working properly. For more
information, please refer to the Microsoft TechNet
Article Hyper-V: Using Hyper-V and Failover
Clustering.
14. Build a SQL Server or use an existing SQL
server in the domain for VMM,
For more information, please see the Microsoft
articles How To: Install SQL Server 2008 R2 (Setup),
XenDesktop, and Provisioning Server
databases. If this server is built as a VM on
the cluster, reconfigure the automatic start
action and make the VM highly available.
and Hyper-V: Using Hyper-V and Failover
Clustering.
Note: System Center 2012 Virtual Machine Manager
also supports SQL Server 2008 with SP2.
XenDesktop 5.6 supports SQL Server versions 2008 and
higher. There is a known issue when using SQL Server 2012
with XenDesktop. See Citrix support article Microsoft SQL
2012 Known Issues. Provisioning Services 6.1 supports SQL
Server versions 2008 and 2008 R2.
15. Build a VMM management server as a VM
on the cluster. Reconfigure the automatic
start action and make the VM highly
available.
For more information, please see the Microsoft
articles Installing a VMM Management Server, and
Hyper-V: Using Hyper-V and Failover Clustering.
16. Build Citrix License Server as a VM on the
cluster. Reconfigure the automatic start
action and make the VM highly available.
For more information, please see the Citrix eDocs on
Citrix Licensing 11.10, and Microsoft article Hyper-V:
Using Hyper-V and Failover Clustering.
17. Create VMs to be used for Provisioning
Services. Install the operating system, latest
updates, patches, and install the VMM
Administrator Console on each
Provisioning server.
When installing the VMM Administrator Console, be
sure to select the option Get the latest updates to
Virtual Machine Manager from Microsoft
Update. For more information, please see Microsoft
TechNet article How to Install the VMM Console.
18. Install Provisioning Services on the VMs
created in the previous step. Configure a
Provisioning Services Store and Device
Collections. Reconfigure the automatic start
action and make the VMs highly available.
For more information, please see the Citrix eDocs on
Provisioning Services 6.1, and Microsoft article
Hyper-V: Using Hyper-V and Failover Clustering.
19. Apply optimizations to the Provisioning
Services server.
For more information, please refer to the
Provisioning Services section of the Citrix
Knowledgebase Article CTX132799 – XenDesktop
and XenApp Best Practices.
20. Create VMs to be used for the XenDesktop
Controllers. Install the operating system,
latest updates, patches, and install the VMM
Administrator Console on each controller.
When installing the VMM Administrator Console, be
sure to select the option Get the latest updates to
Virtual Machine Manager from Microsoft
Update. For more information, please see Microsoft
TechNet article How to Install the VMM Console.
21. Launch the XenDesktop installation on the
VM. Select the server-side components:
Controller
Desktop Studio
Desktop Director
For more information, please see Citrix eDocs article
Installing and Removing XenDesktop Server
Components.
22. Configure Desktop Studio for Hyper-
V/VMM.
Once installation completes, Desktop Studio will
open. Select Desktop Deployment and complete the
configuration specifying:
Name of the SQL Server and XenDesktop
database
Microsoft Virtualization as the Host type
FQDN of the VMM management server
Domain\administrator account and
password
Name of the Failover Cluster created in step
12.
Cluster Shared Volume if dedicated desktops
will be deployed
Network that will be used.
For more information, please see Citrix article
CTX127578 - How to Configure XenDesktop 5 with
Microsoft Hyper-V and System Center 2012 Virtual
Machine Manager.
23. Build VMs for the application delivery
solution. Install the operating system, latest
updates, and reconfigure the automatic start
action to make the VMs highly available.
Implement methodology for delivering applications
to the desktop. For more information on application
streaming with XenApp, please see the Citrix eDocs
article Publishing Applications for Streaming. For
more information on publishing hosted applications
in XenApp, please see the Citrix eDocs article
Publishing Resources using the AppCenter.
If using App-V, please see
24. Build Web Interface VMs. Reconfigure the
automatic start action and make the VMs
highly available.
For more information, please see Citrix article
CTX130917 How to Install and Configure Web
Interface for XenDesktop, and Microsoft article
Hyper-V: Using Hyper-V and Failover Clustering.
25. Build Provisioning Services vDisks or
Machine Creation Services Master Images.
If using Provisioning Services prepare a vDisk to use
for provisioning desktops. If using Machine Creation
Services, build a VM to act as the master image. For
more information, please see Citrix eDocs articles
Managing vDisks, and Preparing a Master VM.
26. Configure anti-virus and firewall exclusions
for the virtual desktops.
For more information, please see Citrix article
CTX124185 Provisioning Services Antivirus Best
Practices, Microsoft support article Virus scanning
recommendations for Enterprise computers that are
running currently supported versions of Windows,
and Citrix whitepaper Communication ports used by
Citrix Technologies.
27. Install the XenDesktop Virtual Desktop
Agent on images.
The Virtual Desktop Agent must be installed for
Provisioning Services and Machine Creation Services
deployments. For more information, please see Citrix
eDocs article Installing and Upgrading to
XenDesktop 5.6.
28. Optimize images for XenDesktop. Perform optimizations on images to ensure optimal
desktop operation. For more information, please see
Citrix whitepapers Windows 7 Optimization Guide,
Optimizing Windows XP for XenDesktop, and
Logon Optimization Guide – XenApp/XenDesktop.
29. Create VM templates and store in the VMM
Library.
For more information, please see Microsoft TechNet
article Creating Virtual Machine Templates in VMM,
and Citrix blog Create VMs automatically on Hyper-
V 2008 R2 with the PVS Streamed VM Setup Wizard.
30. If using Provisioning Services, add local
Write Cache disk.
Create a new virtual disk, format, and add it as the
Write Cache to the VM template. Detach the local
hard disk with the operating system from the
template.
Note: In Hyper-V, disks are created in 1 GB increments, so select a Write Cache size appropriately.
31. Create machine catalogs in Desktop Studio. For more information, please see Citrix eDocs article
To create a new machine catalog.
32. Use Provisioning Services Streamed VM
Setup Wizard to provision desktops in
Hyper-V, or use Desktop Studio to create
Machine Creation Services desktops in
Hyper-V.
For more information, please see Citrix eDocs articles
Using the Streamed VM Setup Wizard, and Machine
Creation Services Primer – Part I.
33. Add machines to catalogs and create
desktop groups in Desktop Studio.
For more information, please see Citrix eDocs article
To create a desktop group.
34. Install and publish applications for desktop
delivery
For more information, please see Citrix eDocs article
Citrix XenApp 6.5.
35. Deploy Citrix Receiver to target devices. Citrix Receiver is the software client that provides
secure delivery of virtual desktops. Use an electronic
software delivery solution to deploy Citrix Receiver to
target devices in the organization. For more
information, please see Citrix eDocs article Citrix
Receiver.
36. Test virtual desktops; ensure everything is
working properly before releasing to the end-
user community.
Product Versions
Product                                  Version
XenDesktop                               5.x
Hyper-V                                  2008 R2
System Center Virtual Machine Manager    2012
Provisioning Services                    6.1
Revision History
Revision Change Description Updated By Date
0.1 Document Created Andy Baker & Ed Duncan 6/11/2012
0.2 Document Modified Tony Sanchez 8/22/2012
1.0 Document Released Andy Baker 9/13/2012
About Citrix
Citrix Systems, Inc. (NASDAQ:CTXS) is a leading provider of virtual computing solutions that help companies deliver IT as an on-demand service. Founded in 1989, Citrix combines virtualization, networking, and cloud computing technologies into a full portfolio of products that enable virtual workstyles for users and virtual datacenters for IT. More than 230,000 organizations worldwide rely on Citrix to help them build simpler and more cost-effective IT environments. Citrix partners with over 10,000 companies in more than 100 countries. Annual revenue in 2011 was $2.20 billion.
©2012 Citrix Systems, Inc. All rights reserved. Citrix®, Access Gateway™, Branch Repeater™,
Citrix Repeater™, HDX™, XenServer™, XenApp™, XenDesktop™ and Citrix Delivery Center™
are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered
in the United States Patent and Trademark Office and in other countries. All other trademarks and
registered trademarks are property of their respective owners.