VMware and Customer Confidential
VMware vSphere: Design Workshop [V5.0]
Enterprise Lab
VMware vSphere: Design Workshop Course Lab
© 2011 VMware, Inc. All rights reserved.
Page 2 of 27
Version History
Date         Ver.  Author                     Description    Reviewers
19 Jan 2010  V1    Ben Lin, Shridhar Deuskar  Initial Draft  Mahesh Rajani, Rupen Sheth
11 Nov 2011  V5    Mike Sutton                Final          Mike Sutton
© 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html.
VMware, VMware vSphere, VMware vCenter, the VMware “boxes” logo and design, Virtual SMP and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
VMware, Inc 3401 Hillview Ave Palo Alto, CA 94304 www.vmware.com
Contents
1. Overview
   1.1 Summary
   1.2 Design Overview
   1.3 Requirements
   1.4 Constraints
   1.5 Assumptions
2. Storage
   2.1 Requirements
   2.2 Design Patterns
   2.3 Logical Design
   2.4 Physical Design
3. Network
   3.1 Requirements
   3.2 Design Patterns
   3.3 Logical Design
   3.4 Physical Design
4. Host
   4.1 Requirements
   4.2 Design Patterns
   4.3 Logical Design
   4.4 Physical Design
5. Virtual Machine
   5.1 Requirements
   5.2 Design Patterns
6. Virtual Datacenter
   6.1 Requirements
   6.2 Design Patterns
   6.3 Logical Design
   6.4 Physical Design
7. Management and Monitoring
   7.1 Requirements
   7.2 Design Patterns
1. Overview
1.1 Summary
ACME Energy Corporation engages in the acquisition, development, and operation of utility-scale renewable energy generation projects. It focuses on wind and solar energy and sells the energy it produces to regulated utility companies. The company is headquartered in Phoenix, Arizona, and maintains remote offices in Bakersfield, California, and Fort Worth, Texas.
As part of a datacenter optimization project, IT has been asked to virtualize all x86-based servers onto the VMware vSphere™ platform. The primary datacenter is in Phoenix, with smaller datacenters in the other locations. After consolidation, all servers will be in the primary datacenter in Phoenix.
ACME Energy’s environment has three “zones”: Production, Dev/Test, and QA.
From the preliminary virtualization assessment, it was determined that ACME Energy can consolidate a considerable number of existing and expected future workloads. This increases average server utilization and lowers the overall hardware footprint and associated costs.
The virtualization assessment shows that 1000 physical servers can be virtualized. The consolidation ratio depends on which of two candidate target platforms is chosen.
                                               Consolidation Ratio
Target Platform                                Production  Dev/Test  QA
Blade server: two quad-core CPUs, 64GB of RAM  20:1        50:1      50:1
Rack server: four quad-core CPUs, 96GB of RAM  30:1        60:1      60:1
No estimates for the number of network or storage adapters were made during the assessment. Assume that eight full-height blade servers can fit in one blade chassis. The blade chassis is 10U in height. The rack server is 4U in height. Several existing servers are powerful enough that they can be reused as VMware® ESX™/ESXi hosts. Availability of the virtual machines is an important requirement. Separation of management and production virtual machines is desired.
The 1000 physical servers consist of 400 Linux servers and 600 Windows servers.
Linux server distribution:
100 servers – Production
200 servers – Dev/Test
100 servers – QA
Windows server distribution:
300 servers – Production
200 servers – Dev/Test
100 servers – QA
On average, each Windows server is provisioned with a 15GB operating system drive (average used 10GB) and 40GB (average used 25GB) data drive. Each Linux server is configured with 60GB total storage (40GB average used).
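The per-server averages above imply the following aggregate capacity figures. This is a back-of-the-envelope sketch (not part of the original design) using only the numbers stated in this section:

```python
# Rough capacity totals implied by the per-server storage averages above.
windows_count, linux_count = 600, 400

# Windows: 15GB OS drive (10GB used) + 40GB data drive (25GB used)
win_provisioned_gb = 15 + 40
win_used_gb = 10 + 25

# Linux: 60GB provisioned, 40GB used on average
lin_provisioned_gb = 60
lin_used_gb = 40

total_provisioned_gb = windows_count * win_provisioned_gb + linux_count * lin_provisioned_gb
total_used_gb = windows_count * win_used_gb + linux_count * lin_used_gb

print(total_provisioned_gb)  # 57000 GB (~57TB provisioned)
print(total_used_gb)         # 37000 GB (~37TB actually used)
```

The roughly 20TB gap between provisioned and used capacity is what motivates the thin-provisioning design choice in section 2.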
Most of the servers have two CPUs. Some are uniprocessor servers. The production servers must be segregated from the Dev/Test/QA servers. After it has been virtualized, a production virtual machine cannot share the same ESX/ESXi host as Dev/Test or QA virtual machines. This is a requirement. ACME Energy expects 10 percent annual server growth over the next three years.
Due to security and network infrastructure requirements, production network traffic must be isolated from Dev/Test and QA network traffic. The security team at ACME Energy has insisted that the IDS software used by their team requires each server’s networking port to have consistent properties. This requires the networking properties of each virtual machine’s virtual networking port to be preserved after a VMware vMotion™ migration.
TASK: Develop an architecture design for ACME Energy’s virtualization project.
1.2 Design Overview
The architecture is described by a logical design, which is independent of hardware-specific details. Specifications of physical design components that were chosen for the logical design are also provided.
This architecture design can be used to implement the solution using different hardware vendors, so long as the requirements do not change.
This design includes:
One physical site (Phoenix)
Clusters of hosts for load balancing through VMware High Availability/VMware Distributed Resource Scheduler (DRS) for host and guest operating system (virtual machine) failure.
VMware vCenter™ Server integrated with Microsoft Active Directory. vCenter Server will leverage the extensive inventory of existing Active Directory users and groups to secure access to vSphere.
Redundancy in network and storage infrastructure
System component monitoring, with SNMP traps and email alerts
VMware vCenter Update Manager for automating patching of all hosts and VMware Tools
1.3 Requirements
Requirements describe, in business or technical terms, the necessary properties, qualities, and characteristics of a solution. These are provided by the client and used as a basis for the design.
Number Description
R001 Virtualize existing 1000 servers as virtual machines with no significant change in performance or stability, compared to current physical workloads.
R002 Establish a sound and best practice architecture design while addressing ACME Energy’s requirements and constraints.
R003 Design should address security zone requirements for Production, Dev/Test, and QA workloads.
R004 Design should be scalable and the implementation easily repeatable.
R005 Design should be resilient and provide high levels of availability where possible.
R006 Operations should help facilitate automated deployment of systems and services.
R007 Overall anticipated cost of ownership should be reduced after deployment.
R008 Business-critical applications should be given higher priority access to network resources than noncritical virtual machines.
R009 Business-critical applications should be given higher priority access to storage resources than noncritical virtual machines.
1.4 Constraints
Constraints can limit the design features as well as the implementation of the design.
Number Description
C001 Storage array will be EMC Symmetrix DMX-4 950.
C002 Target Platform Option 1: Blade server, two quad-core CPUs, 64GB RAM
C003 Target Platform Option 2: Rack server, four quad-core CPUs, 96GB RAM
C004 Eight full-height blade servers can fit in one blade chassis. Blade chassis is 10U.
1.5 Assumptions
Assumptions are expectations regarding the implementation and use of a system. These assumptions cannot be confirmed at the design phase and are used to provide guidance within the design.
Number Description
A001 All required upstream dependencies will be present during the implementation phase. ACME Energy will determine which dependencies sit outside of the virtual infrastructure.
A002 All VLANs and subnets required will be configured before implementation.
A003 There is sufficient network bandwidth to support operational requirements.
A004 ACME Energy will maintain a change management database (CMDB) to track all objects in the virtual infrastructure.
A005 Storage will be provisioned and presented to the ESXi hosts accordingly.
2. Storage
2.1 Requirements
- EMC Symmetrix DMX-4 950 will be used (active-active storage array).
- Average Windows server is provisioned with a 15GB operating system drive (average used 10GB) and a 40GB data drive (average used 25GB).
- Average Linux server is configured with 60GB total storage (40GB average used).
- Must optimize for performance.
2.2 Design Patterns
LUN Sizing
Design Choice 67x 750GB LUNs will be used.
Justification Each virtual machine will be provisioned with an average of 40GB of disk (Windows: 15GB thick OS disk + 25GB thin data disk; Linux: 40GB thin disk). With an average of 15 VMs per LUN and 20% headroom for virtual machine swap files and snapshots: 15 x 40GB / 0.8 = 750GB. 1000 total VMs / 15 VMs per LUN = 66.7, rounded up to 67 LUNs.
Impact A tiered storage approach should be used to align storage costs with virtual machine performance requirements.
References (lecture guide page number)
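The LUN sizing arithmetic in the justification can be sketched as follows. This is a minimal illustration of the calculation above, not a tool from the original design:

```python
import math

avg_vm_disk_gb = 40       # per-VM provisioned disk, from the justification above
vms_per_lun = 15
overhead_fraction = 0.20  # headroom for VM swap files and snapshots
total_vms = 1000

# Size each LUN so that 15 VMs consume at most 80% of it.
lun_size_gb = vms_per_lun * avg_vm_disk_gb / (1 - overhead_fraction)
num_luns = math.ceil(total_vms / vms_per_lun)

print(lun_size_gb)             # 750.0
print(num_luns)                # 67
print(num_luns * lun_size_gb)  # 50250.0 GB total datastore capacity
```

The resulting ~50TB of datastore capacity comfortably covers the ~37TB of average used storage from section 1, with room for growth.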
Storage Load Balancing
Design Choice EMC PowerPath/VE multipathing plug-in (MPP) will be used.
Justification EMC PowerPath/VE leverages the vSphere Pluggable Storage Architecture (PSA), providing performance and load-balancing benefits over the VMware native multipathing plug-in (NMP).
Impact Requires additional cost for PowerPath licenses.
References (lecture guide page number)
VMFS or RDM
Design Choice For most applications, VMware vStorage VMFS virtual disks will be used unless there is a specific need for raw device mappings (RDMs). The use cases for RDMs include Microsoft clustering, NPIV, and running SAN management software in a virtual machine.
Justification VMFS is a clustered file system specifically engineered for storing virtual machines. A datastore is like a storage appliance that serves up storage space for virtual disks in the virtual machines.
Impact To ensure proper disk alignment, create datastores using the VMware vSphere Client.
References (lecture guide page number)
Host Zoning
Design Choice Single-initiator zoning will be used. Each host will have two paths to the storage ports across separate fabrics.
Justification EMC best practices dictate single-initiator zoning, with multiple paths to storage targets across separate fabrics.
Impact More zones will need to be created by the storage team.
References (lecture guide page number)
LUN Presentation
Design Choice LUNs will be masked consistently across all hosts in a cluster.
Justification Having consistent storage presentation ensures that virtual machines can be run on any host in a cluster. This optimizes high availability and DRS while reducing storage troubleshooting. It is important to minimize differences in LUNs visible across hosts within the same cluster or vMotion scope.
Impact Requires close coordination with the storage team because LUN masking is performed at the array level.
References (lecture guide page number)
Thin vs. Thick Provisioning
Design Choice Unless constrained by specific application or workload requirements, or special circumstances such as being protected by VMware Fault Tolerance (FT), all data volumes will be provisioned as thin disks with the system volumes deployed as thick.
Justification The rate of change for a system volume is low, while data volumes tend to have a variable rate of change.
Impact Alarms must be configured to alert if approaching an "out of space" condition to provide sufficient time to source and provision additional disk.
References (lecture guide page number)
Virtual Machine I/O Priority
Design Choice Include Storage I/O Control in the design.
Justification Storage I/O Control ensures that virtual machines considered critical to the organization are guaranteed a share of storage resources during periods of contention.
Impact Reduces the performance impact on critical virtual machines when storage resources are contended.
References (lecture guide page number)
Storage profiles
Design Choice Storage Profiles will not be configured.
Justification Storage administrators will manage the storage.
Impact Storage administrator will configure storage based on virtual infrastructure needs.
References (lecture guide page number)
2.3 Logical Design
Attribute Specification
Storage type Fibre Channel
Number of storage processors multiple (redundant)
Number of FC switches                2 (redundant)
Number of ports per host per switch  1
LUN size                             750GB
Total LUNs                           67
VMFS datastores per LUN              1
Figure 5. Logical SAN Design
2.4 Physical Design
Attribute Specification
Vendor and model EMC Symmetrix DMX-4 950
Type Active-active
ESXi host multipathing policy PowerPath/VE MPP
Min./Max. speed rating of switch ports 2Gbps/8Gbps
3. Network
3.1 Requirements
- Production traffic should be isolated from Dev/Test and QA.
- Network properties and statistics of each virtual machine must be preserved after a vMotion migration.
- Virtual networking must be configured for availability, security, and performance.
3.2 Design Patterns
vNetwork Standard Switch or vNetwork Distributed Switch
Design Choice A single vNetwork distributed switch will be configured.
Justification vNetwork distributed switch functionality (for example, PVLANs and Network vMotion) is required to meet the requirements of the solution.
Impact Unifies vSwitch configuration for all hosts
References (lecture guide page number)
vSwitch VLAN Configuration
Design Choice Separate VLANs will be assigned to Mgmt Network, VM Network, VMotion, and Fault Tolerance. Virtual Switch Tagging (VST) will be used.
Justification Virtual LANs provide isolation and separation of traffic.
Impact All physical switch ports facing ESX hosts must be configured as trunk ports.
References (lecture guide page number)
vSwitch Private VLAN (PVLAN) Configuration
Design Choice PVLANs will be used to further isolate traffic within the VM Network.
Justification Leverage vDS functionality to separate network traffic between Production and Dev/Test, QA virtual machines. Allows isolation of traffic within VLANs.
Impact Reduces the number of VLANs that must be configured.
References (lecture guide page number)
vSwitch Load-Balancing Configuration
Design Choice Load balancing based on the originating virtual port ID will be used.
Justification Under this setting, traffic from a given virtual NIC is consistently sent to the same physical adapter unless a failover occurs. This setting provides an even distribution of traffic if the number of virtual NICs is greater than the number of physical adapters.
Impact This is the default load-balancing setting. Minimal configuration is required.
References (lecture guide page number)
Use network I/O control? User-defined resource pools?
Design Choice
Justification
Impact
References
vShield Zones
Design Choice vShield Zones will not be implemented.
Justification Inspection of virtual networking traffic is not a current requirement.
Impact Existing hardware firewalls will be used to inspect and filter virtual machine traffic.
References (lecture guide page number)
vSwitch Security Settings
Design Choice vSwitch default security settings:
Forged Transmits: Reject, MAC address changes: Reject, Promiscuous Mode: Reject
Justification There are no requirements that necessitate the use of any of the vSwitch security settings.
Impact Setting all options to Reject provides optimal vSwitch security by preventing potentially risky network behavior.
References (lecture guide page number)
3.3 Logical Design
Figure 4. ESXi Network Logical Design
Shading denotes active physical adapter to port group mapping. The vmnics shaded in the same color as a given port group will be configured as active, with all other vmnics designated as standby.
3.4 Physical Design
dvSwitch  vmnic  NIC/Slot              Port  Function
0         0      Onboard               0     Management Network (Active), Fault Tolerance (Active), VM Network (Standby), VMotion (Standby)
0         1      Onboard               1     VM Network (Active), VMotion (Standby), Fault Tolerance (Standby), Management Network (Standby)
0         2      PCIe Slot 2 Dual GbE  0     VM Network (Active), Management Network (Standby), VMotion (Standby), Fault Tolerance (Standby)
0         3      PCIe Slot 2 Dual GbE  1     VMotion (Active), Fault Tolerance (Standby), VM Network (Standby), Management Network (Standby)
vSwitch Port Group Name VLAN ID
0 Management_VLAN10 10
0 VM Network_VLAN20 20
0 VMotion_VLAN30 30
0 FT_VLAN40 40
Primary VLAN  VM Type     PVLAN Type  Secondary VLAN ID
20            Production  Community   100
20            Dev/Test    Community   200
20            QA          Community   200
4. Host
4.1 Requirements
- Host capacity must accommodate the planned virtualization of 1000 physical servers.
- Size capacity to ensure that there is no significant change in performance or stability, compared to current physical workloads.
- Expected 10 percent annual server growth.
4.2 Design Patterns
Blade or Rack Servers
Design Choice Blade servers will be used.
Justification Blade solution is modular and offers increased processing power in less space.
Impact Power and cooling requirements for blade chassis must be considered. Multiple chassis should be deployed for availability.
References (lecture guide page number)
Server Consolidation (minimum number of hosts required)
Design Choice Production: 20 hosts, Dev/Test: 8 hosts, QA: 4 hosts
Justification Formula: number of servers / consolidation ratio
Production: 400 / 20 = 20 hosts
Dev/Test: 400 / 50 = 8 hosts
QA: 200 / 50 = 4 hosts
Impact Additional hosts will be required for high availability.
References (lecture guide page number)
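The consolidation formula above can be sketched as follows, using the per-zone server counts from section 1 and the blade-platform consolidation ratios. This is an illustration of the stated arithmetic, not part of the original design:

```python
import math

# Servers per zone (from section 1) and blade-platform consolidation ratios.
servers = {"Production": 400, "Dev/Test": 400, "QA": 200}
ratios  = {"Production": 20,  "Dev/Test": 50,  "QA": 50}

# Minimum hosts per zone: servers divided by consolidation ratio, rounded up.
min_hosts = {zone: math.ceil(servers[zone] / ratios[zone]) for zone in servers}

print(min_hosts)  # {'Production': 20, 'Dev/Test': 8, 'QA': 4}
```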
Server Containment (number of additional hosts required)
Design Choice Production: 8 hosts, Dev/Test: 3 hosts, QA: 2 hosts
Justification Formula: 10% annual server growth x 3 years / consolidation ratio
Production: 40 x 3 / 20 = 6 (the design provisions 8 hosts, adding headroom beyond the formula result)
Dev/Test: 40 x 3 / 50 = 2.4 (round up to 3 hosts)
QA: 20 x 3 / 50 = 1.2 (round up to 2 hosts)
Impact Server containment figures can influence procurement planning.
References (lecture guide page number)
4.3 Logical Design
Attribute Specification
Host type and version ESXi 4.0 Embedded
Number of CPU sockets    2
Number of cores per CPU  4
Total number of cores    8
Processor speed          2.93GHz (2930MHz)
Memory 64GB
Number of NIC ports 4
Number of HBA ports 2
4.4 Physical Design
Attribute Specification
Vendor and model Dell PowerEdge M610
Processor type         Intel Xeon X5570 (Nehalem)
Total CPU sockets      2
Cores per CPU          4
Total number of cores  8
Processor speed        2.93GHz
Memory 64GB (8x8GB)
Onboard NIC vendor and model     Broadcom 5709 with TOE
Onboard NIC ports x speed        2 x 1GbE
Number of attached NICs          1
NIC vendor and model             Intel Gigabit ET
Number of ports/NIC x speed      2 x 1GbE
Total number of NIC ports        4
Storage HBA vendor and model     Emulex LPE1205-M
Storage HBA type                 Fibre Channel
Number of HBAs                   1
Number of HBA ports x speed      2 x 8Gbps
Total number of HBA ports        2
Number and type of local drives  N/A
RAID level                       N/A
Total storage                    N/A
System monitoring Dell Management Console
5. Virtual Machine
5.1 Requirements
Requirement 1
Requirement 2
Requirement 3
5.2 Design Patterns
Virtual Machine Deployment Considerations
Design Choice "Right-size" virtual machines based on application profile.
Justification Virtual machines must be properly designed, provisioned, and managed to ensure the efficient operation of these applications and services.
Impact Create standardized templates for each guest operating system used in production.
References (lecture guide page number)
Swap and Operating System Paging File Location
Design Choice Place the virtual machine swap files in the same location as the other virtual machine files (default behavior).
Justification Keeping files on default datastore is easier to manage. Moving the vmswap files to a different location for performance or replication bandwidth issues requires additional configuration and management processes.
Impact If future requirements mandate that virtual machine swap files be moved to a separate location, all relevant virtual machines will need to be reconfigured.
References (lecture guide page number)
6. Virtual Datacenter
6.1 Requirements
- Site
  o One primary datacenter (1000 VMs, expected to grow quickly)
  o Ten branch offices (fewer than 10 servers per site)
- Availability
  o Design for maximum availability
  o An existing highly available SQL database system can be leveraged
- Management
  o All components must use corporate authentication (Active Directory)
  o Some VM administrators run Mac OS
- Compute
  o Production VMs cannot reside on the same ESX/ESXi host as Dev/Test or QA VMs
  o Maximum agility (stateless preferred)
6.2 Design Patterns
vCenter Server Physical or Virtual (VM or Virtual Appliance)
Design Choice vCenter Server will be provisioned as a virtual machine.
Justification The vCenter Server system will be set up as a virtual machine in a separate ESX cluster (the Management cluster) for cost and management reasons. This allows ACME Energy to leverage the benefits of VMware infrastructure, such as vMotion, DRS, and VMware HA.
The Virtual Appliance version will not be used because some required features (SQL database support, vCenter Update Manager, vCenter Linked Mode, and vCenter Heartbeat) are not available in it.
Impact To improve manageability, the location of the vCenter Server virtual machine should be static. This can be handled by pinning the vCenter Server virtual machine to a specific ESX host or by setting up a separate management cluster.
References (lecture guide page number)
vSphere Installation and Setup Guide
vCenter Server Shared or Dedicated
Design Choice The vCenter Server system will be dedicated, with the exception of additional vSphere components (vCenter Update Manager, Core Dump Collector, Syslog Collector, and so on).
Justification Availability and scalability of a dedicated vCenter Server system is better because resources are not shared with other services.
Impact Cost of a dedicated vCenter Server system is higher. A dedicated vCenter Server system will provide better performance because it is not competing with other services running on the system.
References (lecture guide page number)
vSphere Installation and Setup Guide
vCenter Server Database Shared or Dedicated
Design Choice A separate instance on an existing highly available database system.
Justification ACME Energy has a clustered production database system that is highly available, so a shared vCenter Server database system offers better availability. If virtualized, the SQL cluster will be hosted on the Management cluster.
Impact Database management is offloaded to a separate database team.
References (lecture guide page number)
vSphere Installation and Setup Guide
vCenter Update Manager Location
Design Choice VMware Update Manager will be co-located on the vCenter Server system and will use a separate database instance on an external database system.
Justification The vCenter Server system will be sized appropriately to accommodate Update Manager. Only VMware components will be installed on the vCenter Server system, to allow for better performance and scalability. The size of the environment requires a separate database instance but not a dedicated database server VM.
Impact Co-locating Update Manager with vCenter Server requires allocating more resources to the vCenter Server virtual machine, but decreases management cost because no additional management VM is required.
References (lecture guide page number)
VMware vSphere Update Manager Sizing Estimator
Cluster Architecture
Design Choice The Production cluster requires 30 hosts across 4 blade chassis.
The Dev/Test/QA cluster requires 19 hosts across 3 blade chassis.
A separate two-host Management cluster will be created from existing physical servers (hardware reuse).
Branch offices: each branch office will have a two-host cluster.
Justification Formula: minimum hosts + server containment + 2 additional hosts per cluster (N+2 redundancy for HA)
Production: 20 + 8 + 2 = 30 hosts
Dev/Test/QA: 12 + 5 + 2 = 19 hosts
Impact
References (lecture guide page number)
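The cluster sizing formula above, plus the eight-blades-per-chassis constraint from section 1, can be sketched as follows. The input figures are taken directly from the justification above; this is an illustration, not part of the original design:

```python
import math

BLADES_PER_CHASSIS = 8  # full-height blades per chassis (constraint C004)
HA_SPARES = 2           # N+2 redundancy for HA

# Minimum hosts and growth (containment) hosts per cluster, as stated above.
clusters = {
    "Production":  {"min_hosts": 20, "containment": 8},
    "Dev/Test/QA": {"min_hosts": 12, "containment": 5},
}

for name, c in clusters.items():
    hosts = c["min_hosts"] + c["containment"] + HA_SPARES
    chassis = math.ceil(hosts / BLADES_PER_CHASSIS)
    print(name, hosts, chassis)
# Production: 30 hosts over 4 chassis; Dev/Test/QA: 19 hosts over 3 chassis
```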
Resource Pools
Design Choice Production: initially, no resource pools will be used.
Dev/Test/QA: resource pools will be used to separate the Dev, Test, and QA workloads.
Justification Production: separation of workloads already exists at the cluster level.
Dev/Test/QA: the workloads share the same cluster, so resources must be limited or guaranteed for each function.
Impact Production: as more virtual machines are added to the cluster, resource pools might need to be configured to guarantee resources to more critical workloads.
Dev/Test/QA: pool settings will need adjustment as virtual machines are added, to reflect changing resource requirements.
Branch offices: no need for resource pools.
References (lecture guide page number)
vSphere License Edition
Design Choice vSphere Enterprise Plus Edition
Justification vNetwork distributed switch functionality (for example, PVLANs and Network vMotion) is required to meet the requirements of the solution. vSphere Enterprise Plus Edition enables features such as Host Profiles, the vNetwork distributed switch, and third-party multipathing.
Impact vSphere Enterprise Plus is the most expensive vSphere edition. vSphere Enterprise Plus also provides an upgrade path to the Cisco Nexus 1000V virtual switch.
References (lecture guide page number)
6.3 Logical Design
Figure 3. Cluster Logical Design
Attribute Specification
vCenter Server version 4.0
Physical or virtual system Virtual
Number of CPUs  2
Processor type  vCPU
Processor speed N/A
Memory 4GB
Number of NIC and ports 1 / 1
Number of disks and disk size(s) 2: 30GB (OS) and 30GB (data)
Operating System Type Windows Server 2008 SP2
6.4 Physical Design
Attribute Specification
Vendor and model VMware virtual hardware 7
Processor type VMware vCPU
NIC vendor and model  vmxnet3
Number of ports       1 x GbE
Network               Management Network
Local disk N/A
7. Management and Monitoring
7.1 Requirements
Requirement 1
Requirement 2
Requirement 3
7.2 Design Patterns
Server, Network, SAN Infrastructure Monitoring
Design Choice All of the physical systems, including the network and SAN, will continue to be monitored directly by the enterprise monitoring system, which will be configured to incorporate any additional infrastructure required to support vSphere.
Justification Leverages existing enterprise monitoring system. Allows for exploration of virtualization-specific offerings in the future.
Impact Requires integration of vCenter Server and ESX with existing monitoring systems.
References (lecture guide page number)
vSphere Management
Design Choice The vSphere infrastructure will be managed through vMA and VMware vSphere PowerCLI.
Justification vMA is a virtual appliance that is preloaded with a 64-bit Enterprise Linux operating system, VMware Tools, vSphere SDK for Perl 4.0, and vSphere CLI 4.0. Centralized logging using vi-logger will be configured to consolidate logs from all ESXi hosts into one location.
Impact Requires compute resources for the vMA. vMA should be placed in the management cluster.
References (lecture guide page number)
Backup/Restore Considerations
Design Choice There is an agent-based enterprise backup system that is used to back up each physical system. The plan is to continue using this method to back up virtual machines. Restoring virtual machine guest operating systems, applications, and associated data will also follow the same method as for physical machines.
Justification Leverage existing backup and restore mechanisms. Virtualization-specific solutions will be explored in the future.
Impact RTOs and RPOs should be determined for virtualized workloads.
References (lecture guide page number)