
VMware Validated Design for Software-Defined Data Center 4.3

Copyright © 2018 VMware, Inc. All rights reserved.

Reference

@vmwcf | vmware.com/go/vvd-docs

Secondary Storage

[Diagram: primary storage tiers — a caching tier (SSD, PCIe, NVMe read and write cache) and a capacity tier (SSD) — with NFS as secondary storage]

The design uses NFS as a secondary storage tier. NFS is used for the content library and templates consumed by vRealize Automation blueprints and for vRealize Log Insight log archives.

NFS is also used by any vSphere APIs for Data Protection compatible solution to store backups.

[Diagram: Region A and Region B — ECMP NSX Edge Services Gateways connect the Universal Transit Network (192.168.10.0/24) to the Internet or enterprise WAN/MPLS through BGP peering with the L3 top-of-rack leaf and spine switches; external VLANs are 172.16.11.0/24 in Region A and 172.17.11.0/24 in Region B. The Management Universal Distributed Logical Router links the transit network to the region-independent application virtual network (192.168.11.0/24, reserved for disaster recovery in Region B) and the region-dependent application virtual networks (192.168.31.0/24 in Region A, 192.168.32.0/24 in Region B). NSX Edge one-arm load balancer VIPs front the vRealize Automation appliance (VIP 192.168.11.53), the IaaS web servers (VIP 192.168.11.56; 192.168.11.54 and .55 active/active), and the IaaS manager services (VIP 192.168.11.59; 192.168.11.57 active, .58 passive), with proxy agents (IAS), DEMs, vRealize Business (BUS/BUC), and SQL Server on the virtual networks; PSC, VC, NSXM, and VDP appliances reside on the management network]

Distributed Logical Routing and Application Virtual Networks for Management, Operations and Automation Solutions

vRealize Automation Appliance (VRA)
vRealize Automation IaaS Web Server (IWS)
vRealize Automation IaaS Manager Service (IMS)
vRealize Automation IaaS vSphere Proxy Agent (IAS)
vRealize Automation Distributed Execution Manager (DEM)
vRealize Business Appliance (BUS)
vRealize Business Data Collector (BUC)
Microsoft SQL Server Database (SQL)

Networks: Management Application Virtual Network (VXLAN) · Universal Transit Network (VXLAN) · External Transit Network(s) · Management Distributed Port Group

Logical Component Architecture

In a dual-region Software-Defined Data Center, two Platform Services Controllers and two vCenter Server instances are deployed in each region. This includes a vCenter Server for the management domain and a vCenter Server for the shared edge and compute domain.

Each vCenter Server instance is connected to a load-balanced pair of Platform Services Controllers using an NSX Edge Services Gateway. To enable Enhanced Linked Mode, the design joins the Platform Services Controller instances into a unified Single Sign-On domain.
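The joined Platform Services Controllers replicate in a ring topology (the common Single Sign-On domain shown below). As a minimal sketch of why a ring is used — with hypothetical node names, not the design's actual hostnames — the following toy model checks that replication connectivity survives the loss of any single PSC:

```python
# Toy model of PSC replication agreements in a ring topology.
# Node names are hypothetical; the design uses two PSCs per region.
pscs = ["psc-a-01", "psc-a-02", "psc-b-01", "psc-b-02"]

# Ring: each PSC replicates with its two neighbors.
ring = {(pscs[i], pscs[(i + 1) % len(pscs)]) for i in range(len(pscs))}

def connected(nodes, edges):
    """Breadth-first check that every surviving node is reachable."""
    if not nodes:
        return True
    edges = {e for e in edges if e[0] in nodes and e[1] in nodes}
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        cur = frontier.pop()
        for a, b in edges:
            for nxt in ((b,) if a == cur else (a,) if b == cur else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen == set(nodes)

# The ring survives the loss of any single PSC without partitioning:
assert all(connected([p for p in pscs if p != down], ring) for down in pscs)
```

A chain topology, by contrast, is partitioned by the loss of any interior node, which is why the design closes the replication loop.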

[Diagram: Region A and Region B each contain a Management Domain vCenter Server Appliance and a Compute Domain vCenter Server Appliance, each fronting load-balanced Platform Services Controller Appliances joined in a common vCenter Single Sign-On domain (ring topology); a vSphere Update Manager Download Service runs in each region]

In a dual-region Software-Defined Data Center, two primary NSX Manager instances are deployed in Region A: one for the management domain and one for the shared edge and compute domain, along with their associated NSX Universal Controller Clusters.

In Region B, secondary NSX Manager instances automatically import the configurations of the NSX Universal Controller Clusters from Region A.

[Diagram: the primary Management Domain and Compute Domain NSX Managers in Region A pair with secondary NSX Managers in Region B, which import the management domain and compute domain NSX Controller configurations from the primary NSX Managers; each region's management domain and shared edge and compute domain (edge resource pool) host NSX Edge Services Gateways for N/S routing and an NSX Edge Services Gateway with HA as a one-arm load balancer]

In a dual-region Software-Defined Data Center, a vRealize Log Insight cluster is deployed in each region. Each cluster consists of three nodes, enabling continued availability and increased log ingestion rates.

vRealize Log Insight collects and analyzes log data across the domain using the syslog protocol and the ingestion API. vRealize Log Insight also integrates with vRealize Operations Manager to facilitate root cause analysis.
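As a sketch of the syslog side of ingestion, each message a source forwards to Log Insight begins with a priority value that packs the facility and severity (PRI = facility × 8 + severity, per the syslog RFCs). The facility and severity names below are standard syslog values, not design-specific settings:

```python
# Minimal sketch of the syslog priority (PRI) calculation — the framing
# parsed when hosts forward logs to vRealize Log Insight via syslog.
FACILITIES = {"kern": 0, "user": 1, "daemon": 3, "local0": 16}
SEVERITIES = {"emerg": 0, "err": 3, "warning": 4, "info": 6}

def pri(facility: str, severity: str) -> int:
    """PRI = facility * 8 + severity, carried as <PRI> at message start."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

# daemon.info encodes as <30>; local0.err encodes as <131>
assert pri("daemon", "info") == 30
assert pri("local0", "err") == 131
```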

[Diagram: in each region, a three-node vRealize Log Insight cluster (a master node plus two worker nodes) on any supported primary storage ingests from the management and compute vCenter Servers, NSX, vSAN, vRealize Automation (VRA, IWS, IMS, DEM, IAS, SQL), vRealize Business (BUS, BUC), and vRealize Operations; log archives are exported to NFS; the two clusters exchange events via event forwarding over the ingestion API]

Refer to the design release notes for products and versions included in the design.

Legend: Core vSphere Management · NSX · vRealize Automation, vRealize Orchestrator and vRealize Business for Cloud (replicated for disaster recovery) · vRealize Operations and vRealize Log Insight · Distributed Logical Routing and Application Virtual Networks · Primary Storage

Core and Domain Architecture

[Diagram: Management Domain — a Management Distributed Switch in the Universal Management Transport Zone; any supported storage (vSAN recommended) plus NFS; minimum 4 ESXi nodes, vSAN ReadyNodes recommended, vSphere HA and DRS enabled; each ESXi host carries two VTEPs]

Management Domain

The management domain hosts the infrastructure components used to instantiate, manage, and monitor the SDDC. This includes the core infrastructure components, such as the Platform Services Controllers, vCenter Server instances, NSX Managers, NSX Controllers for the management domain, vSphere Replication, and Site Recovery Manager, as well as the SDDC monitoring and automation solutions such as vRealize Operations, vRealize Log Insight, and vRealize Automation.

Managed by Management Domain vCenter Server

Workloads running in the SDDC do not have direct access to external networks. To access external networks, traffic is routed through distributed routing to the NSX Edge Services Gateways in the shared edge and compute domain.

Expansions beyond the initial shared edge and compute domain are simply compute domains.

[Diagram: Shared Edge and Compute Domain, managed by the Compute Domain vCenter Server — a Compute Distributed Switch across four ESXi hosts, each with two VTEPs; any supported storage plus NFS; minimum 4 nodes, vSAN ReadyNodes recommended; vSphere HA and DRS enabled, sized to business workload requirements; L2/L3 boundary at the top of rack]

The design supports L3 or L2 network transport services. For a scalable and vendor-neutral data center network, use an L3 transport.

All design documentation is provided for an L3 transport. Adjust the deployment and operations guidance in the context of an L2 transport.
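The L3 design relies on equal-cost multipath routing between the NSX Edges and the leaf/spine fabric. As an illustrative sketch — a generic 5-tuple hash, not any particular switch's actual algorithm — ECMP pins each flow to one of the equal-cost paths, so packets within a flow are never reordered:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Pick one of several equal-cost paths by hashing the flow 5-tuple,
    so every packet of a given flow takes the same path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return paths[digest % len(paths)]

# Hypothetical edge gateway names, not the design's actual hostnames:
edges = ["esg-01", "esg-02", "esg-03", "esg-04"]

# The same flow always maps to the same edge gateway:
a = ecmp_next_hop("192.168.11.54", "10.0.0.5", 44321, 443, "tcp", edges)
b = ecmp_next_hop("192.168.11.54", "10.0.0.5", 44321, 443, "tcp", edges)
assert a == b and a in edges
```

Different flows hash to different paths on average, which is how the ECMP NSX Edge Services Gateways spread north/south traffic across the fabric.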


Region A: vSphere Update Manager Download Service, vRealize Operations Analytics Cluster and Remote Collectors, regional vRealize Log Insight cluster, Distributed vRealize Automation and Proxy Agents, and vRealize Business for Cloud Server and Collector.

Region B: vSphere Update Manager Download Service, vRealize Operations Remote Collectors, regional vRealize Log Insight cluster, vRealize Automation Proxy Agents, and vRealize Business for Cloud Collector.

Disaster recovery: vRealize Operations Analytics Cluster, Distributed vRealize Automation, and vRealize Business for Cloud Server.

Application Virtual Networks for SDDC Management Solutions in Region A Application Virtual Networks for SDDC Management Solutions in Region B


All design documentation is provided for an L3 transport with BGP-based peering.

A TechNote is provided for the alternative mixed-use or end-to-end use of OSPF.


The design uses standardized building blocks called workload domains. Below is the standard design, based on a two-domain model with a dedicated management domain and a shared edge and compute domain.


[Diagram: physical pod design — the management domain (4+ hosts), the shared edge and compute domain (4+ hosts), and additional compute domains each connect over 10 GigE to a pair of top-of-rack leaf switches (IGMP enabled) at the L2/L3 boundary]

Host Connectivity

Layer 3 ToR Switch

VLAN 1611  Management  172.16.11.0/24  DGW: 172.16.11.253
VLAN 1612  vMotion     172.16.12.0/24  DGW: 172.16.12.253
VLAN 1613  VXLAN       172.16.13.0/24
VLAN 1614  vSAN        172.16.14.0/24
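As a sketch, the per-VLAN addressing above can be derived with the standard library. Placing the default gateway at the .253 host address of each /24 is shown for the Management and vMotion subnets and assumed here for the remaining ones:

```python
import ipaddress

# VLAN plan from the host-connectivity table; the .253 gateway convention
# is taken from the Management and vMotion rows and assumed for the rest.
vlans = {
    1611: ("Management", "172.16.11.0/24"),
    1612: ("vMotion", "172.16.12.0/24"),
    1613: ("VXLAN", "172.16.13.0/24"),
    1614: ("vSAN", "172.16.14.0/24"),
}

def gateway(cidr: str) -> str:
    """Default gateway at the .253 host address of the subnet."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + 253)

assert gateway("172.16.11.0/24") == "172.16.11.253"
assert gateway("172.16.12.0/24") == "172.16.12.253"
```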

[Diagram: each ESXi host trunks these VLANs (802.1Q) to a pair of Layer 3 top-of-rack switches; routed uplinks run ECMP above the L2/L3 boundary, and the span of VLANs is confined to the rack]

When using the recommended L3 network transport, the top-of-rack leaf switches of each rack act as the corresponding L3 interface for the associated subnets. The management domain and the shared edge and compute domain are provided with externally accessible VLANs for access to the Internet and corporate networks.

The two 10 GbE NICs on each host are connected across the top-of-rack leaf switches and teamed on the vSphere Distributed Switch in an active-active configuration. All port groups, except for the ones that carry VXLAN traffic, are configured with the 'Route based on physical NIC load' teaming algorithm. VTEP kernel ports and VXLAN traffic use the 'Route based on SRC-ID' algorithm. The vSphere Distributed Switch is configured with an MTU of 9000 for jumbo frames, along with the necessary VMkernel ports.
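The MTU 9000 requirement follows from VXLAN encapsulation overhead: each inner frame gains outer Ethernet, IP, UDP, and VXLAN headers on the physical network. A quick sketch of the arithmetic (assuming an IPv4 outer header and no outer VLAN tag):

```python
# VXLAN encapsulation overhead on the physical network, IPv4 outer header:
OUTER_ETH = 14   # outer Ethernet header
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header
OVERHEAD = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR  # 50 bytes total

def max_inner_frame(physical_mtu: int) -> int:
    """Largest inner Ethernet frame that fits the physical MTU.
    An interface MTU counts the IP payload, so the outer Ethernet
    header is excluded from the budget."""
    return physical_mtu - (OVERHEAD - OUTER_ETH)

# A vDS MTU of 9000 leaves ample headroom for guest frames:
assert max_inner_frame(9000) == 8964
# A standard 1500-byte MTU cannot carry a full 1500-byte inner payload
# plus its 14-byte inner Ethernet header (1514 > 1464):
assert max_inner_frame(1500) == 1464
```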

[Diagram: spine switches interconnect the leaf switches over 40 GigE]

Network Transport

Region Protection and Disaster Recovery

[Diagram: region protection and disaster recovery — Site Recovery Manager pairs the Region A and Region B infrastructure management stacks (vSphere, NSX); protection groups cover vRealize Automation, vRealize Business for Cloud, and vRealize Operations, replicated between the Region A and Region B management domains; vRealize Log Insight in each region is not replicated]

[Diagram: each region's NFS storage array presents volumes with exports for the content library and templates, for log archives, and for backups]

vRealize Automation Business Groups and Reservations

The design integrates solutions for compute, storage, network, cloud operations, and cloud management. A single vRealize Operations analytics cluster monitors and performs diagnostics across the Software-Defined Data Center by using a series of remote collectors and solution management packs.

The design establishes a Cloud Management Platform with vRealize Automation to provide a service catalog and self-service portal to deploy, update, and manage workloads. Its embedded instance of vRealize Orchestrator provides a repository of extensible workflows and integrations. vRealize Business for Cloud provides visibility into the financial aspects of the cloud infrastructure, allowing cost to be tracked and optimized.

The design implements a single vRealize Automation tenant. Business groups can be created to fit your needs. Within each business group, the tenant administrators are able to manage users and groups, apply tenant-specific branding, enable notifications, configure business policies, and manage the service catalog. The IT Automating IT use case documentation provides implementation steps for a set of scenarios.

One region is designated as the primary region and the other as the secondary region. SDDC management, automation and operations solutions are deployed in the primary region and configured to migrate to the secondary region in the event of a disaster. All regions actively run business workloads.

[Diagram: a single vRealize Automation tenant (https://my.sddc.local/vcac/org/company) with tenant admins, IaaS admins, and business group managers; each region's data center infrastructure fabric is managed by a fabric admin as a fabric group, with business group reservations and an edge reservation spanning the shared edge/compute domain and additional compute domain(s); NFS exports in each region serve the content library and templates, log archives, and backups]

Universal Compute Transport Zone

Notable acronyms: Platform Services Controller (PSC) · NSX Manager (NSXM) · Site Recovery Manager (SRM) · Universal Distributed Logical Router (UDLR) · VXLAN Tunnel Endpoint (VTEP) · vSphere Data Protection (VDP) · vSphere Replication (VR)

[Diagram: vRealize Operations and vRealize Log Insight networking — in Region A, the vRealize Operations analytics cluster (master, replica, and data nodes) sits on the region-independent application virtual network (192.168.11.0/24) behind an NSX Edge Services Gateway with HA acting as load balancer, and is replicated for disaster recovery to Region B; each region's vRealize Operations collector nodes and three-node vRealize Log Insight cluster (master and worker nodes behind a cluster VIP) sit on the region-dependent application virtual networks (192.168.31.0/24 and 192.168.32.0/24); the Management Universal Distributed Logical Router connects these to the Universal Transit Network (192.168.10.0/24) and the ECMP NSX Edge Services Gateways; remote collectors connect to the management and compute vCenter Servers, NSX, and the shared storage systems in each region]

All design documentation and validation is provided using vSAN as the primary storage system. vSAN enables both all-flash and hybrid architectures. Adjust deployment and operations for other supported storage systems.

Use of vSAN ReadyNodes is recommended to ensure seamless compatibility and support. The configuration and assembly of the components are standardized to eliminate system variability.

A consolidated management and compute design is also available. Refer to the VVD documentation.

[Diagram: Site Recovery Manager orchestrates failover and failback; vSphere Replication provides replication when using vSAN; analytics cluster nodes 192.168.11.50, .51, and .52 run as active nodes]

[Diagram: host networking — each management domain and shared edge and compute domain ESXi host connects two 10 GigE NICs (nic0, nic1) as Uplink 01/02 to its vSphere Distributed Switch (vDS MTU 9000); VMkernel ports, all MTU 9000: Management, vMotion, vSAN, VTEP (VXLAN), NFS, and, in the management domain, vSphere Replication. Management hosts ESXi-MGMT-01/02/03 run the management and compute PSCs, vCenter Servers, NSX Managers, NSX Controllers, VDP, SRM, and vSphere Replication, while shared edge and compute host ESX-COMP-01 runs the N/S NSX Edges (Compute) providing north/south uplinks and external connectivity]


[Diagram: universal transport zones — the management cluster and edge/compute cluster distributed switches join the universal transport zones; core platform services and application virtual networks for SDDC solutions connect through the management UDLR, workload virtual networks connect through the UDLR and DLR via universal logical switches (L2 over L3), and north/south routing runs through the N/S NSX Edges (management and compute) in the edge resource pool; the NSX Controllers for management run in the management cluster]
