FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP and Cisco Application Centric Infrastructure Design Guide
Last Updated: May 2, 2016
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster,
more reliable, and more predictable customer deployments. For more information visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS
(COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND
ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM
A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE
LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR
INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER
PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR
OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON
FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the
Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast
Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,
IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,
MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect,
ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your
Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc.
and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The
use of the word partner does not imply a partnership relationship between Cisco and any other company.
(0809R)
© 2016 Cisco Systems, Inc. All rights reserved.
Table of Contents
About Cisco Validated Designs ................................................................................................................................................ 2
Executive Summary ................................................................................................................................................................. 6
Solution Overview .................................................................................................................................................................... 7
Introduction ......................................................................................................................................................................... 7
Audience ............................................................................................................................................................................. 7
Technology Overview .............................................................................................................................................................. 8
Cisco Unified Computing System ......................................................................................................................................... 8
Cisco UCS Blade Chassis .............................................................................................................................................. 10
Cisco UCS B200 M4 Blade Server ................................................................................................................................. 11
Cisco UCS Virtual Interface Card 1240 .......................................................................................................................... 11
Cisco UCS 6248UP Fabric Interconnect ......................................................................................................................... 11
Cisco Nexus 2232PP 10GE Fabric Extender .................................................................................................................. 12
Cisco Unified Computing System Manager .................................................................................................................... 12
Cisco UCS Service Profiles ............................................................................................................................................ 13
Cisco Nexus 9000 Series Switch ....................................................................................................................................... 16
Cisco UCS Central ............................................................................................................................................................. 16
FlexPod ............................................................................................................................................................................. 16
FlexPod System Overview ............................................................................................................................................. 17
NetApp FAS and Data ONTAP ....................................................................................................................................... 18
NetApp Clustered Data ONTAP ..................................................................................................................................... 19
NetApp Storage Virtual Machines .................................................................................................................................. 19
VMware vSphere ............................................................................................................................................................... 20
Firewall and Load Balancer ............................................................................................................................................ 20
SnapManager for Exchange Server Overview .................................................................................................................... 23
SnapManager for Exchange Server Architecture ................................................................................................................ 24
Migrating Microsoft Exchange Data to NetApp Storage ................................................................................................. 25
High Availability ................................................................................................................................................................. 25
Microsoft Exchange 2013 Database Availability Group Deployment Scenarios ............................................................... 26
Exchange 2013 Architecture .............................................................................................................................................. 27
Client Access Server ..................................................................................................................................................... 27
Mailbox Server .............................................................................................................................................................. 28
Client Access Server and Mailbox Server Communication ............................................................................................. 28
High Availability and Site Resiliency ............................................................................................................................... 29
Exchange Clients ........................................................................................................................................................... 30
POP3 and IMAP4 Clients ............................................................................................................................................... 30
Name Space Planning ........................................................................................................................................................ 30
Namespace Models ....................................................................................................................................................... 30
Network Load Balancing .................................................................................................................................................... 31
Common Name Space and Load Balancing Session Affinity Implementations ................................................................ 32
Cisco Application Centric Infrastructure ............................................................................................................................. 33
Cisco ACI Fabric ............................................................................................................................................................ 33
Solution Design ...................................................................................................................................................................... 35
FlexPod, Cisco ACI and L4-L7 Services Components ...................................................................................................... 35
Validated System Hardware Components ...................................................................................................................... 38
FlexPod Infrastructure Design ............................................................................................................................................ 38
Hardware and Software Revisions ................................................................................................................................. 38
FlexPod Infrastructure Physical Build ............................................................................................................................. 39
Cisco Unified Computing System ................................................................................................................................... 40
Cisco Nexus 9000 ......................................................................................................................................................... 49
Application Centric Infrastructure (ACI) Design .................................................................................................................. 52
ACI Components ........................................................................................................................................................... 52
End Point Group (EPG) Mapping in a FlexPod Environment ............................................................................................ 54
Virtual Machine Networking ........................................................................................................................................... 55
Virtual Machine Networking ........................................................................................................................................... 56
Onboarding Infrastructure Services ................................................................................................................................ 57
Onboarding Microsoft Exchange on FlexPod ACI Infrastructure ......................................................................................... 58
Exchange Logical Topology ........................................................................................................................................... 59
Microsoft Exchange as Tenant on ACI Infrastructure ...................................................................................................... 59
Common Services and Storage Management ................................................................................................................ 65
Connectivity to Existing Infrastructure ............................................................................................................................ 66
Exchange - ACI Design Recap ....................................................................................................................................... 67
Exchange Server Sizing ..................................................................................................................................................... 70
Exchange 2013 Server Requirements Calculator Inputs ................................................................................................. 70
Exchange 2013 Server Requirements Calculator Output ................................................................................................ 72
Exchange and Domain Controller Virtual Machines ............................................................................................................ 76
Namespace and Network Load Balancing ...................................................................................................................... 77
NetApp Storage Design ..................................................................................................................................................... 77
Network and Storage Physical Connectivity ................................................................................................................... 77
Storage Configurations .......................................................................................................................................................... 84
Aggregate, Volume, and LUN Sizing .................................................................................................................................. 84
Storage Layout .................................................................................................................................................................. 84
Exchange 2013 Database and Log LUNs ........................................................................................................................... 85
Validation ............................................................................................................................................................................... 86
Validating the Storage Subsystem with JetStress .............................................................................................................. 86
Validating the Storage Subsystem with Exchange 2013 LoadGen .................................................................................. 87
Conclusion ............................................................................................................................................................................. 88
References ........................................................................................................................................................................ 88
Interoperability Matrixes ................................................................................................................................................ 89
About the Authors .................................................................................................................................................................. 90
Acknowledgements ........................................................................................................................................................... 90
Executive Summary
Microsoft Exchange 2013 deployed on FlexPod with Cisco ACI and F5 BIG-IP LTM is a predesigned, best-practice data center architecture built on the Cisco Unified Computing System (Cisco UCS), the Cisco Nexus 9000 family of switches, F5 BIG-IP Application Delivery Controller (ADC), and NetApp fabric-attached storage (FAS) or V-Series systems. This design guide covers the key design details and best practices to follow when deploying this shared architecture.
This Exchange Server 2013 solution is implemented on top of FlexPod with VMware vSphere 5.5 and Cisco Nexus 9000 Application Centric Infrastructure (ACI). The details of this infrastructure are not covered in this document, but can be found at the following link:
FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP, and Cisco Application Centric Infrastructure (ACI) Deployment Guide
Solution Overview
Introduction
Cisco® Validated Designs include systems and solutions that are designed, tested, and documented to
facilitate and improve customer deployments. These designs incorporate a wide range of technologies and
products into a portfolio of solutions that have been developed to address the business needs of customers.
Achieving the vision of a truly agile, application-based data center requires a sufficiently flexible
infrastructure that can rapidly provision and configure the necessary resources independently of their
location in the data center.
This document describes the Cisco solution for deploying Microsoft Exchange® 2013 with the NetApp FlexPod® solution architecture and F5® BIG-IP,
using Cisco Application Centric Infrastructure (ACI). Cisco ACI is a holistic architecture that introduces
hardware and software innovations built upon the new Cisco Nexus® 9000 Series product line. Cisco ACI
provides a centralized policy-driven application deployment architecture, which is managed through the
Cisco Application Policy Infrastructure Controller (APIC). Cisco ACI delivers software flexibility with the
scalability of hardware performance.
Audience
The audience of this document includes, but is not limited to, sales engineers, field consultants, professional
services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure
that is built to deliver IT efficiency and enable IT innovation.
Technology Overview
Cisco Unified Computing System
The Cisco Unified Computing System is a third-generation data center platform that unites computing,
networking, storage access, and virtualization resources into a cohesive system designed to reduce TCO
and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE)
unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable,
multi-chassis platform in which all resources participate in a unified management domain that is controlled
and managed centrally.
Figure 1 Cisco Unified Computing System
Figure 2 Cisco Unified Computing System Components
Figure 3 Cisco Unified Computing System
The main components of the Cisco UCS are:
Compute
The system is based on an entirely new class of computing system that incorporates blade servers
based on Intel Xeon® E5-2600 Series Processors. Cisco UCS B-Series Blade Servers work with
virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility and
productivity.
Network
The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This network
foundation consolidates LANs, SANs, and high-performance computing networks which are separate
networks today. The unified fabric lowers costs by reducing the number of network adapters, switches,
and cables, and by decreasing the power and cooling requirements.
Storage access
The system provides consolidated access to both storage area network (SAN) and network-attached
storage (NAS) over the unified fabric. By unifying storage access, Cisco UCS can access storage over
Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with
choice for storage access and investment protection. Additionally, server administrators can
reassign storage-access policies for system connectivity to storage resources, thereby simplifying
storage connectivity and management for increased productivity.
Management
The system uniquely integrates all system components, enabling the entire solution to be managed
as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive graphical user
interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to
manage all system configuration and operations.
The Cisco UCS is designed to deliver:
A reduced Total Cost of Ownership (TCO), increased Return on Investment (ROI) and increased
business agility.
Increased IT staff productivity through just-in-time provisioning and mobility support.
A cohesive, integrated system which unifies the technology in the data center. The system is
managed, serviced and tested as a whole.
Scalability through a design for hundreds of discrete servers and thousands of virtual machines and
the capability to scale I/O bandwidth to match demand.
Industry standards supported by a partner ecosystem of industry leaders.
Cisco UCS Blade Chassis
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing
System, delivering a scalable and flexible blade server chassis.
The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-
standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers
and can accommodate both half-width and full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power
supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-
redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors
(one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.
A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O
bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
Figure 4 Cisco Blade Server Chassis (Front, Rear and Populated Blades View)
Cisco UCS B200 M4 Blade Server
The Cisco UCS B200 M4 Blade Server is a half-width, two-socket blade server. The system uses two Intel
Xeon® E5-2600 v3 Series Processors, up to 768 GB of DDR4 memory, two optional hot-swappable small form
factor (SFF) serial attached SCSI (SAS) disk drives, and two VIC adapters that provide up to 80 Gbps of I/O
throughput. The server balances simplicity, performance, and density for production-level virtualization and
other mainstream data center workloads.
Figure 5 Cisco UCS B200 M4 Blade Server
Cisco UCS Virtual Interface Card 1240
A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet, FCoE-capable modular LAN
on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers.
When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be
expanded to eight ports of 10 Gigabit Ethernet.
Cisco UCS 6248UP Fabric Interconnect
The fabric interconnects provide a single point of connectivity and management for the entire system.
Typically deployed as an active-active pair, the fabric interconnects integrate all components into a
single, highly available management domain controlled by Cisco UCS Manager. The fabric interconnects
manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a
server or virtual machine's topological location in the system.
The Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with low-latency,
lossless, cut-through switching that supports IP, storage, and management traffic using a single set of
cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections
equivalently, establishing a virtualization-aware environment in which blade servers, rack servers, and virtual
machines are interconnected using the same mechanisms. The Cisco UCS 6248UP is a 1-RU fabric
interconnect that features up to 48 universal ports that can support 10 Gigabit Ethernet, Fibre Channel over
Ethernet, or native Fibre Channel connectivity.
Figure 6 Cisco UCS 6248UP Fabric Interconnect
Cisco Nexus 2232PP 10GE Fabric Extender
The Cisco Nexus 2232PP 10G provides 32 10 Gb Ethernet and Fibre Channel over Ethernet (FCoE) Small
Form-Factor Pluggable Plus (SFP+) server ports and eight 10 Gb Ethernet and FCoE SFP+ uplink ports in a
compact 1 rack unit (1RU) form factor.
When a C-Series Rack-Mount Server is integrated with Cisco UCS Manager, through the Nexus 2232
platform, the server is managed using the Cisco UCS Manager GUI or Cisco UCS Manager CLI. The Nexus
2232 provides data and control traffic support for the integrated C-Series server.
Cisco Unified Computing System Manager
Cisco UCS Manager provides unified, centralized, embedded management of all Cisco Unified Computing
System software and hardware components across multiple chassis and thousands of virtual machines.
Administrators use the software to manage the entire Cisco Unified Computing System as a single logical
entity through an intuitive GUI, a command-line interface (CLI), or an XML API.
The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered,
active-standby configuration for high availability. The software gives administrators a single interface for
performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault
detection, auditing, and statistics collection. Cisco UCS Manager service profiles and templates support
versatile role- and policy-based management, and system configuration information can be exported to
configuration management databases (CMDBs) to facilitate processes based on IT Infrastructure Library
(ITIL) concepts. Service profiles benefit both virtualized and non-virtualized environments and increase the
mobility of non-virtualized servers, such as when moving workloads from server to server or taking a server
offline for service or upgrade. Profiles can also be used in conjunction with virtualization clusters to bring
new resources online easily, complementing existing virtual machine mobility.
Some of the key elements managed by Cisco UCS Manager include:
Cisco UCS Integrated Management Controller (IMC) firmware
RAID controller firmware and settings
BIOS firmware and settings, including server universal user ID (UUID) and boot order
Converged network adapter (CNA) firmware and settings, including MAC addresses and worldwide
names (WWNs) and SAN boot settings
Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology
Interconnect configuration, including uplink and downlink definitions, MAC address and WWN
pinning, VLANs, VSANs, quality of service (QoS), bandwidth allocations, Cisco Data Center VM-FEX
settings, and EtherChannels to upstream LAN switches
For more Cisco UCS Manager information, refer to:
http://www.cisco.com/en/US/products/ps10281/index.html
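The XML API mentioned above accepts small XML method documents posted over HTTP(S). As a minimal sketch of the idea only (the aaaLogin method and its inName, inPassword, and outCookie attributes follow the published Cisco UCS Manager XML API, but the credentials and endpoint shown are placeholder assumptions), a login request can be composed and its response parsed with the Python standard library:

```python
# Sketch: composing a Cisco UCS Manager XML API login request.
# The aaaLogin method name and its attributes follow the published XML API;
# the username/password values below are placeholders, and no network call
# is made here.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> bytes:
    """Return the XML body for an aaaLogin call."""
    req = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(req)

def parse_login_response(body: bytes) -> str:
    """Extract the session cookie (outCookie) from an aaaLogin response."""
    resp = ET.fromstring(body)
    if resp.get("errorCode"):
        raise RuntimeError(f"login failed: {resp.get('errorDescr')}")
    return resp.get("outCookie")

# The request body would be POSTed to the UCS Manager endpoint; the returned
# cookie authenticates subsequent method calls.
request_body = build_login_request("admin", "password")
print(request_body.decode())
```

In practice, SDKs such as the Cisco UCS Python SDK wrap this login/cookie exchange so that administrators script against objects rather than raw XML.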
Cisco UCS Service Profiles
Figure 7 Traditional Provisioning Approach
BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP
addresses, number of HBAs, HBA WWNs, HBA firmware, FC fabric assignments, QoS settings, and VLAN
assignments are among the many parameters that must be configured before deploying any server within
your data center. Some of these parameters are kept in the hardware of the server itself (like
BIOS firmware version, BIOS settings, boot order, FC boot settings, etc.) while some settings are kept on
your network and storage switches (like VLAN assignments, FC fabric assignments, QoS settings, ACLs,
etc.). This results in the following server deployment challenges:
Lengthy Deployment Cycles
— Every deployment requires coordination among server, storage, and network teams
— Need to ensure correct firmware & settings for hardware components
— Need appropriate LAN and SAN connectivity
Response Time To Business Needs
— Tedious deployment process
— Manual, error prone processes, that are difficult to automate
— High OPEX costs, outages caused by human errors
Limited OS And Application Mobility
— Storage and network settings tied to physical ports and adapter identities
— Static infrastructure leads to over-provisioning, higher OPEX costs
Cisco UCS has uniquely addressed these challenges with the introduction of service profiles (see Figure 8),
which enable integrated, policy-based infrastructure management. Cisco UCS Service Profiles hold the DNA
for nearly all configurable parameters required to set up a physical server. A set of user-defined policies
(rules) allows quick, consistent, repeatable, and secure deployments of Cisco UCS servers.
Figure 8 Cisco UCS Service Profiles
Cisco UCS Service Profiles contain values for a server's property settings, including virtual network interface
cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external management,
and high availability information. By abstracting these settings from the physical server into a Cisco UCS
Service Profile, the Service Profile can then be deployed to any physical compute hardware within the Cisco
UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one physical server to
another. This logical abstraction of the server personality separates the dependency on the hardware type or model, and is a result of Cisco's unified fabric model rather than overlaying software tools on top.
This innovation is still unique in the industry despite competitors claiming to offer similar functionality. In
most cases, these vendors must rely on several different methods and interfaces to configure these server
settings. Furthermore, Cisco is the only hardware provider to offer a truly unified management platform, with
Cisco UCS Service Profiles and hardware abstraction capabilities extending to both blade and rack servers.
Some of the key features and benefits of Cisco UCS Service Profiles are:
Service Profiles and Templates
A service profile contains configuration information about the server hardware, interfaces, fabric
connectivity, and server and network identity. The Cisco UCS Manager provisions servers utilizing
service profiles. The UCS Manager implements role-based and policy-based management focused on
service profiles and templates. A service profile can be applied to any blade server to provision it with
the characteristics required to support a specific software stack. A service profile allows server and
network definitions to move within the management domain, enabling flexibility in the use of system
resources.
Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by
server, network, and storage administrators. Service profile templates consist of server requirements and
the associated LAN and SAN connectivity. Service profile templates allow different classes of resources
to be defined and applied to a number of resources, each with its own unique identities assigned from
predetermined pools.
The UCS Manager can deploy the service profile on any physical server at any time. When a service
profile is deployed to a server, the Cisco UCS Manager automatically configures the server, adapters,
Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service profile. A
service profile template parameterizes the UIDs that differentiate between server instances.
This automation of device configuration reduces the number of manual steps required to configure
servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches.
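This programmatic configuration is exposed through the Cisco UCS Manager XML API. As an illustrative sketch (the credentials, prefix, and template distinguished name below are placeholder assumptions; a real deployment would POST these bodies to the UCS Manager HTTPS endpoint), the request payloads for logging in and stamping out service profiles from a template can be built as follows:

```python
# Hedged sketch: building Cisco UCS Manager XML API request bodies. Method
# names follow the published UCS XML API; the credentials, template DN, and
# profile prefix are illustrative assumptions, not values from this design.

def build_login(username: str, password: str) -> str:
    """XML body for the UCS Manager aaaLogin method."""
    return f'<aaaLogin inName="{username}" inPassword="{password}"/>'

def build_instantiate(cookie: str, template_dn: str, prefix: str, count: int) -> str:
    """XML body for lsInstantiateNTemplate: stamp out N service profiles
    from a service profile template, named <prefix>1..N."""
    return (
        f'<lsInstantiateNTemplate cookie="{cookie}" dn="{template_dn}" '
        f'inTargetOrg="org-root" inServerNamePrefixOrEmpty="{prefix}" '
        f'inNumberOf="{count}" inHierarchical="false"/>'
    )

login_xml = build_login("admin", "secret")
sp_xml = build_instantiate("cookie123", "org-root/ls-Exchange-Template", "exch-sp-", 4)
```

The session cookie returned by `aaaLogin` is reused on every subsequent request, which is why it is a parameter of the instantiation body.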
Programmatically Deploying Server Resources
Cisco UCS Manager provides centralized management capabilities, creates a unified management
domain, and serves as the central nervous system of the Cisco UCS. Cisco UCS Manager is embedded
device management software that manages the system from end-to-end as a single logical entity
through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and policy-based
management using service profiles and templates. This construct improves IT productivity and business
agility, shifting IT's focus from maintenance to strategic initiatives.
Dynamic Provisioning
Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and
WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin
groups, and threshold policies) all are programmable using a just-in-time deployment model. A service
profile can be applied to any blade server to provision it with the characteristics required to support a
specific software stack. A service profile allows server and network definitions to move within the
management domain, enabling flexibility in the use of system resources. Service profile templates allow
different classes of resources to be defined and applied to a number of resources, each with its own
unique identities assigned from predetermined pools.
Cisco Nexus 9000 Series Switch
The Cisco Nexus 9000 Series Switches offer both modular and fixed 10/40/100 Gigabit Ethernet switch
configurations with scalability up to 30 Tbps of non-blocking performance, less than five-microsecond
latency, 1152 10-Gbps or 288 40-Gbps non-blocking Layer 2 and Layer 3 Ethernet ports, and wire-speed
VXLAN gateway, bridging, and routing support.
Figure 9 Cisco Nexus 9000 Series Switch
For more information, see: http://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html
Cisco UCS Central
For Cisco UCS customers managing growth within a single data center, growth across multiple sites, or
both, Cisco UCS Central Software centrally manages multiple Cisco UCS domains using the same concepts
that Cisco UCS Manager uses to support a single domain. Cisco UCS Central Software manages global
resources (including identifiers and policies) that can be consumed within individual Cisco UCS Manager
instances. It can delegate the application of policies (embodied in global service profiles) to individual
domains, where Cisco UCS Manager puts the policies into effect. In its first release, Cisco UCS Central
Software can support up to 10,000 servers in a single data center or distributed around the world in as many
domains as are used for the servers.
For more information on Cisco UCS Central, see:
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-central-software/index.html
FlexPod
Cisco and NetApp have carefully validated and verified the FlexPod solution architecture and its many use
cases while creating a portfolio of detailed documentation, information, and references to assist customers
in transforming their data centers to this shared infrastructure model. This portfolio includes, but is not
limited to, the following items:
Best practice architectural design
Workload sizing and scaling guidance
Implementation and deployment instructions
Technical specifications (rules for what is, and what is not, a FlexPod configuration)
Frequently asked questions (FAQs)
Cisco Validated Designs (CVDs) and NetApp Validated Architectures (NVAs) covering a variety of use
cases
Cisco and NetApp have also built a robust and experienced support team focused on FlexPod solutions,
from customer account and technical sales representatives to professional services and technical support
engineers. The support alliance between NetApp and Cisco gives customers and channel services partners
direct access to technical experts who collaborate across vendors and have access to shared lab
resources to resolve potential issues.
FlexPod supports tight integration with virtualized and cloud infrastructures, making it the logical choice for
long-term investment. FlexPod provides a uniform approach to IT architecture, offering a well-characterized
and documented pool of shared resources for application workloads. FlexPod delivers operational efficiency
and consistency with the versatility to meet a variety of SLAs and IT initiatives, including the following:
Application roll out or application migration
Business continuity and disaster recovery
Desktop virtualization
Cloud delivery models (public, private, hybrid) and service models (IaaS, PaaS, SaaS)
Asset consolidation and virtualization
FlexPod System Overview
FlexPod is a best practice data center architecture that includes three components:
Cisco Unified Computing System (Cisco UCS)
Cisco Nexus Switches
NetApp fabric-attached storage (FAS) systems
Figure 10 FlexPod Component Families
These components are connected and configured according to the best practices of both Cisco and NetApp
and provide the ideal platform for running a variety of enterprise workloads with confidence. FlexPod can
scale up for greater performance and capacity (adding compute, network, or storage resources individually
as needed), or it can scale out for environments that require multiple consistent deployments (rolling out
additional FlexPod stacks). The reference architecture covered in this document leverages the Cisco Nexus
9000 for the switching element.
One of the key benefits of FlexPod is the ability to maintain consistency at scale. Each of the component
families shown in Figure 10 (Cisco UCS, Cisco Nexus, and NetApp FAS) offers platform and resource
options to scale the infrastructure up or down, while supporting the same features and functionality that are
required under the configuration and connectivity best practices of FlexPod.
NetApp FAS and Data ONTAP
NetApp solutions offer increased availability while consuming fewer IT resources. A NetApp solution includes
hardware in the form of FAS controllers and disk storage and the NetApp Data ONTAP® operating system
that runs on the controllers. Data ONTAP is offered in two modes of operation: Data ONTAP operating in
7-Mode and clustered Data ONTAP. The storage efficiency built into Data ONTAP provides substantial space
savings, allowing more data to be stored at a lower cost. The NetApp portfolio affords flexibility for selecting
the controller that best fits customer requirements.
NetApp offers a unified storage architecture that simultaneously supports storage area network
(SAN), network-attached storage (NAS), and iSCSI across many operating environments such as VMware,
Windows®, and UNIX®. This single architecture provides access to data by using industry-standard
protocols, including NFS, CIFS, iSCSI, FCP, SCSI, FTP, and HTTP. Connectivity options include standard
Ethernet (10/100/1000, or 10GbE) and Fibre Channel (1, 2, 4, or 8 Gb/sec). In addition, all systems can be
configured with high-performance solid-state drives (SSDs) or serial-attached SCSI (SAS) disks for primary
storage applications, low-cost serial ATA (SATA) disks for secondary applications (such as backup and
archive), or a mix of different disk types.
For more information, see: http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx
NetApp Clustered Data ONTAP
With clustered Data ONTAP, NetApp provides enterprise-ready, unified scale-out storage. Developed from a
solid foundation of proven Data ONTAP technology and innovation, clustered Data ONTAP is the basis for
large virtualized shared storage infrastructures that are architected for nondisruptive operations over the
system lifetime. Controller nodes are deployed in HA pairs in a single storage domain or cluster.
Data ONTAP scale-out is a way to respond to growth in a storage environment. As the storage environment
grows, additional controllers are added seamlessly to the resource pool residing on a shared storage
infrastructure. Host and client connections as well as datastores can move seamlessly and non-disruptively
anywhere in the resource pool, so that existing workloads can be easily balanced over the available
resources, and new workloads can be easily deployed. Technology refreshes (replacing disk shelves, adding
or completely replacing storage controllers) are accomplished while the environment remains online and
continues to serve data.
Data ONTAP is the first product to offer a complete scale-out solution, and it offers an adaptable,
always-available storage infrastructure for today's highly virtualized environment.
NetApp Storage Virtual Machines
A cluster serves data through at least one and possibly multiple storage virtual machines (SVMs; formerly
called Vservers). An SVM is a logical abstraction that represents the set of physical resources of the cluster.
Data volumes and network logical interfaces (LIFs) are created and assigned to an SVM and may reside on
any node in the cluster to which the SVM has been given access. An SVM may own resources on multiple
nodes concurrently, and those resources can be moved non-disruptively from one node to another. For
example, a flexible volume can be non-disruptively moved to a new node and aggregate, or a data LIF can
be transparently reassigned to a different physical network port. In this manner, the SVM abstracts the
cluster hardware and is not tied to specific physical hardware.
An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be
junctioned together to form a single NAS namespace, which makes all of an SVM's data available through a
single share or mount point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs
can be created and exported using iSCSI, Fibre Channel, or FCoE. Any or all of these data protocols may be
configured for use within a given SVM.
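As a hedged illustration of one SVM carrying several data protocols at once: the design described here uses clustered Data ONTAP 8.x (managed through the CLI or ZAPI), but later ONTAP 9 releases expose a REST API whose SVM-creation body has roughly the shape composed below. The SVM name and aggregate are placeholder assumptions.

```python
# Hedged sketch: composing a multiprotocol SVM definition. The JSON shape
# loosely follows the POST /api/svm/svms body of later ONTAP 9 REST releases
# and is shown only to illustrate enabling several protocols on one SVM;
# the name and aggregate below are placeholders.
import json

def svm_body(name: str, aggregates: list, protocols: list) -> str:
    """Return a JSON body enabling the listed data protocols on a new SVM."""
    allowed = {"nfs", "cifs", "iscsi", "fcp"}
    if not set(protocols) <= allowed:
        raise ValueError(f"unsupported protocol in {protocols}")
    body = {
        "name": name,
        "aggregates": [{"name": a} for a in aggregates],
        # Each requested protocol is enabled on the SVM at creation time.
        **{p: {"enabled": True} for p in protocols},
    }
    return json.dumps(body, sort_keys=True)

payload = svm_body("exchange-svm", ["aggr1_node01"], ["cifs", "iscsi"])
```

A CIFS-serving SVM would additionally need an Active Directory domain join, which is omitted here for brevity.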
Because it is a secure entity, an SVM is only aware of the resources that have been assigned to it and has no
knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct
entity with its own security domain. Tenants may manage the resources allocated to them through a
delegated SVM administration account. Each SVM may connect to unique authentication zones such as
Active Directory®, LDAP, or NIS.
VMware vSphere
VMware vSphere is a virtualization platform for holistically managing large collections of infrastructure
resources (CPUs, storage, and networking) as a seamless, versatile, and dynamic operating environment. Unlike
traditional operating systems that manage an individual machine, VMware vSphere aggregates the
infrastructure of an entire data center to create a single powerhouse with resources that can be allocated
quickly and dynamically to any application in need.
The VMware vSphere environment delivers a robust application environment. For example, with VMware
vSphere, all applications can be protected from downtime with VMware High Availability (HA) without the
complexity of conventional clustering. In addition, applications can be scaled dynamically to meet changing
loads with capabilities such as Hot Add and VMware Distributed Resource Scheduler (DRS).
For more information, see: http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html
Firewall and Load Balancer
Cisco ACI is a policy-driven framework that optimizes application delivery. Applications consist of server
end points and network services. The relationship between these elements and their requirements forms an
application-centric network policy. Through Cisco APIC automation, application-centric network policies are
managed and dynamically provisioned to simplify and accelerate application deployments on the fabric.
Network services such as load balancers and firewalls can be readily consumed by the application end
points as the APIC-controlled fabric directs traffic to the appropriate services. This is the data center network
agility that application teams have demanded, reducing deployment times from days or weeks to minutes.
L4-L7 service integration is achieved by using service-specific Device Packages. A Device Package is
imported into the Cisco APIC and used to define, configure, and monitor a network service device
such as a firewall, SSL offload device, load balancer, content switch, SSL termination device, or intrusion
prevention system (IPS). Device Packages contain descriptions of the functional capabilities and settings,
along with interface and network connectivity information, for each function.
The Cisco APIC is an open platform enabling a broad ecosystem and opportunity for industry interoperability
with Cisco ACI. Numerous Device Packages associated with various vendors are available and can be
found at http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-732445.html
An L4-L7 network service device is deployed in the fabric by adding it to a service graph which essentially
identifies the set of network or service functions that are provided by the device to the application. The
service graph is inserted between source and destination EPGs by a contract. The service device itself can
be configured through the Cisco APIC or, optionally, through the device's traditional GUI or CLI. The level of
APIC control depends on the functionality defined in the Device Package device scripts.
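The Cisco APIC exposes this configuration through a REST API (JSON or XML over HTTP). A minimal sketch of building the documented login body and a class-query URL that lists the registered L4-L7 logical devices follows; the controller address and credentials are placeholder assumptions, and a real deployment would send these with an HTTP client and reuse the returned session token.

```python
# Hedged sketch: Cisco APIC REST API request construction. The login body
# (POST /api/aaaLogin.json) and class-query URL format follow the documented
# API; the controller address and credentials are placeholders.
import json

APIC = "https://apic.example.com"  # placeholder controller address

def login_body(user: str, pwd: str) -> str:
    """JSON body for POST /api/aaaLogin.json."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def class_query_url(cls: str) -> str:
    """URL to list all managed objects of an APIC class; class vnsLDevVip
    represents the L4-L7 logical devices registered in the fabric."""
    return f"{APIC}/api/class/{cls}.json"

body = login_body("admin", "secret")
url = class_query_url("vnsLDevVip")
```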
It should be noted that firewalls and load balancers are not core components of the FlexPod solution, but
because most application deployments are incomplete without security and load distribution, firewall and
load balancer designs are covered as part of the infrastructure deployment.
Cisco Adaptive Security Appliance (ASA)
The Cisco ASA 5500-X Series helps organizations to balance security with productivity. It combines the
industry's most deployed stateful inspection firewall with comprehensive, next-generation network security
services. All Cisco ASA 5500-X Series Next-Generation Firewalls are powered by Cisco Adaptive Security
Appliance (ASA) Software, with enterprise-class stateful inspection and next-generation firewall capabilities.
ASA software also:
Integrates with other essential network security technologies
Enables up to eight ASA 5585-X appliances to be clustered, for up to 320 Gbps of firewall and 80
Gbps of IPS throughput
Delivers high availability for high-resiliency applications
Based on customer policies defined for a particular tenant, Cisco ASA configuration is automated using
device packages installed in the APIC.
F5 BIG-IP
F5 BIG-IP provides the ability to secure, optimize, and load balance application traffic. This gives
administrators the control to add servers easily, eliminate downtime, improve application performance, and
meet security requirements.
Figure 11 F5 BIG-IP
F5's Synthesis architecture is a vision for delivering Software Defined Application Services (SDAS). Its high-
performance services fabric enables organizations to rapidly provision, manage, and orchestrate a rich
catalog of services using simplified business models that dramatically change the economics of
layer 4-7 services.
The Cisco ACI APIC provides a centralized service automation and policy control point to integrate F5
services through the F5 Device Package. This integration of critical data center technology directly
incorporates F5 application solutions into the ACI automation
framework. Both Cisco ACI and F5 Synthesis are highly extensible through programmatic extensions,
enabling consistent automation and orchestration of critical services needed to support application
requirements for performance, security, and reliability. Cisco ACI and F5 SDAS offer a comprehensive,
application-centric set of network and L4-L7 services, enabling data centers to rapidly deploy and deliver
applications.
The F5 BIG-IP platform supports SDAS and Cisco ACI. The BIG-IP platform provides an intelligent
application delivery solution for Exchange Server, which combines dynamic network products for securing,
optimizing, and accelerating the Exchange environment. The F5 BIG-IP application services are instantiated
and managed through Cisco APIC readily providing Microsoft Exchange services when required. Figure 12
captures the service graph instantiation of the BIG-IP application service.
Figure 12 F5 BIG-IP Service Graph Example
NetApp OnCommand System Manager and Unified Manager
NetApp OnCommand® System Manager allows storage administrators to manage individual NetApp storage
systems or clusters of NetApp storage systems. Its easy-to-use interface saves time, helps prevent errors,
and simplifies common storage administration tasks such as creating volumes, LUNs, qtrees, shares, and
exports. System Manager works across all NetApp storage systems: FAS2000 series, FAS3000 series,
FAS6000 series, FAS8000 series, and V-Series systems or NetApp FlexArray® systems. NetApp
OnCommand Unified Manager complements the features of System Manager by enabling the monitoring and
management of storage within the NetApp storage infrastructure.
This solution uses both OnCommand System Manager and OnCommand Unified Manager to provide storage
provisioning and monitoring capabilities within the infrastructure.
NetApp Virtual Storage Console
The NetApp Virtual Storage Console (VSC) software delivers storage configuration and monitoring, datastore
provisioning, virtual machine (VM) cloning, and backup and recovery of VMs and datastores. VSC also
includes an application-programming interface (API) for automated control.
VSC is a VMware vCenter Server plug-in that provides end-to-end VM lifecycle management for
VMware environments that use NetApp storage. VSC is available to all VMware vSphere Clients that connect
to the VMware vCenter Server. This availability is different from a client-side plug-in that must be installed
on every VMware vSphere Client. The VSC software can be installed either on the VMware vCenter Server or
on a separate Windows Server® instance or VM.
VMware vCenter Server
VMware vCenter Server is the simplest and most efficient way to manage VMware vSphere, irrespective of
the number of VMs you have. It provides unified management of all hosts and VMs from a single console and
aggregates performance monitoring of clusters, hosts, and VMs. VMware vCenter Server gives
administrators a deep insight into the status and configuration of compute clusters, hosts, VMs, storage, the
guest OS, and other critical components of a virtual infrastructure. A single administrator can manage 100 or
more virtualization environment workloads using VMware vCenter Server, more than doubling typical
productivity in managing physical infrastructure. VMware vCenter manages the rich set of features available
in a VMware vSphere environment.
For more information, see:
http://www.vmware.com/products/vcenter-server/overview.html
NetApp SnapDrive
NetApp SnapDrive® data management software automates storage provisioning tasks. It can back up and
restore business-critical data in seconds by using integrated NetApp Snapshot® technology. With Windows
Server and VMware ESX server support, SnapDrive software can run on Windows-based hosts either in a
physical environment or in a virtual environment. Administrators can integrate SnapDrive with Windows
failover clustering and add storage as needed without having to pre-allocate storage resources.
For additional information about NetApp SnapDrive, refer to the NetApp SnapDrive Data Management
Software datasheet.
SnapManager for Exchange Server Overview
SnapManager for Exchange (SME) provides an integrated data management solution for Microsoft Exchange
Server 2013 that enhances the availability, scalability, and reliability of Microsoft Exchange databases. SME
provides rapid online backup and restoration of databases, along with local or remote backup set mirroring
for disaster recovery.
SME uses online Snapshot technologies that are part of Data ONTAP. It integrates with the Microsoft Exchange
backup and restore APIs and the Volume Shadow Copy Service (VSS). SnapManager for Exchange can use
SnapMirror to support disaster recovery even if native Microsoft Exchange Database Availability Group (DAG)
replication is leveraged.
SME provides the following data-management capabilities:
Migration of Microsoft Exchange data from local or remote storage onto NetApp LUNs.
Application-consistent backups of Microsoft Exchange databases and transaction logs from NetApp
LUNs.
Verification of Microsoft Exchange databases and transaction logs in backup sets.
Management of backup sets.
Archiving of backup sets.
Restoration of Microsoft Exchange databases and transaction logs from previously created backup
sets, providing a lower recovery time objective (RTO) and more frequent recovery point objectives
(RPOs).
Capability to reseed database copies in DAG environments and prevent full reseed of a replica
database across the network.
Some of the new features released in SnapManager for Exchange 7.1 include the following:
Capability to restore a passive database copy without having to reseed it across the network, by
using the database Reseed wizard.
Native SnapVault integration without using Protection Manager.
RBAC support for service account.
Single installer for E2010/E2013.
Retention enhancements.
SnapManager for Exchange Server Architecture
SnapManager for Microsoft Exchange versions 7.1 and 7.2 support Microsoft Exchange Server 2013. SME is
tightly integrated with Microsoft Exchange, which allows consistent online backups of Microsoft Exchange
environments while leveraging NetApp Snapshot technology. SME is a VSS requestor, meaning that it uses
the VSS framework supported by Microsoft to initiate backups. SME works with the DAG, providing the
ability to back up and restore data from both active database copies and passive database copies.
Figure 13 shows the SnapManager for Exchange Server architecture. For more information about VSS, refer
to Volume Shadow Copy Service Overview on the Microsoft Developer Network.
Figure 13 SnapManager for Exchange Server Architecture
Migrating Microsoft Exchange Data to NetApp Storage
The process of migrating Microsoft Exchange databases and transaction log files from one location to
another can be a time-consuming and lengthy process. Many manual steps must be taken so that the
Microsoft Exchange database files are in the proper state to be moved. In addition, more manual steps must
be performed to bring those files back online for handling Microsoft Exchange traffic. SME automates the
entire migration process, eliminating any potential user errors. After the data is migrated, SME automatically
mounts the Microsoft Exchange data files and allows Microsoft Exchange to continue to serve e-mail.
High Availability
In Microsoft Exchange 2013, the DAG feature was implemented to support mailbox database resiliency,
mailbox server resiliency, and site resiliency. The DAG consists of two or more servers, and each server can
store up to one copy of each mailbox database.
The DAG Active Manager manages the database and mailbox failover and switchover processes. A failover
is an unplanned failure, and a switchover is a planned administrative activity to support maintenance
activities.
Database and server failover occurs automatically when a database or mailbox server incurs a failure. The
order in which a database copy is activated is set by the administrator.
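This activation order can be modeled as a simple selection over copy health and the administrator-assigned activation preference. The sketch below is a deliberately simplified model, not Active Manager's full best-copy selection algorithm (which also weighs criteria such as copy queue and replay queue length):

```python
# Simplified model of DAG database copy activation: on failure of the active
# copy, activate the healthy passive copy with the lowest administrator-
# assigned activation preference number (1 = most preferred).
from dataclasses import dataclass

@dataclass
class DatabaseCopy:
    server: str
    activation_preference: int  # set by the administrator
    healthy: bool

def select_copy_to_activate(copies):
    """Return the healthy copy with the lowest activation preference."""
    candidates = [c for c in copies if c.healthy]
    if not candidates:
        raise RuntimeError("no healthy copy available for activation")
    return min(candidates, key=lambda c: c.activation_preference)

copies = [
    DatabaseCopy("MBX1", 1, healthy=False),  # failed active copy
    DatabaseCopy("MBX2", 2, healthy=True),
    DatabaseCopy("MBX3", 3, healthy=True),
]
# → the copy on MBX2 is activated
```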
For more information on Microsoft Exchange 2013 DAGs, refer to the Microsoft TechNet article Database
Availability Groups.
Microsoft Exchange 2013 Database Availability Group Deployment Scenarios
Single-Site Scenario
Deploying a two-node DAG with a minimum of two copies of each mailbox database in a single site is best
suited for companies that want to achieve server-level and application-level redundancy. In this situation,
deploying a two-node DAG using RAID DP provides not only server-level and application-level redundancy
but protection against double disk failure as well. Adding SnapManager for Exchange in a single-site
scenario enables point-in-time restores without the added capacity requirements and complexity of a DAG
copy. Using the reseed functionality in SME 7.1 allows the database copies to be in a healthy state and
reduces the RTO for failed databases, enabling resiliency all the time.
Multisite Scenario
Extending a DAG across multiple data centers provides high availability of servers and storage components
and adds site resiliency. When planning a multisite scenario, NetApp recommends having at least three
mailbox servers as well as three copies of each mailbox database, two in the primary site and one in the
secondary site. Adding at least two copies in both primary and secondary sites provides not only site
resiliency but also high availability in each site. Using the reseed functionality in SME 7.1 allows the database
copies to be in a healthy state and reduces the RTO for failed databases, enabling resiliency all the time.
For additional information on DAG layout planning, refer to the Microsoft TechNet article Database
Availability Groups.
When designing the storage layout and data protection for a DAG scenario, use the following design
considerations and best practices.
Deployment Best Practice
In a multisite scenario, it is a best practice to deploy at least three mailbox servers as well as three copies of each mailbox database, two in the primary site and one in the secondary site. Adding at least two copies in both primary and secondary sites provides not only site resiliency but also high availability in each site.
Storage Design Best Practices
Design identical storage for active and passive copies of the mailboxes in terms of capacity
and performance.
Provision the active and passive LUNs identically with regard to path, capacity, and performance.
Place flexible volumes for active and passive databases onto separate aggregates that are
connected to separate SVMs. If a single aggregate is lost, only the database copies on that
aggregate are affected.
Volume Separation Best Practice
Place active and passive copies of the database into separate volumes.
Backup Best Practices
Perform a SnapManager for Exchange full backup on one copy of the database and a copy-only
backup on the rest of the database copies.
Verification of database backups is not required if Microsoft Exchange 2013 is in a DAG
configuration with at least two copies of the databases and with Microsoft Exchange background
database maintenance enabled.
Verification of database backups and transaction log backups is required if Microsoft Exchange
2013 is in a standalone (non-DAG) configuration.
In Microsoft Exchange 2013 standalone environments that use SnapMirror, configure database
backup and transaction log backup verification to occur on the SnapMirror destination storage.
For more information about the optimal layout, refer to SnapManager 7.1 Microsoft Exchange Server
documentation.
For more information about the NetApp clustered Data ONTAP best practices, refer to TR-4221: Microsoft
Exchange Server 2013 and SnapManager for Exchange.
Exchange 2013 Architecture
Microsoft Exchange 2013 introduces significant architectural changes. While Exchange 2010 had five server
roles, Exchange 2013 has two primary server roles: the Mailbox server and the Client Access Server. The
Mailbox server processes the client connections to the active mailbox database. The Client Access Server is
a thin, stateless component that proxies client connections to the Mailbox server. Both the Mailbox server
role and the Client Access Server role can run on the same server or on separate servers.
Revisions in the Exchange 2013 architecture have changed some aspects of Exchange client connectivity.
RPC/TCP is no longer used as the Outlook client access protocol; it has been replaced
with RPC over HTTPS, known as Outlook Anywhere or, starting with Exchange 2013 SP1, MAPI over
HTTP. This change removes the need for the RPC Client Access protocol on the Client Access Server and
thus simplifies the Exchange server namespace.
Client Access Server
The Client Access Server provides authentication, limited redirection, and proxy services. It supports standard
client access protocols such as HTTP, POP, IMAP, and SMTP. The thin Client Access Server is stateless and
does not render data. It does not queue or store any messages or transactions.
The Client Access Server has the following attributes:
Stateless Server
The Client Access Server no longer requires session affinity. This means that it no longer matters which
member of the Client Access array receives an individual client request. This functionality avoids the
need for the network load balancer to maintain session affinity. The hardware load balancer can support a
greater number of concurrent sessions when session affinity is not required.
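Because no session affinity is needed, the load balancer can hand each request to any array member. A minimal round-robin sketch (conceptual only, not F5 BIG-IP configuration; the server names are placeholders):

```python
# Illustrative only: with a stateless Client Access tier, a load balancer can
# distribute every request round-robin with no session-affinity table, since
# any array member can service any client request.
from itertools import cycle

class RoundRobinPool:
    def __init__(self, members):
        self._next = cycle(members)

    def pick(self) -> str:
        """Return the next Client Access Server; no per-client state kept."""
        return next(self._next)

pool = RoundRobinPool(["cas1", "cas2", "cas3"])
picks = [pool.pick() for _ in range(4)]
# → ['cas1', 'cas2', 'cas3', 'cas1']
```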
Connection Pooling
Connection pooling allows the Client Access Server to use fewer connections to the Mailbox server when
acting as the client request proxy. This improves processing efficiency and client response latency.
Mailbox Server
The Mailbox server role hosts the mailbox and public folder databases, as in Exchange Server 2010. In
addition, the Exchange 2013 Mailbox server role includes the Client Access protocols,
Transport services, and Unified Messaging services. The Exchange Store has been rewritten in managed
code to improve performance and scale across a greater number of physical processor cores in a server.
Each Exchange database now runs under its own process instead of sharing a single process for all
database instances.
Client Access Server and Mailbox Server Communication
The Client Access Server, Mailbox server, and Exchange clients communicate in the following manner:
The Client Access Server and Mailbox server use LDAP to query Active Directory.
Outlook clients connect to the Client Access Server.
The Client Access Server proxies client requests to the Mailbox server that hosts the active database copy.
Figure 14 Logical Connectivity of Client Access and Mailbox Server
High Availability and Site Resiliency
The Mailbox server has built-in high availability and site resiliency. As in Exchange 2010, the Database
Availability Group (DAG) and Windows Failover Clustering are the base components for high availability
and site resiliency in Exchange 2013. Up to 16 Mailbox servers can participate in a single DAG.
Database Availability Group
The DAG uses database copies and replication, combined with database mobility and activation, to implement
data center high availability and site resiliency. Up to 16 copies of each database can be maintained at any
given time. Only one copy of each database can be active at any time, while the remaining copies are
passive. These database copies are distributed across multiple DAG member servers.
Active Manager manages the activation of these databases on the DAG member servers.
Active Manager
Active Manager manages the health and status of the database and database copies. It also manages
continuous replication and Mailbox server high availability. Mailbox servers that are members of a DAG host a
Primary Active Manager (PAM) role and a Standby Active Manager (SAM) role. Only one server in the DAG
runs the PAM role at any given time.
The PAM role determines which database copies are active and which are passive. The PAM role also reacts
to DAG member server failures and topology changes. The PAM role can move from one server to another
within a DAG so there will always be a DAG member server running the PAM role.
The SAM role tracks which DAG member server is running the active database copy and which one is
running the passive copy. This information is provided to other Exchange roles such as the Client Access
Server role and the Transport Service role. The SAM also tracks the state of the databases on the local
server and informs the PAM when database copy failover is required.
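The PAM's reaction to a failed DAG member can be sketched as a simple model. This is illustrative only: the server names are hypothetical, and Exchange's actual best-copy selection also weighs copy queue length, content index health, and other criteria beyond activation preference.

```python
# Simplified sketch of how a Primary Active Manager (PAM) might promote a
# passive database copy when the server hosting the active copy fails.
from dataclasses import dataclass

@dataclass
class DatabaseCopy:
    server: str
    active: bool
    healthy: bool
    activation_preference: int  # lower number = preferred for activation

def select_new_active(copies, failed_server):
    """Deactivate copies on the failed server and promote the best passive copy."""
    for c in copies:
        if c.server == failed_server:
            c.active = False
            c.healthy = False
    candidates = [c for c in copies if c.healthy and not c.active]
    if not candidates:
        raise RuntimeError("no healthy passive copy available")
    best = min(candidates, key=lambda c: c.activation_preference)
    best.active = True
    return best.server

copies = [
    DatabaseCopy("MBX1", active=True,  healthy=True, activation_preference=1),
    DatabaseCopy("MBX2", active=False, healthy=True, activation_preference=2),
    DatabaseCopy("MBX3", active=False, healthy=True, activation_preference=3),
]
print(select_new_active(copies, "MBX1"))  # MBX2 takes over
```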
Site Resiliency
Site resiliency can be implemented by stretching a DAG across two data center sites. This is achieved by
placing Mailbox servers from the same DAG, along with Client Access Server roles, in both sites. Multiple
copies of each database are deployed on DAG members in both sites to facilitate mailbox database
availability in all sites.
Database activation controls which site has the active database. The DAG replication keeps the database
copies synchronized.
Exchange Clients
Exchange 2013 mailboxes can be accessed by a variety of clients. These clients run in web browsers, on
mobile devices, and on computers. Most clients access their mailboxes through a virtual directory that is
presented by the Internet Information Services (IIS) instance that runs on the Mailbox server.
Outlook Client
Outlook 2007, Outlook 2010, and Outlook 2013 run on computers. They use RPC over HTTP to access the
Exchange 2013 mailboxes.
Exchange ActiveSync Clients
Exchange ActiveSync Clients run on mobile devices and use the Exchange ActiveSync protocol to access
the Exchange Mailboxes.
Outlook WebApp
Outlook WebApp provides access to Exchange 2013 mailboxes from a web browser.
POP3 and IMAP4 Clients
POP3 and IMAP4 clients can run on a mobile device or a computer. They use the POP3 or IMAP4 protocol to
access the mailbox. The SMTP protocol is used in combination with these clients to send email.
Namespace Planning
Exchange 2013 simplifies the namespace requirements compared to earlier versions of Exchange.
Namespace Models
Various namespace models are commonly used with Exchange 2013 to achieve various functional goals.
Unified Namespace
The unified namespace is the simplest to implement. It can be used in single data center and multi-data
center deployments. The namespace is tied to one or more DAGs. In the case of multiple data centers with
DAGs that span the data centers, the namespace also spans the data centers. In this namespace model all
mailbox servers in the DAGs have active mailboxes, and Exchange clients connect to the Exchange
servers irrespective of the location of the Exchange server or the location of the client.
Dedicated Namespace
The dedicated namespace is associated with a specific data center or geographical location. This
namespace usually corresponds to mailbox servers in one or more DAGs in that location. Connectivity is
controlled by which data center has the mailbox server with the active database. Dedicated namespace
deployments typically use two namespaces for each data center: a primary namespace that is used during
normal operation, and a failover namespace that is used when service availability is transferred to a partner
data center. Switchover to the partner data center is an administrator-managed event in this case.
Internal and External Namespace
An internal and external namespace is typically used in combination with a split-DNS scheme that provides
different IP address resolution for a given namespace based on the client connection point. This is
commonly used to provide different connection endpoints for clients that are connected on the external
side of the firewall as compared to the internal side of the firewall.
Regional Namespace
A regional namespace provides a method to optimize client connectivity based on client proximity to the
mailbox server hosting their mailbox. Regional namespaces can be used with both unified namespace and
dedicated namespace schemes. For example, a company that has data centers in Europe for Europe-based
employees and data centers in America for America-based employees can deploy separate namespaces for
Europe and for America.
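The regional mapping can be sketched as a simple lookup. The region names and namespaces below are illustrative, not part of the validated design:

```python
# Sketch of regional namespace resolution: map a client's region to the
# namespace fronting the DAGs in that region, falling back to a default
# (unified) namespace for regions without a dedicated entry.
REGIONAL_NAMESPACES = {
    "europe": "mail-eu.example.com",
    "america": "mail-us.example.com",
}

def regional_endpoint(client_region, default="mail.example.com"):
    """Return the namespace a client in the given region should connect to."""
    return REGIONAL_NAMESPACES.get(client_region.lower(), default)

print(regional_endpoint("Europe"))  # mail-eu.example.com
print(regional_endpoint("asia"))    # mail.example.com (falls back to default)
```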
Network Load Balancing
Network load balancing enables scalability and high availability for the Client Access Servers. Scalability and
redundancy are enabled by deploying multiple Client Access Servers and distributing the Exchange client
traffic among them.
Exchange 2013 has several options for implementing network load balancing. Session affinity is no longer
required at the network load balancing level, although it can still be implemented to achieve specific health
probe goals.
Health probe checking enables the network load balancer to verify which Client Access Server is servicing
specific Exchange client connectivity protocols. The health probe checking scheme determines the
granularity of detecting protocol availability on the Client Access Server. Exchange 2013 has a virtual
directory for health checking. This directory is Exchange client specific and can be used by load balancers to
verify the availability of a particular protocol on a Client Access Server.
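A health probe against this virtual directory can be sketched with the Python standard library. The per-protocol healthcheck.htm paths follow the Exchange 2013 convention; the hostname is a placeholder, and certificate verification is disabled here purely to keep the illustration short.

```python
# Sketch of a per-protocol health probe against the Exchange 2013
# health-check virtual directory (e.g. /owa/healthcheck.htm).
import http.client
import ssl

PROTOCOL_PATHS = {
    "owa": "/owa/healthcheck.htm",
    "ecp": "/ecp/healthcheck.htm",
    "ews": "/ews/healthcheck.htm",
    "activesync": "/Microsoft-Server-ActiveSync/healthcheck.htm",
}

def probe_path(protocol):
    """Return the health-check URL path for an Exchange client protocol."""
    return PROTOCOL_PATHS[protocol]

def probe(host, protocol, timeout=5.0):
    """Return True if the server answers the protocol's health check with 200."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # illustration only; verify certs in practice
    ctx.verify_mode = ssl.CERT_NONE
    conn = http.client.HTTPSConnection(host, timeout=timeout, context=ctx)
    try:
        conn.request("GET", probe_path(protocol))
        return conn.getresponse().status == 200
    except OSError:
        return False
    finally:
        conn.close()

# A load balancer would periodically call, for example:
#   probe("mail.example.com", "owa")
```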
Common Name Space and Load Balancing Session Affinity Implementations
There are four common network load balancing implementations that are used for load balancing client
connections to Exchange 2013 Client Access Servers. Each implementation has pros and cons for simplicity,
health probe granularity, and network load balancer resource consumption.
Layer 4 Single Namespace without Session Affinity
Layer 4 Multi-Namespace without Session Affinity
Layer 7 Single Namespace without Session Affinity
Layer 7 Single Namespace with Session Affinity
Layer 4 Single Namespace without Session Affinity
This is the simplest implementation and consumes the fewest load balancer resources. This implementation
uses a single namespace and layer 4 load balancing. Health probe checks are performed based on IP
address, network port, and a single Exchange client virtual directory for health checks. Since most Exchange
clients use HTTP, and thus the same HTTP port, the health check can be performed on just one Exchange
client protocol that is in use. The most frequently used Exchange client protocol is usually selected for
health probe checking in this implementation. When the health probe fails, the network load balancer
removes the entire Client Access Server from the Client Access Server pool until the health check
returns to a healthy state.
Pros: Simple to implement. Requires the fewest load balancer resources.
Cons: Server-level granularity only. The health check probe may miss a particular Exchange client protocol
being offline and thus fail to remove the Client Access Server from the rotation pool.
Layer 4 Multi-Namespace without Session Affinity
This implementation is like the previous layer 4 implementation without session affinity, with the exception
that an individual namespace is used for each Exchange client protocol type. This method provides the
ability to configure a health check probe for each Exchange client protocol and thus gives the load balancer
the capability to identify and remove just the unhealthy protocols on a given Client Access Server from
the Client Access Server pool rotation.
Pros: Protocol level service availability detection. Session affinity maintained on the Client Access
Server.
Cons: Multiple Namespace management. Increased load balancer complexity.
Layer 7 Single Namespace without Session Affinity
This implementation uses a single namespace and layer 7 load balancing. The load balancer terminates SSL,
and health probe checks are configured and performed for each Exchange client protocol virtual directory.
Since the health probe check is Exchange client protocol specific, the load balancer is capable of identifying
and removing just the unhealthy protocols on a given Client Access Server from the Client Access Server
pool rotation.
Pros: Protocol level service availability detection. Session affinity maintained on the Client Access
Server.
Cons: SSL termination uses more network load balancer resources.
Layer 7 Single Namespace with Session Affinity
This implementation is like the previous layer 7 single namespace implementation with the exception that
session affinity is also implemented.
Pros: Protocol level service availability detection. Session affinity maintained on the Client Access
Server.
Cons: SSL termination and session affinity use more network load balancer resources. Increased
load balancer complexity.
Cisco Application Centric Infrastructure
Cisco Application Centric Infrastructure (ACI) is a holistic, policy-driven architecture for data center
networks and the application workloads they support. ACI is built on a network fabric that combines
time-tested protocols with new innovations to create a highly flexible, scalable, and resilient architecture of
low-latency, high-bandwidth links. This fabric delivers a network that can support the most demanding and
flexible data center environments.
The fabric is designed to support the industry move to management automation and programmatic policy.
The ACI fabric accomplishes this with a combination of hardware, policy-based control systems, and
software closely coupled to provide advantages not possible in other models.
The fabric consists of three major components: the Application Policy Infrastructure Controller, spine
switches, and leaf switches. These three components handle both the application of network policy and the
delivery of packets.
Cisco ACI Fabric
The Cisco ACI fabric consists of three major components:
The Application Policy Infrastructure Controller (APIC)
Spine switches
Leaf switches
These components are connected and configured according to best practices of both Cisco and NetApp and
provide the ideal platform for running a variety of enterprise workloads with confidence.
The ACI switching architecture is laid out in a leaf-and-spine topology where every leaf connects to every
spine using 40G Ethernet interfaces. This design enables linear scalability and robust multipathing within
the fabric, optimized for the east-west traffic required by applications. No connections are created
between leaf nodes or spine nodes because all non-local traffic flows from ingress leaf to egress leaf across
a single spine switch. The only exceptions to this rule are certain failure scenarios. Bandwidth scales linearly
with the addition of spine switches. Also, each spine switch added creates another network path, which is
used to load-balance traffic on the fabric.
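The linear scaling described above can be expressed with a short calculation: each leaf has one 40G uplink per spine, and any leaf-to-leaf flow crosses exactly one spine, so both bandwidth and equal-cost path count grow with the spine count. The spine counts below are arbitrary examples:

```python
# Illustration of leaf-and-spine scaling: per-leaf uplink capacity and the
# number of equal-cost leaf-to-leaf paths both grow linearly with spines.
def leaf_uplink_capacity_gbps(num_spines, link_gbps=40):
    """Total uplink bandwidth per leaf: one link to each spine."""
    return num_spines * link_gbps

def paths_between_leaves(num_spines):
    """Equal-cost leaf-to-leaf paths: one per spine."""
    return num_spines

for spines in (2, 4, 6):
    print(spines, "spines:", leaf_uplink_capacity_gbps(spines), "Gbps,",
          paths_between_leaves(spines), "paths")
```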
The ACI Fabric Architecture is outlined in Figure 15.
Figure 15 Cisco ACI Fabric Architecture
The software controller, APIC, is delivered as an appliance and three or more such appliances form a cluster
for high availability and enhanced performance. APIC is responsible for all tasks enabling traffic transport
including:
Fabric activation
Switch firmware management
Network policy configuration and instantiation
Though the APIC acts as the centralized point of configuration for policy and network connectivity, it is never
in line with the data path or the forwarding topology. The fabric can still forward traffic even when
communication with the APIC is lost.
The Application Policy Infrastructure Controller (APIC) is the unifying point of automation and management
for the ACI fabric. APIC provides centralized access to all fabric information, optimizes the application
lifecycle for scale and performance, and supports flexible application provisioning across physical and virtual
resources. Some of the key benefits of Cisco APIC are:
Centralized application-level policy engine for physical, virtual, and cloud infrastructures
Detailed visibility, telemetry, and health scores by application and by tenant
Designed around open standards and open APIs
Robust implementation of multi-tenant security, quality of service (QoS), and high availability
Integration with management systems such as VMware, Microsoft, and OpenStack
APIC provides both a command-line interface (CLI) and graphical-user interface (GUI) to configure and
control the ACI fabric. APIC also exposes a northbound API through XML and JavaScript Object Notation
(JSON) and an open source southbound API.
For more information on Cisco APIC, see: http://www.cisco.com/c/en/us/products/cloud-systems-management/application-policy-infrastructure-controller-apic/index.html
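The northbound REST API mentioned above can be exercised directly over HTTPS. The sketch below builds the documented aaaLogin request using only the Python standard library; the payload shape (aaaUser/attributes) follows the APIC REST API, while the host and credentials are placeholders:

```python
# Sketch of logging in to the APIC's northbound REST API. A successful POST
# to /api/aaaLogin.json returns a session token for subsequent requests.
import json
import urllib.request

def login_payload(username, password):
    """Build the JSON body for a POST to /api/aaaLogin.json."""
    return json.dumps({"aaaUser": {"attributes": {"name": username, "pwd": password}}})

def apic_login(host, username, password):
    """POST the login request and return the parsed JSON response."""
    req = urllib.request.Request(
        "https://%s/api/aaaLogin.json" % host,
        data=login_payload(username, password).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example call (not executed here):
#   session = apic_login("apic1.example.com", "admin", "password")
```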
Solution Design
FlexPod, Cisco ACI and L4-L7 Services Components
FlexPod with ACI is designed to be fully redundant in the compute, network, and storage layers. There is no
single point of failure from a device or traffic path perspective. Figure 16 shows how the various elements
are connected together.
Figure 16 FlexPod Design with Cisco ACI and NetApp Clustered Data ONTAP
Fabric: As in the previous designs of FlexPod, link aggregation technologies play an important role in
FlexPod with ACI providing improved aggregate bandwidth and link resiliency across the solution
stack. The NetApp storage controllers, Cisco Unified Computing System, and Cisco Nexus 9000
platforms support active port channeling using 802.3ad standard Link Aggregation Control Protocol
(LACP). Port channeling is a link aggregation technique offering link fault tolerance and traffic
distribution (load balancing) for improved aggregate bandwidth across member ports. In addition, the
Cisco Nexus 9000 series features virtual Port Channel (vPC) capabilities. vPC allows links that are
physically connected to two different Cisco Nexus 9000 Series devices to appear as a single
"logical" port channel to a third device, essentially offering device fault tolerance. The Cisco UCS
Fabric Interconnects and NetApp FAS controllers benefit from the Cisco Nexus vPC abstraction,
gaining link and device resiliency as well as full utilization of a non-blocking Ethernet fabric.
Compute: Each Fabric Interconnect (FI) is connected to both leaf switches, and the links provide a
robust 40GbE connection between the Cisco Unified Computing System and the ACI fabric. Figure 17
illustrates the use of vPC-enabled 10GbE uplinks between the Cisco Nexus 9000 leaf switches and
the Cisco UCS FIs. Additional ports can easily be added to the design for increased bandwidth as
needed. Each Cisco UCS 5108 chassis is connected to the FIs using a pair of ports from each IO
Module for a combined 40G uplink. The current FlexPod design supports Cisco UCS C-Series
connectivity either by directly attaching the Cisco UCS C-Series servers to the FIs or by connecting
them to a Cisco Nexus 2232PP Fabric Extender hanging off of the Cisco UCS FIs.
FlexPod designs mandate Cisco UCS C-Series management using Cisco UCS Manager to provide a
uniform look and feel across blade and standalone servers.
Figure 17 Compute Connectivity
Storage: The ACI-based FlexPod design is an end-to-end IP-based storage solution that supports
SAN access by using iSCSI. The solution provides a 10/40GbE fabric that is defined by Ethernet
uplinks from the Cisco UCS fabric interconnects and NetApp storage devices connected to the Cisco
Nexus switches. Optionally, the ACI-based FlexPod design can be configured for SAN boot by using
Fibre Channel over Ethernet (FCoE). FCoE access is provided by directly connecting the NetApp FAS
controller to the Cisco UCS fabric interconnects, as shown in Figure 18.
Figure 18 FCoE Connectivity - Direct Attached SAN
Figure 18 shows the initial storage configuration of this solution as a two-node high availability (HA) pair
running clustered Data ONTAP. Storage system scalability is easily achieved by adding storage capacity
(disks and shelves) to an existing HA pair, or by adding more HA pairs to the cluster or storage domain.
For SAN environments, NetApp clustered Data ONTAP allows up to 4 HA pairs or 8 nodes; for NAS
environments it allows 12 HA pairs or 24 nodes to form a logical entity.
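The scale limits quoted above can be captured in a small helper; the environment keys and the example checks are illustrative:

```python
# Quick check of the clustered Data ONTAP limits quoted above: SAN clusters
# allow up to 4 HA pairs (8 nodes); NAS clusters allow 12 HA pairs (24 nodes).
LIMITS = {"san": 8, "nas": 24}  # maximum nodes per cluster

def within_cluster_limit(environment, ha_pairs):
    """Each HA pair contributes two nodes; compare against the protocol limit."""
    return 2 * ha_pairs <= LIMITS[environment]

print(within_cluster_limit("san", 4))   # True: 8 nodes, at the SAN limit
print(within_cluster_limit("san", 5))   # False: 10 nodes exceeds it
print(within_cluster_limit("nas", 12))  # True: 24 nodes, at the NAS limit
```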
The HA interconnect allows each node in an HA pair to assume control of its partner's storage (disks and
shelves) directly. The local physical HA storage failover capability does not extend beyond the HA pair.
Furthermore, a cluster of nodes does not have to include similar hardware. Rather, individual nodes in an HA
pair are configured alike, allowing customers to scale as needed, as they bring additional HA pairs into the
larger cluster.
For more information about the virtual design of the environment, which consists of VMware vSphere, Cisco
Nexus 1000v virtual distributed switching, and NetApp storage controllers, refer to the section FlexPod
Infrastructure Physical Build.
L4-L7 Services: The APIC manages the F5 BIG-IP LTM and its supported functions through the use of a
device package, which is used to define, configure, and monitor service devices. A device package allows
adding, modifying, removing, and monitoring a network service on the APIC without interruption. Adding a
new device type to the APIC is done by uploading a device package, a zip file containing all the information
needed for the APIC to integrate with that type of service device.
The F5 BIG-IP appliances are connected to the Nexus 9300 leaf switches, with one acting as active and the
other as standby. The BIG-IPs are connected through vPC to the Nexus 9300 leaf switches using LACP. Static
routes are configured to redirect traffic from the source (client) to the application servers through the F5
BIG-IP LTM.
F5 BIG-IP LTM Microsoft Exchange Deployment Reference: https://devcentral.f5.com/articles/microsoft-exchange-server
F5 BIG-IP v11 with Microsoft Exchange 2013 Deployment Guide: http://www.f5.com/pdf/deployment-guides/microsoft-exchange-iapp-v1_3-dg.pdf
Validated System Hardware Components
The following components are required to deploy the Cisco Nexus 9000 ACI design:
Cisco Unified Compute System
Cisco Nexus 2232 Fabric Extender (optional)
Cisco Nexus 9396 Series leaf Switch
Cisco Nexus 9508 Series spine Switch
Cisco Application Policy Infrastructure Controller (APIC)
NetApp Unified Storage
Enterprise applications are typically deployed along with a firewall and a load balancer for high availability
and security. The FlexPod with ACI validated design includes these optional components:
Cisco ASA 5585
F5 Big-IP 5000
Both of these devices are configured using the APIC.
FlexPod Infrastructure Design
Hardware and Software Revisions
Table 1 describes the hardware and software versions used during solution validation. It is important to note
that Cisco, NetApp, and VMware have interoperability matrixes that should be referenced to determine
support for any specific implementation of FlexPod. For more information, see the following links:
NetApp Interoperability Matrix Tool: http://support.netapp.com/matrix/
Cisco UCS Hardware and Software Interoperability Tool:
http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html
VMware Compatibility Guide: http://www.vmware.com/resources/compatibility/search.php
Table 1 Validated Software Versions

Layer          Device                                 Image             Comments
Compute        Cisco UCS Fabric Interconnects 6200    2.2(1d)           Includes the Cisco UCS-IOM 2208XP,
               Series, UCS B-200 M3, UCS C-220 M3                       Cisco UCS Manager, and UCS VIC 1240
               Cisco eNIC                             2.1.2.38
               Cisco fNIC                             1.5.0.20
Network        Cisco APIC                             1.0(1e)
               Cisco Nexus 9000 iNX-OS                11.0(1b)
Load Balancer  F5 BIG-IP                              11.5.1
Storage        NetApp FAS 8040                        Data ONTAP 8.2.3
               Nexus 5596 Cluster Switches            5.2(1)N1(1)
Software       VMware vSphere ESXi                    5.5u2
               VMware vCenter                         5.5u2
               OnCommand Unified Manager for          6.1
               clustered Data ONTAP
               NetApp Virtual Storage Console (VSC)   4.2.2
FlexPod Infrastructure Physical Build
Figure 19 illustrates the new ACI connected FlexPod design. The infrastructure is physically redundant
across the stack, addressing Layer 1 high-availability requirements, and the integrated stack can withstand
failure of a link or failure of a device. The solution also incorporates additional Cisco and NetApp
technologies and features that further increase the design efficiency. The compute, network, and storage
design overview of the FlexPod solution is covered in Figure 19. The individual details of these components
are covered in the upcoming sections.
Figure 19 Cisco Nexus 9000 Design for Clustered Data ONTAP
Cisco Unified Computing System
The FlexPod compute design supports both Cisco UCS B-Series and C-Series deployments. The
components of the Cisco Unified Computing System offer physical redundancy and a set of logical structures
to deliver a very resilient FlexPod compute domain. In this validation effort, multiple Cisco UCS B-Series and
C-Series ESXi servers are booted from SAN using iSCSI.
Cisco UCS Physical Connectivity
The Cisco UCS Fabric Interconnects are configured with two port-channels, one from each FI, to the Cisco
Nexus 9000 leaf switches. These port-channels carry all the data and storage traffic originating from the
Cisco Unified Computing System. The validated design utilized two uplinks from each FI to the leaf switches
for an aggregate bandwidth of 40GbE (4 x 10GbE). The number of links can easily be increased based on
customer data throughput requirements.
Out of Band Network Connectivity
Like many other compute stacks, FlexPod relies on an out of band management network to configure and
manage network, compute, and storage nodes. The management interfaces of the physical FlexPod
devices are physically connected to the out of band switches. Out of band network access is also required
to access vCenter, ESXi hosts, and some of the management virtual machines (VMs). To support true out
of band management connectivity, the Cisco UCS fabric interconnects are directly connected to the out of
band management switches, and a disjoint layer-2 configuration is used to keep the management network
path separate from the data network (Figure 20).
Figure 20 Out of Band Management Network
The disjoint layer-2 feature simplifies deployments within Cisco UCS end-host mode without the need to
turn on switch mode. The disjoint layer-2 functionality is enabled by defining groups of VLANs and
associating them with uplink ports. Since a server vNIC can only be associated with a single uplink port or
port-channel, two additional vNICs, associated with the out of band management uplinks, are deployed per
ESXi host. Figure 21 shows how the different VLAN groups are deployed and configured on the Cisco
Unified Computing System. Figure 29 covers the network interface design for the ESXi hosts.
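The VLAN-group-to-uplink pinning behavior can be sketched as follows. The group names, VLAN IDs, and uplink names are hypothetical, not the validated configuration:

```python
# Sketch of the disjoint layer-2 idea: VLANs are grouped, each group is bound
# to specific uplinks, and a vNIC is effectively pinned to the uplink group
# that carries its VLANs.
VLAN_GROUPS = {
    "oob-mgmt": {"vlans": {3171, 3172}, "uplinks": ["mgmt-uplink"]},
    "data":     {"vlans": {3173, 3174, 3175}, "uplinks": ["aci-port-channel"]},
}

def uplinks_for_vnic(vnic_vlans):
    """Resolve the uplinks for a vNIC; all of its VLANs must fall in one group."""
    for group in VLAN_GROUPS.values():
        if vnic_vlans <= group["vlans"]:
            return group["uplinks"]
    raise ValueError("vNIC VLANs span disjoint uplink groups")

print(uplinks_for_vnic({3171}))        # management vNIC -> ['mgmt-uplink']
print(uplinks_for_vnic({3173, 3174}))  # data vNIC -> ['aci-port-channel']
```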
Figure 21 Cisco UCS VLAN Group Configuration for Disjoint Layer-2
FCoE Connectivity
The FlexPod with ACI design optionally supports boot from SAN using FCoE by directly connecting the
NetApp controllers to the Cisco UCS Fabric Interconnects. The updated physical design changes are covered
in Figure 22.
In the FCoE design, zoning and related SAN configuration are performed in Cisco UCS Manager, and the
Fabric Interconnects provide the SAN-A and SAN-B separation. On the NetApp side, a Unified Target
Adapter is needed to provide the physical connectivity.
Figure 22 Boot from SAN using FCoE (Optional)
Cisco Unified Computing System I/O Component Selection
FlexPod allows customers to adjust the individual components of the system to meet their particular scale or
performance requirements. Selection of I/O components has a direct impact on scale and performance
characteristics when ordering the Cisco UCS components. Figure 23 illustrates the available backplane
connections in the Cisco UCS 5100 Series chassis. As shown, each of the two Fabric Extenders (I/O
modules) has four 10GBASE-KR (802.3ap) standardized Ethernet backplane paths available for connection
to each half-width blade slot. This means that each half-width slot has the potential to support up to 80Gb
of aggregate traffic depending on selection of the following:
Cisco Fabric Extender model (2204XP or 2208XP)
Modular LAN on Motherboard (mLOM) card
Mezzanine Slot card
Figure 23 Cisco UCS B-Series M3 Server Chassis Backplane Connections
Fabric Extender Modules (FEX)
Each Cisco UCS chassis is equipped with a pair of Cisco UCS Fabric Extenders. The fabric extenders come
in two models, the 2208XP and the 2204XP. The Cisco UCS 2208XP has eight 10 Gigabit Ethernet,
FCoE-capable ports that connect the blade chassis to the fabric interconnect. The Cisco UCS 2204XP has
four external ports with identical characteristics to connect to the fabric interconnect. Each Cisco UCS
2208XP has thirty-two 10 Gigabit Ethernet ports connected through the midplane to the eight half-width
slots (4 per slot) in the chassis, while the 2204XP has 16 such ports (2 per slot).
Figure 24 Cisco UCS Fabric Extender Model Comparison

Cisco UCS FEX Model  Network Facing Interfaces  Host Facing Interfaces
UCS 2204XP           4                          16
UCS 2208XP           8                          32
MLOM Virtual Interface Card (VIC)
The FlexPod solution is typically validated using the Cisco VIC 1240 or Cisco VIC 1280. The Cisco VIC 1240
is a 4-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard
(mLOM) card designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used
in combination with an optional Port Expander, the Cisco UCS VIC 1240 can be expanded to eight ports of
10 Gigabit Ethernet with the Cisco UCS 2208XP Fabric Extender.
Mezzanine Slot Card
Cisco VIC 1280 is an eight-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable mezzanine
card designed exclusively for Cisco UCS B-Series Blade Servers.
Server Traffic Aggregation
Selection of the FEX, VIC, and mezzanine cards plays a major role in determining the aggregate traffic
throughput to and from a server. Figure 23 shows an overview of backplane connectivity for both the I/O
modules and the Cisco VICs. The number of KR lanes indicates the 10GbE paths available to the chassis and
therefore to the blades. As shown in Figure 23, traffic aggregation differs depending on the models of I/O
modules and VICs. The Cisco FEX 2204XP enables 2 KR lanes per half-width blade slot while the 2208XP
enables all four. Similarly, the number of KR lanes varies based on the selection of the Cisco VIC 1240,
VIC 1240 with Port Expander, or VIC 1280.
Validated I/O Component Configurations
Two of the most commonly validated I/O component configurations in FlexPod designs are:
Cisco UCS B200M3 with VIC 1240 and FEX 2204
Cisco UCS B200M3 with VIC 1240 and FEX 2208
Figure 25 and Figure 26 show the connectivity for these two configurations.
Figure 25 Validated Backplane Configuration VIC 1240 with FEX 2204
In Figure 25, the FEX 2204XP enables 2 KR lanes to the half-width blade while the global discovery policy
dictates the formation of a fabric port channel. This results in a 20GbE connection to the blade server.
Figure 26 Validated Backplane Configuration VIC 1240 with FEX 2208
In Figure 26, the FEX 2208XP enables 8 KR lanes to the half-width blade while the global discovery policy
dictates the formation of a fabric port channel. Since the VIC 1240 is not using a Port Expander module,
this configuration results in a 40GbE connection to the blade server.
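The validated pairings and their resulting per-blade bandwidth can be summarized in a small lookup table; the values come from the Figure 25 and Figure 26 discussion and the Port Expander note in the VIC section:

```python
# Table-driven summary of the validated FEX/VIC backplane pairings and the
# aggregate bandwidth each delivers to a half-width blade.
AGGREGATE_GBPS = {
    ("FEX 2204XP", "VIC 1240"): 20,
    ("FEX 2208XP", "VIC 1240"): 40,
    ("FEX 2208XP", "VIC 1240 + Port Expander"): 80,
}

def blade_bandwidth(fex, vic):
    """Aggregate GbE delivered to a half-width blade for a FEX/VIC pairing."""
    return AGGREGATE_GBPS[(fex, vic)]

print(blade_bandwidth("FEX 2204XP", "VIC 1240"))  # 20
print(blade_bandwidth("FEX 2208XP", "VIC 1240"))  # 40
```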
Cisco Unified Computing System Chassis/FEX Discovery Policy
Cisco Unified Computing System can be configured to discover a chassis using Discrete Mode or the Port-
Channel mode (Figure 27). In Discrete Mode each FEX KR connection and therefore server connection is tied
or pinned to a network fabric connection homed to a port on the Fabric Interconnect. In the presence of a
failure on the external "link" all KR connections are disabled within the FEX I/O module. In Port-Channel
mode, the failure of a network fabric link allows for redistribution of flows across the remaining port channel
members. Port-Channel mode therefore is less disruptive to the fabric and is recommended in the FlexPod
designs.
Figure 27 Cisco UCS Chassis Discovery Policy Discrete Mode vs. Port Channel Mode
Cisco Unified Computing System QoS and Jumbo Frames
FlexPod accommodates a myriad of traffic types (vMotion, NFS, FCoE, control traffic, and so on) and is
capable of absorbing traffic spikes and protecting against traffic loss. Cisco UCS and Nexus QoS system
classes and policies deliver this functionality. In this validation effort, the FlexPod was configured to support
jumbo frames with an MTU size of 9000. Enabling jumbo frames allows the FlexPod environment to optimize
throughput between devices while simultaneously reducing the consumption of CPU resources.
When configuring jumbo frames, it is important to make sure MTU settings are applied uniformly across the
stack to prevent fragmentation and the resulting negative performance impact.
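A uniform-MTU check across the stack can be sketched as below; the device names and the mismatched value are invented for illustration:

```python
# Sketch of verifying that the jumbo-frame MTU is applied uniformly across
# the stack, as recommended above.
EXPECTED_MTU = 9000

def mtu_mismatches(device_mtus):
    """Return the devices whose configured MTU differs from the expected value."""
    return sorted(name for name, mtu in device_mtus.items() if mtu != EXPECTED_MTU)

stack = {
    "ucs-fabric-interconnect-a": 9000,
    "ucs-fabric-interconnect-b": 9000,
    "nexus-9396-leaf-1": 9000,
    "nexus-9396-leaf-2": 1500,   # misconfigured: would cause fragmentation
    "netapp-fas8040-a": 9000,
}
print(mtu_mismatches(stack))  # ['nexus-9396-leaf-2']
```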
Cisco Unified Computing System Cisco UCS C-Series Server Design
Fabric Interconnect Direct Attached Design
Cisco UCS Manager 2.2 now allows customers to connect Cisco UCS C-Series servers directly to the Cisco
UCS Fabric Interconnects without requiring a Fabric Extender (FEX). While Cisco UCS C-Series
connectivity using the Cisco Nexus 2232 FEX is still supported, and recommended for large-scale Cisco
UCS C-Series server deployments, the direct attached design allows customers to connect and manage
Cisco UCS C-Series servers on a smaller scale without buying additional hardware.
For detailed connectivity requirements, refer to: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c-series_integration/ucsm2-2/b_C-Series-Integration_UCSM2-2/b_C-Series-Integration_UCSM2-2_chapter_0110.html#reference_EF9772524CF3442EBA65813C2140EBE6
Fabric Interconnect Fabric Extender Attached Design
Figure 28 illustrates the connectivity of the Cisco UCS C-Series server into the Cisco UCS domain using
a Fabric Extender. Functionally, the 1 RU Nexus FEX 2232PP replaces the Cisco UCS 2204 or 2208 IOM
(located within the Cisco UCS 5108 blade chassis). Each 10GbE VIC port connects to Fabric A or B
through the FEX. The FEX and Fabric Interconnects form port channels automatically based on the
chassis discovery policy, providing link resiliency to the Cisco UCS C-Series server. This is identical to
the behavior of the IOM to Fabric Interconnect connectivity. Logically, the virtual circuits formed within
the Cisco UCS domain are consistent between B-Series and C-Series deployment models, and the virtual
constructs formed at the vSphere layer are unaware of the platform in use.
Figure 28 Cisco UCS C-Series with VIC 1225
Cisco UCS Server Configuration for vSphere
The ESXi nodes consist of Cisco UCS B200-M3 series blades with Cisco 1240 VIC or Cisco UCS C220-M3
rack mount servers with Cisco 1225 VIC. These nodes are allocated to a VMware High Availability (HA)
cluster supporting infrastructure services such as vSphere Virtual Center, Microsoft Active Directory and
NetApp Virtual Storage Console (VSC).
At the server level, the Cisco 1225/1240 VIC presents multiple virtual PCIe devices to the ESXi node, and the
vSphere environment identifies these interfaces as vmnics. The ESXi operating system is unaware of the fact
that the NICs are virtual adapters. In the FlexPod design, six vNICs are created and utilized as follows:
Two vNICs carry out of band management traffic
Two vNICs carry data traffic including storage traffic
One vNIC carries iSCSI-A traffic (SAN A)
One vNIC carries iSCSI-B traffic (SAN B)
These vNICs are pinned to different Fabric Interconnect uplink interfaces based on the VLANs with which
they are associated.
Figure 29 ESXi Server vNICs and vmnics
Figure 29 covers the ESXi server design, showing both the virtual interfaces and the VMkernel ports. All of
the Ethernet adapters, vmnic0 through vmnic5, are virtual NICs created using the Cisco UCS service profile.
Cisco Nexus 9000
In the current validated design, Cisco Nexus 9508 spine and Cisco Nexus 9396 leaf switches provide an
ACI-based Ethernet switching fabric for communication between the virtual machine and bare metal
compute, NFS and iSCSI based storage, and the existing traditional enterprise networks. As in previous
versions of FlexPod, virtual port channels play an important role in providing the necessary connectivity.
Virtual Port Channel (vPC) Configuration
A virtual PortChannel (vPC) allows a device's Ethernet links that are physically connected to two different
Cisco Nexus 9000 Series devices to appear as a single PortChannel. In a switching environment, vPC
provides the following benefits:
Allows a single device to use a PortChannel across two upstream devices
Eliminates Spanning Tree Protocol blocked ports and uses all available uplink bandwidth
Provides a loop-free topology
Provides fast convergence if either one of the physical links or a device fails
Helps ensure high availability of the overall FlexPod system
Unlike an NX-OS based design, vPC configuration in ACI does not require a vPC peer-link to be explicitly
connected and configured between the peer devices (leaf switches). The peer communication is carried
over the 40G connections through the spines.
Compute and Storage Connectivity
Cisco UCS Fabric Interconnects and NetApp storage systems are connected to the Nexus 9396 switches
using virtual PortChannels (vPCs), as shown in Figure 30. The PortChannels connecting NetApp controllers to the ACI fabric are
configured with three types of VLANs:
iSCSI VLANs to provide boot LUN and direct attached storage access
NFS VLANs to access ESXi server datastores used for hosting VMs
Management VLANs to provide access to tenant Storage Virtual Machines (SVMs)
Similarly, the PortChannels connecting UCS Fabric Interconnects to the ACI fabric are also configured with
three types of VLANs:
iSCSI VLANs to provide ESXi hosts access to boot LUNs
NFS VLANs to access infrastructure datastores to be used by vSphere environment to host
infrastructure services
A pool of VLANs associated with ACI Virtual Machine Manager domain. VLANs are dynamically
allocated from this pool by APIC to newly created end point groups (EPGs).
These VLAN configurations are covered in detail in the next sub-sections.
Figure 30 Compute and Storage Connectivity to Cisco Nexus 9000 ACI
VLAN Configuration for Cisco Unified Computing System
When configuring Cisco Unified Computing System for Cisco Nexus 9000 connectivity, iSCSI VLANs
associated with boot from SAN configuration and the NFS VLANs used by the infrastructure ESXi hosts are
defined on the Cisco UCS Fabric Interconnects. In Figure 31, VLANs 3802, 3804 and 3805 are the NFS,
iSCSI-A and iSCSI-B VLANs. On ESXi hosts, these VLANs are added to the appropriate virtual NICs (vNICs)
such that iSCSI-A exists only on FI-A and iSCSI-B exists only on FI-B. The NFS VLAN, on the other hand, is
enabled on both FIs and is therefore added to both uplink vNICs for redundancy.
In ACI based configuration, Cisco APIC connects to VMware vCenter and automatically configures port-
groups on the VMware distributed switch based on the user-defined End Point Group (EPG) configuration.
These port-groups are associated with a dynamically assigned VLAN from a pre-defined pool in Cisco APIC.
Since Cisco APIC does not configure UCS FI, the range of VLANs from this pool is also configured on the
host uplink vNIC interfaces. In Figure 31, VLAN 1101-1200 is part of the APIC defined pool.
In future releases of FlexPod with ACI solution, Cisco UCS Director (UCSD) will be incorporated into the
solution and will add the appropriate VLANs to the vNIC interfaces on demand. Defining the complete
range will be unnecessary.
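As a hedged sketch of how such a dynamic pool is expressed in the APIC object model (fvnsVlanInstP and fvnsEncapBlk are published APIC REST classes; the pool name here is illustrative), the payload POSTed to /api/mo/uni.json could look like:

```python
def build_vlan_pool(name, start, end):
    """APIC REST payload for a dynamic VLAN pool (fvnsVlanInstP) with one
    encap block (fvnsEncapBlk), POSTed to /api/mo/uni.json on the APIC."""
    return {
        "fvnsVlanInstP": {
            "attributes": {"name": name, "allocMode": "dynamic"},
            "children": [{
                "fvnsEncapBlk": {
                    "attributes": {"from": f"vlan-{start}", "to": f"vlan-{end}"}
                }
            }],
        }
    }

# Pool matching the VLAN 1101-1200 range used in this design; the name is illustrative.
payload = build_vlan_pool("VMM-vlan-pool", 1101, 1200)
```

APIC then draws a VLAN from this pool each time a new EPG is pushed to the VMM domain.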
Figure 31 VLAN Configuration for Cisco UCS Connectivity
VLAN Configuration for NetApp
When configuring NetApp controllers for Cisco Nexus 9000 connectivity, iSCSI VLANs used for boot from
SAN, NFS VLANs for the ESXi hosts and SVM management LIFs are defined on the NetApp controllers. In
Figure 32, VLANs 3902, 3904 and 3905 are the NFS, iSCSI-A and iSCSI-B VLANs for a tenant. VLAN 3910
is the SVM management LIF.
In ACI, the NFS, iSCSI, and management VLAN numbers used on Cisco Unified Computing System and on
NetApp controllers are different (38xx on Cisco Unified Computing System and 39xx on NetApp) because
of the way EPGs and contracts are defined. The ACI fabric provides the necessary VLAN translation to
enable communication between the VMkernel and the LIF EPGs. More discussion of EPGs and VLAN
mapping can be found in the ACI design section.
Figure 32 VLAN Configuration for NetApp Connectivity
Application Centric Infrastructure (ACI) Design
The Cisco ACI fabric consists of discrete components that operate as routers and switches but are
provisioned and monitored as a single entity. These components and the integrated management allow ACI
to provide advanced traffic optimization, security, and telemetry functions for both virtual and physical
workloads. Cisco ACI fabric is deployed in a leaf-spine architecture. The network provisioning in ACI based
FlexPod is quite different from traditional FlexPod and requires understanding of some of the core concepts
of ACI.
ACI Components
The following are the ACI components:
Leaf switches: The ACI leaf provides physical server and storage connectivity as well as enforces
ACI policies. A leaf is typically a fixed form-factor switch such as the Cisco Nexus N9K-C9396PX,
the N9K-C9396TX and N9K-C93128TX switches. Leaf switches also provide a connection point to
the existing enterprise or service provider infrastructure. The leaf switches provide both 10G and
40G Ethernet ports for connectivity.
Spine switches: The ACI spine provides the mapping database function and connectivity among leaf
switches. A spine typically can be either the Cisco Nexus® N9K-C9508 switch equipped with N9K-
X9736PQ line cards or fixed form-factor switches such as the Cisco Nexus N9K-C9336PQ ACI spine
switch. Spine switches provide high-density 40 Gigabit Ethernet connectivity between leaf switches.
Tenant: A tenant (Figure 33) is a logical container or a folder for application policies. This container
can represent an actual tenant, an organization, an application or can just be used for the
convenience of organizing information. A tenant represents a unit of isolation from a policy
perspective. All application configurations in Cisco ACI are part of a tenant. Within a tenant, you
define one or more Layer 3 networks (VRF instances), one or more bridge domains per network, and
EPGs to divide the bridge domains.
Application Profile: Modern applications contain multiple components. For example, an e-commerce
application could require a web server, a database server, data located in a storage area network,
and access to outside resources that enable financial transactions. An application profile (Figure 33)
models application requirements and contains as many (or as few) End Point Groups (EPGs) as
necessary that are logically related to providing the capabilities of an application.
Bridge Domain: A bridge domain represents a Layer 2 forwarding construct within the fabric. One or
more EPGs can be associated with a bridge domain, and a bridge domain can have one or more
subnets associated with it. One or more bridge domains together form a tenant network.
End Point Group (EPG): An End Point Group (EPG) is a collection of physical and/or virtual end points
that require common services and policies. An End Point Group example is a set of servers or
storage LIFs on a common VLAN providing a common application function or service. While the
scope of an EPG definition is much wider, in the simplest terms an EPG can be defined on a per
VLAN segment basis where all the servers or VMs on a common LAN segment become part of the
same EPG.
Contracts: A service contract can exist between two or more participating peer entities, such as two
applications running and talking to each other behind different endpoint groups, or between
providers and consumers, such as a DNS contract between a provider entity and a consumer entity.
Contracts utilize filters to limit the traffic between the applications to certain ports and protocols.
Service Graph: Cisco ACI treats services as an integral part of an application. Any services that are
required are treated as a service graph that is instantiated on the ACI fabric from the APIC. Users
define the service for the application, while service graphs identify the set of network or service
functions that are needed by the application, and represent each function as a node. When the graph
is configured in the APIC, the APIC automatically configures the services according to the service
function requirements that are specified in the service graph. The APIC also automatically configures
the network according to the needs of the service function that is specified in the service graph,
which does not require any change in the service device. A service graph is represented as two or
more tiers of an application with the appropriate service function inserted between.
Figure 33 covers the relationships between the ACI elements defined above. As shown in the figure, a
Tenant can contain one or more application profiles and an application profile can contain one or
more end point groups. The devices in the same EPG can talk to each other without any special
configuration. Devices in different EPGs can talk to each other using contracts and associated filters.
A tenant can also contain one or more bridge domains and multiple application profiles and end point
groups can utilize the same bridge domain.
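The containment relationships in Figure 33 map directly onto the APIC object model. The sketch below builds a minimal tenant tree using the published class names (fvTenant, fvBD, fvAp, fvAEPg, fvRsBd); the specific tenant, bridge domain, and EPG names are taken from this design for illustration only:

```python
def build_tenant(name, bd_name, ap_name, epg_names):
    """Minimal APIC tenant tree: fvTenant contains fvBD and fvAp; the fvAp
    contains fvAEPg children, each tied to the bridge domain via fvRsBd."""
    epgs = [{
        "fvAEPg": {
            "attributes": {"name": epg},
            "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}}],
        }
    } for epg in epg_names]
    return {
        "fvTenant": {
            "attributes": {"name": name},
            "children": [
                {"fvBD": {"attributes": {"name": bd_name}}},
                {"fvAp": {"attributes": {"name": ap_name}, "children": epgs}},
            ],
        }
    }

tree = build_tenant("Exchange", "BD_Internal", "Ex-Application",
                    ["vm-mail", "vm-dag", "vm-infra"])
```

Note how multiple EPGs in the same application profile can reference the same bridge domain, exactly as the figure shows.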
Figure 33 ACI Relationship Between Major Components
End Point Group (EPG) Mapping in a FlexPod Environment
In the FlexPod with ACI infrastructure, traffic is associated with an EPG in one of two ways:
Statically mapping a VLAN to an EPG (Figure 34)
Associating an EPG with a Virtual Machine Manager (VMM) domain and allocating a VLAN
dynamically from a pre-defined pool in APIC (Figure 35)
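In the APIC object model, the first (static) method is expressed as an fvRsPathAtt child of the EPG that names a leaf path and an 802.1Q encap. A minimal sketch, assuming a vPC path with illustrative node IDs and interface policy-group name:

```python
def static_binding(pod, nodes, vpc_name, vlan_id):
    """fvRsPathAtt child of an EPG: statically maps an 802.1Q VLAN on a
    vPC path (the "protpaths" tDn form) to that EPG."""
    return {
        "fvRsPathAtt": {
            "attributes": {
                "tDn": (f"topology/pod-{pod}/protpaths-{nodes[0]}-{nodes[1]}"
                        f"/pathep-[{vpc_name}]"),
                "encap": f"vlan-{vlan_id}",
                "mode": "regular",  # tagged (trunk) mode
            }
        }
    }

# NFS LIF VLAN from this design, on an illustrative leaf pair / vPC policy group.
binding = static_binding(1, (101, 102), "NetApp-A_PolGrp", 3902)
```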
Figure 34 EPG Static Binding to a Path
Figure 35 EPG Virtual Machine Manager Domain Binding
The first method of statically mapping a VLAN is useful for:
Mapping storage VLANs on NetApp Controller to storage related EPGs. These storage EPGs become
the storage "providers" and are accessed by the VM and ESXi host EPGs through contracts. This
mapping can be seen in Figure 52.
Connecting the ACI environment to an existing Layer 2 domain, for example, an existing
management segment. A VLAN on an out-of-band management switch is statically mapped to a
management EPG in the common tenant to provide management services to VMs across all the
tenants
Mapping iSCSI and NFS datastore VLANs on Cisco Unified Computing System to EPGs that
consume the NetApp storage EPGs defined above. Figure 52 illustrates this mapping as well.
The second method of dynamically mapping a VLAN to an EPG by defining a VMM domain is used for:
Deploying VMs in a multi-tier Application as shown in Figure 39
Deploying iSCSI and NFS related storage access for application VMs (Figure 39)
Virtual Machine Networking
The Cisco APIC automates the networking for all virtual and physical workloads including access policies and
L4-L7 services. When connected to the VMware vCenter, APIC controls the configuration of the VM
switching as discussed below.
Virtual Machine Manager (VMM) Domains
For a VMware vCenter, all the networking functionalities of the Virtual Distributed Switch (VDS) and port
groups are performed using the APIC. The only function that a vCenter administrator needs to perform on
the vCenter is to place the vNICs into the appropriate port group(s) that were created by the APIC. The APIC
communicates with vCenter to publish network policies that are applied to the virtual workloads. An
EPG can span multiple VMM domains, and a VMM domain can contain multiple EPGs.
vSwitch and Virtual Distributed Switch (VDS)
For a FlexPod deployment, while application deployment utilizes APIC controlled VDS, some of the core
functionality such as out of band management access for an ESXi and vCenter server, iSCSI, and NFS
access for boot LUNs and VM datastores utilizes vSphere vSwitches. The resulting distribution of VMkernel
ports and VM port-groups on an ESXi server is shown in Figure 36. In the UCS service profiles, storage,
management and VM VLANs are defined on the vNICs tied to different virtual switches. On vSphere
infrastructure, VMkernel ports and port-groups utilize these VLANs to forward the traffic correctly.
Figure 36 Application Server Design
If the management infrastructure, especially the vCenter, vCenter database, and AD servers, is hosted on the
FlexPod infrastructure, a separate set of service profiles is recommended for supporting infrastructure
services. These infrastructure ESXi hosts will be configured with two additional vNICs tied to a dedicated
storage (NFS) vSwitch as shown in Figure 38. This updated server design helps maintain access to the
infrastructure services. This access remains on an out of band management infrastructure, independent of
the core ACI fabric.
Figure 38 Infrastructure Server Design
Onboarding Infrastructure Services
In an ACI fabric, all the applications, services and connectivity between various elements are defined within
the confines of tenants, application profiles, bridge domains and EPGs. The ACI constructs for core
infrastructure services are as defined below. Figure 52 provides an overview of the connectivity and
relationship between various ACI elements.
Onboarding Microsoft Exchange on FlexPod ACI Infrastructure
The ACI constructs for an application deployment involve both providing storage connectivity for the
application VMs as well as defining communication between the various tiers of the application. Figure 39
provides an overview of the connectivity and relationship between various ACI elements for deployment of
Microsoft Exchange.
Some of the key highlights for deploying Exchange on the new infrastructure are the following:
Three application profiles, NFS, iSCSI, and Ex-Application are utilized to deploy Exchange application
and to provide access to storage.
ESXi servers will map an NFS datastore on NetApp controller to host Exchange VMs. APIC will deploy
a port-group on the virtual distributed switch to host VMkernel ports for all ESXi hosts.
In order to provide Exchange VMs direct access to storage LUNs, APIC will deploy two iSCSI port-
groups for redundant iSCSI paths. VMs which need iSCSI access to storage will be configured with
additional NICs in the appropriate iSCSI port-groups.
NetApp SnapManager and SnapDrive applications require management access to the storage
virtual machine. A fourth application profile, SVM-Mgmt, provides this access by statically mapping
the SVM management VLANs to a unique EPG called SVM-Mgmt.
Four unique bridge domains are needed to host SVM-Mgmt, iSCSI-A, iSCSI-B and NFS traffic. The
actual VM traffic shares the bridge domain with NFS traffic.
Figure 39 Design Details of the 3-Tier (Exchange) Application
Exchange Logical Topology
The Exchange application is deployed using three core and one external-facing end point groups as shown
in Figure 40. By deploying VMs in different EPGs, communication between the VMs is controlled using the
following two basic rules:
1. VMs that are part of the same EPG can communicate with each other freely.
2. VMs that are part of different EPGs can only communicate with each other if a contract is defined
between the EPGs to allow the communication.
EPG MAIL provides connectivity between the network load balancer and the CAS/Mailbox role
running in each Exchange VM. The EPG MAIL also has a contract with EPG Infrastructure.
EPG Infrastructure provides connectivity to common resources like Active Directory Services and
Domain Name Services.
EPG DAG provides connectivity to other Exchange server VMs and facilitates a dedicated DAG
replication network.
Figure 40 Exchange Topology
Connecting the ACI leaf switches to the Core routers provides the connectivity between the application
servers and the clients. OSPF routing is enabled between the Core router and ACI leaf on a per-tenant (per-
VRF) basis. To keep the client traffic separate from the internal application communication, a unique bridge
domain (BD_Intranet) is defined under the Exchange Tenant. The application load balancer is deployed to
provide the connectivity between the two bridge domains as shown in Figure 40. The logical topology and
the components are further explained in the following sections.
Microsoft Exchange as Tenant on ACI Infrastructure
Tenant: Exchange
To host the different Exchange application profiles and end point groups, a tenant named "Exchange" is
configured.
Figure 41 APIC - Exchange Tenant
Application Profile and EPGs
As previously discussed, the Exchange tenant comprises three application profiles to support the Exchange
deployment: "NFS", "iSCSI" and "Ex-Application".
Application Profile "NFS" comprises two EPGs, "lif-NFS" and "vmk-NFS", as shown in Figure 42:
EPG "lif-NFS" statically maps the VLAN used for defining the NFS LIF on the Exchange SVM (VLAN
3922).
EPG "vmk-NFS" is attached to the VMM domain to provide an NFS port-group. This port-group is
utilized by the ESXi host NFS-specific VMkernel ports for direct access to NFS datastores.
Figure 42 Exchange Application Profile NFS
Application Profile "iSCSI" comprises four EPGs, "lif-iSCSI-a", "lif-iSCSI-b", "vmk-iSCSI-a" and "vmk-
iSCSI-b":
EPGs "lif-iSCSI-a" and "lif-iSCSI-b" statically map the VLANs used for accessing the iSCSI LIFs on
the Exchange SVM (VLANs 3924 and 3925).
EPGs "vmk-iSCSI-a" and "vmk-iSCSI-b" are tied to the VMM domain to provide iSCSI port-groups.
These port-groups are utilized by ESXi VMkernel ports as well as application VMs for direct access
to storage LUNs.
Figure 43 Exchange Application Profile iSCSI
Application Profile "Ex-Application" comprises three EPGs, "vm-mail", "vm-dag" and "vm-infra":
EPG "vm-mail" is attached to the VMM domain to provide a port-group for hosting the Exchange
mail servers.
EPG "vm-dag" is attached to the VMM domain to provide a port-group for Exchange Database
Availability Group (DAG) replication traffic.
EPG "vm-infra" is attached to the VMM domain to provide a port-group for hosting common
Exchange services (Active Directory, etc.).
Figure 44 Exchange Application Profile Exchange-Application
Bridge Domains
The Exchange tenant comprises five bridge domains: BD_iSCSI-a, BD_iSCSI-b, BD_Internal,
BD_External and BD_SVM-Mgmt:
BD_iSCSI-a is configured to host iSCSI-A EPGs
BD_iSCSI-b is configured to host iSCSI-B EPGs
BD_Internal is configured to host all the Exchange Application and NFS EPGs
BD_External is configured to connect application to external infrastructure
BD_SVM-Mgmt is configured to host NetApp Exchange SVM management EPG
Figure 45 Exchange - Bridge Domains
Contracts
In order to enable communication between the various application tiers, the VMkernel ports and the NetApp
controller LIFs, and from the ACI infrastructure to the rest of the enterprise, contracts are defined between
the appropriate EPGs. A contract is provided by the source EPG and consumed by the destination EPG.
The traffic between the contract EPGs is controlled using filters. Figure 46 through Figure 48 show details
about the contracts defined in the tenant to enable the application and storage communication.
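As a hedged sketch of how a contract and its filter are expressed in the APIC object model (vzBrCP, vzSubj, vzRsSubjFiltAtt, vzFilter and vzEntry are published REST classes; the contract and filter names here are illustrative), a contract limiting traffic to the NFS port could be built as:

```python
def build_contract(name, filter_name):
    """APIC contract (vzBrCP) with one subject (vzSubj) that references a
    filter by name (vzRsSubjFiltAtt)."""
    return {
        "vzBrCP": {
            "attributes": {"name": name},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": f"{name}-subj"},
                    "children": [{
                        "vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": filter_name}}
                    }],
                }
            }],
        }
    }

def build_filter(name, port):
    """Filter (vzFilter) with one entry (vzEntry) permitting TCP traffic
    to a single destination port."""
    return {
        "vzFilter": {
            "attributes": {"name": name},
            "children": [{
                "vzEntry": {
                    "attributes": {
                        "name": f"tcp-{port}",
                        "etherT": "ip",
                        "prot": "tcp",
                        "dFromPort": str(port),
                        "dToPort": str(port),
                    }
                }
            }],
        }
    }

contract = build_contract("Allow-NFS", "nfs-ports")
nfs_filter = build_filter("nfs-ports", 2049)  # TCP 2049 = NFS
```

The "lif" EPG would provide the contract and the "vmk" EPG would consume it, as in the figures that follow.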
NFS Traffic:
EPG nfs-vmk consumes the contract provided by EPG lif-NFS
Figure 46 Exchange - NFS Contracts
iSCSI Traffic:
EPG vm-iSCSI-a consumes the contract provided by EPG lif-iscsi-a
EPG vm-iSCSI-b consumes the contract provided by EPG lif-iscsi-b
Figure 47 Exchange - iSCSI Contracts
Application Traffic:
EPG vm-mail consumes contracts provided by SVM-Mgmt and Common-Mgmt
EPG vm-mail consumes contract provided by EPG vm-infra
EPG vm-mail provides a contract to enable communication outside ACI fabric
EPG vm-dag does not allow any communication from any other tenant
Figure 48 Exchange - Application Contracts
VMware Design
The Exchange application EPGs discussed above are automatically associated with the vCenter VMM
domain. The VDS and the port-groups created for the EPGs, as well as the ESXi hosts, are visible to the VM
admin in vCenter, as shown in Figure 49.
Figure 49 Exchange - VMware VDS Port-Groups
When the application admin deploys the applications in the appropriate port-groups, the policies and
contracts defined in the APIC are automatically applied to all VM-to-VM communication.
Common Services and Storage Management
Accessing Common Services
In almost all application deployments, application servers need to access common services such as
Active Directory (AD), Domain Name Services (DNS), management servers, monitoring software, and so on.
In ACI, the common tenant is visible to the rest of the tenants: in addition to the locally defined
contracts, all the tenants in the fabric can consume the contracts defined in the common tenant.
In the FlexPod environment, access to common services is provided as shown in Figure 50. To provide this
access:
A common services segment is defined where common services VMs connect using a secondary
NIC. A separate services LAN segment ensures that the access of the tenant VMs is limited to only
common services VMs
The EPG for common services segment, Management-Access is defined in the common tenant
The tenant VMs access the common management segment by defining contracts between the
application EPGs and the Management-Access EPG
The contract filters only allow specific services related ports to keep the environment secure
Figure 50 Common Services and Storage Management
Accessing SVM Management
NetApp SnapDrive requires direct connectivity from application VMs to the management LIF of the Exchange
SVM. To provide this connectivity securely, a separate VLAN is dedicated to be used by the SVM
management LIF. This VLAN is then statically mapped to an EPG in the application tenant as shown in Figure
50. Application VMs can access this management interface by defining contracts.
If the application tenant (such as Exchange) contains mappings for any other LIFs (iSCSI, NFS, etc.), a
separate bridge domain is required for SVM management LIF because of the overlapping LIF MAC
addresses.
Connectivity to Existing Infrastructure
In order to connect the ACI fabric to existing infrastructure, leaf nodes are connected to a pair of core
infrastructure routers/switches. In this CVD, Cisco Nexus 7000 switches were configured as the core routers. Figure 51
shows the connectivity details including tenant virtual routing and forwarding (VRF) mappings.
Figure 51 ACI Connectivity to Existing Infrastructure
Some of the design principles followed for external connectivity are:
Both Leaf switches are connected to both Nexus 7000 switches
A unique private network and associated bridge domain is defined; this network (VRF Exch-Intranet)
provides connectivity to external infrastructure.
Unique VLANs are configured to provide per-tenant connectivity between the ACI fabric and the
infrastructure core. VLANs 111-114 are configured for the Exchange tenants
OSPF is configured between the tenant and the core router; on the ACI fabric, per-VRF OSPF
instances are configured to keep Exchange traffic segregated from the rest of the tenants
The core router is configured to advertise a default route to the Exchange tenant, while the ACI fabric
is configured to advertise only the public-facing subnets
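The per-tenant external connectivity described above corresponds to an L3Out (l3extOut) object in APIC. A minimal sketch using the published class names (l3extOut, l3extRsEctx, ospfExtP, l3extInstP), with node and interface profiles omitted for brevity and the VRF, area, and EPG names being illustrative:

```python
def build_l3out(name, vrf_name, area_id, ext_epg):
    """Per-tenant L3Out sketch: tie the L3Out to the tenant VRF (l3extRsEctx),
    enable OSPF on it (ospfExtP), and define the external EPG (l3extInstP)
    that represents the outside networks. Node/interface profiles omitted."""
    return {
        "l3extOut": {
            "attributes": {"name": name},
            "children": [
                {"l3extRsEctx": {"attributes": {"tnFvCtxName": vrf_name}}},
                {"ospfExtP": {"attributes": {"areaId": area_id, "areaType": "regular"}}},
                {"l3extInstP": {"attributes": {"name": ext_epg}}},
            ],
        }
    }

# VRF name from this design; L3Out/EPG names and OSPF area are illustrative.
l3out = build_l3out("Exch-L3Out", "Exch-Intranet", "0.0.0.10", "ext-clients")
```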
Exchange - ACI Design Recap
In a FlexPod with ACI environment, when Exchange is deployed based on the design covered so far, the
resulting application tenant configuration can be summarized as follows:
The Exchange tenant is configured with (at least) two separate bridge domains: a bridge domain for
internal application communication and a bridge domain for external application connectivity.
Application tiers communicate internally using contracts between various EPGs.
External communication for the application is provided by defining an external facing private network
(VRF) and configuring OSPF routing between this network and core router.
A load balancer provides connectivity between the intranet clients and Exchange server bridge
domains.
Figure 52 Design Details of the Foundation (Infrastructure) Tenant
Tenant: To enable the compute to storage connectivity for accessing boot LUNs (iSCSI) and NFS
datastores, a tenant named "Foundation" is configured.
Application Profile and EPGs: The foundation tenant comprises two application profiles, "iSCSI" and
"NFS".
Application Profile "NFS" comprises three EPGs, "lif-NFS", "infra-NFS" and "vmk-NFS" as shown
in Figure 53.
— EPG "lif-NFS" statically maps the VLAN associated with NFS LIF interface on the NetApp
Infrastructure SVM (VLAN 3902). This EPG "provides" storage access to the compute environment.
— EPG "infra-NFS" statically maps the VLAN associated with NFS VMkernel port (Figure 53) for the
infrastructure ESXi server (VLAN 3802*). This EPG "consumes" storage system access provided by
EPG "lif-NFS"
— EPG "vmk-NFS" is attached to the VMM domain to provide an NFS port-group in the vSphere
environment. This port-group is utilized both by the application ESXi servers and by VMs that
require direct access to NFS datastores. EPG "vmk-NFS" "consumes" storage access provided by
EPG "lif-NFS"
Different VLANs (3802 and 3902) are configured for these EPGs. By utilizing contracts, the ACI fabric allows
the necessary connectivity between the ESXi hosts and the NetApp controllers; the different VLAN IDs
within the ACI fabric do not matter.
Similar configuration (with different VLANs) is utilized for iSCSI connectivity as well.
Figure 53 Foundation Tenant Application Profile NFS
Application Profile "iSCSI" comprises four EPGs, "lif-iSCSI-a", "lif-iSCSI-b", "vmk-iSCSI-a" and "vmk-
iSCSI-b", as shown in Figure 54.
EPGs "lif-iSCSI-a" and "lif-iSCSI-b" statically map the VLANs associated with the iSCSI-A and iSCSI-
B LIF interfaces on the NetApp Infrastructure SVM (VLANs 3904 and 3905). These EPGs "provide"
boot LUN access to the ESXi environment.
EPGs "vmk-iSCSI-a" and "vmk-iSCSI-b" statically map the VLANs associated with the iSCSI VMkernel
ports on the ESXi servers (VLANs 3804 and 3805). These EPGs "consume" boot LUN access
provided by EPGs "lif-iSCSI-a" and "lif-iSCSI-b".
Figure 54 Foundation Tenant Application Profile iSCSI
Bridge Domains: While all the EPGs in a tenant can theoretically use the same bridge domain, there
are some additional considerations for determining the number of bridge domains required. A bridge
domain in ACI is equivalent to a broadcast layer-2 domain in traditional Ethernet networks. When the
bridge domain contains endpoints belonging to different VLANs (outside of ACI fabric), a unique
MAC address is required for every unique endpoint.
NetApp controllers use the same MAC address for an interface group and all the VLANs defined for that
interface group. This results in all the LIFs sharing a single MAC address. As shown in Figure 55, the
"Foundation" tenant connects to two iSCSI LIFs and one NFS LIF for the infrastructure SVM. Since these
three LIFs share the same MAC address, a separate BD is required for each type of LIF. The "Foundation"
tenant comprises three bridge domains: BD_iSCSI-a, BD_iSCSI-b and BD_Internal:
BD_iSCSI-a is configured to host EPGs configured for iSCSI-A traffic
BD_iSCSI-b is configured to host EPGs configured for iSCSI-B traffic
BD_Internal is configured to host EPGs configured for NFS as well as EPGs related to application
traffic
Figure 55 Foundation Tenant Bridge Domains
As of the current release, the ACI fabric allows up to 8 IP addresses to be mapped to a single MAC
address. In a FlexPod environment, this is a useful consideration when multiple LIFs share the same interface
VLAN (ifgroup-VLAN) on the NetApp controller. When designing the storage architecture, it is important to
consider the scale of the proposed tenant and the number of LIFs required.
Contracts: In order to enable communication between VMkernel ports and the NetApp controller
LIFs, contracts are defined between the appropriate EPGs.
L4-L7 Services: F5 BIG-IP LTM is utilized to provide application delivery controller (ADC) functionality
for the Microsoft Exchange workload, offering intranet clients load balancing to the Client Access
Server role of the Microsoft Exchange servers.
Exchange Server Sizing
Exchange servers need to be sized for the projected workloads and service level agreements. Whether
Exchange is running in virtual machines or on physical servers, the Microsoft Exchange 2013 Server
Requirements Calculator is an essential tool for planning and sizing the Exchange deployment. This
Exchange sizing tool is available at: https://gallery.technet.microsoft.com/Exchange-2013-Server-
Role-c81ac1cf
The deployment in this document is designed to support 5000 2 GB mailboxes with a 150 messages per
mailbox per day action profile. Each Exchange server will be deployed in a virtual machine running the
combined Mailbox and CAS roles. The Exchange servers are located in a single data center and have a
requirement for high availability. To address this requirement, two copies of every mailbox database will be
deployed.
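The raw mailbox capacity implied by these numbers can be sanity-checked with simple arithmetic; this ignores deleted-item retention, transaction logs, indexing, and free-space overheads that the calculator accounts for:

```python
mailboxes = 5000
mailbox_limit_gb = 2.0   # 2048 MB mailbox size limit
ha_copies = 2            # two copies of every mailbox database

per_copy_tb = mailboxes * mailbox_limit_gb / 1024   # capacity for one database copy, in TB
total_tb = per_copy_tb * ha_copies                  # raw capacity across both copies
```

This puts the floor at roughly 9.8 TB per copy and about 19.5 TB across both copies, before the calculator's overheads are added.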
Exchange 2013 Server Requirements Calculator Inputs
The following inputs are configured in the Exchange 2013 Server Requirements for this deployment.
Table 2 Requirements for Exchange Environment Configuration
Exchange Environment Configuration Value
Global catalog server architecture 64-bit
Server multi-role configuration (MBX+CAS) Yes
Server role virtualization Yes
High availability deployment Yes
Number of mailbox servers hosting active mailboxes/DAG 4
Number of database availability groups 1
Table 3 Requirements for Mailbox Database Copy Configuration
Mailbox Database Copy Configuration Value
Total number of HA database copy instances (includes active copy) within DAG 2
Total number of lagged database copy instances within DAG 0
Table 4 Requirements for Tier-1 User Mailbox Configuration
Tier-1 User Mailbox Configuration Value
Total number of Tier-1 user mailboxes/environment 5000
Projected mailbox number growth percentage 0%
Total send/ receive capability/ mailbox/ day 150 messages
Average message size (KB) 75
Mailbox size limit (MB) 2048
Personal archive mailbox size limit (MB) 0
Deleted item retention window (days) 14
Single item recovery Enabled
Calendar version storage Enabled
Multiplication factor user percentage 100%
IOPS multiplication factor 1.00
Megacycles multiplication factor 1.00
Desktop search engines enabled (for online mode clients) No
Predict IOPS value Yes
Cisco UCS B200 M3 host servers with dual Intel Xeon E5-2660 v2 processors will be running the Exchange
virtual machines. The SPECint 2006 Rate value is 745 for this server and processor combination.
Table 5 SPECint Test Details
System Results Baseline #Cores #Chips
Cisco UCS B200 M3 (Intel Xeon E5-2660 v2, 2.20 GHz) 745 720 20 2
Each Exchange server virtual machine is allocated 12 vCPUs in order to allow other virtual machines to also
run on the same host. CPU oversubscription is not recommended for Exchange server virtual machines. This
means that all virtual machines that run on the same host as the Exchange virtual machines must not
exceed a 1-to-1 vCPU-to-CPU-core allocation.
Table 6 Requirements for Server Configuration
Server Configuration Processor Cores/Server SPECint 2006 Rate Value
Mailbox server guest machines 12 446
The default 10% hypervisor overhead factor is used to account for the hypervisor overhead.
Table 7 Requirements for Processor Configuration
Processor Configuration Value
Hypervisor CPU adjustment factor 10%
The maximum database size is set to 2TB per Microsoft and storage vendor guidance.
Table 8 Requirements for Database Configuration
Database Configuration Value
Maximum database size configuration Custom
Maximum database size (GB) 2048
Automatically calculate number of unique databases/DAG Yes
Calculate number of unique databases/DAG for symmetrical distribution No
Exchange 2013 Server Requirements Calculator Output
The Requirements tab shows the following parameters for this deployment.
Two Active Directory Domain Controller CPU cores are required for this deployment. The Exchange
calculator derives this value from the parameters entered on the input tab.
Multiple domain controllers must be deployed for high availability.
Table 9 Processor Core Ratio Requirements
Processor Core Ratio Requirements /Primary Data Center
Recommended minimum number of global catalog cores 2
Table 10 Requirements for Environment Configuration
Environment Configuration /Primary Data Center
Recommended minimum number of dedicated client access servers -
Number of DAGs -
Number of active mailboxes (normal run time) 5000
Number of mailbox servers/DAG 4
Number of lagged copy servers/DAG 0
Total number of servers/DAG 4
Table 11 Requirements for User Mailbox Configuration
User Mailbox Configuration Tier-1
Number of user mailboxes/environment 5000
Number of mailboxes/databases 625
User mailbox size within database 2299MB
Transaction logs generated/mailbox/day 30
IOPS profile/mailbox 0.10
Read:Write ratio/mailbox 3:2
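The 2299MB on-disk mailbox size in Table 11 is larger than the 2048MB quota because it also accounts for database whitespace, the deleted item dumpster, single item recovery, and calendar version storage. The following sketch reproduces that arithmetic; the 1.2 percent single-item-recovery and 3 percent calendar-version-storage factors are assumptions taken from Microsoft's published Exchange 2013 sizing guidance, not from this document.

```python
# Hedged reconstruction of the on-disk mailbox size behind Table 11.
# The 0.012 and 0.03 factors are assumed from Microsoft's Exchange 2013
# sizing guidance; the remaining inputs come from Tables 4 and 11.

def mailbox_size_on_disk_mb(quota_mb, msgs_per_day, avg_msg_kb, retention_days):
    whitespace = msgs_per_day * avg_msg_kb / 1024            # one day of message churn
    dumpster = msgs_per_day * avg_msg_kb * retention_days / 1024
    single_item_recovery = 0.012 * quota_mb                  # assumed factor
    calendar_versions = 0.03 * quota_mb                      # assumed factor
    return quota_mb + whitespace + dumpster + single_item_recovery + calendar_versions

# 2048MB quota, 150 messages/day, 75KB average size, 14-day retention
print(round(mailbox_size_on_disk_mb(2048, 150, 75, 14)))  # 2299
```

With the inputs from Table 4, this reproduces the 2299MB "user mailbox size within database" value in Table 11.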
Table 12 Requirements for Database Copy Instance Configuration
Database Copy Instance Configuration /Primary Data Center
Number of HA database copy instances/DAG 2
Number of lagged database copy instances/DAG 0
Total number of database copy instances 2
Table 13 Requirements for Database Configuration
Database Configuration Value
Number of databases/DAG 8
Recommended number of mailboxes/database 625
Available database cache/mailbox 13.11MB
Table 14 Requirements for Database Copy Configuration
Database Copy Configuration /Server /DAG
Number of database copies 4 16
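The database-layout figures in Tables 13 and 14 follow directly from the mailbox totals. A minimal sketch of that arithmetic:

```python
# Database layout arithmetic from Tables 12, 13, and 14.
mailboxes = 5000
mailboxes_per_db = 625          # recommended mailboxes per database (Table 13)
copies_per_db = 2               # HA copies, including the active copy (Table 12)
servers_per_dag = 4

databases = mailboxes // mailboxes_per_db            # 8 databases in the DAG
total_copies = databases * copies_per_db             # 16 copies across the DAG
copies_per_server = total_copies // servers_per_dag  # 4 copies per server
print(databases, total_copies, copies_per_server)    # 8 16 4
```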
Each Exchange virtual machine requires 96GB of physical RAM. Do not use dynamic memory.
The projected peak CPU utilization for each virtual machine running on the Cisco UCS B200 M4 host is 74
percent. This value adheres to Microsoft guidelines.
Table 15 Requirements for Server Configuration
Server Configuration /Primary Data Center Server (Single Failure)
Recommended RAM configuration 96GB
Number of processor cores utilized 9
Server CPU utilization 74%
Server CPU megacycle requirements 19492
Server total available adjusted megacycles 26430
Possible storage architecture RAID
Recommended transport database location System disk
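The available megacycles and CPU utilization in Table 15 can be reproduced with the normalization used by the Exchange 2013 Requirements Calculator. The 33.75 per-core baseline (the 2 x E5-2650 reference platform) and the 2000 megacycles-per-core constant below are assumptions from Microsoft's sizing guidance, not values stated in this document.

```python
# Hedged sketch of the Exchange 2013 megacycle normalization.
# BASELINE_PER_CORE and MEGACYCLES_PER_CORE are assumed calculator
# constants from Microsoft's published sizing guidance.

BASELINE_PER_CORE = 33.75   # per-core SPECint2006 rate of the reference platform
MEGACYCLES_PER_CORE = 2000  # normalized megacycles per reference core

def adjusted_megacycles(specint_rate, cores):
    """Available megacycles for a server or VM with the given SPECint rate."""
    return specint_rate / cores / BASELINE_PER_CORE * MEGACYCLES_PER_CORE * cores

available = adjusted_megacycles(446, 12)  # 12-vCPU guest, SPECint value 446 (Table 6)
required = 19492                          # megacycle requirement from Table 15
print(round(available), round(required / available * 100))  # 26430 74
```

Under these assumptions the sketch reproduces both the 26430 adjusted megacycles and the 74 percent utilization shown in Table 15.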
The following disk space and IOPS requirements are used for sizing the storage array. NetApp has an
Exchange 2013-specific sizing tool that uses this information to identify the storage configuration
requirements.
Table 16 Details of Disk Space Requirement
Disk Space Requirement /Database /Server /DAG
Transport database space required - 322GB 1287GB
Database space required 1403GB 5613GB 22451GB
Log space required 69GB 276GB 1103GB
Database volume space required - 8271GB 33086GB
Log volume space required - 290GB 1161GB
Restore volume space required - 1550GB 6199GB
Table 17 Details of Host IO and Throughput Requirement
Host IO and Throughput Requirements /Database /Server /DAG
Total database required IOPS 75 302 1209
Total log required IOPS 17 65 259
Database read I/O percentage 60% - -
Background database maintenance throughput requirements 1.0MB/s 4MB/s 16MB/s
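The per-database IOPS figure in Table 17 follows from the mailbox profile. The 20 percent peak overhead factor below is the one used in the JetStress validation later in this document, applied here as an assumption.

```python
# Per-database transactional IOPS, reproducing Table 17's value.
mailboxes_per_db = 625     # from Table 13
iops_per_mailbox = 0.10    # IOPS profile from Table 11
peak_factor = 1.20         # assumed 20% overhead for peak periods

db_iops = mailboxes_per_db * iops_per_mailbox * peak_factor
print(round(db_iops))  # 75 transactional IOPS per database
```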
The transport database maximum size is expected to be 322GB. This database will be stored on the system
disk.
Table 18 Transport Calculation Details
Transport Calculations Value
Calculated safety net hold time (days) 7
Messages/day 750000
Messages/day/server 250000
Shadow effect 500000
Safety net effect 4500000
Transport database size/server 322GB
Transport queue location (primary data center) System disk
Exchange and Domain Controller Virtual Machines
The Exchange and Active Directory domain controllers are configured with the following parameters. The
boot disk is sized to accommodate a 32,778 MB swap file and the transport database, in addition to the
Windows system files and Exchange application files.
Microsoft guidelines for Exchange 2013 SP1 swap file requirements can be found at the following link:
http://blogs.technet.com/b/exchange/archive/2014/04/03/ask-the-perf-guy-sizing-guidance-updates-for-exchange-2013-sp1.aspx
Table 19 Requirements for Exchange Servers
Exchange Servers
VM count 4
vCPU count 12
RAM (no dynamic memory) 96GB
NIC1 Mail network
NIC2 DAG network
Boot disk size 450GB
Table 20 Requirements for AD Domain Controller Servers
AD Domain Controller Servers
VM count 2
vCPU count 2
RAM (no dynamic memory) 16GB
NIC1 AD network
Boot disk size 60GB
The Exchange server virtual machines must be spread across different hosts so that a host failure does
not affect multiple Exchange virtual machines. This rule also applies to domain controllers. However, a
domain controller virtual machine and an Exchange virtual machine can run on the same host.
Namespace and Network Load Balancing
The namespace and network load balancing options were discussed earlier in this document. The
deployment described here uses a Layer 7 single namespace without session affinity.
NetApp Storage Design
The FlexPod storage design supports a variety of NetApp FAS controllers such as the FAS2500 and
FAS8000 products as well as legacy NetApp storage. This Cisco Validated Design leverages NetApp
FAS8040 controllers, deployed with clustered Data ONTAP.
In the clustered Data ONTAP architecture, all data is accessed through secure virtual storage partitions
known as storage virtual machines (SVMs). It is possible to have a single SVM that represents the resources
of the entire cluster or multiple SVMs that are assigned specific subsets of cluster resources for given
applications, tenants, or workloads. In the current implementation of ACI, the SVM serves as the storage
basis for each application: ESXi hosts boot from SAN by using iSCSI, and application data is presented
as iSCSI, CIFS, or NFS traffic.
For more information about the FAS8000 product family, see: http://www.netapp.com/us/products/storage-systems/fas8000/
For more information about the FAS2500 product family, see: http://www.netapp.com/us/products/storage-systems/fas2500/index.aspx
For more information about clustered Data ONTAP, see: http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx
Network and Storage Physical Connectivity
The NetApp FAS8000 storage controllers are configured with two port channels that are connected to the
Cisco Nexus 9000 leaf switches. These port channels carry all the ingress and egress data traffic for the
NetApp controllers. This validated design uses two physical ports from each NetApp controller, configured
as an LACP interface group (ifgrp). The number of ports used can be easily modified depending on the
application requirements.
A clustered Data ONTAP storage solution includes the following fundamental connections or network types:
HA interconnect - A dedicated interconnect between two nodes to form HA pairs. These pairs are
also known as storage failover pairs.
Cluster interconnect - A dedicated high-speed, low-latency, private network used for
communication between nodes. This network can be implemented through the deployment of a
switchless cluster or by leveraging dedicated cluster interconnect switches.
NetApp switchless cluster is only appropriate for two-node clusters.
Management network - A network used for the administration of nodes, clusters, and SVMs.
Data network - A network used by clients to access data.
Ports - A physical port such as e0a, e1a, or a logical port such as a virtual LAN (VLAN) or an
interface group.
Interface groups - A collection of physical ports to create one logical port. The NetApp interface
group is a link aggregation technology that can be deployed in single (active/passive), multiple
("always on"), or dynamic (active LACP) mode.
The recommendation is to use only dynamic interface groups to take advantage of LACP-based load
distribution and link failure detection.
This validation uses two storage nodes configured as a two-node storage failover pair through an HA
interconnect direct connection. The FlexPod design uses the following port and interface assignments:
Ethernet ports e3a and e4a on each node are members of a multimode LACP interface group for
Ethernet data. This design leverages an interface group that has LIFs associated with it to support
NFS and iSCSI traffic.
Ports e0M on each node support a LIF dedicated to node management. Port e0b is defined as a
failover port supporting the "node_mgmt" role.
Port e0a supports cluster management data traffic through the cluster management LIF. This port
and LIF allow for administration of the cluster from the failover port and LIF if necessary.
HA interconnect ports are internal to the chassis.
Ports e1a and e2a are cluster interconnect ports for data traffic. These ports connect to each of the
Cisco Nexus 5596 cluster interconnect switches.
The Cisco Nexus Cluster Interconnect switches support a single port channel (Po1).
Out of Band Network Connectivity
FlexPod leverages out-of-band management networking. Ports e0M on each node support a LIF dedicated
to node management. Port e0b is defined as a failover port supporting the "node_mgmt" role. This role
allows access to tools such as NetApp VSC. To support out of band management connectivity, the NetApp
controllers are directly connected to out of band management switches as shown in Figure 56.
The FAS8040 controllers are sold in a single-chassis, dual-controller configuration only. Figure 56
shows the two FAS8040 controllers separately for visual clarity; physically, both controllers reside in a
single chassis.
Figure 56 Out of Band Management Network
NetApp FAS I/O Connectivity
One of the main benefits of FlexPod is that it gives customers the ability to right-size their deployment. This
effort can include the selection of the appropriate protocol for their workload as well as the performance
capabilities of various transport protocols. The FAS8000 product family supports FC, FCoE, iSCSI, NFS,
pNFS, CIFS/SMB, HTTP, and FTP. The FAS8000 comes standard with onboard UTA2, 10GbE, 1GbE, and
SAS ports. Furthermore, the FAS8000 offers up to 24 PCIe expansion ports per HA pair.
Figure 57 highlights the rear of the FAS8040 chassis. The FAS8040 is configured in a single HA
enclosure; that is, two controllers are housed in a single chassis. External disk shelves are connected
through the onboard SAS ports, data is accessed through the onboard UTA2 ports, and cluster
interconnect traffic uses the onboard 10GbE ports.
Figure 57 NetApp FAS8000 Storage Controller
Clustered Data ONTAP and Storage Virtual Machines Overview
Clustered Data ONTAP allows the logical partitioning of storage resources in the form of SVMs. The
following components comprise an SVM:
Logical interfaces: All SVM networking is done through logical interfaces (LIFs) that are created
within the SVM. As logical constructs, LIFs are abstracted from the physical networking ports on
which they reside.
Flexible volumes: A flexible volume is the basic unit of storage for an SVM. An SVM has a root
volume and can have one or more data volumes. Data volumes can be created in any aggregate that
has been delegated by the cluster administrator for use by the SVM. Depending on the data
protocols used by the SVM, volumes can contain either LUNs for use with block protocols, files for
use with NAS protocols, or both concurrently.
Namespace: Each SVM has a distinct namespace through which all of the NAS data shared from that
SVM can be accessed. This namespace can be thought of as a map to all of the junctioned volumes
for the SVM, no matter on which node or aggregate they might physically reside. Volumes may be
junctioned at the root of the namespace or beneath other volumes that are part of the namespace
hierarchy.
Storage QoS: Storage QoS (Quality of Service) can help manage the risks involved in meeting
performance objectives. You can use storage QoS to limit the throughput to workloads and to
monitor workload performance. You can reactively limit workloads to address performance problems
and you can proactively limit workloads to prevent performance problems. You can also limit
workloads to support SLAs with customers. Workloads can be limited on either an IOPS or a
bandwidth (MB/s) basis.
Storage QoS is supported on clusters that have up to eight nodes.
A workload represents the input/output (I/O) operations to one of the following storage objects:
An SVM with flexible volumes
A flexible volume
A LUN
A file (typically represents a VM)
In the ACI architecture, because an SVM is usually associated with an application, a QoS policy group would
normally be applied to the SVM, setting up an overall storage rate limit for the workload. Storage QoS is
administered by the cluster administrator.
Storage objects are assigned to a QoS policy group to control and monitor a workload. You can monitor
workloads without controlling them in order to size the workload and determine appropriate limits within the
storage cluster.
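As a sketch of how such a limit might be applied, the following clustered Data ONTAP commands create a policy group with an IOPS ceiling, assign it to an SVM, and display workload statistics. The policy-group and SVM names are hypothetical, and the exact syntax should be verified against the Data ONTAP release in use.

```
cluster1::> qos policy-group create -policy-group pg-exchange -vserver Exchange-SVM -max-throughput 5000iops
cluster1::> vserver modify -vserver Exchange-SVM -qos-policy-group pg-exchange
cluster1::> qos statistics performance show
```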
For more information about managing workloads with storage QoS, see the clustered Data ONTAP documentation.
Clustered Data ONTAP Logical Topology
Figure 58 details the logical configuration of the clustered Data ONTAP environment used for validation of
the FlexPod solution. The physical cluster consists of two NetApp storage controllers (nodes) configured as
an HA pair and two cluster interconnect switches.
Figure 58 NetApp Storage Controller Clustered Data ONTAP
The following key components allow connectivity to data on a per-application basis:
LIF - A logical interface that is associated to a physical port, interface group, or VLAN interface.
More than one LIF may be associated to a physical port at the same time. There are three types of
LIFs:
— NFS LIF
— iSCSI LIF
— FC LIF
LIFs are logical network entities having the same characteristics as physical network devices but are
not tied to physical objects. LIFs used for Ethernet traffic are assigned Ethernet-specific details such
as IP addresses and iSCSI qualified names and are then associated with a specific physical port
capable of supporting Ethernet traffic. NAS LIFs can be non-disruptively migrated to any other
physical network port in the cluster at any time, either manually or automatically (by using policies).
In this Cisco Validated Design, LIFs are layered on top of the physical interface groups and are associated
with a given VLAN interface. LIFs are then consumed by the SVMs and are typically associated with a given
protocol and datastore.
SVMs - An SVM is a secure virtual storage server that contains data volumes and one or more LIFs
through which it serves data to the clients. An SVM securely isolates the shared virtualized data
storage and network and appears as a single dedicated server to its clients. Each SVM has a
separate administrator authentication domain and can be managed independently by an SVM
administrator.
Clustered Data ONTAP Configuration for vSphere
This solution defines a single infrastructure SVM to own and export the data necessary to run the VMware
vSphere infrastructure. This SVM specifically owns the following flexible volumes:
Root volume - A flexible volume that contains the root of the SVM namespace.
Root volume load-sharing mirrors - Mirrored volumes of the root volume that accelerate read
throughput. In this instance, they are labeled root_vol_m01 and root_vol_m02.
Boot volume - A flexible volume that contains ESXi boot LUNs. These ESXi boot LUNs are exported
through iSCSI to the Cisco UCS servers.
Infrastructure datastore volume - A flexible volume that is exported through NFS to the ESXi host and
is used as the infrastructure NFS datastore to store VM files.
Infrastructure swap volume - A flexible volume that is exported through NFS to each ESXi host and
used to store VM swap data.
The NFS datastores are mounted on each VMware ESXi host in the VMware cluster and are provided by
NetApp clustered Data ONTAP through NFS over the 10GbE network.
The SVM has a minimum of one LIF per protocol per node to maintain volume availability across the cluster
nodes. The LIFs use failover groups, which are network policies defining the ports or interface groups
available to support a single LIF or a group of LIFs migrating within or across nodes in a cluster.
Multiple LIFs may be associated with a network port or interface group. In addition to failover groups,
the clustered Data ONTAP system uses failover policies, which define the order in which the ports in
the failover group are prioritized and govern migration behavior in the event of port failures, port
recoveries, or user-initiated requests.
The most basic possible storage failover scenarios in this cluster are as follows:
Node 1 fails, and Node 2 takes over Node 1's storage.
Node 2 fails, and Node 1 takes over Node 2's storage.
The remaining node network connectivity failures are addressed through the redundant port, interface
groups, and logical interface abstractions afforded by the clustered Data ONTAP system.
Storage Virtual Machine Layout
Figure 59 highlights the storage topology that shows the SVM and associated LIFs. There are two storage
nodes and the SVMs are layered across both controller nodes. Each SVM has its own LIFs configured to
support SVM-specific storage protocols. Each of these LIFs is mapped to endpoint groups on the ACI fabric.
Figure 59 Sample Storage Topology
Storage Configurations
When planning content storage for Microsoft Exchange 2013, you must choose a suitable storage
architecture.
Figure 60 Logical Storage Layout
Aggregate, Volume, and LUN Sizing
The aggregate contains all of the physical disks for a workload and for Microsoft Exchange 2013. All flexible
volumes that are created inside a 64-bit aggregate span across all of the data drives in the aggregate to
provide more disk spindles for the I/O activity on the flexible volumes.
NetApp recommends having at least 10 percent free space available in an aggregate that is hosting
Exchange data to allow optimal performance of the storage system.
A volume is generally sized at 90 percent of the aggregate size, housing both the actual LUNs and the
Snapshot copies of those LUNs. This sizing takes into account the content database, the transaction logs,
the growth factor, and 20 percent of free disk space.
Storage Layout
Figure 61 shows two aggregates (Aggregate 1 and Aggregate 2) for the Microsoft Exchange 2013 data. The
SVM created for Microsoft Exchange is configured to support the iSCSI and NFS protocols and holds the
specific volumes and LUNs for the Exchange application. Space in the Microsoft Exchange 2013 aggregate
and volumes is allocated based on the load distribution across the FAS8040 controllers.
Exchange 2013 Database and Log LUNs
Storage for the Microsoft Exchange Server 2013 database is provisioned on separate LUNs for databases
and logs. Disks are configured with NetApp RAID DP® data protection technology, and database EDB and
log files are placed on separate LUNs. iSCSI is used as the transport protocol for the storage subsystem.
Figure 61 Separate LUNs Provisioned for Search Index Files for Exchange Server - Part1
Figure 62 Separate LUNs Provisioned for Search Index Files for Exchange Server - Part2
Validation
Validating the Storage Subsystem with JetStress
JetStress 2013 is a tool that simulates Exchange 2013 database I/O. It does not require Exchange to be
installed, but it uses the same ESE libraries as an Exchange 2013 server deployment. JetStress can
be configured to simulate the database I/O load that is expected for the target deployment profile.
JetStress was configured to simulate one of the four Exchange servers being offline. In this case, each
of the remaining three Exchange servers has three active databases, each with 625 active 2GB
mailboxes. The 150 messages per mailbox per day action profile is expected to generate 0.10 IOPS per
mailbox, and a 20 percent IOPS overhead factor is added to account for peak periods. The test was run
for a 22-hour period.
The following passing JetStress results were obtained on each server:
Validating the Storage Subsystem with Exchange 2013 LoadGen
Exchange 2013 LoadGen simulates Exchange clients. It requires Exchange 2013 to be installed and
configured, and it allows the entire Exchange server deployment to be tested under load. Exchange 2013
LoadGen was configured to simulate 5000 mailbox clients. The load generation software was installed on
four virtual machines, and the client simulation was evenly distributed across them; each LoadGen
virtual machine simulated 1250 Outlook 2007 clients. Performance Monitor was used to track key Exchange
Server 2013 performance counters during the LoadGen simulation. Detailed results can be found in the
companion deployment guide for this document.
Conclusion
As businesses require higher agility in the data center, the demand for application-centric automation
and virtualization of both hardware and software infrastructure is constantly increasing. Cisco ACI
establishes the critical link between business-based requirements for applications and the
infrastructure that supports them.
F5 BIG-IP LTM and Cisco ACI integration provides seamless L4-L7 service insertion into the ACI fabric,
bringing the richness of F5 BIG-IP LTM functionality into the ACI environment.
Microsoft Exchange 2013 on FlexPod with ACI, combined with F5 BIG-IP LTM, provides a unified data
center framework that delivers outstanding performance for virtualized business applications. The ACI
model accelerates IT transformation by enabling faster deployments, greater flexibility of choice,
higher efficiency, and lower-risk operations.
References
Cisco Unified Computing System:
http://www.cisco.com/en/US/products/ps10265/index.html
Cisco UCS 6200 Series Fabric Interconnects:
http://www.cisco.com/en/US/products/ps11544/index.html
Cisco UCS 5100 Series Blade Server Chassis:
http://www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS B-Series Blade Servers:
http://www.cisco.com/en/US/partner/products/ps10280/index.html
Cisco UCS Adapters:
http://www.cisco.com/en/US/products/ps10277/prod_module_series_home.html
Cisco UCS Manager:
http://www.cisco.com/en/US/products/ps10281/index.html
Cisco Nexus 9000 Series Switches:
http://www.cisco.com/c/en/us/support/switches/nexus-9000-series-switches/tsd-products-support-series-home.html
Cisco Application Centric Infrastructure:
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
VMware vCenter Server:
http://www.vmware.com/products/vcenter-server/overview.html
VMware vSphere:
http://www.vmware.com/products/datacenter-virtualization/vsphere/index.html
NetApp Data ONTAP:
http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx
NetApp FAS8000:
http://www.netapp.com/us/products/storage-systems/fas8000/
NetApp OnCommand:
http://www.netapp.com/us/products/management-software/
NetApp VSC:
http://www.netapp.com/us/products/management-software/vsc/
FlexPod Data Center with VMware vSphere 5.5 Update 2 and Cisco Nexus 9000 Application Centric
Infrastructure (ACI) Design Guide:
http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi51u1_n9k_aci_design.pdf
Interoperability Matrixes
Cisco UCS Hardware Compatibility Matrix:
http://www.cisco.com/c/en/us/support/servers-unified-computing/unified-computing-system/products-technical-reference-list.html
VMware and Cisco Unified Computing System:
http://www.vmware.com/resources/compatibility
NetApp Interoperability Matrix Tool:
http://support.netapp.com/matrix/mtx/login.do
About the Authors
Mike Mankovsky, Solutions Architect, CSPG UCS Product Mgmt and DC Solutions, Cisco Systems
Mike Mankovsky is a Cisco Unified Computing System architect, focusing on Microsoft solutions with
extensive experience in Hyper-V, storage systems, and Microsoft Exchange Server. He has expert product
knowledge of Microsoft Windows storage technologies and data protection technologies.
Acknowledgements
For their support and contribution to the design, validation, and creation of this Cisco Validated Design, the
authors would like to thank:
Chris O'Brien, Cisco Systems, Inc.
Haseeb Niazi, Cisco Systems, Inc.
Lisa DeRuyter, Cisco Systems, Inc.
Chris Reno, NetApp
John George, NetApp
Lindsey Street, NetApp
Alex Rosetto, NetApp