Cisco HyperFlex with Red Hat OpenShift Container Platform 3.11 on VMware vSphere

Last Updated: January 21, 2019

About the Cisco Validated Design Program

The Cisco Validated Design (CVD) program consists of systems and solutions designed, tested, and

documented to facilitate faster, more reliable, and more predictable customer deployments. For more

information, visit:

http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS

(COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND

ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF

MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM

A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE

LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,

WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR

INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE

POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR

THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER

PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR

OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON

FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco

WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We

Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,

Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the

Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the

Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers,

Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS

Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000

Series, Cisco Nexus 7000 Series, Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco

MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step,

Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,

LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking

Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet,

Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and

the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and

certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The

use of the word partner does not imply a partnership relationship between Cisco and any other company.

(0809R)

© 2019 Cisco Systems, Inc. All rights reserved.

Table of Contents

Executive Summary

Solution Overview
    Introduction
    Audience
    Purpose of this Document
    What's New in this Release

Technology Overview
    HyperFlex Data Platform 3.5 All-Flash Storage Platform
        Architecture
        Data Distribution
        Data Operations
        Data Optimization
        Inline Compression
        Thin Provisioning
        Data Rebalancing
        HyperFlex FlexVolume for Kubernetes
    Physical Infrastructure
        Cisco Unified Computing System
        Cisco UCS Fabric Interconnect
        Cisco HyperFlex HX-Series Nodes
        Cisco UCS C240 M5 Rack-Mount Server
        Cisco VIC Interface Cards
        Cisco Nexus 9000 Switches
        Intel® Scalable Processor Family
        Intel® Optane™ SSD DC P4800X Series
    Red Hat OpenShift Container Platform
        Kubernetes Infrastructure
        Red Hat OpenShift Integrated Container Registry
        Docker
        Kubernetes
        Etcd
        Open vSwitch
        HAProxy
        Red Hat Ansible Automation
    Persistent Storage for Kubernetes - HyperFlex FlexVolume
        HyperFlex FlexVolume Dynamic Provisioning

Solution Design
    Architectural Overview
        Bastion Node
        Kubernetes Infrastructure
        OpenShift Master Nodes
        OpenShift Infrastructure Nodes
        OpenShift Application Nodes
        HAProxy Load Balancer
        KeepAlived
    OpenShift Networking
        OpenShift SDN
        Network Isolation
        OpenShift Container Platform DNS
        HyperFlex Multi-VIC Feature
    Reference Architecture
        Physical Topology
        Network Layout
        Node Placement Details
    Hardware and Software Revisions
    Solution Components

Conclusion

Resources

About the Authors
    Acknowledgements

Executive Summary

Cisco Validated Designs are the foundation of systems design and the centerpiece of facilitating complex

customer deployments. The validated designs incorporate products and technologies into a broad portfolio

of Enterprise, Service Provider, and Commercial systems that are designed, tested, and fully documented to

help ensure faster, more reliable, consistent, and more predictable customer deployments.

Cisco HyperFlex™ Systems combine software-defined networking and computing with the next-generation Cisco HyperFlex Data Platform. Engineered on the Cisco Unified Computing System (Cisco UCS), Cisco HyperFlex Systems deliver the operational requirements for agility, scalability, and pay-as-you-grow economics of the cloud with the benefits of on-premises infrastructure. With hybrid or all-flash-memory storage configurations and a choice of management tools, Cisco HyperFlex Systems deliver a pre-integrated cluster with a unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your applications and your business.

With the latest All-Flash storage configurations, a low-latency, high-performing hyperconverged storage platform has become a reality. This makes the Cisco hyperconverged platform ideal for providing compute, network, and storage resources to application container workloads at scale with Red Hat® OpenShift® Container Platform.

This solution design is aimed at providing a best-in-class solution for container workloads using Red Hat OpenShift Container Platform 3.11 and VMware vSphere on the Cisco HyperFlex Data Platform. The Reference Architecture discussed in this design guide provides a methodology to deploy a highly available Red Hat OpenShift Container Platform in a VMware® vSphere environment running on Cisco's hyperconverged infrastructure platform, Cisco HyperFlex.

Solution Overview

Introduction

Deployment-centric application platform and DevOps initiatives are driving benefits for organizations in their digital transformation journey. Though still early in maturity, Docker-formatted container packaging and Kubernetes container orchestration are emerging as industry standards for state-of-the-art PaaS solutions, catering to this rapid digital transformation.

Red Hat® OpenShift® Container Platform provides a set of container-based open source tools enabling digital

transformation, which accelerates application development while making optimal use of infrastructure. Red

Hat OpenShift Container Platform helps organizations use the cloud delivery model and simplify continuous

delivery of applications and services on Red Hat OpenShift Container Platform, the cloud-native way. Built on

proven open source technologies, Red Hat OpenShift Container Platform also provides development teams

multiple modernization options to enable a smooth transition to microservices architecture and the cloud for

existing traditional applications. Red Hat OpenShift Container Platform, providing a Platform as a Service

(PaaS) solution, allows the development, deployment, and management of container-based applications

while standing on top of a privately-owned cloud by leveraging VMware vSphere as an Infrastructure as a

Service (IaaS) platform.

Cisco HyperFlex™ Systems provide an end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco Unified Computing System (UCS) servers, software-defined storage with the powerful Cisco HX Data Platform, and software-defined networking with the Cisco UCS fabric. Together with a single point of connectivity and hardware management, these technologies deliver a pre-integrated and adaptable cluster that is ready to provide a unified pool of resources to power applications as your business needs dictate. This Cisco Validated Design uses a Cisco HyperFlex System consisting of Cisco UCS M5 servers that can serve both compute and storage needs. These servers are based on Intel® Xeon® Scalable processors and are unified yet modular, scalable, high-performing, and built on infrastructure-as-code for powerful integrations and continuous delivery of distributed applications.

Cisco, Intel, and Red Hat have joined hands to develop a best-in-class solution for delivering a PaaS solution to the enterprise on a hyperconverged infrastructure platform, and to provide the ability to develop, deploy, and manage containers in on-premises, private, or public cloud environments by bringing automation to the table with a robust platform such as Red Hat OpenShift Container Platform on VMware vSphere. This solution design focuses on providing an optimal and highly available platform for running containerized workloads on HyperFlex infrastructure with Red Hat OpenShift Container Platform configured for VMware vSphere. Red Hat OpenShift Container Platform relies on the HyperFlex FlexVolume driver for the persistent storage needs of application pods; persistent storage requirements are served by the HyperFlex Data Platform.

Audience

The audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, IT architects, and customers who want to take advantage of an infrastructure that is built to deliver IT efficiency and enable IT innovation. The reader of this document is expected to have the necessary training and background to install and configure Red Hat Enterprise Linux, Cisco HyperFlex, Cisco Unified Computing System, Cisco Nexus switches, enterprise storage subsystems, and VMware vSphere. Furthermore, knowledge of container platforms, preferably Red Hat OpenShift Container Platform, is required. External references are provided where applicable, and familiarity with these documents is highly recommended.

Purpose of this Document

This solution design aims to provide an optimal and highly available Red Hat OpenShift Container Platform that allows the development, deployment, and management of container-based applications on top of a privately owned cloud, leveraging VMware vSphere as an Infrastructure as a Service (IaaS) layer on Cisco HyperFlex infrastructure.

What's New in this Release

- Cisco HyperFlex System as the hyperconverged infrastructure platform providing compute, storage, and network resources to Red Hat OpenShift Container Platform

- Cisco HyperFlex Data Platform data management for the infrastructure as well as the application environment

- HyperFlex FlexVolume Storage Integration for Kubernetes to address the persistent storage needs of stateful application container pods

- Red Hat OpenShift Container Platform v3.11 validated on Cisco HyperFlex with VMware vSphere

- Red Hat OpenShift Container Platform nodes, such as master, infrastructure, and application nodes, running in a VMware vSphere virtualized environment while leveraging a VMware HA cluster

Technology Overview

This section provides a brief introduction to the various hardware and software components used in this solution.

HyperFlex Data Platform 3.5 Software-Defined Storage Platform

Cisco HyperFlex Systems are designed with an end-to-end software-defined infrastructure that eliminates the compromises found in first-generation products. Cisco HyperFlex Systems combine software-defined computing in the form of Cisco UCS® servers, software-defined storage with the powerful Cisco HyperFlex HX Data Platform software, and software-defined networking (SDN) with the Cisco® unified fabric that integrates seamlessly with Cisco Application Centric Infrastructure (Cisco ACI™). Cisco HyperFlex Systems deliver a pre-integrated cluster that is up and running in less than an hour and provides independent resource scalability to meet ever-changing application requirements.

The Cisco HyperFlex Data Platform includes:

- Enterprise-class data management features that are required for complete lifecycle management and enhanced data protection in distributed storage environments, including replication, always-on inline deduplication, always-on inline compression, thin provisioning, instantaneous space-efficient clones, and snapshots.

- Simplified data management that integrates storage functions into existing management tools, allowing instant provisioning, cloning, and pointer-based snapshots of applications for dramatically simplified daily operations.

- Improved control with advanced automation and orchestration capabilities and robust reporting and analytics features that deliver improved visibility and insight into IT operations.

- Independent scaling of the computing and capacity tiers, giving you the flexibility to scale out the environment based on evolving business needs for predictable, pay-as-you-grow efficiency. As you add resources, data is automatically rebalanced across the cluster, without disruption, to take advantage of the new resources.

- Continuous data optimization with inline data deduplication and compression that increases resource utilization with more headroom for data scaling.

- Dynamic data placement that optimizes performance and resilience by making it possible for all cluster resources to participate in I/O responsiveness. All-Flash nodes use SSD drives for the caching layer as well as the capacity layer. This approach helps eliminate storage hotspots and makes the performance capabilities of the cluster available to every virtual machine. If a drive fails, reconstruction can proceed quickly as the aggregate bandwidth of the remaining components in the cluster can be used to access data.

- Enterprise data protection with a highly available, self-healing architecture that supports non-disruptive, rolling upgrades and offers call-home and onsite 24x7 support options.

- Cisco Intersight, the latest cloud-based management tool, designed to provide centralized off-site management, monitoring, and reporting for all of your Cisco UCS-based solutions, including HyperFlex.

Architecture

In Cisco HyperFlex Systems, the data platform spans three or more Cisco HyperFlex HX-Series nodes to

create a highly available cluster. Each node includes a Cisco HyperFlex HX Data Platform controller that

implements the scale-out and distributed file system using internal (local) drives to store data. The

controllers communicate with each other over 10 or 40 Gigabit Ethernet to present a single pool of storage

that spans the nodes in the cluster. Nodes access data through the NFS data layer via files. As nodes are added,

the cluster scales linearly to deliver compute capability, storage capacity, and I/O performance.

Figure 1  Distributed Cisco HyperFlex System

In the VMware vSphere environment, the controller occupies a virtual machine with a dedicated number of

processor cores and amount of memory, allowing it to deliver consistent performance and not affect the

performance of the other virtual machines on the cluster. The controller can access all storage without

hypervisor intervention through the VMware VM_DIRECT_PATH feature. The controller uses a specific SSD

drive on the local HyperFlex node as a dedicated write log (cache), while the remaining physical drives are

aggregated together to provide storage capacity to the cluster. The controller integrates the data platform

into VMware software through the use of two preinstalled VMware ESXi vSphere Installation Bundles (VIBs):

- IO Visor: This VIB provides a network file system (NFS) mount point so that the VMware ESXi hypervisor can access the virtual disk drives that are attached to individual virtual machines. From the hypervisor's perspective, it is simply attached to a network file system.

- VMware vStorage API for Array Integration (VAAI): This storage offload API allows VMware vSphere to request advanced file system operations such as snapshots and cloning. The controller causes these operations to occur through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new application environments.

Figure 2  Inside HX Data Platform Node

Data Distribution

Incoming data is distributed across all nodes in the cluster to optimize performance using the caching tier.

Effective data distribution is achieved by mapping incoming data to stripe units that are stored evenly across

all nodes, with the number of data replicas determined by the HyperFlex cluster replication factor. When an

application writes data, the data is sent to the appropriate node based on the stripe unit, which includes the

relevant block of information. This data distribution approach in combination with the capability to have

multiple streams writing at the same time avoids both network and storage hot spots, delivers the same I/O

performance regardless of virtual machine location, and gives you more flexibility in workload placement.

This contrasts with other architectures that use a data locality approach that does not fully use available

networking and I/O resources and is vulnerable to hot spots.

Figure 3  Data is Striped Across Nodes in the Cluster
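To make the idea concrete, the following is a purely conceptual sketch (not the HX Data Platform implementation): incoming blocks are hashed to stripe units, and each stripe unit is placed on as many distinct nodes as the replication factor requires, so writes spread evenly across the cluster. The node names, stripe-unit count, and hash choice are illustrative assumptions.

    # Conceptual sketch only: hash-based placement of blocks onto stripe units
    # and replica nodes, illustrating how even distribution avoids hot spots.
    import hashlib

    NODES = ["node1", "node2", "node3", "node4"]
    REPLICATION_FACTOR = 3      # RF=3 keeps three copies of each stripe unit
    STRIPE_UNITS = 1024         # keyspace is divided into this many stripe units

    def placement(block_key: str):
        unit = int(hashlib.md5(block_key.encode()).hexdigest(), 16) % STRIPE_UNITS
        primary = unit % len(NODES)
        return [NODES[(primary + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

    print(placement("vmdk-1/block-42"))   # e.g. ['node3', 'node4', 'node1']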

When moving a virtual machine to a new location using tools such as VMware vMotion, the Cisco HyperFlex

HX Data Platform does not require data to be moved. This approach significantly reduces the impact and

cost of moving virtual machines among systems.

Data Operations

The data platform implements a distributed, log-structured file system that changes how it handles caching

and storage capacity depending on the node configuration.

In the All-Flash configuration, the data platform uses a caching layer in SSDs to accelerate write responses,

and it implements the capacity layer in SSDs. Read requests are fulfilled directly from data obtained from the

SSDs in the capacity layer. A dedicated read cache is not required to accelerate read operations.

Incoming data is striped across the number of nodes required to satisfy availability requirements usually two

or three nodes based on the replication factor set at installation. Incoming write operations are

acknowledged as persistent after they are replicated to the SSD drives in other nodes in the cluster. This

approach reduces the likelihood of data loss due to SSD or node failures. The write operations are then de-

staged to SSDs in the capacity layer in the All-Flash memory configuration for long-term storage.

The log-structured file system writes sequentially to one of two write logs (three in case of RF=3) until it is

full. It then switches to the other write log while de-staging data from the first to the capacity tier. When

existing data is (logically) overwritten, the log-structured approach simply appends a new block and updates

the metadata. This layout benefits SSD configurations in which seek operations are not time consuming. It

reduces the write amplification levels of SSDs and the total number of writes the flash media experiences

due to incoming writes and random overwrite operations of the data.

Figure 4  Data Write Operation Flow Through the Cisco HyperFlex HX Data Platform
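The alternation between write logs and the capacity tier can be pictured with a small toy model. The sketch below is illustrative only and is not the HX Data Platform implementation; the log capacity and key scheme are arbitrary.

    # Toy model of a log-structured write path: writes append sequentially to the
    # active log; when it fills, the logs swap and the full one is de-staged to
    # the capacity tier, where later overwrites simply supersede older entries.
    class WriteLogs:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.active, self.destaging = [], []
            self.capacity_tier = {}

        def write(self, key, block):
            self.active.append((key, block))      # sequential append, no read-modify-write
            if len(self.active) == self.capacity:
                self.active, self.destaging = [], self.active
                self.destage()

        def destage(self):
            for key, block in self.destaging:
                self.capacity_tier[key] = block   # newest version wins
            self.destaging = []

    logs = WriteLogs()
    for i in range(10):
        logs.write(f"block-{i % 3}", f"data-{i}")
    print(logs.capacity_tier)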

Data Optimization

The Cisco HyperFlex HX Data Platform provides finely detailed inline deduplication and variable block inline

compression that is always on for objects in the cache (SSD and memory) and capacity (SSD or HDD) layers.

Unlike other solutions, which require you to turn off these features to maintain performance, the

deduplication and compression capabilities in the Cisco data platform are designed to sustain and enhance

performance and significantly reduce physical storage capacity requirements.
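As a conceptual illustration of the combination (not the HX Data Platform's actual algorithms), the sketch below stores identical blocks once, keyed by content hash, and compresses each unique block before it is kept.

    # Illustrative content-addressed store: duplicate blocks cost only a
    # reference, and unique blocks are compressed before being stored.
    import hashlib
    import zlib

    store = {}                            # content hash -> compressed block

    def ingest(block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:           # only the first copy is stored
            store[digest] = zlib.compress(block)
        return digest

    refs = [ingest(b"A" * 4096), ingest(b"A" * 4096), ingest(b"B" * 4096)]
    print(len(refs), "writes ->", len(store), "unique compressed blocks stored")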

Inline Compression

The Cisco HyperFlex Data Platform uses high-performance inline compression on data sets to save storage

capacity. Although other products offer compression capabilities, many negatively affect performance. In

contrast, the Cisco data platform uses CPU-offload instructions to reduce the performance impact of

compression operations. In addition, the log-structured distributed-objects layer has no effect on

modifications (write operations) to previously compressed data. Instead, incoming modifications are

compressed and written to a new location, and the existing (old) data is marked for deletion, unless the data

needs to be retained in a snapshot.

The data that is being modified does not need to be read prior to the write operation. This feature avoids

typical read-modify-write penalties and significantly improves write performance.

Thin Provisioning

The platform makes efficient use of storage by eliminating the need to forecast, purchase, and install disk

capacity that may remain unused for a long time. Virtual data containers can present any amount of logical

space to applications, whereas the amount of physical storage space that is needed is determined by the

data that is written. You can expand storage on existing nodes and expand your cluster by adding more

storage-intensive nodes as your business requirements dictate, eliminating the need to purchase large

amounts of storage before you need it.

Data Rebalancing

A distributed file system requires a robust data rebalancing capability. In the Cisco HyperFlex Data Platform,

no overhead is associated with metadata access, and rebalancing is extremely efficient. Rebalancing is a

non-disruptive online process that occurs in both the caching and persistent layers, and data is moved at a

fine level of specificity to improve the use of storage capacity. The platform automatically rebalances existing

data when nodes and drives are added or removed or when they fail. When a new node is added to the

cluster, its capacity and performance is made available to new and existing data. The rebalancing engine

distributes existing data to the new node and helps ensure that all nodes in the cluster are used uniformly

from capacity and performance perspectives. If a node fails or is removed from the cluster, the rebalancing

engine rebuilds and distributes copies of the data from the failed or removed node to available nodes in the

clusters.

HyperFlex FlexVolume Storage Integration for Kubernetes

FlexVolume is an out-of-tree plugin framework provided by Kubernetes which uses an exec-based model to

interface with drivers. Kubernetes performs volume operations by executing pre-defined commands in the

FlexVolume plugin against the driver on the host. A Kubernetes Volume plugin extends the Kubernetes volume

interface to support a block and/or file storage system. The FlexVolume out-of-tree interface enables storage vendors to create custom storage plugins without adding them to the Kubernetes repository. The HyperFlex FlexVolume driver supports persistent storage for Kubernetes-managed containers, enabling development and

deployment of cloud-native applications with Red Hat OpenShift Container Platform.
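To show the shape of the exec-based contract, the sketch below outlines a FlexVolume driver as the kubelet invokes it (init, mount, unmount, with a JSON status on stdout). The mount logic and the device option are placeholders for illustration; this is not Cisco's actual HyperFlex driver.

    #!/usr/bin/env python
    # Skeleton of the exec-based FlexVolume driver contract. The kubelet calls
    # the driver with a sub-command and expects a JSON status object on stdout.
    import json
    import subprocess
    import sys

    def respond(status, message=""):
        print(json.dumps({"status": status, "message": message}))
        sys.exit(0 if status == "Success" else 1)

    def main():
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report driver capabilities back to the kubelet.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        elif op == "mount":
            mount_dir, options = sys.argv[2], json.loads(sys.argv[3])
            device = options.get("device", "/dev/placeholder-lun")   # placeholder
            rc = subprocess.call(["mount", device, mount_dir])
            respond("Success" if rc == 0 else "Failure")
        elif op == "unmount":
            rc = subprocess.call(["umount", sys.argv[2]])
            respond("Success" if rc == 0 else "Failure")
        else:
            respond("Not supported", "operation '%s' not implemented" % op)

    if __name__ == "__main__":
        main()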

Physical Infrastructure

Cisco Unified Computing System

The Cisco Unified Computing System (Cisco UCS) is a next-generation data center platform that unites

compute, network and storage access. The platform, optimized for virtual environments, is designed using

open industry-standard technologies and aims to reduce the total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 or 40 Gigabit Ethernet unified network

fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in

which all resources participate in a unified management domain.

The Cisco Unified Computing System consists of the following components:

- Compute - The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on the Intel® Xeon® Scalable processor product family.

- Network - The system is integrated onto a low-latency, lossless, 40-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

- Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

- Storage access - The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. It is also an ideal system for software-defined storage (SDS). By combining the benefits of a single framework to manage both the compute and storage servers in a single pane, Quality of Service (QoS) can be implemented if needed to inject IO throttling in the system. In addition, server administrators can pre-assign storage-access policies to storage resources, for simplified storage connectivity and management leading to increased productivity. In addition to external storage, both rack and blade servers have internal storage which can be accessed through built-in hardware RAID controllers. With a storage profile and disk configuration policy configured in Cisco UCS Manager, storage needs for the host OS and application data are fulfilled by user-defined RAID groups for high availability and better performance.

- Management - The system uniquely integrates all system components to enable the entire solution to be managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a powerful scripting library module for Microsoft PowerShell built on a robust application programming interface (API) to manage all system configuration and operations, as sketched below.
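The same UCS Manager API that backs the GUI, CLI, and PowerShell library can also be scripted in Python with the Cisco UCS Python SDK (ucsmsdk); the short sketch below queries the rack-mount servers in a domain. The hostname and credentials are placeholders, and the query is only one example of what the API exposes.

    # Sketch: querying UCS Manager through its API with the ucsmsdk library.
    # Hostname and credentials are placeholders.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()
    try:
        # List the rack-mount servers (including HX-Series nodes) in the domain.
        for server in handle.query_classid("computeRackUnit"):
            print(server.dn, server.model, server.serial)
    finally:
        handle.logout()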

The Cisco Unified Computing System is designed to deliver:

- A reduced Total Cost of Ownership and increased business agility.

- Increased IT staff productivity through just-in-time provisioning and mobility support.

- A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced, and tested as a whole.

- Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.

- Industry standards supported by a partner ecosystem of industry leaders.

Cisco UCS Fabric Interconnect

The Cisco UCS Fabric Interconnect (FI) is a core part of the Cisco Unified Computing System, providing both

network connectivity and management capabilities for the system. Depending on the model chosen, the

Cisco UCS Fabric Interconnect offers line-rate, low-latency, lossless 10 Gigabit or 40 Gigabit Ethernet, Fibre

Channel over Ethernet (FCoE) and Fibre Channel connectivity. Cisco UCS Fabric Interconnects provide the

management and communication backbone for the Cisco UCS C-Series, S-Series, and HX-Series Rack-Mount Servers, Cisco UCS B-Series Blade Servers, and Cisco UCS 5100 Series Blade Server Chassis. All

servers and chassis, and therefore all blades, attached to the Cisco UCS Fabric Interconnects become part

of a single, highly available management domain. In addition, by supporting unified fabrics, the Cisco UCS

Fabric Interconnects provide both the LAN and SAN connectivity for all servers within its domain.

From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting

deterministic, low latency, line rate 10 Gigabit Ethernet on all ports, up to 1.92 Tbps switching capacity and

160 Gbps bandwidth per chassis, independent of packet size and enabled services. The product family

supports Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase

the reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnect supports multiple

traffic classes over the Ethernet fabric from the servers to the uplinks. Significant TCO savings come from an

FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables,

and switches can be consolidated.

The Cisco UCS 6300 Series offers the same features while supporting even higher performance, low

latency, lossless, line rate 40 Gigabit Ethernet, with up to 2.56 Tbps of switching capacity. Backward

compatibility and scalability are assured with the ability to configure 40 Gbps quad SFP (QSFP) ports as

breakout ports using 4x10GbE breakout cables. Existing Cisco UCS servers with 10GbE interfaces can be

connected in this manner, although Cisco HyperFlex nodes must use a 40GbE VIC adapter in order to

connect to a Cisco UCS 6300 Series Fabric Interconnect.

Cisco HyperFlex HX-Series Nodes

A HyperFlex cluster requires a minimum of three HX-Series converged nodes (with disk storage). Data is

replicated across at least two of these nodes, and a third node is required for continuous operation in the

event of a single-node failure. Each node that has disk storage is equipped with at least one high-

performance SSD drive for data caching and rapid acknowledgment of write requests. Each node also is

equipped with additional disks, up to the platform's limit, for long-term storage and capacity.

A variety of HX-Series converged All-Flash nodes are supported with HyperFlex. The list below provides the supported HX-Series All-Flash converged nodes:

- Cisco HyperFlex HXAF240c-M5SX All-Flash Node

- Cisco HyperFlex HXAF240c-M4S All-Flash Node

- Cisco HyperFlex HXAF240c-M4SX All-Flash Node

For specifications of the listed products, refer to: https://www.cisco.com/c/en/us/products/hyperconverged-infrastructure/HyperFlex-hx-series/index.html#~stickynav=2

Cisco UCS C240 M5 Rack-Mount Server

The Cisco UCS C240 M5 Rack Server is a 2-socket, 2-Rack-Unit (2RU) rack server offering industry-leading performance and expandability. It supports a wide range of storage- and I/O-intensive infrastructure workloads, from big data and analytics to collaboration. Cisco UCS C-Series Rack Servers can be deployed as standalone servers or as part of a Cisco Unified Computing System managed environment to take advantage of Cisco's standards-based unified computing innovations that help reduce customers' Total Cost of Ownership (TCO) and increase their business agility.

In response to ever-increasing computing and data-intensive real-time workloads, the enterprise-class

Cisco UCS C240 M5 server extends the capabilities of the Cisco UCS portfolio in a 2RU form factor. It

incorporates the Intel® Xeon® Scalable processors, supporting up to 20 percent more cores per socket,

twice the memory capacity, and five times more

Non-Volatile Memory Express (NVMe) PCI Express (PCIe) Solid-State Disks (SSDs) compared to the

previous generation of servers. These improvements deliver significant performance and efficiency gains

that will improve your application performance. The C240 M5 delivers outstanding levels of storage

expandability with exceptional performance, with:

- Latest Intel Xeon Scalable CPUs with up to 28 cores per socket

- Up to 24 DDR4 DIMMs for improved performance

- Up to 26 hot-swappable Small-Form-Factor (SFF) 2.5-inch drives, including 2 rear hot-swappable SFF drives (up to 10 support NVMe PCIe SSDs on the NVMe-optimized chassis version), or 12 Large-Form-Factor (LFF) 3.5-inch drives plus 2 rear hot-swappable SFF drives

- Support for a 12-Gbps SAS modular RAID controller in a dedicated slot, leaving the remaining PCIe Generation 3.0 slots available for other expansion cards

- Modular LAN-On-Motherboard (mLOM) slot that can be used to install a Cisco UCS Virtual Interface Card (VIC) without consuming a PCIe slot, supporting dual 10- or 40-Gbps network connectivity

- Dual embedded Intel x550 10GBASE-T LAN-On-Motherboard (LOM) ports

- Modular M.2 or Secure Digital (SD) cards that can be used for boot

Cisco VIC Interface Cards

The Cisco UCS Virtual Interface Card 1385 improves flexibility, performance, and bandwidth for Cisco UCS

C-Series Rack Servers. It offers dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP+) 40 Gigabit

Ethernet and Fibre Channel over Ethernet (FCoE) in a half-height PCI Express (PCIe) adapter. The 1385 card

works with Cisco Nexus 40 Gigabit Ethernet (GE) and 10 GE switches for high-performance applications.

The Cisco VIC 1385 implements the Cisco Data Center Virtual Machine Fabric Extender (VM-FEX), which

unifies virtual and physical networking into a single infrastructure. The extender provides virtual-machine

visibility from the physical network and a consistent network operations model for physical and virtual

servers.

The Cisco UCS VIC 1387 Card is a dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP+) 40-

Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-

motherboard (mLOM) adapter installed in the Cisco UCS HX-Series Rack Servers. The VIC 1387 is used in

conjunction with the Cisco UCS 6332 or 6332-16UP model Fabric Interconnects.

The mLOM slot can be used to install a Cisco VIC without consuming a PCIe slot, which provides greater I/O

expandability. It incorporates next-generation converged network adapter (CNA) technology from Cisco,

providing investment protection for future feature releases. The card enables a policy-based, stateless, agile

server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host, each

dynamically configured as either a network interface card (NIC) or a host bus adapter (HBA). The personality

of the interfaces is set programmatically using the service profile associated with the server. The number,

type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, adapter settings,

bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all specified using the service

profile.

For more information, see:

https://www.cisco.com/c/en/us/products/interfaces-modules/ucs-virtual-interface-card-1385/index.html

https://www.cisco.com/c/en/us/products/interfaces-modules/ucs-virtual-interface-card-1387/index.html

Cisco Nexus 9000 Switches

The Cisco Nexus 9000 Series delivers proven high performance and density, low latency, and exceptional

power efficiency in a broad range of compact form factors. Operating in Cisco NX-OS Software mode or in

Application Centric Infrastructure (ACI) mode, these switches are ideal for traditional or fully automated data

center deployments.

The Cisco Nexus 9000 Series Switches offer both modular and fixed 10/40/100 Gigabit Ethernet switch

configurations with scalability up to 30 Tbps of non-blocking performance with less than five-microsecond

latency, 1152 x 10 Gbps or 288 x 40 Gbps non-blocking Layer 2 and Layer 3 Ethernet ports and wire speed

VXLAN gateway, bridging, and routing.

Figure 5  Cisco Nexus 9396PX

For more information, see: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-736967.html.

Intel® Scalable Processor Family

Intel® Xeon® Scalable processors provide a new foundation for secure, agile, multi-cloud data centers. This

platform provides businesses with breakthrough performance to handle system demands ranging from

entry-level cloud servers to compute-hungry tasks including real-time analytics, virtualized infrastructure,

and high performance computing. This processor family includes technologies for accelerating and securing

specific workloads.

Intel® Xeon® Scalable processors are now available in four feature configurations:

- Intel® Xeon® Bronze Processors with affordable performance for small business and basic storage

- Intel® Xeon® Silver Processors with essential performance and power efficiency

- Intel® Xeon® Gold Processors with workload-optimized performance and advanced reliability

- Intel® Xeon® Platinum Processors for demanding, mission-critical AI, analytics, and hybrid-cloud workloads

Figure 6  Intel® Xeon® Scalable Processor Family

Intel® Optane™ SSD DC P4800X Series

With an industry-leading combination of high throughput, low latency, high QoS and ultra-high endurance,

this innovative solution is optimized to break through data access bottlenecks by providing a new data

storage tier. The DC P4800X accelerates applications for fast caching and fast storage to increase scale per

server and reduce transaction costs for latency sensitive workloads. In addition, the DC P4800X enables

data centers to deploy bigger and more affordable datasets to gain new insights from large memory pools.

Data centers can explore two key use cases for the DC P4800X: fast storage or cache, or extended memory.

Fast storage or cache refers to the tiering and layering which enable a better memory-to-storage hierarchy.

The DC P4800X provides a new storage tier that breaks through the bottlenecks of traditional NAND storage

to accelerate applications and enable more work to get done per server.

Figure 7  Intel® SSD DC P4800X

The Cisco HyperFlex All-Flash platform can use the Intel Optane SSD DC P4800X as the caching drive to help improve system efficiency. The Optane SSD is an ideal caching solution for an HCI solution's demanding storage IO workload, while at the same time delivering consistent performance to applications. The caching layer SSDs have to simultaneously manage incoming application IO while responding to application read requests quickly and efficiently. The caching SSD must also deliver data to the storage tier SSD layer at the same time, all without slowing things down. The Intel Optane SSD, due to the unique capabilities of Optane memory media, can meet the demands of such a workload.

Red Hat OpenShift Container Platform

Red Hat OpenShift Container Platform is a container application platform that brings together Docker and Kubernetes and provides an API to manage these services. OpenShift Container Platform allows you to create and manage containers. Containers are standalone processes that run within their own environment, independent of the operating system and the underlying infrastructure. OpenShift helps with developing, deploying, and managing container-based applications. It provides a self-service platform to create, modify, and deploy applications on demand, thus enabling faster development and release life cycles. OpenShift

Container Platform has a microservices-based architecture of smaller, decoupled units that work together. It

runs on top of a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value

store.

Kubernetes Infrastructure

Within OpenShift Container Platform, Kubernetes manages containerized applications across a set of Docker

runtime hosts and provides mechanisms for deployment, maintenance, and application-scaling. The Docker

service packages, instantiates, and runs containerized applications.

A Kubernetes cluster consists of one or more masters and a set of nodes. This solution design includes HA functionality at the hardware as well as the software stack. The Kubernetes cluster is designed to run in HA mode with three master nodes and two infrastructure nodes to help ensure that the cluster has no single point of failure.
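As a quick way to confirm that layout, the hedged sketch below counts node roles through the Kubernetes API with the official Python client; the node-role.kubernetes.io/<role> label convention is the one applied by OpenShift 3.11, and the printed counts are only an example.

    # Count nodes by role to confirm the HA layout (e.g. 3 masters, 2 infra).
    from collections import Counter
    from kubernetes import client, config

    config.load_kube_config()             # use the cluster's kubeconfig
    roles = Counter()
    for node in client.CoreV1Api().list_node().items:
        for label in node.metadata.labels:
            if label.startswith("node-role.kubernetes.io/"):
                roles[label.split("/", 1)[1]] += 1

    print(dict(roles))                    # e.g. {'master': 3, 'infra': 2, 'compute': 4}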

Red Hat OpenShift Integrated Container Registry

OpenShift Container Platform provides an integrated container registry called OpenShift Container

Registry (OCR) that adds the ability to automatically provision new image repositories on demand. This

provides users with a built-in location for their application builds to push the resulting images. Whenever a

new image is pushed to OCR, the registry notifies OpenShift Container Platform about the new image,

passing along all the information about it, such as the namespace, name, and image metadata. Different

pieces of OpenShift Container Platform react to new images, creating new builds and deployments.

Docker

Red Hat OpenShift Container Platform uses the Docker runtime engine for containers.

Kubernetes

Red Hat OpenShift Container Platform is a complete container application platform that natively integrates

technologies like Docker and Kubernetes, a powerful container cluster management and orchestration

system.

Etcd

Etcd is a key-value store used in OpenShift Container Platform cluster. Etcd data store provides complete

cluster and endpoint states to the OpenShift API servers. Etcd data store furnishes information to API servers

about node status, network configurations, secrets, etc.
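For illustration of the key-value model only, the sketch below reads and writes a key with the python-etcd3 client; the endpoint, certificate paths, and key are placeholders, and access to OpenShift's etcd requires the cluster's client TLS certificates. A production cluster's etcd should not be modified this way.

    # Minimal etcd key-value round trip with the python-etcd3 client.
    # Endpoint, certificate paths, and the key are placeholders.
    import etcd3

    etcd = etcd3.client(
        host="master-0.example.com", port=2379,
        ca_cert="/etc/etcd/ca.crt",
        cert_cert="/etc/etcd/peer.crt",
        cert_key="/etc/etcd/peer.key",
    )

    etcd.put("/demo/app/config", "replicas=3")       # write a key
    value, metadata = etcd.get("/demo/app/config")   # read it back
    print(value.decode())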

Open vSwitch

Open vSwitch is an open-source implementation of a distributed virtual multilayer switch. It is designed to

enable effective network automation through programmatic extensions, while supporting standard

management interfaces and protocols such as 802.1ag, SPAN, LACP, and NetFlow. Open vSwitch provides

software-defined networking (SDN)-specific functions in the OpenShift Container Platform environment.

HAProxy

HAProxy is open source software that provides a high availability load balancer and proxy server for TCP and

HTTP-based applications, spreading requests across multiple servers. In this solution, HAProxy is deployed in a virtual machine in the VMware HA cluster and provides routing and load-balancing functions for Red Hat OpenShift applications. Another instance of HAProxy acts as an ingress router for all applications deployed in the Red Hat OpenShift cluster.

Red Hat Ansible Automation

Red Hat Ansible Automation is a powerful IT automation tool. It is capable of provisioning numerous types of

resources and deploying applications. It can configure and manage devices and operating system

components. Due to its simplicity, extensibility, and portability, this OpenShift solution deployment is based

largely on Ansible Playbooks.

Ansible is mainly used for installation and management of the Red Hat OpenShift Container Platform

deployment.
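A minimal sketch of driving the deployment from the bastion node follows; the inventory path and playbook locations are assumptions based on the openshift-ansible 3.11 packaging rather than values taken from this guide.

    # Run the openshift-ansible prerequisite and deploy_cluster playbooks in order.
    # INVENTORY and PLAYBOOK_DIR are assumed paths; adjust to the actual layout.
    import subprocess

    INVENTORY = "/etc/ansible/hosts"
    PLAYBOOK_DIR = "/usr/share/ansible/openshift-ansible"

    for playbook in ("playbooks/prerequisites.yml", "playbooks/deploy_cluster.yml"):
        subprocess.run(
            ["ansible-playbook", "-i", INVENTORY, f"{PLAYBOOK_DIR}/{playbook}"],
            check=True,   # stop if a playbook fails
        )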

Persistent Storage for Kubernetes - HyperFlex FlexVolume

Containers usually do not run as full operating system instances, but instead as standalone processes within an

isolated namespace from the host environment. Processes within containers run with a predefined, limited

slice of resources - such as CPU time, memory, storage and network bandwidth.

Typically, each container provides a single service to the rest of the cluster (such as a web server, a cache

or a database). These services can discover each other using a directory service.

The persistent storage needs of pods are provided as Volumes, which are visible to the containers as mount points inside their namespaces. These Volumes reside on the HyperFlex Data Platform and are presented to the

Kubernetes cluster by the HyperFlex FlexVolume plugin which allows application containers to take

advantage of the resiliency, performance and data management features that HyperFlex enables for other

virtualized workloads. The Cisco HyperFlex Storage Integration for Kubernetes allows HyperFlex to

dynamically provide persistent storage to Kubernetes Pods running on HyperFlex. The integration enables

orchestration of the entire persistent Volume object lifecycle to be offloaded and orchestrated by the

HyperFlex FlexVolume Storage Integration for Kubernetes. End users including developers can leverage

HyperFlex for their Kubernetes persistent storage requirements without the need for administrating backend

storage arrays.
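To illustrate how such a Volume surfaces as a mount point inside a container, the following minimal Pod specification is a hedged sketch; the Pod name, image, mount path, and claim name are hypothetical, and the claim is assumed to be bound to a HyperFlex-backed Persistent Volume.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-data                 # hypothetical Pod name
spec:
  containers:
    - name: web
      image: registry.access.redhat.com/rhscl/httpd-24-rhel7   # example application image
      volumeMounts:
        - name: app-data
          mountPath: /var/lib/app-data    # the Volume appears at this mount point inside the container
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: app-data-claim         # assumed to be bound to a HyperFlex-backed Persistent Volume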


With the HyperFlex Storage Integration for enterprise-grade Kubernetes packaged for OpenShift Container

Platform, each Persistent Volume object in OpenShift is represented by an iSCSI-based LUN residing in the

HyperFlex storage subsystem. Each iSCSI LUN is presented to virtual machines by the scvmclient service

running on the local ESXi host where the VMs reside. The VMs themselves are running the software iSCSI

initiator service which allows them to mount the iSCSI LUNs provided by the iSCSI target. Each ESXi host is

configured with a host-only vSwitch, where all iSCSI-based traffic between the iSCSI target (scvmclient

service in ESXi) and the iSCSI initiators (OpenShift node VMs running locally on that ESXi host) resides. The

figure below shows the FlexVolume driver high-level architecture.

Figure 8. HyperFlex FlexVolume High-Level Architecture

HyperFlex FlexVolume driver workflow and design details are illustrated in the figure below.

Figure 9. HyperFlex FlexVolume Plugin Design Details


The HyperFlex FlexVolume plug-in shown in the figure above, referred to simply as HyperFlex FlexVolume, provides persistent storage on the fly to Kubernetes pods running on the HyperFlex system.

HyperFlex FlexVolume Dynamic Provisioning

This subsection discusses the implementation details of dynamic storage provisioning in the HyperFlex system. Two components make up the Cisco HyperFlex FlexVolume Storage Integration for Kubernetes, and both work in tandem to dynamically provision storage in HyperFlex and, ultimately, provide that storage to the appropriate Kubernetes pod as a Persistent Volume object:

HyperFlex FlexVolume Plug-in

HyperFlex Kubernetes Storage Provisioner

HyperFlex FlexVolume Plug-in

The HyperFlex FlexVolume Plug-in is developed by Cisco Systems, leveraging the out-of-tree FlexVolume plug-in interface provided by Kubernetes. The HyperFlex FlexVolume plug-in manages connections to the HyperFlex cluster from the Kubernetes nodes and makes storage volumes available to containers through the Kubernetes FlexVolume interface.

HyperFlex Kubernetes Storage Provisioner

The HyperFlex Kubernetes Storage Provisioner is a container that is developed by Cisco Systems. This

container is deployed within the target Kubernetes cluster and serves as the storage provisioning

orchestration point for Persistent Volumes from HyperFlex. Developers submit their application storage

requirements using Persistent Volume Claims and reference a specific StorageClass associated with

HyperFlex. Kubernetes passes the required storage request to the HyperFlex Kubernetes Storage Provisioner

configured in the StorageClass.
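A hedged sketch of the objects involved follows; the StorageClass name, provisioner string, and parameter key are placeholders only, since the exact values are supplied with the HyperFlex Kubernetes Storage Integration package for OpenShift.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hx-flexvolume                  # hypothetical StorageClass name
provisioner: hyperflex.io/hxvolume     # placeholder; use the provisioner name shipped with the HyperFlex integration
parameters:
  datastore: hx-datastore-1            # placeholder parameter identifying the backing HyperFlex datastore
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data-claim                 # the claim referenced by application pods
spec:
  storageClassName: hx-flexvolume      # ties the claim to the HyperFlex-backed StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                    # requested capacity, fulfilled from the HyperFlex storage subsystem

When such a claim is created, Kubernetes passes the request to the provisioner named in the StorageClass; the resulting Persistent Volume is bound to the claim and mounted into the requesting pod through the FlexVolume plug-in.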

Figure 10. HyperFlex FlexVolume Storage Provisioner Design


Figure 11. HyperFlex FlexVolume Storage Provisioner Workflow


Solution Design

This section provides an overview of the design factors considered to make the system work as a single, highly available solution, as well as the hardware and software components used in this solution.

Architectural Overview

Red Hat OpenShift Container Platform is managed by the Kubernetes container orchestrator, which manages

containerized applications across a cluster of systems running the Docker container runtime. The physical

configuration of Red Hat OpenShift Container Platform is based on the Kubernetes cluster architecture.

OpenShift is a layered system designed to expose underlying Docker-formatted container image and

Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a

developer. For example, install Ruby, push code, and add MySQL. The concept of an application as a

separate object is removed in favor of more flexible composition of "services", allowing two web containers

to reuse a database or expose a database directly to the edge of the network.

Figure 12. Architectural Overview

This Red Hat OpenShift Container Platform reference architecture contains five types of nodes: bastion, master, infrastructure, application, and HA (load balancer).

Bastion Node

This is a dedicated node that serves as the main deployment and management server for the Red Hat

OpenShift cluster. It is used as the logon node for the cluster administrators to perform the system

deployment and management operations, such as running the Ansible OpenShift deployment Playbooks and


performing scale-out operations. The bastion node also runs DNS services for the OpenShift cluster nodes. The bastion node runs Red Hat Enterprise Linux 7.5 and can be deployed either on bare metal or as a VM running in an existing vSphere environment.

This design guide proposes using an existing VMware vSphere environment to host two VMs running the bastion node and the HX Installer OVA, serving the container platform and the HyperFlex cluster respectively. It is also expected that this environment is managed through a working vCenter host. The rest of the VMware vSphere infrastructure for the OpenShift Container Platform is managed through the HX Installer, HX Connect, and a vCenter instance, using a separate datacenter and resource pool.

Kubernetes Infrastructure

Within OpenShift Container Platform, Kubernetes manages containerized applications across a set of Docker

runtime hosts and provides mechanisms for deployment, maintenance, and application-scaling. The Docker

service packages, instantiates, and runs containerized applications.

A Kubernetes cluster consists of one or more masters and a set of nodes. This solution design includes HA functionality at both the hardware and software layers. The Kubernetes cluster is designed to run in HA mode with three master nodes, three infrastructure nodes, and two HA (load balancer) nodes to ensure that the cluster has no single point of failure.

OpenShift Master Nodes

The OpenShift Container Platform master is a server that performs control functions for the whole cluster

environment. It is responsible for the creation, scheduling, and management of all objects specific to Red

Hat OpenShift. It includes API, controller manager, and scheduler capabilities in one OpenShift binary. It is

also a common practice to install an etcd key-value store on OpenShift masters to achieve a low-latency link

between etcd and OpenShift masters. It is recommended that you run both Red Hat OpenShift masters and

etcd in highly available environments. This can be achieved by running multiple OpenShift masters in

conjunction with an external active-passive load balancer and the clustering functions of etcd. The master

manages nodes in the Kubernetes cluster and schedules the pods (single/group of containers) to run on the

application nodes. The OpenShift master node runs Red Hat Enterprise Linux 7.5.

Table 1. Master Components

API Server: The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also assigns pods to nodes and synchronizes pod information with service configuration. Can be run as a standalone process.

etcd: etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the desired state. etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services.

Controller Manager Server: The controller manager server watches etcd for changes to replication controller objects and then uses the API to enforce the desired state. Can be run as a standalone process. Several such processes create a cluster with one active leader at a time.


To mitigate concerns about availability of the master, the solution design uses a high-availability solution to

configure masters and ensure that the cluster has no single point of failure. When using the native HA

method with HAProxy, master components have the following availability:

Table 2. Availability Matrix with HAProxy

Role                       Style           Description
etcd                       Active-active   Fully redundant deployment with load balancing
API Server                 Active-active   Managed by HAProxy
Controller Manager Server  Active-passive  One instance is elected as a cluster leader at a time
HAProxy                    Active-passive  Balances load between API master endpoints

OpenShift Infrastructure Nodes

The OpenShift infrastructure node runs infrastructure-specific services: the Docker registry, the HAProxy router, and monitoring services such as metrics, logging, and Prometheus. The Docker registry stores application images in the form of containers. The HAProxy router provides routing functions for Red Hat OpenShift Container

application pods. It currently supports HTTP(S) traffic and TLS-enabled traffic via Server Name Indication

(SNI). Additional applications and services can be deployed on OpenShift infrastructure nodes. The

OpenShift infrastructure node runs Red Hat Enterprise Linux 7.5.

OpenShift Application Nodes

The OpenShift application nodes run containerized applications created and deployed by developers. An

OpenShift application node contains the OpenShift node components combined into a single binary, which

can be used by OpenShift masters to schedule and control containers. A Red Hat OpenShift application node

runs Red Hat Enterprise Linux 7.5.

The table below lists the functions and roles for each class of node in this solution for the OpenShift Container Platform.

Table 3. Types of Nodes in the OpenShift Container Platform Cluster and Their Roles

Bastion Node:
- System deployment and management operations
- Runs Ansible playbooks for automated cluster deployment and operations
- DNS services for the OpenShift Container Platform

Master Nodes:
- Kubernetes services
- Etcd data store
- Controller Manager and Scheduler
- API services

Infrastructure Nodes:
- Container registry*
- HAProxy router
- Monitoring services (metrics/logging/Prometheus)

Application Nodes:
- Application container pods
- Docker runtime

HA Nodes:
- Load balancing services
- Keepalived service

* In this solution design, the container registry is backed with an HX FlexVolume.

HAProxy Load Balancer

In the reference architecture, an HAProxy load balancer is used; however, an existing on-premises load balancer can also be utilized. HAProxy is the entry point for many Red Hat OpenShift Container Platform components. The OpenShift Container Platform console is accessible via the master nodes, and access is spread across multiple instances to provide load balancing as well as high availability.

Application traffic passes through the Red Hat OpenShift Container Platform Router on its way to the

container processes. The Red Hat OpenShift Container Platform Router is a reverse proxy service container

that multiplexes the traffic to multiple containers making up a scaled application running inside Red Hat

OpenShift Container Platform. The load balancer used by infra nodes acts as the public view for the Red Hat

OpenShift Container Platform applications.

The destination for the master and application traffic must be set in the load balancer configuration after each instance is created and its floating IP address is assigned, and before the installation. A single HAProxy load balancer can forward both sets of traffic to different destinations.

In this solution, HAProxy provides routing and load-balancing functions for Red Hat OpenShift applications. Another instance of HAProxy acts as an ingress router for all applications deployed in the Red Hat OpenShift cluster. Both instances are replicated to every infrastructure node and managed by additional components (Keepalived, OpenShift services) to provide redundancy and high availability.
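As a minimal sketch of how an application is published through the HAProxy-based OpenShift router, the Route below uses hypothetical host, service, and port values; the wildcard application subdomain is assumed to resolve to the router virtual IP.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: webapp                         # hypothetical application route
spec:
  host: webapp.apps.example.com        # assumed wildcard application subdomain resolving to the router VIP
  to:
    kind: Service
    name: webapp                       # service backing the application pods
  port:
    targetPort: 8080                   # container port exposed by the service
  tls:
    termination: edge                  # TLS terminated at the HAProxy router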

KeepAlived

Keepalived is routing software that provides simple and robust facilities for load balancing and high availability to Linux-based infrastructure. In this solution, Keepalived is used to provide virtual IP management for the HAProxy instances to ensure a highly available OpenShift Container Platform cluster. Keepalived is deployed on the infrastructure nodes, as they also act as HAProxy load balancers. In the event of a failure of one infrastructure node, Keepalived automatically moves the virtual IPs to the second node, which acts as a backup. With Keepalived, the Red Hat OpenShift infrastructure becomes highly available and resistant to failures.

OpenShift Networking

OpenShift Container Platform supports the Kubernetes Container Network Interface (CNI) as the interface

between OpenShift Container Platform and Kubernetes. Software-defined networking (SDN) plug-ins are a powerful and flexible way to match network capabilities to networking needs.


The solution design relies on Kubernetes to ensure the pods are able to network with each other, and get an

IP address from an internal network for each pod. This ensures all containers within the pod behave as if

they were on the same host. Giving each pod its own IP address means that pods can be treated like

physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load

balancing, application configuration, and migration. Creating links between pods is not necessary, and it is

not recommended, as pods communicate with each other directly using their IP addresses. To interact with the pods, services are created; using services, communication patterns between the pods can be defined.

OpenShift SDN

OpenShift Container Platform deploys a software-defined networking (SDN) approach for connecting pods in

an OpenShift Container Platform cluster. The OpenShift SDN connects all pods across all node hosts,

providing a unified cluster network. OpenShift SDN is installed and configured by default as part of the

Ansible-based installation procedure.

OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified

cluster network that enables communication between pods across the OpenShift Container Platform cluster.

This pod network is established and maintained by the OpenShift SDN, which configures an overlay network

using Open vSwitch (OVS).

This CVD design guide focuses on use cases associated with project-level isolation through the ovs-multitenant SDN plug-in.

Network Isolation

The ovs-multitenant plug-in provides network isolation. When a packet exits a pod assigned to a non-default project, the OVS bridge tags the packet with the project's virtual network ID (VNID), and the OVS bridge only allows the packet to be delivered to the destination pod if the VNIDs match.
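The VNID assigned to each project can be inspected through its NetNamespace object; the sketch below is illustrative only, with a hypothetical project name and VNID value.

apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: project-a                      # matches the project (namespace) name
netname: project-a
netid: 8642719                         # VNID assigned to this project; the default project uses VNID 0, which is global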

OpenShift Container Platform DNS

OpenShift Container Platform has a built-in DNS so that services can be reached by their DNS names as well as by the service IP and port. OpenShift Container Platform supports split DNS by running SkyDNS on the master, which answers DNS queries for services. The master listens on port 53 by default. DNS services are required

for the following use cases:

- Multiple services, such as frontend and backend services, each running multiple pods, where the frontend pods need to communicate with the backend pods

- Services that are deleted and re-created may be assigned a new set of IP addresses, but the service names remain unchanged, so the services can continue to communicate without any code changes; inter-service communication can take place with names rather than IP addresses, as shown in the example below
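For example, frontend pods can reach the backend through a stable service name instead of pod IP addresses; the minimal Service sketch below uses hypothetical names, with the cluster DNS name it resolves to noted in the trailing comment.

apiVersion: v1
kind: Service
metadata:
  name: backend                        # hypothetical backend service
  namespace: myproject                 # hypothetical project
spec:
  selector:
    app: backend                       # matches the labels on the backend pods
  ports:
    - port: 5432                       # port exposed by the service
      targetPort: 5432                 # port the backend containers listen on
# Pods can reach this service at backend.myproject.svc.cluster.local
# (or simply "backend" from within the same project), even if the backing
# pods are rescheduled and receive new IP addresses.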

HyperFlex Multi-VIC Feature

The Cisco HyperFlex platform supports multiple VICs (multi-VIC) on hyperconverged nodes for maximum network design flexibility and physical redundancy with automatic failover. As OpenShift Container Platform on VMware vSphere provides an adaptive application container platform for the enterprise to meet changing business requirements, the Cisco HyperFlex system with multi-VIC support achieves the same goals from the infrastructure perspective. In the event of a VIC failure, all services seamlessly fail over to the other VIC, with the full 80 Gbps of bandwidth available for the management and data control planes.

Figure 13. Multi-VIC Support on HyperFlex

Reference Architecture

Physical Topology

The reference architecture proposed in this solution design uses UCS Managed HyperFlex hyperconverged servers to provide the compute, storage, network, and data management resources for deploying Red Hat OpenShift Container Platform. A standard four-node cluster with C240 HX All-Flash nodes is used. These cluster nodes act as ESXi hosts and also provide persistent storage to the virtual infrastructure and the application pods. A total of twelve VMs are hosted on these four ESXi nodes for deploying the different types of OpenShift nodes, as shown in the physical topology below.

A datastore backed by the HX Data Platform stores the VM virtual disks. It also provides storage for the Docker Engine, the etcd data store, and OpenShift Container Platform images. The following figure shows the physical architecture used in this reference design.


Figure 14. Red Hat OpenShift Platform Architectural Diagram


Figure 15. Back Plane Connectivity Diagram

Network Layout

The following figure shows the logical network connectivity, with details on the type of network that each OpenShift VM and the entire solution ecosystem uses.


Figure 16. Network Layout Diagram

Node Placement Details

OpenShift Container Platform on the Cisco HyperFlex system with VMware vSphere runs application containers on virtual machines configured as OpenShift nodes. Each of the OpenShift nodes has its role assigned during deployment, and physical resource allocation to the virtual machines depends on the roles they serve. Following the virtual machine configuration outlined in the table below is a critical requirement in order to optimize resource utilization for infrastructure services, leaving the rest for the actual application containers.

OpenShift cluster node placement is important to ensure that the VM workload is distributed equally across all four HX cluster nodes. This design guide proposes virtual machine sizing after considering the following points:

- Total CPU and memory resources available across the HX cluster after reserving capacity for the HyperFlex Storage Controller VMs and the hypervisor itself

- Co-location of the etcd data store services with the OpenShift master node services

- Overall scale limits, such as pods per VM and VMs per HX node from the FlexVolume/HyperFlex platform, and OpenShift Container Platform scale limits such as pods per node

- DR/vMotion scenarios in the event of HX node failures

The following figure shows a high-level architecture for OpenShift Container Platform on the HyperFlex system, and the table below lists the suggested VM requirements for each node type.

Table 4. Virtual Machine Configurations

Master Nodes (3 VMs): 6 vCPU, 32GB RAM; Disk 1: 1 x 60GB - OS (RHEL 7.5); Disk 2: 1 x 50GB - Docker volume; Disk 3: 1 x 50GB - EmptyDir volume; Disk 4: 1 x 50GB - etcd volume

HA Nodes (2 VMs): 2 vCPU, 8GB RAM; Disk 1: 1 x 60GB - OS (RHEL 7.5); Disk 2: 1 x 50GB - Docker volume; Disk 3: 1 x 50GB - EmptyDir volume

Infra Nodes (3 VMs): 2 vCPU, 8GB RAM; Disk 1: 1 x 60GB - OS (RHEL 7.5); Disk 2: 1 x 50GB - Docker volume; Disk 3: 1 x 50GB - EmptyDir volume

App Nodes (4 VMs): 8 vCPU, 64GB RAM; Disk 1: 1 x 60GB - OS (RHEL 7.5); Disk 2: 1 x 50GB - Docker volume; Disk 3: 1 x 50GB - EmptyDir volume

Bastion Node: 2 vCPU, 2GB RAM; Disk 1: 1 x 60GB - OS (RHEL 7.5)

Figure 17. OpenShift Node Details on a HyperFlex Cluster

It is recommended to configure an anti-affinity rule for the master VMs after they are installed, to ensure that the master node VMs always run on separate HX cluster nodes. The following diagram shows how the master, infrastructure, and application nodes should be placed on the HyperFlex ESXi cluster nodes.


Figure 18. OpenShift Node Virtual Machine Placement on HX Cluster

Hardware and Software Revisions

The table below lists the firmware versions validated in this RHOCP solution.

Table 5. Hardware Revisions

Cisco UCS Manager: 4.0(1b)
Cisco UCS 6332-16UP Fabric Interconnects: 5.0(3)N2(4.01b)
Cisco HXAF240c M5SX HyperFlex System: 4.0(1b)C

For information about the OS version and system type, see Cisco Hardware and Software Compatibility.

The table below lists the software versions validated in this RHOCP solution.

Table 6. Software Versions

Cisco HyperFlex Data Platform: 3.5.1a-31118
Red Hat Enterprise Linux: 7.5
Red Hat OpenShift Container Platform: 3.11
VMware vSphere: 6.5.0 (build 8935087)
Kubernetes: 1.11
Docker: 1.13.1
Red Hat Ansible Engine: 2.6.7
Etcd: 3.2.22
Open vSwitch: 3.10
HyperFlex FlexVolume: 1.0.284

Solution Components

This validated solution is comprised of the following components.

Table 7. Solution Components

Component: ESXi hosts for HA, master, infra, and application node VMs
Model: Cisco HXAF240c M5SX HyperFlex Servers
Quantity: 4 nodes, each with:
- CPU: 2 x [email protected], 18 cores each
- Memory: 12 x 32GB 2666 DIMMs, 384GB total
- SSDs: 1 x 240GB M.2 6G SATA SSD; 1 x 2.5-inch U.2 375GB Intel Optane 3D XPoint DC P4800X NVMe medium-performance SSD; 6 x 960GB 2.5-inch Enterprise Value 6G SATA SSD
- Network card: 1 x Cisco UCS VIC 1387 for 40-Gigabit network I/O
- RAID controller: Cisco 12G Modular SAS controller

Component: Fabric Interconnects
Model: Cisco UCS 6332-16UP Fabric Interconnects
Quantity: 2

Component: Nexus Switches
Model: Cisco Nexus 9396PX
Quantity: 2


Conclusion

This solution design, based on Cisco HyperFlex infrastructure with VMware vSphere for a Red Hat OpenShift Container Platform 3.11 cluster, provides an optimal, production-ready deployment process built on the latest best practices and a stable, highly available environment for running enterprise-grade application containers. The Cisco HyperFlex Data Platform is a hyperconverged software appliance that enables Cisco servers to act as both compute and storage nodes, and its built-in features, such as data distribution, optimization, and localization, enable application containers to perform better with higher throughput. The Cisco HX Data Platform provides a highly fault-tolerant distributed storage system that preserves data integrity and optimizes performance for virtual machine (VM) storage workloads. In addition, the Cisco HyperFlex FlexVolume driver addresses the need for persistent storage for the stateful application container pods managed by Kubernetes. For enterprise IT, this design is tailor-made for running containerized, production-ready applications, with quick and easy integration into DevOps and CI/CD models for application development, addressing immediate business needs and reducing time to market. Enterprises can accelerate on the path to an enterprise-grade Kubernetes solution with Red Hat OpenShift Container Platform running on Cisco HyperFlex infrastructure with VMware vSphere.


Resources

Cisco UCS Infrastructure for Red Hat OpenShift Container Platform Deployment Guide:

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_openshift.html

Deploying and Managing OpenShift 3.9 on VMware vSphere: https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_vmware_vsphere/index

FlexPod Datacenter with VMware vSphere 6.5 Design Guide:

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi65design.html

OpenShift: https://www.openshift.com/

Red Hat OpenShift Container Platform 3.11 Architecture: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/architecture/

Day Two Operations: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/day_two_operations_guide/

CLI Reference: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/cli_reference/

Red Hat OpenShift: https://www.redhat.com/en/technologies/cloud-computing/openshift


About the Authors

Rajesh Kharya, Technical Leader, Cisco UCS Solutions Engineering, Cisco Systems Inc.

Rajesh Kharya is a technical lead with the Solutions Engineering team in the Computing Systems Product Group. His focus includes open-source technologies, cloud, software-defined storage, containers, and automation.

Acknowledgements

Vishwanath Jakka, Cisco Systems, Inc.

Babu Mahadevan, Cisco Systems, Inc.

Antonios Dakopoulos, Red Hat

Chris Morgan, Red Hat

Lukasz Sztachanski, Intel

Lukasz Luczaj, Intel

Seema Mehta, Intel

