DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

A Study of OpenStack Networking Performance

PHILIP OLSSON

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION

Master’s Thesis at CSC
Supervisor: Dilian Gurov
Examiner: Johan Håstad

Supervisor at Ericsson AB: Max Shatokhin

June 17, 2016


Abstract

Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computational power over several geographic locations, on-demand resource allocation, and flexible payment options. Virtualization is a technology used in conjunction with cloud technology and offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add additional layers of software to the server environment in order to provide these services. The additional layers add latency, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact the latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not too significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.


Referat

En studie av OpenStack nätverksprestanda

Cloud services are a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computational power over different geographic areas, on-demand resource allocation, and flexible payment methods. Virtualization is a technique used together with cloud technology that offers the possibility to share the physical resources of a host machine between different virtual machines running on the same physical computer. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add further software layers to the server environment to make these techniques possible. The extra software layers add response-time overhead, which can be a problem for applications that require fast response times. The primary goal of this degree project is to investigate how the extra networking components in the OpenStack cloud platform affect response time. The networking components were evaluated under different load scenarios, and the results indicate that the extra response time caused by the added networking components does not matter much in the network setup used. A significant performance degradation was seen in the applications running on the virtual machine, which accounted for the larger part of the added response time.


Glossary

blade Server computer optimized to minimize power consumption and physical space.

IaaS Infrastructure as a Service.

IP Internet Protocol.

KVM Kernel-based Virtual Machine.

OS Operating System.

OVS Open vSwitch.

QEMU Quick Emulator.

SLA Service Level Agreement.

VLAN Virtual Local Area Network.

VM Virtual Machine.


Contents

Glossary

1 Introduction
  1.1 Motivation
  1.2 Problem statement
  1.3 Approach
  1.4 Contributions
  1.5 Delimitations
  1.6 Structure Of The Thesis

2 Background
  2.1 Virtualization and Cloud Computing
    2.1.1 Virtualization
    2.1.2 Cloud Computing
  2.2 OpenStack
    2.2.1 Keystone Identity Service
    2.2.2 Nova Compute
    2.2.3 Neutron Networking
    2.2.4 Cinder Block Storage Service
    2.2.5 Glance Image Service
    2.2.6 Swift Object Storage

3 Related work

4 Experimental Setup
  4.1 Physical Setup For Native And Virtual Deployment
  4.2 Native Blade Deployment Architecture
  4.3 Virtual Deployment Architecture

5 Method
  5.1 Load Scenarios
  5.2 Measuring Network Performance
  5.3 Measuring Server Performance
  5.4 Measuring Load Balancer Performance
  5.5 Measuring Packet Delivery In And Out From VM

6 Results
  6.1 Time Spent In Blade Native Versus Virtual Deployment
  6.2 Time Distribution Native Versus Virtual Deployment
  6.3 Network Components Impact On Latency
  6.4 Server Performance Impact On Latency
  6.5 Load Balancer Impact On Latency
  6.6 Packet Delivery In And Out From VM Impact On Latency

7 Discussion and Analysis
  7.1 Network Components Performance
  7.2 Load Balancer Performance
  7.3 Server Performance
  7.4 Packet Delivery In And Out From The VM

8 Conclusion

9 Future Work

10 Social, Ethical, Economic and Sustainability aspects

Bibliography

Chapter 1

Introduction

This chapter introduces the concepts of the thesis, the motivation for the project, the investigated problem, and the chosen approach. It also states the delimitations of the project and finally describes the structure of the thesis.

1.1 Motivation

Migrating applications to a cloud environment has in recent years become a popular strategy among software companies. Deploying architecture and software in cloud environments, such as OpenStack, provides benefits such as spreading storage and computational power over several geographic locations, on-demand resource allocation, pay-as-you-go services, and small hardware investments [18]. Data centers exploit virtualization techniques, which can increase the resource utilization of physical servers by letting several virtual machines (VMs), isolated from each other, run simultaneously on the same physical machine [10, 26]. Virtualization and cloud techniques make it easy to scale systems horizontally, i.e., to add more servers to the server environment, and can make maintenance of both software and hardware easier.

Depending on who the stakeholder is, a cloud environment can provide different benefits. By using virtualization techniques, standardized hardware can be used, which can make it less costly to buy hardware for large data centers [10]. Customers who want to deploy arbitrary applications inside VMs in the cloud have the opportunity to pay only for the hardware and bandwidth their applications need to run. Furthermore, customers or companies that use a cloud platform can easily scale to their needs, which limits the financial costs and the investment burden usually associated with scaling up or down.

Compared to traditional server environments, the cloud environment adds additional layers of software abstraction to the server environment in order to provide its services. The additional layers are, for example, extra networking components and the hypervisor layer. The hypervisor is responsible for hosting one or several VMs on a physical host. Open vSwitch (OVS) and Linux Bridges are referred to as networking components, responsible for switching traffic inside a cloud. It is possible to configure the networking in a cloud in several different ways, and in this thesis an OpenStack provider network setup with OVS and Linux Bridges is studied. OVS and Linux Bridges are commonly used in cloud computing platforms [22]. The additional layers in a cloud environment add latency, which can be crucial for latency-sensitive applications. Therefore, it is important to understand where the additional latency comes from in order to prevent it, if possible.

Currently, Ericsson's systems run on many different specialized hardware components. By migrating products to a cloud environment, it is possible to use standardized hardware and, in doing so, possibly lower the costs of investing in new hardware and of maintaining both hardware and software.

However, reducing costs by using standardized hardware is sometimes referred to as a myth. Even though cheaper standardized hardware could be used, it might not lower the costs, since specialized hardware can have other characteristics that provide benefits standardized hardware does not have, for example better performance per price unit, lower power consumption, and improved cooling technology. By being able to run systems on both standardized and specialized hardware, customers are offered the option to choose what they want.

In a traditional deployment, also referred to as a native deployment, both upgrading software and scaling up are complicated. The real benefit of virtualization and cloud technologies for Ericsson is the ability to horizontally scale the system when needed and to make software maintenance easier, which implies lower costs.

When Ericsson migrates their MTAS product to run in the cloud, also referred to as a virtual deployment, they experience higher latency from the system in comparison to the native deployment. The MTAS product is, for example, responsible for setting up different kinds of calls between subscribers and for handing over calls to different subsystems when subscribers move between various network zones, such as moving from a 4G to a 3G or 2G network zone. There are well-defined requirements on latency, and in the native deployment the latency is well below these requirements. In a virtualized deployment the latency moves closer to the threshold of what can be tolerated. The latency must not exceed the requirements, in order to provide the best possible user experience.

1.2 Problem statement

The main question to answer in this thesis is: Given a provider network setup in a virtual deployment, how much impact do the added networking components in the virtual deployment have on latency, in comparison to the latency of a native deployment?


1.3 Approach

The networking components in the virtual deployment were benchmarked under different load scenarios using TCP as the transport protocol with a payload of 1000 or 2000 bytes. Identical tests were performed in the virtual and native deployments, where the latency of the native deployment gives a baseline reference. The benchmarking was done by calculating how much time, on average, the TCP packets spent between the different networking components in the virtual deployment under the different scenarios. Chapter 5 provides a detailed description of the method.

1.4 Contributions

The focus of this thesis is to determine how much latency a set of commonly used networking components (OVS and Linux Bridges) adds to a virtualized environment. Other components in the system were also benchmarked to investigate how much of the added latency they contributed. More specifically, the other investigated components were an echo server, a load balancer, and the process of passing IP packets in and out of the VM. A full description of this can be found in chapter 5.

1.5 Delimitations

The latency of interest is how much longer a packet spends inside a physical node in the virtual deployment compared to a physical node in the native deployment. The focus of the thesis is to determine how much latency the networking components add in the virtual deployment. The experimental setup consists of a single compute node responding to requests, also referred to as a payload. Other aspects could be considered, such as CPU consumption, memory usage, disk I/O performance, or maximum network throughput, but they are out of the scope of this project. The load balancer is optimized for a cluster containing two or more payloads. Therefore, it is not possible to guarantee that the profiling results for the load balancer will be the same in a production cluster.

1.6 Structure Of The Thesis

This thesis report consists of ten chapters. Chapter 1 introduced the concepts and the research goal of this thesis. Chapter 2 covers the necessary background knowledge. Chapter 3 presents previous related work. Chapter 4 presents the testbed used in both the native and virtual deployments. Chapter 5 describes the method used in this thesis. Chapter 6 presents the results from the conducted experiments. Chapter 7 discusses and analyzes the obtained results. Chapter 8 presents conclusions from the obtained results. Chapter 9 suggests some topics for further research related to cloud computing and virtualization. Finally, chapter 10 discusses social, ethical, economic and sustainability aspects related to this project.


Chapter 2

Background

This chapter covers the necessary background knowledge needed to answer the research question. In particular, the chapter introduces the concepts of virtualization and cloud computing, and gives a brief description of the OpenStack cloud platform.

2.1 Virtualization and Cloud Computing

2.1.1 Virtualization

Virtualization in computer science refers to creating virtual versions of computer components including, but not limited to, network and storage devices, hardware platforms, and operating systems. A virtual machine (VM) is an emulation of a computer system running inside another computer system. A VM is often referred to as a guest running inside a host. A hypervisor is a software abstraction of hardware responsible for hosting one or several guest operating systems (OSs) simultaneously on a single physical machine [29]. Figure 2.1 shows a high-level overview of the traditional computer architecture. The operating system runs on top of the hardware, and the applications run on the operating system.

Figure 2.1. Traditional computer architecture with the application, OS and hardware layers.

There are two types of hypervisors, often referred to as Type 1 and Type 2 hypervisors [5]. Figure 2.2 shows an overview of a virtual computer architecture illustrating the two types of hypervisor architecture.

A Type 1 hypervisor, often referred to as a bare-metal hypervisor, runs directly on the hardware of the host. Usually, a chosen guest system is responsible for the management and supervision of new guests on the hypervisor [5]. KVM and VMware ESXi are examples of bare-metal hypervisors [25].

A Type 2 hypervisor requires an OS to first be installed on the computer. The installed OS is referred to as the host OS. A Type 2 hypervisor runs on top of the host OS, and each guest OS runs as a normal process on the host OS [5]. An example of a Type 2 hypervisor is Oracle VirtualBox.

Figure 2.2. Overview of the computer architecture with a Type 1 and a Type 2 hypervisor.

2.1.2 Cloud Computing

Cloud computing is a technology that builds on virtualization. It is a service that delivers a platform to manage virtualized resources such as hardware and VMs by letting end users add or remove VM instances in a cloud cluster, configure the IP infrastructure, and monitor service level agreements (SLAs). SLAs make sure that only the agreed resources are used, and a cloud platform should offer the possibility of resource extension or contraction to easily scale up or down dynamically. OpenStack is a free, open-source cloud computing platform used by hundreds of the world's largest companies to run their businesses [12].


2.2 OpenStack

OpenStack is an operating system consisting of a set of open-source software tools that allow users to create private and public clouds [12, 17, 26]. The OpenStack operating system manages pools of compute, storage, and networking resources, which are configurable via both a web interface and a command-line interface [17]. OpenStack clouds are powered by modular components called OpenStack projects [17]. Each OpenStack project has its own area of responsibility, and it is possible to add any number of projects to an OpenStack cloud to satisfy the requirements that need to be met. Together the projects build up a complete Infrastructure as a Service (IaaS) platform. The main components are the six core services of OpenStack, which are:

• Swift - Object storage

• Keystone - Authentication and authorization service

• Nova - Compute

• Neutron - Networking

• Cinder - Block storage

• Glance - Image service

A typical optional service to include in an OpenStack cloud is Horizon. Horizon is a web-based interface that lets end users manage and configure the cloud. A typical OpenStack cloud consists of several nodes, where each node hosts one or several services, for example:

• A controller node - manages the cloud

• A network node - providing network services to the cloud

• One or several compute nodes - running the virtual machines

• One or several storage nodes - responsible for storing data and virtual machine images

2.2.1 Keystone Identity Service

Keystone is the OpenStack service for authentication and authorization. Keystone is used to authenticate and authorize people and API calls from other OpenStack services.


2.2.2 Nova Compute

The Nova compute service manages running instances in the OpenStack cloud. Nova interacts with several other services, such as Keystone to perform authentication and authorization, Horizon to provide an administrative web interface, and Glance to provide images. The compute service is responsible for booting the VMs with the virtual machine images provided by Glance, scheduling the VMs, and connecting them to virtual networks inside the cloud.

Nova consists of several components, and the most important ones are the API server, the scheduler, and the messaging queue. The API server allows end users and other OpenStack services to communicate with the cloud controller [9]. The collection of compute components is called the cloud controller and represents the global state of the cloud. The cloud controller communicates with other OpenStack services through the messaging queue [11]. The scheduler's task is to allocate physical resources to virtual resources by identifying the most suitable compute node. The messaging queue provides communication between processes in OpenStack [15].

Nova supports several hypervisors by providing an abstraction layer for compute drivers. QEMU/KVM with libvirt is the default and the only hypervisor fully supported by OpenStack [13, 14]. Other hypervisors supported by Nova are, for example, Hyper-V, VMware, XenServer, and Xen via libvirt. The support for these hypervisors is not equal, and not all of them support the same features [2, 14]. Kernel-based Virtual Machine (KVM) is a kernel module that provides virtualization infrastructure for Linux machines on x86 hardware. QEMU is a generic machine emulator and virtualizer. When QEMU is used as an emulator, it can run programs and operating systems made for one machine on another machine, for example running software made for an ARM board on a personal desktop. To run QEMU as a virtualizer, it has to run together with Xen or KVM. When used together with KVM, it can be used as virtualization software for the x86 architecture capable of achieving close to native performance by letting the guest code execute directly on the physical host's CPU. [19]
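The thesis does not include any hypervisor-management code, but as a small, hedged illustration of the libvirt abstraction that Nova's default QEMU/KVM driver builds on, the following Python sketch uses the libvirt-python bindings to connect to a local hypervisor and list its guests. The connection URI and the availability of the bindings on the host are assumptions.

    import libvirt  # libvirt-python bindings; assumed to be installed on the host

    # A read-only connection to the local QEMU/KVM hypervisor is enough for listing.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()  # state() returns a (state, reason) pair
            print(f"guest={dom.name()} state={state}")
    finally:
        conn.close()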

2.2.3 Neutron Networking

The Neutron networking service provides networking as a service for other OpenStack services, e.g., OpenStack Compute. Neutron uses Keystone for authentication and authorization of all API requests. Neutron handles the virtual networking infrastructure, which includes creation and management of networks, switches, subnets, routers, firewalls, and virtual private networks (VPNs). When a new VM is created, the Nova compute API communicates with the Neutron API to connect the VM correctly to the specified networks. It does so by plugging the virtual network interface cards (vNICs) of the VM into the particular virtual networks with the use of Open vSwitch (OVS) bridges. OVS is an open-source virtual switch used to bridge traffic between VMs and external networks by connecting the interfaces of VMs and physical network interface cards. OVS is intended to be used in multi-node virtualization deployments, for which the Linux Bridge is not well suited [27].

Tenant Networks

OpenStack supports multitenancy, where a tenant is a group of OpenStack users [11]. Each tenant in the cloud requires its own logical network to isolate access to compute resources. In OpenStack, this is provided by network isolation. Neutron provides support for four different types of network isolation and overlay technologies to isolate applications and tenants from each other in a cloud environment. [16]

Flat: All hosts and VMs exist on the same network. There is no Virtual Local Area Network (VLAN) tagging or network segregation taking place, making it possible for two VMs belonging to different tenants to see each other's traffic.

VLAN: VLAN allows separation of provider or tenant network traffic by using VLAN IDs that map to real VLANs in a data center. Neutron enables users to create multiple provider or tenant networks that correspond to the physical networks in a data center. [4, 16]

GRE and VXLAN: Generic Routing Encapsulation (GRE) and Virtual Extensible LAN (VXLAN) are encapsulation protocols used to create overlay networks that increase the scalability of large computing deployments. These techniques make it possible to create a layer-2 network on top of a layer-3 network in order to provide and control communication between VMs across different networks. The source and destination switches can then act as if they have a virtual point-to-point connection between them.

Network Deployment scenario

There are many different ways to configure the Neutron networking service. One scenario is to use the Neutron ML2 plugin with Open vSwitch and a provider network. Provider networks map to existing physical networks in a data center [16]. The advantages of using provider networks are simplicity, better performance, and reliability, with the drawback of less flexibility. The networking software components handling layer-3 operations are the ones that impact performance and reliability the most. Better performance and reliability are achieved by moving layer-3 operations to the physical network infrastructure. [22]

To send traffic between VMs and the external network in a provider network scenario, the minimum requirements are one controller node and one compute node. The controller node requires two network interfaces: management and provider. The physical network infrastructure switches/routes traffic to external networks from a generic network to which the provider interface is connected. The difference between the provider network and the external network is that the provider network is available to instances, while the external network is only available via a router. A provider network is a specific VLAN, and a generic network is a network providing one or more VLANs. [22]

Figure 2.3. Overview of a provider network layout where all nodes connect directly to the physical network infrastructure. [22]

The compute node also needs the management and provider interfaces. The provider interface also connects to a generic network that the physical network infrastructure routes to external networks. A general overview of the provider network layout can be seen in Figure 2.3. As seen there, all the nodes connect to the physical network infrastructure, which takes care of the switching and routing. All nodes run switching services to provide connectivity to the VMs within the nodes. The controller node runs the Dynamic Host Configuration Protocol (DHCP) service.

Figure 2.4 shows the networking components of the controller node. A tap port or tap device is a virtual network kernel device operating on layer-2 Ethernet frames. Hypervisors use tap devices to deliver Ethernet frames to guest operating systems. Patch ports are ports which connect OVS bridges. When traffic is sent from a VM to the external network, the integration bridge, br-int, adds an internal tag for the provider network and forwards the traffic to the provider bridge, br-ex. The provider bridge replaces the internal tag with the real VLAN segmentation ID and forwards the traffic to the physical network. Figure 2.4 contains two provider networks to illustrate that it is possible to have several provider networks connected to the same physical network. The controller node has a DHCP agent for each provider network that provides the network with DHCP services.

Figure 2.4. Network components of the controller node. [22]

As seen in Figure 2.5, the compute node also has the integration and provider bridges, like the controller node. In addition to this, the compute node has a Linux Bridge to manage security groups for instances, due to limitations in Open vSwitch and iptables [22, 8]. Figure 2.5 also contains two provider networks for illustration purposes.


Figure 2.5. Network components of the compute node. [22]

Figure 2.6 describes the network traffic flow and the components involved when IP packets are sent between a VM and an external network. In essence, when an IP packet is sent from a VM to the Internet, it goes through the three bridges on the compute node and gets delivered to the physical network infrastructure. The physical network infrastructure does the switching and routing out to the Internet. Each IP packet sent from a VM to the external network is processed by 13 different virtual or physical network devices before reaching the public Internet.


Figure 2.6. Traffic flow between a virtual machine and an external network. [22]

2.2.4 Cinder Block Storage Service

Cinder is the block storage service for OpenStack. It is designed to provide block storage resources to end users through a reference implementation such as Logical Volume Management (LVM) or Network File System (NFS), or through other plugin drivers for storage. Cinder provides end users with basic API requests to create, delete and attach volumes to virtual machines, as well as more advanced functions such as extending a volume, creating snapshots, or cloning a volume. Cinder lets end users request and consume resources without requiring knowledge of where the storage is located or of what type of device the storage is deployed on. [3]


2.2.5 Glance Image Service

Glance is the image service used in OpenStack. Images contain already installed operating systems. Glance provides functionality to discover, register, and retrieve virtual machine images. The Glance API allows users to retrieve both the actual image and metadata about the VM image. Glance supports multiple back-end systems that can be used as storage, e.g., simple file systems or object storage systems like Swift. [6]

2.2.6 Swift Object Storage

Swift is built to provide storage for large data sets with a simple API. It scales well and uses eventual consistency to provide high availability and durability for the stored data. [23]


Chapter 3

Related work

Ristov et al. [21] investigated how the performance of compute- and memory-intensive web services changed as they were migrated to a cloud environment. According to their study, the performance in a cloud setup could drop by around 73% compared to when the same hardware setup was used without virtualization.

Xie et al. [28] studied the maximum speed of database transactions with 30 users by comparing a bare-metal physical computer and a virtual computer launched by the VMware hypervisor. The experimental setup was identical on both machines, and the database used was Oracle. Their results revealed that the bare-metal machine had a performance gain of 12.42% compared to the virtual machine.

Barker and Shenoy [1] studied how varying background load from other virtual machines running on the same physical cloud server interfered with the performance of latency-sensitive tasks. The measurements were done in a Xen-based laboratory cloud, and the background load was systematically introduced to the system level by level. The background load consisted of CPU and disk jitter from other virtual machines. According to the results, the throughput could decrease due to the background load from other virtual machines. The CPU throughput was fair when the CPU allocations were capped by the hypervisor. Due to significant disk interference, up to 75% degradation in disk latency was experienced when the system was under sustained background load.

Rathore et al. [20] compared Kernel-based Virtual Machines (KVM) and Linux Containers (LXC) as techniques to be used for virtual routers. They found that KVM is a potential bottleneck at high loads due to packet switching between kernel and userspace.

Yamato [30] compared the performance of KVM, Docker containers, and bare-metal machines. Compared to the bare-metal machine, the results showed that the Docker containers had a performance degradation of around 75%, and KVM had a performance degradation of around 60%.

Wang and Ng [7] studied the end-to-end networking performance in an Amazon EC2 cloud. They observed that when the physical resources are shared, higher latency and unstable throughput of TCP and UDP messages were experienced in the instances. Their conclusion was that virtualization and processor sharing were causing the unstable network characteristics.


Chapter 4

Experimental Setup

This chapter describes the architecture of the testbed. It describes the physical setup for both the native and virtual deployments. The setup in the virtual deployment is similar to the OpenStack reference configuration described in section 2.2.3, with a few changes.

4.1 Physical Setup For Native And Virtual Deployment

Figure 4.1 shows the physical architecture for the native and virtual deployments. The physical testbed used in the experiments consisted of a set of blade servers and two routers located in the server cabinet. The load machine was connected to a Juniper lab router, which forwarded the traffic to the cabinet.

When the load machine sends traffic to the blade server, the traffic passes via the Juniper router and one of the two routers attached to the backplane of the cabinet. To ensure high availability, there are two routers connected to the backplane, one active and one passive, both Open Shortest Path First (OSPF) and Bidirectional Forwarding Detection (BFD) compatible. OSPF is a routing protocol that calculates the shortest path in a network with Dijkstra's algorithm. The protocol detects link failures and recalculates the path if a link goes down. BFD is a low-overhead protocol used to detect link failures in a network and is used in conjunction with OSPF to detect link failures faster. The physical blades in a cabinet are identical and have 64 GB of RAM and a 10-core 2.40 GHz CPU with hyper-threading, which makes 20 cores available for the hypervisor. The backplane of the cabinet is connected to the two routers with a 10 Gb link. In both the native and virtual deployments, there are two cluster controllers (CC1 and CC2) and one active payload, all running on separate blades. In the virtual deployment, there is one additional active blade for the cloud controller. For the scope of this project, the cluster controllers only let the payload boot from them via the network. The payload is the machine that ends up processing the traffic. The experimental setup is a minimal configuration; a real production cluster consists of several active payloads. To balance the load between the machines in a production cluster, a set of the payloads is equipped with load balancing functionality, which distributes the traffic among the payloads in a round-robin way. The active payload used in the experiments was equipped with the load balancer, and all traffic sent in and out of the payload passed through it.

Figure 4.1. A general overview of the physical testbed used in the experiments.


4.2 Native Blade Deployment Architecture

In the native deployment, there are in total three active blades: the two cluster controllers and the payload, which serves traffic. The physical architecture of the native testbed is shown in Figure 4.2.

Figure 4.2. The physical architecture of the native deployment testbed.

An overview of the components in a payload blade when an application is running on a native node is shown in Figure 4.3. The physical blade runs SUSE Linux as the OS, and the application runs on top of the OS. The load balancer listens on the eth0 interface for incoming traffic and forwards it to the O&M interface. For outgoing traffic, the load balancer listens on the O&M interface and forwards the traffic to the backplane of the cabinet via the eth0 interface. The server running on the node listens and sends traffic on the O&M interface.


Figure 4.3. The components of the compute blade in the native deployment.

4.3 Virtual Deployment Architecture

The physical architecture of the virtual testbed is shown in Figure 4.4. The physical setup consists of four identical physical blades in one OpenStack installation. One of the blades runs the payload as a VM, which serves traffic; two blades run the two cluster controllers (CC1 and CC2) as VMs; and the fourth blade runs the cloud controller that manages OpenStack. Note that there is no network node controlling the routing. As explained in section 2.2.3, the physical network infrastructure handles the layer-3 operations.

The physical nodes run Ubuntu 14.04 as the host operating system. The nodes use QEMU/KVM version 2.0.0 as the hypervisor. The payload is a VM running SUSE Linux as its OS, as in the native case. The virtual hardware used by the payload consists of 10 virtual CPUs and 58 GB of RAM. The reason why the virtual payload only has 58 GB of RAM available, compared to 64 GB of RAM in the native case, is that the VM shares memory with the hypervisor and host OS. Mirantis 7.0 (https://www.mirantis.com/) was used to deploy the OpenStack cloud on the nodes.

Figure 4.5 shows an overview of the components inside a virtual blade hosting the payload. The tap device is connected to the Ethernet interface of the VM and to the OVS integration bridge (br-int). Br-int and the OVS provider bridge (br-prv) are connected to each other via a patch port. The Linux Bridge (br-aux) connects the provider bridge with the physical network interface eth0 by sharing the interface (pe) between the Linux Bridge and the provider bridge. The experiments conducted in this thesis used OVS version 2.3.1 and Linux bridge-utils version 1.5.

When an IP packet in a virtual node travels from the physical interface of the node to the application, it has to pass through five additional virtual interfaces compared to the native deployment.


Figure 4.4. The physical architecture of the virtual deployment test setup.


Figure 4.5. The components of the compute blade in the virtual deployment.


Chapter 5

Method

This chapter presents the method and tools used to profile the performance of the components in the native and virtual deployments, with Ericsson software installed on the nodes.

In both the native and virtual cases, a Python client located on the load machine sent concurrent requests to a Java server located on the payload. The server was multithreaded in order to respond to the concurrent requests simultaneously. A new thread was created by the server for each incoming TCP connection, and the operating system on the node scheduled the threads on the different cores of the CPU.

The nodes used in the experiments had pre-installed Ericsson software. One of the installed software components was a load balancer that distributes traffic over several payloads in a production cluster. The load balancer internally uses a tunneling technology to send packets between different payloads. To measure how much time it takes for a TCP packet to travel from the physical network interface of the blade to the O&M interface of the payload, only one payload could be active. The reason for this is that the load balancer corrupts the network flow, so it is not possible to determine which payload a request will end up in when several payloads are active at the same time. With only one active payload it is also possible to measure how much time the load balancer spends on processing an incoming and outgoing request, as well as to calculate how long it took to send packets in and out of the VM in the virtual deployment. An important note is that the profiling results for the load balancer do not necessarily imply that its performance will be the same in a production cluster with several active payloads, since it is optimized for such a cluster.
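The echo server used in the experiments was a Java program and its code is not included in the thesis. The sketch below is only a Python analog of the described behaviour: one thread per incoming TCP connection, a response with an equal number of bytes, and per-request timing as described in section 5.3. The port number and the fixed message size are assumptions.

    import socket
    import threading
    import time

    MESSAGE_SIZE = 1000  # bytes per request; 1000 or 2000 in the experiments
    PORT = 5000          # hypothetical port

    def handle(conn: socket.socket) -> None:
        # Timing starts when the worker thread begins and stops after the full
        # response has been written back, mirroring the measurement in section 5.3.
        start = time.perf_counter_ns()
        with conn:
            data = b""
            while len(data) < MESSAGE_SIZE:              # read the whole request payload
                chunk = conn.recv(MESSAGE_SIZE - len(data))
                if not chunk:
                    break
                data += chunk
            conn.sendall(data)                           # echo back an equal number of bytes
        elapsed_ms = (time.perf_counter_ns() - start) / 1e6
        print(f"processing time: {elapsed_ms:.3f} ms")

    def serve() -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen()
            while True:
                conn, _addr = srv.accept()
                # One thread per TCP connection, as in the Java server.
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        serve()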

5.1 Load Scenarios

Identical tests were performed in the native and virtual deployments. The results from the native deployment are used as a baseline for comparison between the two systems, and the latency was measured with respect to how long a packet spent inside the physical blade in both setups.

From the load machine, TCP requests carrying a payload of 1000 or 2000 bytes were sent to the server inside the payload. The server responded with an equal amount of bytes to the load machine. The reason for choosing 1000 and 2000 bytes of payload in a TCP packet was to investigate whether segmentation of TCP packets affected the performance. The experiments were carried out by varying the load sent from the client to the server. The load is defined as the number of requests per second sent from the load machine. In the experiments, the load varied from 100 up to 1000 requests/second. The client on the load machine gradually increased the load from 100 to 1000 requests/second by letting each load level run for 10 seconds and then increasing the load by 100 requests/second. To be able to track the correct TCP packets, a specific session ID was inserted into the payload of each TCP packet. By doing so, it was possible to track how long a particular packet spent between different interfaces and how long it took for the server to process a specific request.
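The load client is likewise not listed in the thesis. The following Python sketch is a hypothetical illustration of the described ramp: each load level runs for 10 seconds, the rate increases in steps of 100 up to 1000 requests/second, and a session ID is embedded at the start of every payload so that the packet can be tracked across interfaces. The server address, port, and payload layout are assumptions.

    import itertools
    import socket
    import threading
    import time

    SERVER = ("192.0.2.10", 5000)    # placeholder address of the payload blade
    PAYLOAD_SIZE = 1000              # 1000 or 2000 bytes in the experiments

    def one_request(session_id: int) -> None:
        # Embed the session ID at the start of the payload and pad to full size.
        body = f"{session_id:016d}".encode().ljust(PAYLOAD_SIZE, b"x")
        with socket.create_connection(SERVER) as s:
            s.sendall(body)
            received = 0
            while received < PAYLOAD_SIZE:               # wait for the echoed response
                chunk = s.recv(PAYLOAD_SIZE - received)
                if not chunk:
                    break
                received += len(chunk)

    def run() -> None:
        ids = itertools.count()
        for rate in range(100, 1100, 100):               # 100 .. 1000 requests/second
            interval = 1.0 / rate
            end = time.monotonic() + 10.0                # each load level runs for 10 s
            while time.monotonic() < end:
                # Each request runs in its own thread so requests can overlap.
                threading.Thread(target=one_request,
                                 args=(next(ids),), daemon=True).start()
                time.sleep(interval)                     # approximate pacing of the load

    if __name__ == "__main__":
        run()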

The results are presented as the average times for the different components under the different load scenarios. There are other metrics by which latency can be measured, such as percentiles of the measured times, but these were not considered in this thesis. In addition, a relative cost is calculated, showing how much of the total added latency in the virtual deployment, compared to the native deployment, a specific component accounts for. The relative cost is given in percent and is defined in equation 5.1:

\[
cost = \frac{time_{measured}}{latency_{diff}} \cdot 100 \tag{5.1}
\]

where time_measured is the time taken by the component that was measured and latency_diff is the latency difference between the virtual and native deployments.

5.2 Measuring Network Performance

To measure the amount of time that packets spent inside the node in the native and virtual deployments, and to profile where time was being lost on the virtual node, tcpdump (http://www.tcpdump.org/) was used. Tcpdump is a Unix-based tool used to sniff network traffic on a given network interface; it lets the user know at what time a packet arrived at the interface. The timestamp provided by tcpdump reflects the time when the kernel applied the timestamp to the packet, and the time is as accurate as the kernel's clock [24]. The clock source used by the system was tsc.

Tcpdump was used on a set of the network interfaces that the packets traveled through in a node in both deployments. By sniffing at the network interfaces, it was possible to calculate how much time the intermediate networking steps took in the virtual setup. It was also possible to calculate the amount of time that the load balancer took to process traffic, the amount of time taken to send packets in and out of the VM, and the total amount of time a packet spent inside the physical blade.

More specifically, tcpdump was used on the following network interfaces in the virtual deployment:

• eth0 - the physical interface of the blade, to which the incoming TCP packets first arrive

• pe - the shared interface between the Linux Bridge, br-aux, and the OVS bridge br-prv

• tap - the tap device used by the hypervisor to inject the packets into the network stack of the virtual machine

• eth2 - the interface of the payload, to which the incoming TCP packets first arrive before being processed by the load balancer

and for the native deployment, tcpdump was used on the following interface:

• eth0 - the physical interface of the blade, to which the incoming TCP packets first arrive

The time spent between two network components in the virtual deployment is calculated as the average time spent between two interfaces on the way in and out of the blade. The times calculated were thus the average times it took for a packet to travel (a sketch of the corresponding computation follows the list):

• from eth0 to the interface pe and vice versa, and

• from pe to the tap device and vice versa
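The extraction and matching step is not spelled out in the thesis. As a rough sketch under stated assumptions, the snippet below assumes that each per-interface tcpdump capture has already been reduced to a CSV file of 'session_id,direction,timestamp' rows (the file names and layout are hypothetical) and computes the average per-hop times by matching packets on session ID and direction.

    import csv
    from collections import defaultdict
    from statistics import mean

    def load(path: str) -> dict:
        # Rows: session_id,direction,timestamp where direction is "in" or "out"
        # and timestamp is the tcpdump time in seconds at that interface.
        stamps = {}
        with open(path, newline="") as f:
            for session, direction, ts in csv.reader(f):
                stamps[(session, direction)] = float(ts)
        return stamps

    def hop_averages(a: dict, b: dict) -> dict:
        """Average time (ms) between interfaces a and b, per direction.

        a is assumed to be the interface closer to the physical NIC, so
        inbound traffic passes a before b and outbound traffic passes b before a.
        """
        deltas = defaultdict(list)
        for key, t_a in a.items():
            if key in b:
                _session, direction = key
                delta = b[key] - t_a if direction == "in" else t_a - b[key]
                deltas[direction].append(delta * 1000.0)   # seconds -> milliseconds
        return {d: mean(v) for d, v in deltas.items()}

    eth0 = load("eth0.csv")   # hypothetical per-interface summaries
    pe = load("pe.csv")
    tap = load("tap.csv")
    print("eth0 <-> pe :", hop_averages(eth0, pe))
    print("pe   <-> tap:", hop_averages(pe, tap))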

5.3 Measuring Server Performance

For each request processed by the server in the native and virtual deployments, the processing time was measured. This was done to investigate whether there was a difference between the processing times in the native and virtual setups. The server accepted incoming requests and then created a new thread of the type java.lang.Thread, in which the rest of the computational work was done. The thread read all bytes from the payload of the TCP packet and responded with an equal amount of bytes. The timing was started when a new thread was created and stopped after all bytes had been written to the output stream. To get the processing time, the Java System.nanoTime method was used.


5.4 Measuring Load Balancer Performance

The average amount of time taken by the load balancer to process packets, time_LB, is calculated as shown in equation 5.2:

\[
time_{LB} = (eth2_{out} - eth2_{in}) - serverProcessingTime \tag{5.2}
\]

where eth2_out is the time when a packet reached the eth2 interface on the way out from the VM, eth2_in is the time when a packet reached the eth2 interface on the way into the VM, and serverProcessingTime is the amount of time that the server spent on processing the packet.

5.5 Measuring Packet Delivery In And Out From VM

The average time taken to send packets from the tap device to the virtual interface of the VM and vice versa, time_tap-vNIC, is calculated as shown in equation 5.3:

\[
time_{\text{tap-vNIC}} = (tap_{out} - tap_{in}) - (eth2_{out} - eth2_{in}) \tag{5.3}
\]

where tap_out is the time when a packet reached the tap device on the way out from the VM, tap_in is the time when a packet reached the tap device before going into the VM, eth2_out is the time when a packet reached the eth2 interface on the way out from the VM, and eth2_in is the time when a packet reached the eth2 network interface when going into the VM. The reason why the time difference in this case is not calculated in the same manner as for the networking components described in Section 5.2 is that the tap device and the vNIC eth2 are located on different machines (the host and the guest). Even though they are located on the same physical machine, their relative clocks are out of sync, and therefore the time cannot be calculated in the same way.


Chapter 6

Results

This chapter presents the results of the measurements conducted in the native and virtual deployments. The results consist of a comparison of how much time a packet spends inside a native and a virtual blade, how the time is distributed over the different components in the blade, and how the networking components impact the latency in the virtual deployment. The performance of the load balancer and the server is also presented. Finally, the impact on latency of packet passing in and out of the VM is presented.

6.1 Time Spent In Blade Native Versus Virtual Deployment

Figure 6.1 shows the average time a packet spent inside the blade in the native and virtual deployments with varying load.

Figure 6.1. Average time packets spent inside a blade in the native and virtual deployments for different loads and payload sizes.


Table 6.1 details the average factor of how much longer a packet spent inside a blade in the virtual deployment compared to a blade in the native deployment, for different loads and payload sizes. The results indicate that adding a cloud layer increases the latency by a factor of at least 2.7 and at most 5.2. The results also suggest that as the load increases, the latency gap between the virtual and native deployments increases as well. In eight out of the ten cases, the factor between virtual and native latency was higher with a payload of 2000 bytes.

Load (requests/second)    Factor virtual/native, 1000 B    Factor virtual/native, 2000 B
100                       2.7                              3.8
200                       3.2                              3.5
300                       3.8                              3.5
400                       3.2                              4.0
500                       3.2                              4.9
600                       3.6                              5.2
700                       3.5                              4.5
800                       4.8                              4.7
900                       3.9                              4.1
1000                      5.0                              5.2

Table 6.1. The average factor of how much longer a packet spent inside a blade in the virtual deployment compared to a blade in the native deployment, for different loads and payload sizes.

The added latency for the different load scenarios and payload sizes in the virtual deployment, in comparison to the native deployment, is detailed in Table 6.2.

Load (requests/second)    1000 bytes, time (ms)    2000 bytes, time (ms)
100                       0.89                     1.2
200                       0.77                     0.85
300                       0.89                     0.79
400                       0.66                     0.91
500                       0.61                     1.2
600                       0.59                     1.2
700                       0.57                     0.96
800                       0.85                     0.95
900                       0.71                     0.84
1000                      0.89                     1.1

Table 6.2. Difference in latency in milliseconds between the virtual and native deployments for various loads and payload sizes.


6.2 Time Distribution Native Versus Virtual Deployment

This section presents how the time was distributed over the different components inside a blade in the native and virtual deployments.

Figures 6.2 and 6.3 show the average time distribution for the different components inside the blade in the native deployment. In the figures, the line Load balancer + O&M in/out avg is the average amount of time spent by the load balancer to process a packet and send it to the server via the O&M interface, and vice versa, for different loads and payload sizes. The line Server processing time avg is the average amount of time it took for the server to process a request for different loads and payload sizes.

Figure 6.2. Results of the average time distribution when packets traveled between the components on the native blade with a TCP payload size of 1000 bytes.

Figure 6.3. Results of the average time distribution when packets traveled between the components on the native blade with a TCP payload size of 2000 bytes.


The figures 6.4 and 6.5 show the average time distribution for the di�erentcomponents inside a blade on virtual deployment for di�erent loads and payloadsizes. The line eth from/to prv avg is the average amount of time it took for apacket to travel from the pNIC eth0 of the blade to the vNIC pe and vice versa.The line prv from/to tap is the average time it took for a packet to travel from thepNIC pe to the tap device of the host via the OVS integration bridge and vice versa.The line VM in/out is the average time it took for a packet to travel from the tapdevice of the host to the eth2 vNIC of the VM and vice versa.

Figure 6.4. Results of the average time distribution when packets traveled between the components on the virtual blade with a TCP payload size of 1000 bytes.

Figure 6.5. Results of the average time distribution when packets traveled between the components on the virtual blade with a TCP payload size of 2000 bytes.
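Per-hop times of the kind shown in Figures 6.4 and 6.5 can be obtained by capturing the same packets at two measurement points and subtracting the capture timestamps. The sketch below shows one way such per-hop averages can be computed from tcpdump text output; the timestamp format (tcpdump -tt style), the file names and the use of the TCP sequence number as the matching key are assumptions made for illustration, not a description of the exact tooling used in this thesis.

import re

# Matches "tcpdump -tt"-style lines such as:
# "1461582505.123456 IP 10.0.0.1.5000 > 10.0.0.2.80: Flags [P.], seq 1:1001, ..."
LINE = re.compile(r'^(\d+\.\d+) IP .* seq (\d+):')

def parse_capture(path):
    """Map TCP sequence number -> capture timestamp (seconds) for one interface."""
    stamps = {}
    with open(path) as capture:
        for line in capture:
            match = LINE.match(line)
            if match:
                stamps[int(match.group(2))] = float(match.group(1))
    return stamps

def average_hop_ms(capture_a, capture_b):
    """Average time in ms between seeing the same segment at interface A and interface B."""
    a, b = parse_capture(capture_a), parse_capture(capture_b)
    deltas = [(b[seq] - a[seq]) * 1000.0 for seq in a.keys() & b.keys()]
    return sum(deltas) / len(deltas) if deltas else None

# Hypothetical capture files from the eth0 pNIC and the pe interface.
print(average_hop_ms('eth0.txt', 'pe.txt'))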


6.3 Network Components Impact On Latency

The extra networking components added in the virtual deployment are the Linux bridge (br-aux) and the two OVS bridges, the provider bridge (br-prv) and the integration bridge (br-int). Table 6.3 shows the average amount of time, and its relative cost as a share of the total added latency in virtual deployment, when a packet traveled between the interfaces eth0 and pe on the way in and out of the blade under different loads and payload sizes.

Load (Requests/second)   Time ms, 1000 B   Relative cost %, 1000 B   Time ms, 2000 B   Relative cost %, 2000 B
100                      0.057             6.4                       0.055             4.4
200                      0.045             5.8                       0.042             4.9
300                      0.041             4.6                       0.035             4.4
400                      0.035             5.2                       0.041             4.5
500                      0.029             4.7                       0.025             2.1
600                      0.029             5.0                       0.023             2.0
700                      0.023             4.0                       0.022             2.3
800                      0.021             2.5                       0.027             2.8
900                      0.022             3.0                       0.020             2.3
1000                     0.021             2.4                       0.021             2.0

Table 6.3. Average amount of time, and its relative cost as a share of the total added latency in virtual deployment, when a packet traveled between the interfaces eth0 and pe on the way in and out of the blade under different loads and payload sizes.
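The relative cost column can be reproduced, up to rounding of the published values, from the component time and the total added latency of Table 6.2: the component time is divided by the total added latency for the same load and payload size. A minimal sketch of that calculation, using the 100 requests/second, 1000-byte row of Tables 6.2 and 6.3 as the example:

def relative_cost_percent(component_ms, total_added_ms):
    """Share in percent of the total added latency attributable to one component."""
    return 100.0 * component_ms / total_added_ms

# eth0 <-> pe took 0.057 ms on average, while the whole virtual deployment
# added 0.89 ms compared to native deployment (Tables 6.3 and 6.2).
print(round(relative_cost_percent(0.057, 0.89), 1))  # -> 6.4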

Table 6.4 shows the average amount of time, and its relative cost as a share of the total added latency in virtual deployment, when a packet traveled between the interface pe and the tap device of the host on the way in and out of the blade under different loads and payload sizes.


Load (Requests/second)   Time ms, 1000 B   Relative cost %, 1000 B   Time ms, 2000 B   Relative cost %, 2000 B
100                      0.0062            0.70                      0.0067            0.54
200                      0.0056            0.73                      0.0055            0.64
300                      0.0055            0.62                      0.0050            0.63
400                      0.0051            0.77                      0.0048            0.52
500                      0.0048            0.78                      0.0037            0.31
600                      0.0046            0.79                      0.0033            0.28
700                      0.0034            0.59                      0.0031            0.32
800                      0.0028            0.33                      0.0032            0.34
900                      0.0027            0.38                      0.0029            0.35
1000                     0.0028            0.31                      0.0031            0.28

Table 6.4. Average amount of time, and its relative cost as a share of the total added latency in virtual deployment, when a packet traveled between the interface pe and the tap device on the way in and out of the blade under different loads and payload sizes.

Table 6.5 shows the average amount of time, and the relative cost of the total added latency in the virtual deployment, that the networking components contributed to under different loads and payload sizes.

Load (Requests/second)   Time ms, 1000 B   Relative cost %, 1000 B   Time ms, 2000 B   Relative cost %, 2000 B
100                      0.063             7.1                       0.062             5.0
200                      0.051             6.5                       0.047             5.5
300                      0.046             5.2                       0.040             5.0
400                      0.040             6.0                       0.046             5.1
500                      0.034             5.5                       0.029             2.4
600                      0.034             5.7                       0.027             2.3
700                      0.027             4.6                       0.025             2.6
800                      0.024             2.9                       0.030             3.2
900                      0.024             3.4                       0.023             2.7
1000                     0.024             2.7                       0.024             2.2

Table 6.5. The total amount of time, and the relative cost of the total added latency in virtual deployment, that the networking components contributed to.


6.4 Server Performance Impact On Latency

Figure 6.6 details the relationship between the server performance on native and virtual deployment under different loads and payload sizes. The results show a significant increase in the processing time of the server in virtual deployment in comparison to native deployment.

Figure 6.6. Average server performance for different loads and payloads.

Table 6.6 shows a more detailed relationship between native and virtual server performance and the impact the server had on the total added latency in virtual deployment.

Load (Requests/second)   Factor virtual/native, 1000 B   Relative cost %, 1000 B   Factor virtual/native, 2000 B   Relative cost %, 2000 B
100                      2.3                             56                        2.4                             46
200                      2.4                             54                        2.2                             45
300                      2.6                             50                        2.4                             52
400                      2.4                             54                        2.4                             44
500                      2.3                             48                        2.5                             33
600                      2.6                             50                        3.3                             46
700                      2.6                             49                        2.8                             43
800                      3.0                             40                        3.1                             47
900                      2.7                             45                        2.5                             40
1000                     3.4                             44                        3.1                             41

Table 6.6. The relationship between virtual and native server performance and the relative cost of the added latency in virtual deployment that the server contributed to.


6.5 Load Balancer Impact On Latency

Figure 6.7 shows an overview of the performance of the load balancer related to latency in native and virtual deployment. As in the case of the server performance in Section 6.4, the results show an increase in processing time in virtual deployment. Table 6.7 details the relationship between the processing times of the load balancer in virtual and native deployment. It also shows how much of the added latency in virtual deployment the load balancer contributed under different loads and payload sizes.

Figure 6.7. Average load balancer performance for different loads and payloads.

Load (Requests/second)   Factor virtual/native, 1000 B   Relative cost %, 1000 B   Factor virtual/native, 2000 B   Relative cost %, 2000 B
100                      3.0                             26                        12                              42
200                      5.6                             30                        21                              41
300                      8.6                             37                        14                              36
400                      5.3                             31                        16                              45
500                      5.7                             38                        23                              60
600                      5.4                             36                        14                              48
700                      5.2                             37                        11                              49
800                      9.2                             51                        10                              44
900                      6.6                             45                        8.9                             52
1000                     8.2                             49                        16                              52

Table 6.7. The relationship between virtual and native load balancer performance and the relative cost of the added latency in virtual deployment that the load balancer contributed to.


6.6 Packet Delivery In And Out From VM Impact On Latency

In virtual deployment the payload runs as a VM inside the host machine. Because of that, when a TCP packet is sent from outside the VM to the VM, the packet has to be injected onto the network stack of the VM from the host machine. The tap device is a software component located on the host machine responsible for injecting the packets from the host machine into the VM. It is also the device of the physical host that first receives the packets when they are sent out from the VM.
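To make the role of the tap device concrete, the sketch below opens a tap interface through the standard Linux /dev/net/tun API and reads a single Ethernet frame from it. The interface name tap-demo is a hypothetical example and the snippet requires root privileges; it only illustrates the kind of kernel interface a tap device exposes and is not the code path used by QEMU/KVM or OpenStack itself.

import fcntl
import os
import struct

# Constants from <linux/if_tun.h>.
TUNSETIFF = 0x400454ca
IFF_TAP = 0x0002    # deliver raw Ethernet frames (IFF_TUN would deliver IP packets)
IFF_NO_PI = 0x1000  # do not prepend the packet-information header

# Attach to (or create) a tap interface named "tap-demo"; needs CAP_NET_ADMIN.
tap_fd = os.open('/dev/net/tun', os.O_RDWR)
ifreq = struct.pack('16sH', b'tap-demo', IFF_TAP | IFF_NO_PI)
fcntl.ioctl(tap_fd, TUNSETIFF, ifreq)

# Frames the host sends towards tap-demo can now be read from this file
# descriptor, and frames written to it appear to the host as if they arrived
# on the device. This is the same mechanism a hypervisor relies on to move
# packets between the host bridge and the vNIC of a VM.
frame = os.read(tap_fd, 2048)
print(len(frame), 'bytes read from tap-demo')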

Table 6.8 shows the average amount of time it took to send packets from the tap device of the host to the eth2 network interface of the VM and vice versa for different loads and payload sizes. It also displays the average relative cost that this process contributed with respect to the total added latency in the virtual deployment.

Load (Requests/second)   Time ms, 1000 B   Relative cost %, 1000 B   Time ms, 2000 B   Relative cost %, 2000 B
100                      0.093             10                        0.087             7.0
200                      0.075             9.7                       0.068             7.9
300                      0.068             7.6                       0.061             7.6
400                      0.059             8.9                       0.054             5.9
500                      0.052             8.6                       0.052             4.4
600                      0.051             8.6                       0.052             4.4
700                      0.050             8.7                       0.050             5.1
800                      0.048             5.6                       0.050             5.3
900                      0.047             6.5                       0.045             5.4
1000                     0.045             5.1                       0.046             4.3

Table 6.8. The average time taken for a packet to be delivered from the tap device to the eth2 interface of the VM and vice versa for different loads and payload sizes. The relative cost of the total added latency in the virtual deployment that this process contributed is shown in the relative cost columns.


Chapter 7

Discussion and Analysis

This chapter presents a discussion and an analysis of the observed results. The performance of the networking components is evaluated and a discussion of the other investigated components is presented.

7.1 Network Components Performance

The load scenarios tested in this thesis are realistic and even exceed the normal limit of what a set of payloads would serve in a production environment. A set of payloads in a production cluster usually serves a maximum of 300 requests/second of real traffic. Even though the payload in the TCP packets was not real telephony data, this does not affect how the networking components perform, since they are not responsible for handling the payload but only for forwarding packets. The TCP payload sizes used in the experiments are also realistic, since the packets sent within a production cluster, and from the external network to the cluster, have a size of one to two MTUs, i.e., 1500 to 3000 bytes. The goal of the thesis was to investigate how the added networking components affected the latency. The results show that the networking components are responsible for 2.2 % up to 7.1 % of the total amount of added latency in virtual deployment. This corresponds to an added latency of between 0.023 and 0.063 milliseconds. The impact of the networking components is regarded as good, and an addition of at most 0.063 milliseconds is not regarded as significant. However, if a lot of traffic is passed between VMs internally in the cluster, the significance of this overhead can theoretically increase.

The latency added between the interface pe and the tap device is about a factor of 10 smaller than the latency added between the interfaces eth0 and pe. The time columns of Tables 6.3 and 6.4 show a trend in which the average travel time between the eth0 and pe interfaces, and between the pe interface and the tap device, is reduced as the load increases. This is most probably due to caching mechanisms. The same trend can be seen in Table 6.5, where the aggregated times for the networking components are calculated. In addition, the impact that the networking components have on the total added latency in virtual deployment is also reduced, which can be seen in the percentage columns of Tables 6.3, 6.4 and 6.5. The results also indicate that there is no significant difference in the performance of the networking components when the TCP payload varies between 1000 and 2000 bytes.

As stated in the OpenStack documentation [22], networking with a provider network setup has better performance compared to the classic scenario. The classic scenario requires one or more network nodes, typically located on another blade, which traffic has to pass through when traveling to and from an external network. If that setup is used, the added latency is likely to increase even more compared to the setup used in this thesis.

7.2 Load Balancer Performance

The performance of the load balancer in virtual deployment varies a lot in comparison to native deployment, as shown in Figure 6.7. As the load increases, the relative cost of the added latency that the load balancer is responsible for increases. In virtual deployment, an increasing TCP payload size also seems to degrade the performance of the load balancer.

As mentioned in Chapter 5, having just one active payload serving traffic in the cluster is not a realistic scenario in a production cluster. The load balancer is optimized to give the best performance when there are several payloads active in the cluster, and therefore it cannot be stated that it performs 3.0 to 23 times worse in virtual deployment, as the results in Table 6.7 suggest. To properly investigate how the load balancer performs in virtual deployment versus native deployment, several payloads have to be active and serve traffic. However, this was not studied in this thesis, since the goal was to investigate how the networking components affected the latency. It is still interesting to see that the hypervisor changes the performance characteristics of the load balancer in a single-node case, even though this cannot be confirmed from a full production cluster point of view.

7.3 Server Performance

The server and the load balancer were the two largest contributors to the increased latency in virtual deployment. The processing time of the server was 2.2 to 3.4 times longer in virtual deployment compared to native deployment, which implied that the server was responsible for 40 % up to 56 % of the added latency in virtual deployment.

As the load increased, the factor between virtual and native server performance increased, but the relative cost of the added latency decreased. The average processing time for different loads and payload sizes has a decreasing trend in both virtual and native deployment, which probably is a result of caching mechanisms. The increasing factor between virtual and native server performance is most likely caused by the overhead that the QEMU/KVM hypervisor introduces in virtual deployment.

The amount of work done on the server side was minimal. Theoretically, as the server executes more complex code resulting in longer execution times, it will presumably become the component that adds the most latency in a virtualized deployment. In contrast to the other investigated components, the server is the only component whose execution time changes as the system is further developed.

The QEMU/KVM documentation [19] states that when QEMU is used as a virtualizer it achieves close to native performance. However, the results in this thesis show that the server had 2.27 up to 3.44 times longer execution time in virtual deployment, which cannot be considered close to native performance for latency-sensitive applications. On the other hand, Java, the language the server was written in, might not be optimal, and the choice of another language might change the performance characteristics.

The results indicate that the hypervisor layer decreases the performance of the load balancer and server significantly. The hypervisor layer is most probably the main bottleneck in the system, since those two components together were responsible for 82.46% up to 93.45% of the total added latency in virtual deployment.

7.4 Packet Delivery In And Out From The VM

The average additional latency that message passing in and out from the VM contributed in virtual deployment was between 0.045 and 0.093 milliseconds, which corresponded to a relative cost of between 4.3 % and 10 % of the total added latency. An addition of at most 0.093 milliseconds is not regarded as a significant increase in latency in the system. However, as in the case of the networking components, if a lot of traffic is sent between the VMs inside the cluster, the significance of this overhead can increase.


Chapter 8

Conclusion

The question to answer in this thesis was: Given a provider network setup in virtual deployment, how much impact on the latency do the added networking components have in comparison to latency on native deployment?

The experiments were carried out by sending TCP requests from a client located on a load machine to a server located on a payload. The client sent the requests at a rate ranging from 100 up to 1000 requests per second, with payload sizes of 1000 and 2000 bytes.

The results indicate that the networking components add between 0.023 and 0.063 milliseconds of latency, corresponding to between 2.2 % and 7.1 % of the total added latency in virtual deployment compared to native deployment. These results can be considered good, as they do not represent a significant increase in latency, and they indicate that the chosen network setup is suitable for the virtual deployment.

In addition to this, the server and the load balancer were benchmarked with respect to latency in native and virtual deployment. The amount of time it took to pass packets in and out between the VM and the host was also benchmarked.

The message passing in and out from the VM added a latency of 0.045 up to 0.093 milliseconds, which corresponded to a relative cost of between 4.3 % and 10 % of the total added latency in virtual deployment. This is not considered a significant increase in latency either.

The average processing time on the server was extended by a factor of 2.2 to 3.4 in virtual deployment, which corresponded to 40 % up to 56 % of the total added latency.

The performance of the load balancer changed significantly in virtual deployment. Even if the load balancer results cannot be directly applied to how it would perform in a production cluster, they suggest that its performance in a virtual environment is degraded. However, a more detailed study of the load balancer, with several active payloads in the cluster, has to be done to determine its performance.

The results suggest that the QEMU/KVM hypervisor layer is the main bottleneck in the system, due to the significant increase in computational time on the server and the load balancer.


Chapter 9

Future Work

The server running inside the VM was used as a minimal tool to time the different components in the system and cannot be considered a complete benchmark of how applications inside a VM perform with the QEMU/KVM virtualizer. Further investigation should be done into how the performance of an application changes when it is moved to run inside a virtual machine instead of on a host machine. For a complete understanding of how migrating applications into a virtualized environment affects performance, metrics other than latency should be considered, for example how virtualization affects the CPU and memory utilization of the host. The performance of reading from and writing to disk is another aspect that should be considered.

How the relationship between the virtual hardware and the physical hardware affects the performance of an application running inside a virtual machine is also a topic that could be of interest.

OpenStack supports several other hypervisors, and the hypervisor chosen for this thesis is not necessarily the best performing one. A further investigation of the performance of other hypervisors could be of great interest.

To completely understand how the performance of the load balancer is affected in a virtualized environment, a more complete study has to be done on both native and virtual deployment with several active payloads to simulate a real production cluster. This thesis only investigated how the networking components affected the latency in one particular case, using the provider network setup. Another possible case that could be investigated is how the classic network setup affects the latency in an OpenStack cloud.


Chapter 10

Social, Ethical, Economic and Sustainability Aspects

Virtualization generally offers the possibility to run several virtual machines on the same physical compute resource. By running several virtual machines on the same physical host, it is possible to reduce the number of active machines in a data center and, by doing so, reduce power consumption. Shared resources imply both lower costs for power consumption and lower emissions of carbon dioxide. Virtualization also makes the virtual machines hardware independent, which could imply that providers of large data centers can save money by using standardized hardware.

OpenStack makes it possible to deploy servers located all over the world, and sometimes the users do not know where the resources are located. When it comes to sharing compute and storage resources, not all countries have the same laws and regulations related to ownership of data, which can be problematic. Using shared resources also opens up the possibility for intruders to get access to information they do not own and exploit it. In theory, an intruder could also mount a denial-of-service attack on the shared resources, even though the service level agreement is supposed to prevent this. If critical systems run on public cloud infrastructure, it could in theory be possible for an intruder to take out systems belonging to private persons, companies or even nations.


Bibliography

[1] Sean Kenneth Barker and Prashant Shenoy. “Empirical evaluation of latency-sensitive application performance in the cloud”. In: Proceedings of the first annual ACM SIGMM conference on Multimedia systems. ACM. 2010, pp. 35–46.

[2] Meenakshi Bist, Manoj Wariya, and Abhishek Agarwal. “Comparing delta, open stack and Xen Cloud Platforms: A survey on open source IaaS”. In: Advance Computing Conference (IACC), 2013 IEEE 3rd International. IEEE. 2013, pp. 96–100.

[3] Cinder wiki. https://wiki.openstack.org/wiki/Cinder. Accessed: 2016-03-06.

[4] James Denton. Rackspace Developer. https://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks/. Accessed: 2016-03-15.

[5] Michael Fenn et al. “An evaluation of KVM for use in cloud computing”. In: Proc. 2nd International Conference on the Virtual Computing Initiative, RTP, NC, USA. 2008.

[6] Glance. http://docs.openstack.org/developer/glance/. Accessed: 2016-02-25.

[7] Guohui Wang and T. S. Eugene Ng. “The impact of virtualization on network performance of Amazon EC2 data center”. In: INFOCOM, 2010 Proceedings IEEE. IEEE. 2010, pp. 1–9.

[8] Open vSwitch OpenStack Docs. http://openvswitch.org/openstack/documentation/. Accessed: 2016-03-06.

[9] OpenStack Compute architecture. http://docs.openstack.org/admin-guide-cloud/compute_arch.html. Accessed: 2016-02-29.

[10] OpenStack Documentation. http://docs.openstack.org/icehouse/training-guides/content/operator-getting-started.html. Accessed: 2015-11-04.

[11] OpenStack glossary. http://docs.openstack.org/admin-guide-cloud/common/glossary.html. Accessed: 2016-02-29.

[12] OpenStack Home page. http://www.openstack.org/. Accessed: 2016-02-29.


[13] OpenStack Hypervisor. http://docs.openstack.org/kilo/config-reference/content/hypervisor-configuration-basics.html. Accessed: 2016-02-29.

[14] OpenStack Hypervisor Support Matrix. https://wiki.openstack.org/wiki/HypervisorSupportMatrix. Accessed: 2016-02-29.

[15] OpenStack messaging. http://docs.openstack.org/security-guide/messaging.html. Accessed: 2016-02-29.

[16] OpenStack Networking Overview. http://docs.openstack.org/mitaka/networking-guide/intro-os-networking-overview.html. Accessed: 2016-05-12.

[17] OpenStack Software. http://www.openstack.org/software/. Accessed:2016-02-29.

[18] Simon Ostermann et al. “A performance analysis of EC2 cloud computing services for scientific computing”. In: Cloud computing. Springer, 2009, pp. 115–131.

[19] QEMU and KVM. http://wiki.qemu.org/Main_Page. Accessed: 2016-02-29.

[20] Muhammad Siraj Rathore, Markus Hidell, and Peter Sjödin. “KVM vs. LXC: comparing performance and isolation of hardware-assisted virtual routers”. In: American Journal of Networks and Communications 2.4 (2013), pp. 88–96.

[21] Sasko Ristov et al. “Compute and memory intensive web service performance in the cloud”. In: ICT Innovations 2012. Springer, 2013, pp. 215–224.

[22] Scenario: Provider networks with Open vSwitch. http://docs.openstack.org/liberty/networking-guide/scenario-provider-ovs.html. Accessed: 2016-03-06.

[23] Swift documentation. http://docs.openstack.org/developer/swift/.Accessed: 2016-03-06.

[24] Tcpdump man page. http://www.tcpdump.org/manpages/tcpdump.1.html.Accessed: 2016-04-25.

[25] VMware ESXi. http://www.vmware.com/se/products/esxi-and-esx/overview. Accessed: 2016-04-18.

[26] Xiaolong Wen et al. “Comparison of open-source cloud management platforms: OpenStack and OpenNebula”. In: Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th International Conference on. IEEE. 2012, pp. 2457–2461.

[27] Why Open vSwitch. https://github.com/openvswitch/ovs/blob/master/WHY-OVS.md. Accessed: 2016-03-02.

[28] Jun Xie et al. “Bare metal provisioning to OpenStack using xCAT”. In: Journal of Computers 8.7 (2013), pp. 1691–1695.


[29] Sonali Yadav. “Comparative Study on Open Source Software for Cloud Computing Platform: Eucalyptus, Openstack and Opennebula.” In: International Journal Of Engineering And Science (2013).

[30] Yoji Yamato. “OpenStack hypervisor, container and Baremetal servers performance comparison”. In: IEICE Communications Express 4.7 (2015), pp. 228–232.
