OpenFlow as a Service

Fred Hsu, M. Salman Malik, Soudeh Ghorbani

{fredhsu2,mmalik10,ghorban2}@illinois.edu

Abstract—By providing a logically centralized controller that runs the management applications that directly control the packet-handling functionality in the underlying switches, the newly introduced paradigm of Software Defined Networking (SDN) paves the way for network management. Although there has been extensive excitement in the networking community about SDN and OpenFlow, which has led to various proposals for building OpenFlow controllers such as NOX and FloodLight [1], [2], and despite recent advances in cloud computing that have resulted in the development of reliable open source tools for managing clouds, such as OpenStack (an extensively used open source Infrastructure as a Service (IaaS) cloud computing project [3]), these two parts have not yet been integrated. In this work, we bridge this gap by providing a scalable OpenFlow controller as a plugin for OpenStack’s “network connectivity as a service” project (Quantum [4]) that avoids a considerable shortcoming of its currently available OpenFlow controller: lack of scalability.

I. INTRODUCTION

Cloud computing is rapidly increasing in popularity [5]. The elasticity and dynamic service provisioning offered by the cloud have attracted a lot of attention. The pay-as-you-go model has effectively turned cloud computing into a utility and has made it accessible even to startups with limited budgets. Given the large monetary benefits that cloud computing has to offer, more and more corporations are now migrating to the cloud. It is not only industry: the cloud has also received great attention from researchers, as it poses many interesting challenges [6]. Innovation in the cloud has become easier with the advent of the OpenStack project [3]. OpenStack is an open source project that enables anyone to run and manage a production or experimental cloud infrastructure. It is a powerful architecture that can be used to provide Infrastructure as a Service (IaaS) to users.

Traditionally, OpenStack has comprised an instance management project (Nova), an object storage project (Swift), and an image repository project (Glance). Previously, networking did not receive much attention from the OpenStack community, and the network management responsibility was delegated to Nova services, which can provide flat network configuration or VLAN-segmented networks [7]. This basic networking capability makes it difficult for tenants to set up multi-tier networks (in flat networking mode) on one hand, and suffers from scalability issues (in VLAN mode) on the other [8]. Fortunately, the OpenStack community has been cognizant of these limitations and has taken an initiative to enhance the networking capabilities of OpenStack. The new OpenStack project, Quantum [4], is designed to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova [9]) [4] (see Section II for details). Essentially, Quantum enables tenants to create virtual networks with great ease. Its modular architecture and standardized API can be leveraged to provide plugins for firewalls, ACLs, etc. [4]. Even in the short span of time since Quantum’s inception, multiple plugins have been developed to work with the Quantum service. The one particularly relevant to this work is a plugin for an OpenFlow controller called Ryu [10].

Although the Ryu project is an attempt to integrate the advantages of OpenFlow with OpenStack, it lacks a very fundamental requirement of cloud computing infrastructure: scalability (more details in Section III). In this work we address this shortcoming by providing a more scalable OpenFlow plugin for the Quantum project. Moreover, as a proof of concept of management applications that largely benefit from being run by a logically centralized controller, we demonstrate the promising performance of an application for virtual machine migration that could run on top of our controller.

The rest of the paper is organized as follows. In Section II, we provide a brief overview of the OpenStack project and its different parts. In Section III, we present our approach for addressing the shortcomings of the already available OpenFlow controller plugin and the current status of the project. In Section IV, we analyze the scalability of our approach. In Section V, we explain the VM migration application and present its results. Finally, we discuss related work in Section VI. The paper concludes in Section VII.

II. BACKGROUND

Since we are extensively using OpenStack, Nova, Quantum, and Open vSwitch, we provide a brief overview of them in this section.

A. OpenStack

OpenStack is an open source cloud management system (CMS) [8]. It comprises five core projects, namely Nova, Swift, Glance, Keystone, and Horizon. Quantum is another project that will be added to the core in upcoming releases of OpenStack. Before the introduction of Quantum, networking functionality was the responsibility of Nova (which was mainly designed to provide instances on demand).

B. Nova

As pointed out earlier, the primary responsibility of Nova is to provide a tenant-facing API that a tenant can use to request new instances in the infrastructure. These requests are channeled by nova-api through the Advanced Message Queuing Protocol (AMQP) queue to the scheduler, which in turn assigns the task of instantiating the VM to one of the compute workers. In addition, Nova is also responsible for setting up the network configuration of these instances. The three modes of networking provided by Nova are flat networking, flat DHCP, and VLAN networking. The cloud operator can select one of them by choosing the appropriate networking manager [7].

C. Quantum

Quantum is a “virtual network service” that aims to provide a powerful API to define the network connectivity between interface devices implemented by other OpenStack services (e.g., vNICs from Nova virtual servers) [4].

It provides many advantages for cloud tenants by giving them an API to build rich networking topologies, such as multi-tier web application topologies, and to configure advanced network capabilities in the cloud, such as end-to-end QoS guarantees. Moreover, it provides a greatly extensible framework for building different plugins. This has facilitated the development of some highly utilized plugins like the Open vSwitch and Nicira Network Virtualization Platform (NVP) plugins [11], [12]. Originally, Quantum focused on providing L2 connectivity between interfaces. However, its extensibility provides an avenue for innovation by allowing new functionality to be provided via plugins. We leverage this opportunity to develop a “controller” as one such plugin. Generally, the role of a Quantum plugin is to translate logical network modifications received from the Quantum Service API and map them to specific operations on the switching infrastructure. Plugins are able to expose advanced capabilities beyond L2 connectivity using API extensions.

D. Open vSwitch

Open vSwitch (OVS) is a software switch that resides in the hypervisor and can provide connectivity between the guests that reside on that hypervisor. It is also capable of speaking to an OpenFlow controller, which can be located locally or remotely on another host. OVS is handy as it allows the network state associated with a VM to be transferred along with the VM on its migration, and thus reduces the configuration burden on operators.
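As a concrete illustration of attaching OVS to a remote OpenFlow controller, the minimal sketch below wraps the standard ovs-vsctl commands; the bridge name, controller address, and port are assumptions for illustration, not values from this paper.

```python
import subprocess

def attach_bridge_to_controller(bridge="br-int", controller="tcp:192.0.2.10:6633"):
    """Create an OVS bridge (if missing) and point it at a remote OpenFlow controller."""
    # Idempotently create the integration bridge on this hypervisor.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)
    # Tell OVS to speak OpenFlow to the (possibly remote) controller, e.g. Floodlight.
    subprocess.run(["ovs-vsctl", "set-controller", bridge, controller], check=True)

if __name__ == "__main__":
    attach_bridge_to_controller()
```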

E. Floodlight

Floodlight [2] is a Java-based OpenFlow controller that was forked from one of the two pioneering OpenFlow controllers (the other being NOX) developed at Stanford, the Beacon controller [13]. Our choice of Floodlight is attributed to the simplicity and yet high performance of the controller, but we believe that other controllers [14] can serve as reasonably good alternatives.

F. Ryu

The closest work related to ours is Ryu, an open-sourced network operating system for the OpenStack project that provides logically centralized control and an API that makes it easy for operators to create new network management and control applications. It supports the OpenFlow protocol to modify the behavior of network devices [10]. Ryu manages the L2 segregation of tenants without using VLANs. It creates individual flows for inter-VM communication, and it has been shown in the literature that such approaches do not scale to data center networks, since they exhaust switch memory quite fast [15], [16].

III. OUR APPROACH

We provide an OpenFlow plugin for Quantum that leverages the Floodlight controller to provide better scalability. Among the different OpenFlow controllers, we decided to use Floodlight, which is designed to be an enterprise-grade, high-performance controller [2]. Although we provide our plugin using Floodlight as the proof of concept, we believe that it should be easy to extend our approach to other standard OpenFlow controllers, in case the providers and tenants of data centers prefer to deploy other controllers. We leave a detailed explanation of the applicability of our approach to other controllers for future work. Our plugin takes requests from the Quantum API for creation, updating, and deletion of network resources and implements them on the underlying network. In addition to the plugin, an agent is loaded on each Nova VM that handles the creation of virtual interfaces for the VM and attaches them to the network provided by Quantum. Our solution leverages Open vSwitch as an OpenFlow-based virtual switch to provide the underlying network to Quantum, and configures the vSwitch via the Floodlight OpenFlow controller.
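To make the plugin's role concrete, here is a minimal sketch of a Quantum-style plugin that maps each created network to a VLAN; the class name, method signatures, and VLAN range are illustrative assumptions, not the actual plugin code.

```python
import uuid

class FloodlightVlanPlugin:
    """Sketch of a Quantum-style plugin: each logical network gets a VLAN ID."""

    def __init__(self, vlan_range=range(100, 4095)):
        self.free_vlans = list(vlan_range)   # pool of unused VLAN IDs
        self.net_to_vlan = {}                # network ID -> VLAN ID ("database")

    def create_network(self, tenant_id, name):
        """Allocate a VLAN for a new logical network and record the mapping."""
        net_id = str(uuid.uuid4())
        self.net_to_vlan[net_id] = self.free_vlans.pop(0)
        return {"id": net_id, "tenant_id": tenant_id, "name": name,
                "vlan_id": self.net_to_vlan[net_id]}

    def delete_network(self, net_id):
        """Release the VLAN when the logical network is removed."""
        self.free_vlans.append(self.net_to_vlan.pop(net_id))
```

A real plugin would persist these mappings in a database and create the corresponding OVS bridge, as described in the architecture below.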

A. Challenges

The main challenge in providing OpenFlow controllers for Quantum is scalability. The existing Ryu plugin takes the approach of creating flows for all inter-VM traffic. This will not scale, as the number of flows exceeds the Ternary Content Addressable Memory (TCAM) capacity of the OpenFlow switches.

A detailed discussion of such scalability issues of OpenFlow is provided in [15], where the authors show that the required number of flow entries in data centers and high-performance networks (where an average ToR switch might have roughly 78,000 flow rules if the rule timeout is 60 seconds) exceeds the TCAM memory available for OpenFlow rules in commodity switches (they claim a typical model supports around 1,500 OpenFlow rules).

As an alternative approach, we implement tenant network separation with VLANs, which allows for a more scalable solution. We acknowledge that VLANs also have scaling limitations. Hence, a possible extension of our work would be to use some form of encapsulation to scale even further.

B. Architecture

Our Quantum plugin is responsible for taking network creation requests, translating the network ID given by Quantum to a VLAN, and storing these translations in a database. The plugin handles the creation of an Open vSwitch bridge and keeps track of the logical network model. The agent and plugin keep track of the interfaces that are plugged into the virtual network, and contact Floodlight for new traffic that enters the network. Traffic is tagged with a VLAN ID by OpenFlow and Floodlight, based on the network that the port is assigned to and the port's source MAC address. Once tagged, the network traffic is forwarded using a learning switch configured on Floodlight to control the vSwitch. As a result, VM traffic is isolated on a per-tenant basis through VLAN tagging and OpenFlow control.
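The per-tenant tagging decision can be sketched as follows; the lookup tables and the rule format are illustrative assumptions rather than Floodlight's actual data structures.

```python
# Sketch of the controller-side decision for a packet from an unknown flow:
# map (ingress port, source MAC) to its Quantum network, tag with that network's
# VLAN, then let normal learning-switch forwarding handle the frame.
port_mac_to_net = {("vnet0", "fa:16:3e:aa:bb:01"): "net-A"}   # example contents
net_to_vlan = {"net-A": 101}

def rule_for_new_flow(in_port, src_mac):
    net_id = port_mac_to_net.get((in_port, src_mac))
    if net_id is None:
        return {"action": "drop"}                  # unknown port/MAC: keep it isolated
    return {
        "match": {"in_port": in_port, "dl_src": src_mac},
        "actions": [f"set-vlan-id={net_to_vlan[net_id]}", "normal"],
    }

print(rule_for_new_flow("vnet0", "fa:16:3e:aa:bb:01"))
```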

Fig. 1. Plugin Architecture.

Figure 1 shows the architecture of our plugin. As shown, tenants pass commands to the Quantum manager using nova-client. The Quantum manager relays these calls to the Floodlight plugin, which implements the create/read/update/delete (CRUD) functionalities. The plugin realizes these functions by creating a mapping between each tenant's network ID and a VLAN ID, which is stored in the database. Whenever a new port is attached to the Quantum network, the plugin adds a corresponding port to the OVS bridge and stores the mapping between the port and the VLAN ID in the database. Finally, the Quantum agent, which runs as a daemon on each hypervisor, keeps polling the database and the OVS bridge for changes; whenever a change is observed, it is communicated to the Floodlight client. This client then uses a RESTful API to talk to the Floodlight controller module. This way the controller knows about the port, network ID, and VLAN ID mappings. Whenever a new packet for which the OVS has no entry arrives, the packet is sent to the controller for a decision. The controller then pushes rules into the OVS telling it which VLAN ID to use to tag the packets, as well as how to encapsulate the packets with the physical host addresses. Furthermore, the controller also adds an entry to each physical switch with the action to pass the packet through the normal packet processing pipeline, so that the packet is forwarded based on a simple learning switch mechanism. Thus, the number of entries in the TCAM of each physical switch is directly proportional to the number of distinct VLANs that pass through that switch.
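A minimal sketch of the agent daemon described above (cf. Fig. 7); the database schema, REST path, payload, and polling interval are placeholder assumptions, not the actual agent code.

```python
import json
import sqlite3
import time
import urllib.request

CONTROLLER_URL = "http://controller.example:8080/quantum/portmap"  # hypothetical endpoint
DB_PATH = "/var/lib/quantum/plugin.db"                             # hypothetical plugin DB
_seen_ports = set()

def poll_once():
    """Read port bindings from the plugin database and push new ones to the controller."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute("SELECT port_id, network_id, vlan_id FROM port_bindings").fetchall()
    conn.close()
    for port_id, net_id, vlan_id in rows:
        if port_id in _seen_ports:
            continue
        body = json.dumps({"port": port_id, "network": net_id, "vlan": vlan_id}).encode()
        req = urllib.request.Request(CONTROLLER_URL, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)        # notify the Floodlight-side module over REST
        _seen_ports.add(port_id)

if __name__ == "__main__":
    while True:                            # daemon loop on each hypervisor
        poll_once()
        time.sleep(2)
```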

IV. ANALYSIS

In this section we provide an analysis of our approach compared to Ryu in terms of the number of flow entries that we can expect to see at the switches. Like Tavakoli et al. [17], we assume that there are 20 VMs per server, each having 10 concurrent flows (5 incoming and 5 outgoing). In such a setup, a VM-to-VM flow-based approach like Ryu will not scale. Figure 2 shows the comparison between Ryu and our approach. Here we calculate the number of flow table entries based on the fields of the flow matching rules that are specified when such rules are pushed into the OpenFlow switches (unspecified fields are wildcarded). In the case of Ryu, the match rules are based on the source and destination MAC addresses of the VMs (with the rest of the fields wildcarded), so a top-of-rack (ToR) switch would have to keep 20 servers/rack x 20 VMs/server x 10 concurrent flows/VM = 4,000 entries in its TCAM. In our approach, by contrast, we aggregate flow entries based on the VLAN tag of the packet, i.e., we have one matching rule per VM at the physical switches (the worst-case scenario, which assumes each VM on a server belongs to a different tenant). Thus, the number of flow table entries that we need to store in the ToR's TCAM is 10 times smaller than with Ryu.

Fig. 2. Comparison of expected flow table entries.
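The back-of-the-envelope comparison behind Figure 2 can be reproduced directly; the numbers below are those stated in the text.

```python
servers_per_rack = 20
vms_per_server = 20
flows_per_vm = 10                    # 5 incoming + 5 outgoing concurrent flows

# Ryu-style: one TCAM entry per concurrent VM flow seen at the ToR switch.
ryu_entries = servers_per_rack * vms_per_server * flows_per_vm    # 4000

# VLAN-aggregated worst case: one tagging rule per VM, independent of flow count.
vlan_entries = servers_per_rack * vms_per_server                  # 400

print(ryu_entries, vlan_entries, ryu_entries // vlan_entries)     # 4000 400 10
```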

V. AN EXAMPLE OF MANAGEMENT APPLICATIONS: VM MIGRATION

OpenFlow and our plugin (as a realization of it for OpenStack) can simplify management operations by providing a global view of the network and direct control over forwarding behavior. In this section, we provide an example of such an application: the VM migration application, which is in charge of migrating tenants' virtual machines. We explain why such an operation is useful, what the challenges are, and how our plugin can help VM migration, presenting some results.

New advances in technologies for high-speed and seamless migration of VMs turn VM migration into a promising and efficient means for load balancing, configuration, power saving, attaining better resource utilization by reallocating VMs, cost management, etc. in data centers [18]. Despite these numerous benefits, VM migration is still a challenging task for providers, since moving VMs requires updating network state, which consequently can lead to inconsistencies, outages, creation of loops, and violations of service level agreement (SLA) requirements [19]. Many applications today, such as financial services, social networking, recommendation systems, and web search, cannot tolerate such problems or degradation of service [15], [20].

On the positive side, SDN provides a powerful tool for tackling these challenging problems: the ability to run algorithms in a logically centralized location and precisely manipulate the forwarding layer of switches creates a new opportunity for transitioning the network between two states.

In particular, in this section, we seek the answer to the following question: given a starting network and a goal network, each consisting of a set of switches, each with a set of forwarding rules, can we come up with a sequence of OpenFlow instructions to transform the starting network into the goal network while preserving desired correctness conditions such as freedom from loops and bandwidth guarantees? This problem boils down to solving two sub-problems: determining the ordering of VM migrations (the sequence planning), and, for each VM to be migrated, determining the ordering of OpenFlow instructions that should be installed or removed.

To perform the transition while preserving correctness guarantees, we test the performance of the optimal algorithm, i.e., the algorithm that, among all possible orderings for performing the migrations, determines the ordering that results in the minimum number of violations. In particular, given the network topology, the SLA requirements, and the set of VMs that are to be migrated along with their new locations, this algorithm outputs an ordered sequence of VMs to migrate and a set of forwarding state changes.¹ This algorithm runs in the SDN controller to orchestrate these changes within the network. To evaluate the performance of our design, we simulated it using realistic data center and virtual network topologies (as explained later). We find that, for a wide spectrum of workloads, this algorithm significantly improves on randomly ordering the migrations (by up to 80%) in terms of the number of VMs that it can migrate without violating SLAs.²
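As an illustration of the ordering problem, the brute-force sketch below enumerates all migration orders and keeps the one with the fewest bandwidth violations; `Network`, `apply_migration`, and `would_violate` are placeholders for the simulator's actual state and checks, not the implementation evaluated in this paper.

```python
from itertools import permutations

def count_violations(order, network, would_violate):
    """Migrate VMs in the given order, counting migrations that violate link bandwidth."""
    violations = 0
    for vm in order:
        if would_violate(network, vm):
            violations += 1
        network.apply_migration(vm)        # update link loads / forwarding state
    return violations

def best_migration_order(vms, network, would_violate):
    # Optimal but exponential in the number of VMs; fine for small migration batches.
    return min(permutations(vms),
               key=lambda order: count_violations(order, network.copy(), would_violate))
```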

Allocating virtual networks on a shared physical data center has been extensively studied before [21]–[23]. Both for the physical underlying network and for the VNs, we borrow the topologies and settings used in these works. More specifically, for the underlying topology, we test the algorithms on random graphs, trees, fat-trees, DCell, and BCube. For the VNs, we use star, tree, and 3-tier graphs, which are common for web service applications [21]–[23]. Furthermore, for initially allocating VNs before migrations, we use SecondNet's algorithm [22], because of its low time complexity, high utilization, and support for arbitrary topologies.

¹We leave improvements to this algorithm, such as optimizing it to run faster, to future work.

²For preserving SLA requirements while migrating, as a proof of concept, in this work we focus on avoiding bandwidth violations.

We select random virtual nodes to migrate, and pick their destinations randomly from the set of all substrate nodes with free capacity. We acknowledge that the diverse scenarios for which migrations are performed might require different mechanisms for node or destination selection, and that such selections might impact the performance of the algorithms. We leave exploration of such mechanisms and the performance of our heuristic on them to future work.

Our experiments are performed on an Intel Core i7-2600K machine with 16 GB of memory.

Figure 3 shows the results for 10 rounds of experiments over a 200-node tree where each substrate node has capacity 2, substrate links have a bandwidth of 500 MB, and the VNs are 9-node trees whose links have a 10 MB bandwidth requirement. As the figure shows, the fraction of violations remains under 30% when applying the optimal algorithm, while it can get close to 100% with some random orderings.³

Fig. 3. Fraction of migrations that would lead to violation of link capacities with different algorithms.

VI. RELATED WORK

In the following, we review the approaches taken to scale the multi-tenant cloud network.

³The rising trend in the fraction of violations for random planning is due to the fact that as the number of migrations increases, more and more VMs are displaced from the original place specified by the allocation algorithm. This sub-optimal allocation of VN nodes makes the feasibility of a random migration less likely; e.g., it is more likely to encounter a violation while migrating the 10th VM than the 1st. It is interesting to note that even with quite a large number of migrations, the fraction of violations encountered by the optimal solution remains almost constant.

These are relevant since any of the following approaches could be incorporated as an underlying communication mechanism for a Quantum plugin. Thus an understanding of their pros and cons is useful for plugin design.

Traditionally, VLANs (IEEE 802.1Q) have been used as a mechanism for providing isolation between the different tiers of a multi-tier application and among the different tenants and organizations that coexist in the cloud. Although VLANs overcome the problems of an L2 network by dividing the network into isolated broadcast domains, they still do not enable agility of services; the number of hosts that a given VLAN can incorporate is still limited to a few thousand. Thus, as [24] reports, any service that needs to expand will have to be accommodated in a different VLAN than where the rest of its servers are hosted, leading to fragmentation of the service. Furthermore, VLAN configuration is highly error-prone and difficult to manage if done manually. Although it is possible to configure both the access ports and trunk ports automatically with the help of a VLAN Management Policy Server (VMPS) and the VLAN Trunking Protocol (VTP), respectively, the latter is undesirable because network admins then have to divide switches into VTP domains, and each switch in a given domain has to participate in all the VLANs within that domain, which leads to unnecessary overhead (see [25] for further discussion). Additionally, since the VLAN header provides only a 12-bit VLAN ID, we can have at most 4096 VLANs in the network. This is relatively low considering the multipurpose use of VLANs, and virtualization in data centers has further exacerbated the situation as more segments need to be created.

Virtual eXtensible LANs (VXLAN [26]) is a recent technology being standardized in the IETF. VXLAN aims to eliminate the limitations of VLANs by introducing a 24-bit VXLAN network identifier (VNI), which means that with VXLAN it is possible to have 16M segments in the network. VXLAN makes use of Virtual Tunnel Endpoints (VTEPs) that lie in the software switch of the hypervisors (or can be placed in the access switches) and encapsulate packets with the VM's associated VNI (see Figure 4). VTEPs use the Internet Group Management Protocol (IGMP) to join multicast groups (Figure 5). This helps eliminate the unknown unicast flood, which is now sent only to the VTEPs in the multicast group of the sender's VNI.

Fig. 4. Overview of VXLAN [27].

Fig. 5. VXLAN unknown unicast handling [27].

Limitations: Since there can be 16M VXLAN segments, which exceeds the maximum number of multicast groups, it is possible for many segments belonging to different VNIs to share the same multicast group [27]. This is problematic both in terms of security and performance.

TRansparent Interconnection of Lots of Links (TRILL) [28] is another interesting concept being standardized in the IETF. It runs the IS-IS routing protocol between bridges (called RBridges) so that each RBridge is aware of the network topology. Furthermore, a nickname acquisition protocol is run among RBridges so that each RBridge can identify the others. When an ingress bridge receives a packet, it encapsulates the packet in a TRILL header that consists of the ingress switch's nickname, the egress switch's nickname, and an additional source and destination MAC address. These MAC addresses are swapped on each hop (just as routers do). RBridges at the edge learn the source MAC addresses of hosts for each incoming packet they encapsulate as an ingress switch, and the MAC addresses of hosts for every packet they decapsulate as an egress switch. If a destination MAC address is unknown to the ingress switch, the packet is sent to all the switches for discovery. Furthermore, TRILL uses a hop count field in the header which is decremented at each hop by the RBridges and hence prevents forwarding loops.

Limitations: Although TRILL and RBridges overcome STP's limitations, they are not designed for scalability [29], [30]. Furthermore, since the TRILL header contains a hop count field that needs to be decremented at each hop, the Frame Check Sequence (FCS) also needs to be recalculated at each hop, which may in turn affect the forwarding performance of switches [31].

VII. CONCLUSION

We discussed the benefits of using OpenFlow for cloud computing, described an open source project for cloud management, and explained why its available OpenFlow plugin does not provide an acceptable level of scalability. We laid out our design for an alternative plugin for Quantum that sidesteps the scalability issue of the current OpenFlow plugin. We also gave one instance of a data center management application (VM migration) that could benefit from our OpenFlow plugin for performing its task, and tested its performance. Finally, we provided hints about possible future work in this direction.

REFERENCES

[1] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, “NOX: towards an operating system for networks,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.

[2] “FloodLight OpenFlow Controller,” http://floodlight.openflowhub.org/.

[3] “Open source software for building private and public clouds,” http://openstack.org/.


[4] “Openstack - quantum wiki,” http://wiki.openstack.org/Quantum.

[5] J. Cappos, I. Beschastnikh, A. Krishnamurthy, and T. Anderson, “Seattle: a platform for educational cloud computing,” in ACM SIGCSE Bulletin, vol. 41, no. 1. ACM, 2009, pp. 111–115.

[6] Y. Vigfusson and G. Chockler, “Clouds at the crossroads: research perspectives,” Crossroads, vol. 16, no. 3, pp. 10–13, 2010.

[7] “Openstack compute administration manual - cactus,”http://docs.openstack.org/cactus/openstack-compute/admin/content/networking-options.html.

[8] “OpenStack, Quantum and Open vSwitch, Part 1,” http://openvswitch.org/openstack/2011/07/25/openstack-quantum-and-open-vswitch-part-1/.

[9] “Nova’s documentation,” http://nova.openstack.org/.

[10] “Ryu network operating system as OpenFlow controller,” http://www.osrg.net/ryu/using_with_openstack.html.

[11] “Open vSwitch: Production Quality, Multilayer Open Virtual Switch,” http://openvswitch.org/.

[12] “Nicira Networks,” http://nicira.com/.

[13] “Beacon Home,” https://openflow.stanford.edu/display/Beacon/Home.

[14] “List of OpenFlow Software Projects,” http://yuba.stanford.edu/~casado/of-sw.html.

[15] A. Curtis, J. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, “DevoFlow: Scaling flow management for high-performance networks,” in ACM SIGCOMM, 2011.

[16] A. Curtis, W. Kim, and P. Yalagandula, “Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection,” in INFOCOM, 2011 Proceedings IEEE. IEEE, 2011, pp. 1629–1637.

[17] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, “Applying NOX to the datacenter,” in Proc. HotNets, October 2009.

[18] V. Shrivastava, P. Zerfos, K.-W. Lee, H. Jamjoom, Y.-H. Liu, and S. Banerjee, “Application-aware virtual machine migration in data centers,” in INFOCOM, 2011.

[19] M. Reitblatt, N. Foster, J. Rexford, and D. Walker, “Consistent updates for software-defined networks: Change you can believe in!,” in HotNets, 2011.

[20] M. Lee, S. Goldberg, R. R. Kompella, and G. Varghese, “Fine-grained latency and loss measurements in the presence of reordering,” in SIGMETRICS, 2011.

[21] Y. Zhu and M. H. Ammar, “Algorithms for assigning substrate network resources to virtual network components,” in INFOCOM, 2006.

[22] C. Guo, G. Lu, H. J. Wang, S. Yang, C. Kong, P. Sun, W. Wu, and Y. Zhang, “SecondNet: A data center network virtualization architecture with bandwidth guarantees,” in CoNEXT, 2010.

[23] H. Ballani, P. Costa, T. Karagiannis, and A. I. T. Rowstron, “Towards predictable datacenter networks,” in SIGCOMM, 2011.

[24] A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, “VL2: a scalable and flexible data center network,” ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 51–62, 2009.

[25] M. Yu, J. Rexford, X. Sun, S. Rao, and N. Feamster, “A survey of virtual LAN usage in campus networks,” IEEE Communications Magazine, vol. 49, no. 7, pp. 98–103, 2011.

Fig. 6. Snippet of code from the OVS base class that we leveraged.

[26] “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks,” tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00/.

[27] http://blogs.cisco.com/datacenter/digging-deeper-into-vxlan/.

[28] “Routing Bridges (RBridges): Base Protocol Specifi-cation,” http://tools.ietf.org/html/rfc6325.

[29] C. Tu, “Cloud-scale data center network architecture,”2011.

[30] “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement,” tools.ietf.org/html/rfc5556.

[31] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, “PortLand: a scalable fault-tolerant layer 2 data center network fabric,” in ACM SIGCOMM Computer Communication Review, vol. 39, no. 4. ACM, 2009, pp. 39–50.


Fig. 7. Snippet of the agent daemon that polls the database and updates the controller.


