Source: eng.usf.edu/~nghani/papers/ieee_secon2017.pdf

A Novel Automated SDN Architecture and Orchestration Framework for Resilient Large-Scale Networks

Diogo Oliveira∗, Mahsa Pourvali∗, Hao Bai∗, Nasir Ghani∗, Tom Lehman†, Xi Yang†, Majeed Hayat‡

∗Department of Electrical Engineering, University of South Florida
†Mid-Atlantic Crossroads, University of Maryland
‡Department of Electrical and Computer Engineering, University of New Mexico

Abstract—Software-Defined Networking (SDN) is a new technology paradigm that decouples the data and control planes and allows operators to manage networks via an abstraction model. SDN offers dynamism, scalability and flexibility for modern networking environments. However, operators still have to design SDN solutions to deploy flow rules in an efficient and automated manner. Hence effective path computation, definition and deployment are some of the key challenges facing SDN operation. Along these lines, this work introduces a novel SDN management and orchestration framework that implements a resilient Next-Generation Path Computation Element (NG-PCE) to compute and deploy resilient protection paths. A testbed setup is also built and three simple testcase scenarios are evaluated to verify overall system performance and accuracy.

Keywords: Software-Defined Networking, orchestration, large-scale networks, survivability

I. INTRODUCTION

Traditional data networks use legacy protocols with their own individual configuration and provisioning requirements. Although these embedded systems facilitate initial deployment, management complexity tends to increase with network expansion and heterogeneity. Hence these vendor-specific technologies generally yield data plane ossification, presenting a barrier for building new or upgraded networking services. In light of this problem, Software-Defined Networking (SDN) [1] technologies have evolved to deliver dynamism, scalability and flexibility for enterprise and carrier networks, i.e., by decoupling the control and data planes. Namely, this approach allows flow-forwarding policies to be defined via centralized or distributed systems, i.e., SDN controllers.

Overall, SDN technologies have been widely studied, researched and deployed in the past few years. However, SDN deployment and usage presents its own challenges. Foremost, it removes critical decision-making capability from network nodes, allowing the latter to receive data plane packets but delegating their control plane capabilities to a controller. Furthermore, the SDN controller imposes its own complexity and load, which increases proportionally to the number of flow policies defined and deployed. For example, at least one flow rule must be defined in the controller for each possible flow. Finally, since route decision-making is outsourced to the SDN controller, network route computation and setup procedures also need to be implemented.

Now the most well-known SDN protocol implementation standard is OpenFlow [2]. Although this is a widely-deployed solution, it still has some key limitations with regard to automation. Notably, OpenFlow has no established implementation for a Path Computation Element (PCE) entity. This means that SDN deployments in large-scale networks will require network operators to manually compute and define efficient paths for each source and destination pair (or interface with proprietary network management systems). Hence the absence of a PCE compromises network management as well as service performance and survivability. In turn this makes it very difficult to manage larger networks, mandating the need for a proper orchestration framework.

To address the above limitations with OpenFlow, an expanded Management and Orchestration (MANO) framework is developed to improve resiliency in large-scale SDN-controlled networks. The solution consists of three key modules, i.e., an Abstraction module, a Next-Generation PCE (NG-PCE), and a FlowREST driver. Overall, the proposed system abstracts all flow rule definitions and deployments, pre-computes and defines efficient primary paths, and also deploys backup protection paths to handle node/link failures. In particular, the pre-computing schemes (primary and protection) are developed using the concept of a shared-risk link group (SRLG) [3] to incorporate probabilistic failure risk awareness. Note that end-to-end communication survivability has not been adequately addressed in earlier SDN studies.

This paper is organized as follows. Section II reviews some existing work on large SDN projects. The MANO framework architecture is then presented in Section III. To evaluate this new framework, a multi-domain network testbed is also developed and tested, as detailed in Section IV. Results for various testcase scenarios are then presented in Section V, followed by some conclusions in Section VI.

II. RELATED WORKS

Several efforts have looked at leveraging or improving SDN technology. For example, a number of studies have addressed physical or virtual SDN controller placement [4], [5]. Meanwhile others have looked at framework designs and deployments; for example, [6] proposes an SDN-based model capable of managing wireless 5G heterogeneous infrastructure and resources.

978-1-5386-1539-3/17/$31.00 © 2017 IEEE

Meanwhile, [7] defines a path computation element for large-scale SDN networks. Namely, the authors present a k-maximally disjoint path routing algorithm that optimizes path computation between source and destination pairs. This approach leverages weighted links and takes into account multiple metrics, i.e., bandwidth and link usage. However, implementing such an algorithm in realistic networks is very complicated since many steps are required to achieve the path definitions themselves, e.g., topology abstraction, flow table definition, and link state acquisition. Furthermore, [7] does not include any specific provisions for failure regions, such as SRLGs, and no testbed implementation is provided to verify the approach.

Finally, [8] presents an inter-domain SDN solution in which multiple domains exchange constrained intra-domain information among each other to support inter-domain routing. To verify their solution, the authors also interconnect 4 sample domains and run various inter-domain path computation use cases. However, this solution does not compute protection paths. Additionally, the authors only consider a very small topology, which raises scalability and stability concerns.

Although the above efforts present a good set of contributions, end-to-end communication survivability has not been adequately addressed in SDN studies. Hence this effort looks to improve communication between nodes through reliable path computation and survivability definitions in SDN-enabled networks.

III. THE RESILIENT SDN FRAMEWORK ARCHITECTURE

SDN is a new technology that, in a simplistic sense, uses a centralized controller (or multiple controllers) to manage control plane tasks for all SDN-enabled network nodes. Overall, this controller has two key roles: a) to store global flow rule tables, which contain information on how to process each incoming/outgoing packet, and b) to send (push) flows onto the proper nodes in order to establish specific rules, i.e., analogous to a third-party control plane.

Now clearly, SDN management complexity will be proportional to the network size. Therefore having a framework that abstracts network topology information and facilitates flow table management is imperative. Hence many OpenFlow-based network operating systems, such as the Open Network Operating System (ONOS) [9] and OpenDaylight [10], have been developed in order to automate and interface with SDN-based networks via the OpenFlow southbound application programming interface (API). In particular, ONOS has been developed by a non-profit consortium, ON.Lab [11], as an open-source SDN network control system targeting service providers and their mission-critical networks. It offers northbound abstractions and APIs to enable application development, as well as southbound abstractions and interfaces to control and provision OpenFlow-ready switches and legacy devices. As a result, the proposed solution adopts ONOS to interface with network nodes and perform SDN controller tasks.

Fig. 1. Architecture of the resilient SDN system for large-scale networks

Next, consider the definition, conception and deployment of a novel management and orchestration (MANO) framework to improve resiliency in large-scale SDN networks. Here three key modules are implemented, i.e., the Abstraction Module, the Next-Generation Path Computation Element (NG-PCE) and the FlowREST driver. The resilient SDN system architecture is shown in Fig. 1. Respectively, these modules 1) abstract the network topology and resources (system model) as well as SRLG information (SRLG service model) using Network Markup Language (NML) [12] schemas in addition to defining new schemas, 2) pre-compute risk-aware link-disjoint primary and protection paths between source and destination pairs and interact with the FlowREST driver, and 3) build flow rules according to the path definitions and implement these rules in the respective network nodes, i.e., in order to define and instantiate paths.

Fig. 2 shows the overall resilient SDN framework flowchart for the pre-computed link-disjoint primary/backup path pair scheme. Further details on the three key modules are now presented.

A. Abstraction Module

SDN frameworks must keep track of network resource information in order to manage networks. Therefore the Abstraction Module uses a pre-defined standardized NML schema to store, identify and correlate network topology data. Namely, the NML defines an abstract and generic model which uses classes, attributes and parameters to describe multi-layer and multi-domain networks. In addition, the resilient SDN system's abstraction module also describes flow rules for the control plane. Hence these definitions are capable of describing both physical resources and the logical relations between them. This state is then used to drive the system components, i.e., the NG-PCE and FlowREST driver.
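As a rough illustration of this idea, the sketch below shows how hierarchical identifiers let a framework correlate ports, links and their parent nodes. The dictionary structure, URIs and function names are hypothetical stand-ins, not the paper's actual NML schema.

```python
# Hypothetical sketch of an NML-style abstraction: resources are keyed by
# hierarchical URIs so physical ports and their relations can be correlated.

def make_model():
    """Build a tiny system model for one domain hosting two switches."""
    return {
        "urn:domain1:sw1": {"type": "Node", "hasPort": ["urn:domain1:sw1:s1-eth1"]},
        "urn:domain1:sw1:s1-eth1": {"type": "BidirectionalPort",
                                    "isAlias": "urn:domain1:sw2:s2-eth1"},
        "urn:domain1:sw2": {"type": "Node", "hasPort": ["urn:domain1:sw2:s2-eth1"]},
        "urn:domain1:sw2:s2-eth1": {"type": "BidirectionalPort",
                                    "isAlias": "urn:domain1:sw1:s1-eth1"},
    }

def far_end(model, port_uri):
    """Resolve the other end of a physical link via the isAlias directive."""
    return model[port_uri]["isAlias"]

def owner_node(port_uri):
    """The URI hierarchy itself correlates a port with its parent node."""
    return port_uri.rsplit(":", 1)[0]
```

Here the `isAlias` lookup mirrors how the model identifies the far end of a physical link, while trimming the last URI segment recovers the owning node, reflecting the role the hierarchical structure plays in relating resources.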

Fig. 3 illustrates how the Abstraction Module uses two NML models (system and path service) to detail a network topology and to correlate physical and logical resources. Namely, this diagram shows one domain hosting two switches. Here the BidirectionalPort and isAlias directives identify the other end of a physical link, while the flow rule directives (rule-match-[0-4] and rule-action-0) identify the logical communication between nodes. Now in Fig. 3, each switch has N ports and one flow table, and this flow table has 4 flow rules. Also, switch 1 (domain1:sw1) is physically connected through its port 1 (domain1:sw1:s1-eth1) to port 1 (domain1:sw2:s2-eth1) on switch 2 (domain1:sw2). The flow rule objects are detailed in Section III-B. Note that the hierarchical structure defined by the Uniform Resource Identifier (URI) also plays an important role in defining the relations between network resources.

Fig. 2. SDN framework flowchart

Fig. 3. System, path service models (abstract topology, resource information)

B. Next-Generation Path Computation Element (NG-PCE)

The Abstraction Module allows the resilient SDN system to visualize the whole network topology through its hierarchical and correlating structure. This feature is imperative for implementing the NG-PCE, which is the main component responsible for performing path computation. Overall, many different schemes have been studied to route paths between separate node pairs. These solutions include basic shortest path algorithms along with more accurate heuristic and even optimization schemes. Hence the proposed NG-PCE implements both a lowest-cost (shortest path) algorithm and a more generalized k-shortest path (K-SP) algorithm.

As Fig. 1 shows, the NG-PCE retrieves two models from the Abstraction Module: the system model and the SRLG service model. The first is used to gather physical resource information from switches and links (MAC addresses, inbound and outbound link ports). This information is then used to model the network as a graph, G(V,E), where V is the set of vertices (nodes) and E is the set of edges (links). Meanwhile, the latter is used to define risk regions, which are necessary to compute constrained path routes between a source and a destination. Namely, the constrained K-SP algorithm utilizes the information defined in the SRLG model to improve survivable path computation, i.e., this algorithm incorporates this pre-defined information to compute a resilient primary path or a primary/protection path pair. Carefully note that if a primary and protection path pair is required, the protection path is selected from another K-SP set, i.e., apart from the K-SP set from which the primary path is determined.
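The graph construction and K-SP step can be sketched as follows. This is a minimal stand-in using exhaustive simple-path enumeration over an adjacency map, adequate only for small illustrative topologies; it is not the NG-PCE's actual implementation, and the edge costs are assumed placeholders.

```python
# Sketch: model the abstracted topology as a graph G(V, E) and enumerate up
# to k shortest simple paths by exhaustive depth-first search.

def k_shortest_paths(edges, src, dst, k):
    """edges: dict mapping (u, v) -> link cost; links are bidirectional."""
    adj = {}
    for (u, v), cost in edges.items():
        adj.setdefault(u, []).append((v, cost))
        adj.setdefault(v, []).append((u, cost))

    paths = []
    def dfs(node, visited, cost):
        if node == dst:                       # reached destination: record path
            paths.append((cost, list(visited)))
            return
        for nxt, c in adj.get(node, []):
            if nxt not in visited:            # simple paths only
                visited.append(nxt)
                dfs(nxt, visited, cost + c)
                visited.pop()

    dfs(src, [src], 0.0)
    paths.sort(key=lambda p: p[0])            # cheapest first
    return paths[:k]
```

For the testbed-scale topologies in Section IV, a proper K-SP algorithm (e.g., Yen's) would replace the exhaustive search, but the interface (graph in, ranked candidate paths out) is the same.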

Note that the SRLG concept has been widely used in many network recovery schemes to group together network nodes and links with common risks. However, since the constrained K-SP algorithm requires SRLG state information, it must be pre-defined. This information can then be leveraged by the path computation scheme to achieve more efficient and resilient networks [3].

Now the protection path routing scheme must define a recovery scheme as well. Hence the NG-PCE also implements a dedicated backup path protection (DBPP) algorithm [13] which utilizes SRLG information to calculate a diverse protection path that does not include any links or nodes with common risks. This also implies that the protection path does not share any links with the primary path. Therefore, if the topology does not fulfill the link-disjoint requirement, no protection path can be determined. Also, a protection path is affiliated with a single primary active path and is reserved for recovery in case of failures. Overall, this pre-fault provisioning approach results in network resources being left unused to guarantee availability during potential failure events. As a result, only one path is active at any time, i.e., lower resource utilization.
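A minimal sketch of the disjointness test implied by DBPP might look like this; the link-to-SRLG mapping and function names are assumptions for illustration, not the paper's code.

```python
# Illustrative check (assumed semantics): a candidate protection path is
# acceptable only if it shares no links and no SRLGs with the primary path.

def path_links(path):
    """Undirected links along a node sequence, order-normalized."""
    return {tuple(sorted(pair)) for pair in zip(path, path[1:])}

def is_valid_protection(primary, backup, srlg_of_link):
    """srlg_of_link: dict mapping a link to the set of SRLG ids it belongs to."""
    p_links, b_links = path_links(primary), path_links(backup)
    if p_links & b_links:                       # link-disjoint requirement
        return False
    p_srlgs = set().union(*(srlg_of_link.get(l, set()) for l in p_links))
    b_srlgs = set().union(*(srlg_of_link.get(l, set()) for l in b_links))
    return not (p_srlgs & b_srlgs)              # no shared risk groups
```

If no candidate backup passes this check, the topology cannot support a protection path for that primary, matching the setup-failure behavior described above.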

The NG-PCE uses a weighted K-SP algorithm to determine path pairs, and not a greedy version. This means that the algorithm averages the failure probabilities of two paths and picks the pair with the lowest average failure probability as the primary/protection path pair. By contrast, greedy algorithms always choose the shortest path first, but the resulting protection path may not be the second shortest. Overall, the weighted K-SP algorithm works as follows:

1) Compute up to k shortest paths for a given network topology G(V,E) and requested source/destination node pair.

2) Verify traffic engineering (TE) constraints for the candidate paths from Step 1 and remove infeasible paths. If no candidate path remains, return setup failure.

3) Compute the cost of each remaining candidate path based upon the total SRLG failure probability.

4) Take the average cost of each pair of candidate paths, given that the paths cannot share common links. The path pair with the lowest average cost is chosen as the primary/protection pair. The lower-cost path of the two is defined as the primary, and the other as the protection path.

5) If no path pair is found, return setup failure.

Once a single primary path or a primary/protection path pair has been computed, the NG-PCE builds the respective path service model(s) and sends them to the Abstraction Module to be stored in the database. Note that the base NML schema [12] only has a limited number of classes and attributes. Therefore, additional new NML classes and attributes are also defined to hold path and SRLG information and, as a result, build the path service and SRLG models. Namely, the path service model defines the nodes, ports and flow rules associated with a specific source-destination pair. Meanwhile, a set of NML rules is defined for each computed source-destination path pair. Also, each specific path has a Universally Unique Identifier (UUID).
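The numbered steps above can be condensed into a short sketch. The exact cost model here (summing per-link SRLG failure probabilities and averaging over the pair) is an assumption consistent with Steps 3-4, not the paper's code, and the TE-filtering of Step 2 is assumed to have already produced the candidate list.

```python
# Sketch of Steps 3-5: score link-disjoint pairs of candidate paths by their
# average SRLG-based cost and pick the cheapest pair.

from itertools import combinations

def srlg_cost(path, fail_prob):
    """fail_prob: dict mapping an undirected link to its failure probability."""
    links = [tuple(sorted(l)) for l in zip(path, path[1:])]
    return sum(fail_prob.get(l, 0.0) for l in links)

def select_pair(candidates, fail_prob):
    """Return (primary, protection) or None on setup failure (Step 5)."""
    def links(p):
        return {tuple(sorted(l)) for l in zip(p, p[1:])}
    best = None
    for p, q in combinations(candidates, 2):
        if links(p) & links(q):
            continue                            # pairs must be link-disjoint
        score = (srlg_cost(p, fail_prob) + srlg_cost(q, fail_prob)) / 2
        if best is None or score < best[0]:
            best = (score, p, q)
    if best is None:
        return None                             # no feasible pair: setup failure
    _, p, q = best
    # the lower-cost member becomes the primary, the other the protection path
    return (p, q) if srlg_cost(p, fail_prob) <= srlg_cost(q, fail_prob) else (q, p)
```

This captures why the scheme is not greedy: the winning pair minimizes the average cost jointly, so the primary need not be the overall shortest path.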

C. FlowREST Driver

As shown in Fig. 1, the resilient SDN system uses the ONOS REST API to interface with the ONOS SDN controller. Here several types of data can be pulled from or pushed to ONOS controllers, i.e., as defined by ON.Lab in [14]. In particular, the system here makes use of the following resources: device, link and flow.

Now in order to implement a single primary path or a primary/protection pair, the resilient system must be able to specify (to the SDN controllers) how frames are to be processed and forwarded. To achieve this goal, the Abstraction Module retrieves the path service model associated with the source-destination pair, which contains the set of flow rules to be sent/pushed to the proper SDN controller(s). This NML model is parsed and coded into ONOS JavaScript Object Notation (JSON) format, and then pushed to the controller(s) via the ONOS REST API interface. For each pushed flow, the ONOS REST driver retrieves a Flow Identification number (FID) and forwards it to the Abstraction Module, which then stores that FID in the database. Storing these FIDs is necessary in order to keep track of the deployed flow rules. Also note that the SDN controller pushes flow rules to all nodes along the path in order to set up the end-to-end connection.
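A hedged sketch of this push step is shown below. The JSON shape follows ONOS's published flow-rule format and the /onos/v1/flows endpoint; the device id, port numbers, host and credentials are placeholders, not values from the paper.

```python
# Sketch: build one ONOS flow rule as JSON and POST it over the REST API.

import json
import urllib.request

def build_flow(device_id, in_port, out_port, priority=40000):
    """Forward frames arriving on in_port out of out_port on one device."""
    return {
        "priority": priority,
        "isPermanent": True,
        "deviceId": device_id,
        "selector": {"criteria": [{"type": "IN_PORT", "port": in_port}]},
        "treatment": {"instructions": [{"type": "OUTPUT", "port": out_port}]},
    }

def push_flow(controller_host, flow, auth_header):
    """POST the rule to /onos/v1/flows/<deviceId>; returns the HTTP response."""
    url = f"http://{controller_host}:8181/onos/v1/flows/{flow['deviceId']}"
    req = urllib.request.Request(
        url,
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In the real system, the response to such a POST carries the flow id that the driver stores as the FID for later withdrawal.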

As an example, consider the simplistic path service model in Fig. 3 (only a small part of the model is represented for illustrative purposes). Here one-way communication is required between switch1 (sw1) and switch2 (sw2) via a set of rules. Note that the mentioned path service model allows switch1 (outport eth1) to forward flows to switch2 (inport eth1), but switch1 has no flow rule allowing ingress packets originating from switch2 (and vice-versa), i.e., the reverse direction is not defined. Hence the ONOS REST driver detects the need for the reverse path, builds the JSON flow rule and pushes it to the designated controller(s).

Now when a primary/protection path pair is computed, the Abstraction Module also stores two separate models, i.e., one for each path. The FlowREST driver then retrieves and parses only one path service model, which can be either the primary or the protection path service model. Initially, the primary path rule set is pushed to the controllers since these rules define the shortest path. However, if a link failure is detected on the path, the FlowREST driver retrieves the FIDs of the currently-deployed flows and instructs the ONOS controllers to drop those specific flow rules. Once the primary path rule set is removed, the driver sends the new flow rule set to the controllers, thereby implementing the protection path. The same sequence of steps is performed when all link failures belonging to the primary path are resolved.
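The failover sequence above can be sketched as follows, using a stub controller object; delete_flow and push_flow are assumed method names standing in for the REST calls, not the paper's API.

```python
# Sketch of failover: withdraw the primary path's flows by FID, then install
# the protection path's flows and keep their FIDs for the reverse switch-over.

def fail_over(controller, primary_fids, protection_rules):
    """Drop deployed primary rules, push backup rules, return new FIDs."""
    for fid in primary_fids:
        controller.delete_flow(fid)
    return [controller.push_flow(rule) for rule in protection_rules]

class FakeController:
    """Minimal stub used only to illustrate the call sequence."""
    def __init__(self):
        self.flows, self.next_fid = {}, 0
    def push_flow(self, rule):
        self.next_fid += 1                 # the controller assigns the FID
        self.flows[self.next_fid] = rule
        return self.next_fid
    def delete_flow(self, fid):
        del self.flows[fid]
```

The same two-phase sequence (delete old FIDs, push the other rule set) runs in reverse once the primary path's link failures are resolved.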

IV. TESTBED & DEVELOPMENTS

To verify the overall automation capability and the accuracy of the flow rules built and deployed by the resilient SDN system, a multi-area single-domain topology is designed. This topology is emulated using Mininet [16], which, among other features, emulates SDN network topologies.

Overall, the test topology is composed of 4 different network areas. Namely, Area 1 comprises the DARPA CORONET Global Network topology, i.e., CORONET Continental United States (CONUS), with 75 nodes and 99 links [15]. This topology represents a nationwide single-area, single-domain large-scale topology with nodes spread across the continent. The other 3 areas (Metro1, Metro2, and Metro3) comprise 7 nodes and 12 links, 7 nodes and 10 links, and 3 nodes and 3 links, respectively. These metro topologies represent small city-wide networks. Furthermore, all 3 metro topologies are interconnected to the CONUS topology, i.e., each one is connected via 2 nodes, since 2 separate links are required for link-disjoint DBPP paths. For example, Metros 1 and 2 are linked to Washington DC (East Coast), whereas Metro 3 is linked to Los Angeles (West Coast). Hence the large-scale topology comprises 92 nodes and 130 links.
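As a quick sanity check, the stated area sizes can be tallied to reproduce the 92-node, 130-link total, assuming 2 interconnection links per metro as described.

```python
# Tally the testbed topology: per-area (nodes, intra-area links), plus the
# 2 disjoint interconnection links each of the 3 metros adds to CONUS.

areas = {
    "CONUS":  (75, 99),
    "Metro1": (7, 12),
    "Metro2": (7, 10),
    "Metro3": (3, 3),
}
inter_area_links = 2 * 3

total_nodes = sum(n for n, _ in areas.values())
total_links = sum(l for _, l in areas.values()) + inter_area_links
```

This confirms the totals quoted above: 75+7+7+3 = 92 nodes, and 99+12+10+3 intra-area links plus 6 interconnection links = 130 links.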

Next, 4 different virtual machines (VMs) are defined. Namely, VM Controller 1 runs an instance of the ONOS controller to control Metros 1 and 3. Meanwhile, VM Controller 2 runs another instance of the ONOS controller to implement the control plane for the CONUS topology. Also, VM Controller 3 controls Metro 2, whereas VM Controller 1 additionally runs the Mininet emulator for all network nodes. Finally, VM 4, namely VM R-SDN, runs the resilient SDN system. Figure 4 depicts the virtual multi-area large-scale network topology and all physical and virtual resources involved.

Fig. 4. Multi-area controllers for resilient SDN system communication

Three different testcases are designed to verify operation of the resilient SDN system in different scenarios. As shown in Table I, Testcase 1 performs primary and protection path computation and flow instantiation for 2 closely-located intra-area nodes, i.e., Washington DC and New York City. Meanwhile, Testcase 2 considers an inter-area source and destination pair between two distant nodes, i.e., Los Angeles (CONUS) and switch7 in Metro2, with SRLG consideration. The risk area here is defined as all links of the Abilene node (with a 50% probability of occurrence). Finally, Testcase 3 analyzes inter-area path computation and implementation between switch3 in Metro3 and switch7 in Metro2 with the same SRLG criteria as used in Testcase 2.

V. RESULTS

The resilient SDN system invokes the NG-PCE using parameters defined in a path service model file (omitted due to space restrictions). A different file is defined for each testcase. Hence once the file is read for Testcase 1, the PCE computes the K-SPs between NYC and Washington DC. Thereafter, two weighted shortest paths are chosen and two path service models (similar to the one shown in Fig. 3) are built by abstracting the paths, see Fig. 5. Overall, Fig. 5 can be interpreted as follows: the green bullets denote the nodes along the primary path, whereas the blue bullets represent the nodes along the protection path. The related numbers also identify the sequence of hops, i.e., from the source to the destination. Similarly, Figs. 7 and 8 show the results for Testcases 2 and 3; in addition to the green and blue bullets, the silver bullets represent the primary path nodes which are part of a different metro topology, while the red bullets represent the protection path nodes in the destination network.

Overall, the NG-PCE took 50 seconds to compute the path pair for Testcase 1, and another 6 seconds to push all flows to the proper SDN controllers (to establish the primary path). As shown in Fig. 5, the weighted path pair is computed as follows: Washington DC, Baltimore, Long Island, Pittsburgh, Scranton and NYC (green bullets, primary path), and Washington DC, Richmond, Greensboro, Raleigh, Norfolk, Wilmington and NYC (blue bullets, protection path).

Fig. 5. Testcase 1 - Primary/protection path pair in a single-domain topology

Fig. 6. Los Angeles-node flows

Meanwhile, the path service models abstract the paths, as defined in the Abstraction Module. However, the flows to be instantiated via the ONOS controller still have to be built. Thereafter, the ONOS REST driver reads the primary path service model, generates the corresponding JSON blocks and pushes them to the proper ONOS controllers. As an example, Fig. 6 shows one of the 12 flows (2 for each node, 6 nodes) pushed into Controller 2 to define the Los Angeles node flows.

Meanwhile, Fig. 7 shows the inter-area path pair between Los Angeles (CONUS) and switch7 in Metro2. Here, the primary path is comprised of 13 nodes and 26 flows, forming the sequence Los Angeles, Fresno, Las Vegas, Albuquerque, Denver, Omaha, Kansas City, Saint Louis, Louisville, Cincinnati, Washington DC. Specifically, this includes 11 nodes from CONUS and 3 nodes from Metro2 (switch1, switch6, switch7). Meanwhile, the protection path consists of 19 nodes, i.e., 38 flows, forming the sequence Los Angeles, San Diego, Phoenix, Tucson, El Paso, Abilene, Dallas, Little Rock, Memphis, Nashville, Greensboro, Richmond, Washington DC, switch2, switch1, switch6, switch5 and switch7. Although both paths pass through Washington DC, switch1 and switch6, these paths use separate links, i.e., due to the link-disjoint requirement. Overall, it took 6 minutes and 42 seconds to compute both paths, and 8 seconds to push all primary path flows.


TABLE I
TESTCASES DESCRIPTION

Testcase    Source / Area           Destination / Area   SRLG Location / Area   SRLG Probability
Testcase-1  Washington DC / CONUS   NYC / CONUS          None                   None
Testcase-2  Los Angeles / CONUS     Switch7 / Metro2     Abilene / CONUS        50%
Testcase-3  Switch3 / Metro3        Switch7 / Metro2     Abilene / CONUS        50%

Fig. 7. Testcase 2 - Primary/protection path pair in a 2-domain topology

Fig. 8. Testcase 3 - Primary/protection path pair in a 3-domain topology

Next, Fig. 8 shows the inter-area path pair between switch3 in Metro3 and switch7 in Metro2. Here the primary path is comprised of 15 nodes and 30 flows, beginning with switch3 and switch2 (both in Metro3), then Los Angeles (CONUS), and then the same sequence as the primary path for Testcase 2. Similarly, the protection path consists of 21 nodes, i.e., 42 flows: it starts with switch3, goes through switch1 (both in Metro3), then Los Angeles (CONUS), and continues with the same protection path sequence as Testcase 2. Overall, it took 13 minutes and 10 seconds to compute both paths and 9 seconds to push the primary path flows.

Finally, to verify the capability and accuracy of the protection paths for each testcase, one link is disrupted and the protection path is invoked. As expected, the flows that were defined along the primary path are successfully removed and the flows defined for the protection path are pushed to the proper ONOS controller. These protection path deployments took 12, 16 and 21 seconds, respectively.

VI. CONCLUSION

SDN technologies are gaining strong traction and allowing operators to manage networks using abstraction models. However, managing SDN flow rules is a complex task, especially in large-scale networks. Reliable path computation and deployment is also one of the many challenges presented by SDN. To address these concerns, this work introduces a novel management and orchestration system to implement an efficient and automated Next-Generation Path Computation Element (NG-PCE). This entity computes and deploys link-disjoint primary and protection paths based upon risk vulnerability information. A testbed is also built and verified using several testcases. Overall, the results confirm that the system is capable of computing and defining primary/backup paths.

ACKNOWLEDGMENT

This work has been supported by a Fundamental Research award from the United States Defense Threat Reduction Agency (DTRA). The authors are very grateful for this support.

REFERENCES

[1] D. Kreutz, F. Ramos, P. Verissimo, C. Rothenberg, S. Azodolmolky, S. Uhlig, "Software-Defined Networking: A Comprehensive Survey", Proceedings of the IEEE, Vol. 103, No. 1, Jan. 2015, pp. 14-76.

[2] A. Lara, A. Kolasani, B. Ramamurthy, "Network Innovation Using OpenFlow: A Survey", IEEE Communications Surveys & Tutorials, Vol. 16, No. 1, Feb. 2014, pp. 493-512.

[3] E. Oki, N. Matsuura, K. Shiomoto, N. Yamanaka, "A Disjoint Path Selection Scheme With Shared Risk Link Groups in GMPLS Networks", IEEE Communications Letters, Vol. 6, No. 9, Sept. 2002, pp. 406-408.

[4] L. Muller, R. Oliveira, M. Luizelli, L. Gaspary, "Survivor: An Enhanced Controller Placement Strategy for Improving SDN Survivability", IEEE GLOBECOM 2014, Austin, Dec. 2014.

[5] S. Savas, M. Tornatore, M. Habib, P. Chowdhury, B. Mukherjee, "Disaster-Resilient Control Plane Design and Mapping in Software-Defined Networks", IEEE HPSR 2015, Budapest, July 2015.

[6] S. Sun, L. Gong, B. Rong, "An Intelligent SDN Framework for 5G Heterogeneous Networks", IEEE Communications Magazine, Vol. 53, No. 11, Nov. 2015, pp. 142-147.

[7] J. Abe, H. Manter, A. Yayimli, "k-Maximally Disjoint Path Routing Algorithms for SDN", CyberC 2015, Xian, China, Sept. 2015.

[8] P. Lin, J. Bi, S. Wolff, Y. Wang, A. Xu, Z. Chen, H. Hu, Y. Lin, "A West-East Bridge Based SDN Inter-Domain Testbed", IEEE Communications Magazine, Vol. 53, No. 2, 2015, pp. 190-197.

[9] Open Networking Lab, "ONOS - Open Network Operating System", available online: http://www.onosproject.org

[10] J. Medved, R. Varga, A. Tkacik, K. Gray, "OpenDaylight: Towards a Model-Driven SDN Controller Architecture", IEEE WoWMoM 2014, Sydney, AU, June 2014.

[11] Open Networking Lab, "ON.LAB - Bringing Openness and Innovation to the Internet and Cloud", available online: http://onlab.us.

[12] J. Ham, F. Dijkstra, R. Lapacz, J. Zurawski, "Network Markup Language Base Schema Version 1", Open Grid Forum, May 2013.

[13] S. Liew, M. Gan, "An Exact Optimum Paths-Finding Algorithm for 1+1 Path Protection", ICIST 2012, Wuhan, China, Feb. 2012.

[14] A. Koshibe, "Appendix B: REST API (Draft)", available online: https://wiki.onosproject.org/pages/viewpage.action?pageId=1048699.

[15] Monarch Network Architects, "Sample Optical Network Topologies", available online: http://www.monarchna.com/topology.html.

[16] Mininet Team, "Mininet - An Instant Virtual Network on your Laptop", available online: http://mininet.org.

