Grant Agreement No.: 687871

ARCFIRE

Large-scale RINA Experimentation on FIRE+

Instrument: Research and Innovation ActionThematic Priority: H2020-ICT-2015

D4.3 Design of experimental scenarios, selection of metrics and KPIs

Due date of Deliverable: Month 12Submission date: January 2017

Final version: May 5th 2017Start date of the project: January 1st, 2016. Duration: 24 months

version: V1.0

Project funded by the European Commission in the H2020 Programme (2014-2020)

Dissemination level

PU Public X

PP Restricted to other programme participants (including the Commission Services)

RE Restricted to a group specified by the consortium (including the Commission Services)

CO Confidential, only for members of the consortium (including the Commission Services)


D4.3: Design of experimental scenarios, selection of metrics and KPIs

Document: ARCFIRE D4.3

Date: May 5th, 2017

H2020 Grant Agreement No. 687871

Project Name Large-scale RINA Experimentation on FIRE+

Document Name Deliverable 4.3

Document Title Design of experimental scenarios, selection of metrics and KPIs

Workpackage WP4

Authors Dimitri Staessens (imec)

Sander Vrijders (imec)

Eduard Grasa (i2CAT)

Leonardo Bergesio (i2CAT)

Miquel Tarzan (i2CAT)

Bernat Gaston (i2CAT)

Sven van der Meer (LMI)

John Keeney (LMI)

Liam Fallon (LMI)

Vincenzo Maffione (NXW)

Gino Carrozzo (NXW)

Diego Lopez (TID)

John Day (BU)

Editor Dimitri Staessens

Delivery Date May 5th 2017

Version v1.1


Abstract

This deliverable details the preparatory work done in ARCFIRE WP4 before the actual experimentation can begin. It takes the converged network operator scenarios for a traditional network design and the RINA designs that were developed in WP2, together with the testbed overview that was compiled by T4.1, and extracts interesting reference scenarios for the 4 main experiment objectives. This document will thus serve as a reference for the experimenters during experimentation, providing all necessary information on the experiment objectives and tools in one location, and providing references to the sections of the WP2 deliverables where more details can be found.

First, this document briefly summarises the WP2 reference scenarios for the converged network operator, focusing on the architecture of the access network, since that is where the technology diversification is most apparent. The first experiment will investigate the differences between a RINA and an evolutionary 5G network in terms of management complexity, targeting configuration management when deploying a new service. The experiment has been divided into 4 sub-experiments with green-field and brown-field starting points for the service. The second experiment will perform network-performance-oriented measurements, investigating the delivery of multimedia services over heterogeneous networks. Here, ARCFIRE will evaluate network parameters such as packet overhead with respect to data transport, resiliency to failures and maintenance, and scalability with respect to the number of users (flows), services and network elements. The third experiment turns our attention to multi-provider networks: RINA will be evaluated as an alternative to MPLS with respect to its capability for maintaining end-to-end QoS guarantees on delay, jitter and bandwidth. A fourth experiment will evaluate how renumbering end users' addresses according to their location in the network improves overall scalability. The fifth and final experiment will delve into the OMEC scenario for RINA, keeping applications reachable while they move through the network.


Table of Contents

1 Reference scenario
1.1 Baseline reference for traditional networks
1.2 RINA network design

2 Experiment 1: Management of multi-layer converged service provider networks
2.1 Introduction
2.1.1 The Current Problem
2.1.2 Multi-layer Coordination
2.1.3 The resulting Objectives
2.1.4 Scope: Configuration Management
2.1.5 Aspects, KPIs, Scenarios and Use Cases
2.2 Specifications
2.2.1 Testbeds
2.2.2 Scenarios
2.2.3 Planning
2.2.4 Experiment 1-1: Deploy Network from Zero
2.2.5 Experiment 1-2: Deploy new Service in Existing Network
2.2.6 Experiment 1-3: Deploy Network and Service from Zero
2.2.7 Experiment 1-4: Add new Network Node - Zero touch
2.3 Summary

3 Experiment 2: Scaling up the deployment of resilient multimedia services over heterogeneous physical media
3.1 Introduction
3.2 Objectives
3.3 Experiment scenario
3.4 Metrics and KPIs
3.5 Testbeds
3.6 Planning

4 Experiment 3: RINA as an alternative to MPLS
4.1 Objectives
4.2 Metrics and KPIs
4.3 Testbeds
4.4 Experiment scenario
4.5 Planning

5 Experiment 4: Dynamic and seamless DIF renumbering
5.1 Objectives
5.2 Metrics and KPIs
5.3 Testbeds
5.4 Experiment scenario
5.5 Planning

6 Experiment 5: Application discovery, mobility and layer security in support of OMEC
6.1 Objectives
6.2 Metrics and KPIs
6.3 Testbeds
6.4 Experiment scenario
6.5 Planning

7 Software

8 Conclusion


List of Figures

1 Converged service providers’ network
2 Residential fixed Internet access: data plane (up) and control plane (down)
3 4G Internet access: data plane (up) and control plane (down)
4 Layer 3 VPN between customer sites: data plane (up) and control plane (down)
5 Fixed access network (RINA)
6 Cellular access network (RINA)
7 Enterprise VPNs
8 Management Points in a Protocol Stack Pattern
9 Management Points in a Layered Domain Pattern
10 FCAPS, Strategies, and RINA Network
11 DMS: Stand-alone System for Benchmark Experiments
12 DMS: Full System for Experiments with a RINA Network
13 Experiment 1: Minimal RINA System for simple Experiments
14 Extended experiment
15 Data plane of 2-layer hierarchical LSP scenario for a core service provider network
16 Control plane of 2-layer hierarchical LSP scenario for a core service provider network
17 2-layer service provider core implemented with a RINA over Ethernet configuration
18 Mid-scale ladder topology
19 Large-scale ladder topology
20 Connectivity graph of the backbone L1 DIF (left) and the backbone L2 DIF (right)
21 Renumbering experiment: small single DIF scenario, layering structure (up), and DIF connectivity (down)
22 Renumbering experiment: large single DIF scenario, DIF connectivity
23 Renumbering experiment: DIF structure for the multi-DIF scenario
24 Renumbering experiment: systems for small scale multi-DIF scenario
25 Renumbering experiment: systems for large scale multi-DIF scenario
26 Categorisation of scenarios and configurations for the renumbering experiments
27 OMEC experiment: physical systems involved in the scenario
28 OMEC experiment: DIF configurations: UE to server on the public Internet
29 OMEC experiment: DIF configurations: UE to server on the provider’s cloud


List of Tables

1 Milestones for experiment 1
2 KPIs for experiment 1-1
3 KPIs for experiment 1-2
4 KPIs for experiment 1-3
5 KPIs for experiment 1-4
6 KPIs for experiment 2
7 Testbeds for experiment 2
8 Planning of experiment 2
9 KPIs for guaranteed QoS levels experiment
10 KPIs for guaranteed QoS levels experiments
11 Milestones for guaranteed QoS levels experiment
12 KPIs for renumbering experiments
13 Testbeds for renumbering experiments
14 Milestones for renumbering experiments
15 KPIs for OMEC experiments
16 Testbeds for OMEC experiments
17 Milestones for renumbering experiments
18 Software to be used in ARCFIRE experiments


1 Reference scenario

All experiments in WP4 are grafted onto the converged service provider network design researched in WP2. This section provides a quick summary of the core designs that will be used in the experiments. More details can be found in deliverables D2.1 [1] and D2.2 [2].

1.1 Baseline reference for traditional networks


Figure 1: Converged service providers’ network

Figure 1 recaps the main parts of a converged service provider network. The network is partitioned in three: several types of access networks allow the provider to reach its customers via wired and wireless technologies. The traffic from these access networks is aggregated by metropolitan area networks (MANs), which forward it towards the network core.

Customers and services are identified and authenticated in the access network or MAN. At the core, the traffic is forwarded either to a DC or to an interconnect edge router (e.g. the Internet edge). The service provider may have different datacentres attached to different parts of the network:

• Micro data centres attached to the access networks, supporting the Mobile Edge Computing concept by running latency-critical services very close to the customers. Micro data centres may also be used to support C-RAN (Cloud RAN) deployments.

• Metro data centres attached to the metropolitan networks. These DCs could host service providers' network services such as DHCP, DNS or authentication servers, but also Content Delivery Networks (CDNs), or even provide cloud computing services to customers.

• Regional or national data centres attached to the core networks. These could run the same services as metro data centres at a national scale, as well as mobile network gateways and/or the Network Operations Centre (NOC).



Figure 2: Residential fixed Internet access: data plane (up) and control plane (down)

Figure 2 shows the data and control plane for fixed access. The design shows a Carrier-Ethernet-based MAN aggregating traffic and forwarding it to one or more BRAS routers located at a core PoP. In the control plane, a BRAS (connected over PPP) authenticates customers. Shortest Path Bridging (SPB) in the MAN enables traffic engineering. eBGP is used by the provider border router to exchange traffic with its peer or upstream routers. Routes are disseminated to the BRAS(es) via iBGP. The BGP-free core runs an MPLS control plane.
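One concrete, easily quantified property of this stack is the overhead of the PPP-over-PPPoE encapsulation between the CPE and the BRAS. The sketch below is illustrative (it is not part of the deliverable); header sizes follow RFC 2516 (PPPoE) and RFC 1661 (PPP):

```python
# Illustrative: per-packet cost of PPP-over-Ethernet between CPE and BRAS.
PPPOE_HEADER = 6   # version/type, code, session id, length (RFC 2516)
PPP_HEADER = 2     # PPP protocol field (RFC 1661)

ETH_MTU = 1500     # standard Ethernet payload size

def pppoe_effective_mtu(eth_mtu: int = ETH_MTU) -> int:
    """Payload left for IP once PPPoE + PPP headers are accounted for."""
    return eth_mtu - PPPOE_HEADER - PPP_HEADER

print(pppoe_effective_mtu())  # 1492, the familiar PPPoE MTU
```

This 8-byte reduction is why residential links behind a BRAS typically advertise an MTU of 1492 rather than 1500.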



Figure 3: 4G Internet access: data plane (up) and control plane (down)

Figure 3 shows the data (user) plane and control plane for wireless (4G) access. The eNodeB (base station) is attached to the aggregation network, which aggregates the traffic from multiple base stations into a core PoP. The core PoP contains a Multi Service Edge and forwards the traffic to the mobile network gateways forming the EPC (Evolved Packet Core). In the example, the S-GW and the P-GW are located at the core PoP. Internet traffic reaching the P-GW is forwarded to one of the provider's Internet border routers through the core network. In the control plane, the UE runs the NAS protocol against the Mobility Management Entity (MME), which allows the MME to authenticate the user and negotiate handovers. The P-GW runs iBGP with the Internet border routers, allowing it to route UE traffic to the Internet.
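The GTP-U tunnelling in the user plane (eNodeB to S-GW/P-GW) adds a fixed per-packet overhead, one of the "packet overhead" parameters experiment 2 targets. The back-of-the-envelope sketch below is illustrative: the header sizes are the standard minimums (outer IPv4, UDP, and the 8-byte GTP-U header of 3GPP TS 29.281, no options), while the payload sizes are assumptions chosen for the example:

```python
# Illustrative: fixed per-packet cost of carrying user traffic in a GTP-U
# tunnel. Outer IPv4 (20 B) + UDP (8 B) + mandatory GTP-U header (8 B).
OUTER_IPV4 = 20
OUTER_UDP = 8
GTP_U = 8

def gtpu_overhead() -> int:
    return OUTER_IPV4 + OUTER_UDP + GTP_U

def overhead_ratio(payload_bytes: int) -> float:
    """Fraction of the wire taken by tunnel headers for one user packet."""
    return gtpu_overhead() / (gtpu_overhead() + payload_bytes)

print(gtpu_overhead())                # 36 bytes per user packet
print(f"{overhead_ratio(1400):.1%}")  # small fraction for a large packet
print(f"{overhead_ratio(40):.1%}")    # much larger for small (e.g. ACK) packets
```

The point of the arithmetic: the relative overhead depends heavily on the traffic mix, which is why overhead measurements must be taken against realistic packet-size distributions.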



Figure 4: Layer 3 VPN between customer sites: data plane (up) and control plane (down)

Figure 4 illustrates the data and control plane for a Layer 3 VPN service between two business sites. The customer CPE router is directly attached to the MAN. The L3 VPN service is carried over a VLAN to the MS Edge router, which runs a Virtual Routing and Forwarding (VRF) instance to forward the VPN traffic through the MPLS core. When VPN traffic exits the MPLS core, another VRF instance forwards it through another VLAN over the MAN, delivering it to the CPE at the other business site. The control plane runs eBGP between the CPE and the MS Edge routers to exchange VPN route information. These VPN routes are disseminated over the core via iBGP, allowing all VPN locations to learn the routes required to forward the VPN traffic across all sites.
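The encapsulation described above can be tallied per network segment. The sketch is illustrative (the segment breakdown is an assumption matching the description; the header sizes are the standard 4 bytes for an 802.1Q tag and 4 bytes per MPLS label):

```python
# Illustrative: extra headers a VPN packet carries on each segment of the
# L3 VPN path in Figure 4.
SEGMENTS = {
    "MAN (CPE to MS Edge, one 802.1Q VLAN tag)": 4,
    "MPLS core (transport label for the LSP + VPN label for the VRF)": 4 + 4,
}

for segment, overhead in SEGMENTS.items():
    print(f"{overhead} B extra on {segment}")
```

The two-label stack in the core is the key detail: the outer label steers the packet along the LSP through the BGP-free core, while the inner label survives until the egress MS Edge and selects the correct VRF there.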


1.2 RINA network design

This section summarises the RINA network design for the 3 scenarios above.


Figure 5: Fixed access network (RINA)

Figure 5 shows a RINA network structure for a fixed access network. The CPE is connected to the access router via a point-to-point DIF. This DIF provides IPC services to one or more service DIFs, which are used to authenticate the customer and support access to one or more utility DIFs, such as a public Internet DIF, VPN DIFs or application-specific DIFs. The traffic from multiple access routers is multiplexed over the aggregation network into an edge service router, which forwards it further towards its final destination.
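The recursive layering in Figure 5 can be made concrete with a small sketch: every DIF offers flows to the DIF above it and realises them using flows from the DIF(s) beneath it, the same pattern at every level. The DIF names and topology below are invented for illustration and only loosely mirror the figure:

```python
# Minimal sketch of recursive DIF layering: allocating a flow in one DIF
# recursively triggers flow allocation in the DIFs it runs over.
from dataclasses import dataclass, field

@dataclass
class DIF:
    name: str
    lower: list["DIF"] = field(default_factory=list)  # DIFs this one runs over

    def allocate_flow(self, dst: str, depth: int = 0) -> None:
        print("  " * depth + f"{self.name}: flow to {dst}")
        # Each hop of this DIF's flow is itself a flow in a lower-level DIF.
        for d in self.lower:
            d.allocate_flow(dst, depth + 1)

ptp_local = DIF("PtP DIF (local loop)")
ptp_aggr = DIF("PtP DIF (aggregation)")
aggregation = DIF("Aggregation DIF", [ptp_aggr])
top_level = DIF("Service provider top-level DIF", [ptp_local, aggregation])
internet = DIF("Public Internet DIF", [top_level])

internet.allocate_flow("server-app")
```

The design point this illustrates is that there is one mechanism (flow allocation over a DIF) repeated at every level, rather than a different protocol and management interface per layer.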


Figure 6: Cellular access network (RINA)

Figure 6 shows a possible RINA network for mobile access. A radio multi-access DIF manages the radio resource allocation and provides IPC over the wireless medium. A Mobile Network Top-Level DIF provides flows spanning the scope of the mobile network, where the Metro and Backbone DIFs multiplex and transport the traffic of the Mobile Network Top-Level DIF. Finally, the public Internet DIF allows applications in the UE to connect to other applications available on the public Internet.


Figure 7: Enterprise VPNs

Figure 7 shows a RINA configuration for providing Enterprise VPN services. The various routers are connected over various point-to-point DIFs. The Metro DIF provides IPC over the metro aggregation networks, multiplexing the traffic of the different types of services the operator provides over the metropolitan segment. A backbone DIF provides IPC over the core segment of the network, interconnecting PoPs in different cities. A VPN service DIF on top of the Metro and Backbone DIFs allocates resources to different VPN DIFs over the entire scope of the provider network.


2 Experiment 1: Management of multi-layer converged service provider networks

2.1 Introduction

The main goal of experiment 1 is to demonstrate the difference between a (protocol) stack and a DIF. Traditionally, in the literature as well as in commercial environments, the network is seen as a stack or protocol stack, realised by all components in the network. Figure 8 shows an example of this view using the resource abstraction introduced in [2]. Each vertical in the figure shows a network component, for instance an end system, a border router, or an interior router. Each pair of communicating components must implement stacks that are compatible with each other, where a component implements either the whole stack (left and right) or parts of the stack (middle), depending on what network functionality it provides. In the figure, the left and right components could be end systems and the middle component could be a switch or router.


Figure 8: Management Points in a Protocol Stack Pattern

2.1.1 The Current Problem

The management of stacks is complex because it is necessary to coordinate vertical as well as horizontal aspects, each separated per component and per stack element. On the horizontal axis, each part of a communication service (each stack element on the same layer in each component) needs to be configured individually. A reconfiguration often requires reconfiguring all stack elements of all components. This situation is even more complicated when we apply a real-world scenario, in which the ownership of the components varies. Any initial configuration and any further reconfiguration must now be coordinated amongst all components and their respective owners (assuming that each owner has the administrative permissions to perform the configuration action).

On the vertical axis, each stack element must be configured in a way that the component overall can provide the advertised and required service. Here, ownership is not the problem; instead, the multi-vendor, multi-protocol, and multi-technology nature of each individual stack layer introduces extensive complexity.

In a 5G converged service provider network, we must also consider a separation of technology and ownership across the whole network, for components (here called nodes) as well as for the "protocol zoo" dealing with various aspects of the network (and service) operation. Radio nodes form part of the radio access network (RAN). Nodes aggregating radio access traffic and managing user sessions are found in the core network. Long-distance communication and links to the public Internet belong to the transport network. Each of those networks is often owned by a different part of a provider's organisation.

This mix makes coordinated management of protocol stacks virtually impossible, mainly because the state space that needs to be coordinated is too large. The current solution is twofold. First, a large number of standards is produced as the normative source for management activities (for the activities themselves as well as for the underlying stack). Those standards provide interoperability but limit flexibility. Second, management is often based on management models using managed objects (with standardised access in the form of a defined protocol and standardised information models, often in terms of a management information base). A large number of models exists and needs to be considered for management activities. Figure 8 also shows the management points (blue interfaces) and the internal management or configuration policies that need to be coordinated. While the figure suggests a largely normalised environment, reality is much more heterogeneous.
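A back-of-the-envelope calculation illustrates why the state space explodes. All of the numbers below are assumptions chosen purely for illustration, not measurements from the project:

```python
# Illustrative arithmetic: every (component, stack element, management model)
# combination is a separately coordinated configuration point.
components = 200      # assumed: nodes across RAN, core and transport
stack_depth = 5       # assumed: stack elements per component
vendor_models = 3     # assumed: distinct management models per stack element

stack_managed_points = components * stack_depth * vendor_models
print(stack_managed_points)   # 3000 independently coordinated points

# With one common API and one object model per IPCP, the count collapses to
# the number of IPCPs; DIF-internal state synchronisation is done by the RIB.
rina_managed_points = components * stack_depth
print(rina_managed_points)    # 1000
```

The absolute numbers are arbitrary; the point is the multiplicative factor that heterogeneous management models add, which a single common model removes.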

2.1.2 Multi-layer Coordination

In RINA, the situation is very different. Using a common API (the IPC API) and building immutable infrastructure used by each and every IPCP means that there are no longer any technological differences among different layers in the network. There are also no differences between vertical elements in a component: everything is an IPC process (IPCP).
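This uniformity can be sketched in a few lines. The class and method names below are illustrative only, not the normative IPC API: the point is that every layer, from a shim over Ethernet up to a tenant DIF, exposes the exact same interface.

```python
from abc import ABC, abstractmethod


class IPCAPI(ABC):
    """Illustrative IPC API: identical for every layer (every DIF)."""

    @abstractmethod
    def allocate_flow(self, dest: str, qos: dict) -> int: ...

    @abstractmethod
    def write(self, port_id: int, sdu: bytes) -> None: ...

    @abstractmethod
    def read(self, port_id: int) -> bytes: ...

    @abstractmethod
    def deallocate_flow(self, port_id: int) -> None: ...


class LoopbackIPCP(IPCAPI):
    """A toy IPCP. Whatever layer it sits in, management and applications
    see exactly the same four operations, so no per-technology handling
    is needed."""

    def __init__(self):
        self._flows = {}   # port id -> queue of SDUs
        self._next = 0

    def allocate_flow(self, dest, qos):
        self._next += 1
        self._flows[self._next] = []
        return self._next

    def write(self, port_id, sdu):
        self._flows[port_id].append(sdu)

    def read(self, port_id):
        return self._flows[port_id].pop(0)

    def deallocate_flow(self, port_id):
        del self._flows[port_id]
```

Any code written against `IPCAPI` works unchanged regardless of which layer the IPCP implements; that is the property the paragraph above relies on.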

Next, the concept of separation of mechanism and policy, and the definition of policies for all aspects of an IPCP (and thus of inter-process communication), makes all available policies explicit. There is no longer a need for separate management model interface definitions; everything management requires is already defined by the architecture.

Next, the concept of a DIF as a layer, and not a stack, with autonomic functions for layer management (as part of every IPCP) and with an explicit definition of the scope of the layer (the scope of the DIF, realised by its policies), provides for a very new concept with regard to multi-layer management.

Last but not least, the RIB in the IPCPs (and thus in the DIF) provides for the required information base for all of the above. The RIB maintains all shared state of all IPCPs in a DIF. The defined processes also realise an automatic update of this shared state amongst all IPCPs in a DIF. This means that changes to a DIF’s configuration will propagate from one IPCP to another until the entire DIF is reconfigured.
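A minimal model of this propagation might look as follows; the `propagate` helper is hypothetical and stands in for the CDAP-based synchronisation that a real DIF performs between neighbouring IPCPs.

```python
class IPCP:
    """Toy IPCP holding a RIB (shared state) and its sync neighbours."""

    def __init__(self, name):
        self.name = name
        self.rib = {}          # shared state: object name -> value
        self.neighbours = []   # IPCPs this one synchronises with


def propagate(start, obj, value):
    """Flood a RIB update through the DIF until every IPCP has it.
    In a real DIF each hop would be a CDAP write, not a direct
    dictionary assignment."""
    start.rib[obj] = value
    pending, seen = [start], {start.name}
    while pending:
        node = pending.pop()
        for n in node.neighbours:
            if n.name not in seen:
                n.rib[obj] = value
                seen.add(n.name)
                pending.append(n)


# three IPCPs in a line: a - b - c
a, b, c = IPCP("a"), IPCP("b"), IPCP("c")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
propagate(a, "policy.routing", "link-state")
```

After the call, all three RIBs hold the new policy value, mirroring the statement that a configuration change propagates until the entire DIF is reconfigured.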

2.1.3 The Resulting Objectives

The objectives of experiment 1 are:

1. to demonstrate that the simplifications RINA provides (extensively discussed in [2] and summarised above) lead to a significant simplification of the required management definitions and implementations,

2. to demonstrate that, since the manager can use the same model with a few variations based on different RINA policies (DIF policies) to configure multiple layers, coordinated management can evolve from a complex management task (often provided by workflows with associated management policies) into a strategy-oriented management task in which the management activities are solely described by management strategies (as discussed in [2]),

3. to demonstrate that the manager can perform an adequate number of strategies at the same time (single manager deployment), and

4. to demonstrate that the manager can be scaled up and scaled down in case a single manager deployment is not able to handle the load of executed strategies.

Multi-layer management can now evolve from stack management (vertical and horizontal) towards an actual coordination function in a multi-domain model (here the domains are DIFs). Figure 9 shows this new situation provided by RINA using the resource abstraction introduced in [2], including the management points (per domain element - IPCP, per domain - DIF, multi-domain, and for the management system on the right). Most of the control functions are realised in the IPCPs and the DIFs. DIF interaction is also provided by RINA. What is left for the coordination is to provide for a coordinated initial configuration, and later for possible re-configuration as part of the management activity (monitor and repair, see [2]).
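The monitor-and-repair part of such a coordination function can be sketched as a single pass; `observe` and `reconfigure` are placeholders for the real CDAP read and write operations, and the flat name-value configuration is a simplification.

```python
def monitor_and_repair(desired: dict, observe, reconfigure):
    """One pass of a monitor-and-repair strategy: compare the observed
    DIF configuration against the desired one and repair the drift."""
    observed = observe()
    drift = {k: v for k, v in desired.items() if observed.get(k) != v}
    for key, value in drift.items():
        reconfigure(key, value)
    return drift  # empty dict means the DIF already matches the plan


# hypothetical network state and operator plan
network = {"dif.n1.mtu": 1400, "dif.n1.policy": "static"}
desired = {"dif.n1.mtu": 1500, "dif.n1.policy": "static"}
repaired = monitor_and_repair(desired,
                              lambda: dict(network),
                              lambda k, v: network.__setitem__(k, v))
```

A coordination function of this shape needs no knowledge of the layer's technology, only of the desired and observed values, which is what distinguishes it from cross-layer control.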

2.1.4 Scope: Configuration Management

Management activities cover a wide range of functional aspects. Since this experiment cannot cover all aspects, we will narrow the scope to a representative set of those aspects.

Network management activities are often called management functions. The functional scope of those management functions is commonly described in terms of Fault, Configuration, Accounting, Performance, and Security (FCAPS), also known as management areas [3]. On top of network



Figure 9: Management Points in a Layered Domain Pattern

management (including layer management), system management functions are standardised to manage a single system; see for instance ITU-T recommendations X.730 to X.799.

A management system, including the DMS, follows the common manager-agent paradigm, in which a manager executes the management functions and a managed agent controls resources. The communication between manager and agent is the management protocol. Communication is restricted to operations from manager to agent and notifications from agent to manager. The agent uses a Management Information Base (MIB) as a standardised knowledge base of the resources it controls. The resources commonly provide a standardised interface to the agent.
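The restricted communication pattern (operations one way, notifications the other) can be illustrated with a toy agent; the class below is a sketch under that assumption, not the DMS or CDAP implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A managed agent: owns a MIB and answers manager operations."""
    mib: dict = field(default_factory=dict)
    notifications: list = field(default_factory=list)

    def operation(self, op: str, name: str, value=None):
        # manager -> agent: only operations travel in this direction
        if op == "get":
            return self.mib.get(name)
        if op == "set":
            self.mib[name] = value
            self.notify(f"{name} changed")
            return value
        raise ValueError(f"unknown operation: {op}")

    def notify(self, event: str):
        # agent -> manager: only notifications travel in this direction
        self.notifications.append(event)
```

The manager never reaches into `mib` directly; it only issues operations and consumes the notification stream, which is exactly the constraint stated above.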


Figure 10: FCAPS, Strategies, and RINA Network

Figure 10 shows an example of the manager/agent paradigm in a RINA context. On the manager side, the FCAPS management areas are used to define the management functions. Strategies realise those functions, so they are the management activities. The management protocol is CDAP. The Management Agent (MA) then controls a RINA network, i.e. a deployment of DIFs with IPC processes. The MIB in RINA is a superset of all the RINA Information Bases (RIBs) of the controlled IPC processes. The common (standardised) management interface of the resources (DIFs, IPC processes) is the common IPC API.
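Viewed this way, assembling the DMS's MIB is conceptually a union of RIBs. The sketch below assumes each RIB is a flat name-value mapping, which is a deliberate simplification of the real object trees.

```python
def build_mib(ribs: list) -> dict:
    """The DMS view: a MIB assembled as the union of the RIBs of all
    managed IPC processes (a later RIB wins on shared object names)."""
    mib = {}
    for rib in ribs:
        mib.update(rib)
    return mib


# two hypothetical IPC-process RIBs
ribs = [{"ipcp1.addr": 16},
        {"ipcp2.addr": 17, "dif.name": "normal.DIF"}]
mib = build_mib(ribs)
```

Because the MIB is derived from the RIBs rather than defined separately, there is no independent management model to keep in sync with the network.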

As stated above, this experiment cannot cover all functional areas. For the FCAPS areas, the following assumptions can be made to narrow the scope for the experiment:

• Security - the management of security aspects is covered in ARCFIRE experiment 4.

• Accounting - the collection of data on resource use can be realised. Since a DIF has a well-defined scope, such a collection can be done per DIF (and then per IPC process in the DIF) with full understanding of the DIF’s scope. Call Data Records (CDRs) or other base information for accounting can then be produced. However, to demonstrate full Accounting Management (AM), we would need to associate the usage data (e.g. CDRs) to user sessions, then to individual users, and finally to the contracts those users have with the service provider. Building such a system requires a significant amount of resources outside the original scope of the project, yet with (very likely) little added value for the experiment.

• Performance - the collection of performance data is very similar to the collection of accounting data. However, demonstrating full Performance Management (PM) requires evaluating the collected performance counters against the KPIs a network operator (or service provider) sets for its operation. Those goals differ from operator to operator and from provider to provider. They also depend on the optimisation strategy for a network (for instance optimising for voice, or data, or coverage, or against connection loss / call drops, etc.). There is little discussion in the literature on common performance KPIs, beside some higher-level goals such as optimal performance. Since general performance tests of a RINA network have been done in other projects, performance management for this experiment would not add much value (besides the resource-heavy and largely arbitrary design and implementation of several optimisation strategies to test).

• Fault - faults are problems in the network that can be observed either directly (the resource and then the Management Agent send an error in the form of a notification) or indirectly (by observing the behaviour of the network and comparing it to the anticipated or required behaviour). An example of a directly observed fault is the loss of connectivity due to a hardware problem (port broken, physical node down). Examples of indirectly observed faults are congestion (monitoring the network for indicators of congestion) or the classic sleeping-cell scenario (a radio node is alive, not down or broken, but does not accept any further traffic or connections from mobile equipment). While Fault Management (FM) is often seen as the most important management activity, it is very hard to define fault scenarios for repeatable experimentation with measurable KPIs. On the one side, we can of course design an experiment in which an IPC process is deliberately broken, creating a fault and resulting in the DMS executing a fault mitigation strategy. On the other side, RINA is an autonomic network that realises much control (and management functionality, in the IPC process layer management) locally without notifying the DMS. The possible fault scenarios in such an environment have not yet been studied in detail, thus making it very hard to design fault management scenarios.

• Configuration - every network operation starts with planning (designing a network for given requirements) followed by deployment (of network nodes and other physical resources, and of software components and other virtual resources). Once in operation, a network might (and usually does) require re-configuration. This requirement can be the result of a change in requirements (up to business goals set by the operator), faults in the network (faults which can be mitigated by re-configuration), a required shift in the operation due to changed traffic (traffic behaviour or traffic mix impeding, for instance, required performance), the introduction of new accounting measurements required for new accounting (and finally charging) strategies, or changes for the security of the network operation (or parts of it). From this viewpoint, configuration management is the facilitator of all other functional management areas. Thus the main target for this experiment is Configuration Management (CM). Another reason to focus on CM is the current state of support for deploying a RINA network. The RINA Demonstrator is the most efficient way to define a RINA network and deploy it. The Demonstrator automates a large amount of the underlying configuration specification and the deployment actions. In defining strategies for CM for this experiment, we can (i) benefit from the developed Demonstrator (specification requirements and workflow have been developed, tested, and have been in use for demonstrations for a long time) and (ii) provide a simpler, more dynamic, and more flexible solution than the Demonstrator is today. A DMS with a set of CM strategies can supplement the Demonstrator and help to increase the automated deployment of RINA networks. Adding reconfiguration (in the DMS) will further enhance demo capabilities.

2.1.5 Aspects, KPIs, Scenarios and Use Cases

Measuring a network management system for performance, cost, and quality is a very difficult task. All three aspects depend on many variables, e.g. network topology, network size, operational optimisation strategies, available resources (compute, storage, network) for the network management system, different deployment options (e.g. clustering) of the network management system, and more. Beside measurable technical KPIs, a network management system has to support KPIs that depend on operator-specific goals and metrics, because a network management system will be used in the specific environment of a particular operator, often a Network Operation Centre (NOC). Some KPIs used are:

• network server availability

• server (NMS) to administration (NOC personnel) ratio


• network availability

• capacity of network path (to and from the NMS and in the managed network)

• Critical outage time (how long does it take the NMS to mitigate critical outages (i) in the network and (ii) for itself)

• Frequency of monitoring checks (how often is the NMS required to pull monitoring information for its own operation)

• Number of affected users (NOC users) on an average basis, i.e. how many people are required to use the NMS to operate a network

• Indices of smoothness of network changes, service introduction, etc.

• Complexity of network management operations, e.g. how many different actions need to be invoked to solve a problem

• Performance of network management operations, e.g. how many similar actions need to be invoked to solve a problem

• Workflow complexity, e.g. how complex are the workflows for problem-solving strategies

• Touchpoints, e.g. how many touchpoints are required to realise a network management operation (zero touch here means none, as the ultimate goal and key indicator for a high degree of automation in the network management system)
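The touchpoint KPI lends itself to direct instrumentation. The decorator-based counter below is a hypothetical sketch of how touches could be tallied during an experiment run; the function name and counter structure are illustrative.

```python
import functools


def count_touches(counter: dict):
    """Wrap any operation that needs human input so each invocation is
    recorded as one touch; the zero-touch goal means the counter stays
    empty for a fully automated strategy."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            counter[fn.__name__] = counter.get(fn.__name__, 0) + 1
            return fn(*args, **kwargs)
        return wrapper
    return deco


touches = {}


@count_touches(touches)
def confirm_node_config(node: str) -> str:
    """A hypothetical manual step an operator might have to perform."""
    return f"{node} confirmed"


confirm_node_config("router-1")
```

Counting per operation name (rather than a single global tally) also supports the complexity KPI, since it shows which steps of a strategy still require intervention.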

Detailed statistics and performance figures for the Operation Support Systems (OSS) used by mobile operators cannot be obtained. This information is often considered privileged by operators as well as OSS vendors. The same situation applies to metrics on NMS ease of use, involved personnel and other costs, and quality.

The closest equivalent to the ARCFIRE DMS in the real world are SNMP-based management systems for mobile, core, and transport networks as well as for LAN/MAN deployments. However, as [4] states, the performance of SNMP (and related NMSs) has never been fully and properly studied. There is no commonly agreed measurement methodology to assess network management (and SNMP) performance. In addition, similar to OSS, little is known about SNMP (and SNMP-based NMS) usage patterns in operational networks, which impedes the design of realistic scenarios for an NMS analysis.

The search for relevant, measurable, and comparable KPIs for this experiment is further complicated by the radically new value propositions of a RINA network. For the first time we have a fully autonomic network available, including explicit specifications of the scope of layers (DIFs), automatic mechanisms for sharing state between layer elements (the IPC processes with their RIBs inside a DIF), fully policy-based local control (data transfer and data transfer control), and the availability of a complete set of layer management functions (the layer management functions inside an IPC process). Multi-layer (multi-DIF) coordination now becomes a very different and simpler task (compared to current OSS and NMS systems). Even comparing it to cross-layer optimisation is difficult, since that optimisation assumes a role of control over the layers while the ARCFIRE DMS operates in terms of monitoring and repair. The starting point of the management activities is very different, making comparisons extremely difficult (or arbitrary).

To provide a consistent, measurable, and comparable set of performance indicators for the DMS, we will focus on three aspects:

• Speed: the speed of management operations can be measured by the number of collected, required, and adjusted variables (policies and configurations of DIFs); the number of notifications for a management activity; and experienced delays (in the management activity).

• Cost: can be measured by CPU usage (or wider usage of compute resources in a cloud environment), memory usage (or the usage of storage resources in a wider cloud environment), and bandwidth usage (how much overhead in required bandwidth does management create).

• Quality: quality can be measured in a spatial-temporal context. Spatial here refers to differences in particular variables (configuration and policies) between the DMS and the RINA network. Temporal refers to errors due to temporal aspects, such as monitoring round-trip times. Loss can be added to evaluate whether information is lost in transport.
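The spatial component of the quality aspect could, for example, be computed as the fraction of variables on which the DMS view and the network view disagree. The function below is an illustrative metric under that assumption, not a prescribed one.

```python
def spatial_error(dms_view: dict, network_view: dict) -> float:
    """Spatial quality: fraction of variables whose value in the DMS
    differs from (or is missing in) the actual network state."""
    keys = set(dms_view) | set(network_view)
    wrong = sum(1 for k in keys if dms_view.get(k) != network_view.get(k))
    return wrong / len(keys) if keys else 0.0


# hypothetical snapshot: one stale value, one variable unknown to the DMS
err = spatial_error({"a": 1, "b": 2}, {"a": 1, "b": 3, "c": 4})
```

A value of 0.0 means the DMS's picture of the network is exact; the temporal component would additionally weight each disagreement by how long it persisted.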

For these three aspects, we can now look into relevant KPIs. We have identified the following general KPIs for this experiment:

• Speed: the speed of management strategies

• Scale: the required scale of the DMS for speedy operation

• Time and cost for scale-in and scale-out: how timely can the DMS be scaled out (extended) in case of cascading event storms, and how costly is this change of scale

• Touch: how many touches are required for management strategies to succeed (this can be used as an indicator for cost since it determines the extent of human input required for an operation)

• Complexity: how complex are the used management strategies, i.e. how many different operations do they require (internally to make decisions and externally to realise them)

• Degree of automation: using the touch KPI we can estimate the degree of automation the DMS will provide in a network operation


Aspects and general KPIs need to be measured in experiments. Those experiments can be classified as a combination of a scenario with a use case. There are many possible scenarios and use cases to select from. The following scenarios are considered:

• No network: this scenario can be used to benchmark the DMS and all developed strategies. Strategy triggers (and responses) need to be simulated but actions are not performed. The focus is on DMS performance only. The DMS configuration is shown in Figure 11.

• Small (minimal) RINA network: a simple deployment of two hosts, each connected to a dedicated border router, both border routers connected to the same interior router. The DMS configuration is shown in Figure 12. The RINA network setup is shown in Figure 13.

• Medium RINA network: a medium-size RINA network as for instance used in the renumbering demonstration (30 nodes to show a network on European scale, as used in experiment 3, cf. Figure 21) or the data center demonstration (38 nodes for a medium-size data center using a spine-leaf configuration). The DMS configuration is shown in Figure 12.

• Large RINA network: a large RINA network like the network used in the experiment resembling the AT&T transport network for managed services (as used in experiment 3, cf. Figure 22). The DMS configuration is shown in Figure 12.


Figure 11: DMS: Stand-alone System for Benchmark Experiments

Each of these scenarios can be combined with each of the following use cases:

• Deploy network from zero: no RINA network exists; the DMS is triggered by an operator mechanism and deploys a complete RINA network.

• Deploy a new service (DIF) in an existing network: a RINA network exists and the DMS manages it; now add services and applications to the network by creating a new DIF (service) and deploying applications.



Figure 12: DMS: Full System for Experiments with a RINA Network


Figure 13: Experiment 1: Minimal RINA System for simple Experiments

23

Page 24: ARCFIRE · 2020. 9. 9. · D4.3: Design of experimental scenarios, selection of metrics and KPIs Document: ARCFIRE D4.3 Date: May 5th, 2017 Abstract This deliverable details the preparatory

D4.3: Design of experimentalscenarios, selection of metricsand KPIs

Document: ARCFIRE D4.3

Date: May 5th, 2017

• Deploy network and service from zero: deploy the network nodes, all DIFs, and an application.

• Add a new node to the network: how many touchpoints are required to add a new node, e.g. is it possible to create a zero-touch strategy for adding new nodes?

The resulting 4-tuple for this experiment then is aspects, general KPIs, scenarios, and use cases. For concrete experiments we should define concrete KPIs and design the required strategies. It is also clear that a separation into sub-experiments is required. This separation can be done using the scenarios or the use cases. We have decided to take the use cases as the key separator and run all scenarios in each use case.
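The resulting experiment matrix (use cases as the key separator, each run over all scenarios) can be enumerated directly; the short labels below are shorthand for the scenarios and use cases described above.

```python
import itertools

SCENARIOS = ["no-network", "minimal", "medium", "large"]
USE_CASES = ["deploy-network", "deploy-service",
             "deploy-network-and-service", "add-node"]

# Use cases are the outer (separating) dimension, so each
# sub-experiment groups one use case with all four scenarios.
experiments = [(uc, sc) for uc, sc in itertools.product(USE_CASES, SCENARIOS)]
```

Iterating use cases first mirrors the chosen separation into four sub-experiments, each of which is then exercised in all four scenarios.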

The following subsections detail the resulting four sub-experiments. Each sub-experiment addresses one use case for all scenarios, using all KPIs and covering all aspects. Therefore, the sub-experiment descriptions are in parts repetitive. We chose to leave the repetition in this documentation, because different teams may only work on a subset of the sub-experiments.

2.2 Specifications

2.2.1 Testbeds

The Virtual Wall will be the main testbed of reference, as reported in D4.2 [5]. The Virtual Wall can provide access to bare-metal hardware to run the RINA implementation and the applications required for the experiment, providing access to an adequate number of resources.

Initially, the jFed experimenter GUI will be used to reserve and set up the resources in the FED4FIRE+ testbeds. Once it becomes available, the experimentation and measurement framework under development in WP3 will provide a more automated front-end for jFed as well as for other IRATI deployment and configuration tools such as the Demonstrator.

2.2.2 Scenarios

All experiments will be run in four different scenarios.

No RINA Network: This scenario uses triggers from the operator's OSS/NMS (here simulated by a skeleton application) to benchmark the DMS strategy execution. All developed strategies will be tested for speed and scale. The DMS will be a stand-alone DMS as shown in Figure 11.

Minimum RINA Network: This scenario uses the minimum RINA network (2 hosts, 2 border routers, 1 interior router, cf. Figure 13) with an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.

Medium Size RINA Network: This scenario uses a medium-size RINA network, for instance the European network used in experiment 3 as shown in Figure 21, and an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.


Milestone Month Description

MS1 M16 DMS Software with Strategy Executor, OSS/NMS trigger, MA/Demonstrator Integration

MS2 M17 Strategy for experiment defined and tested

MS3 M18 Strategy for experiment defined and tested

MS4 M19 Strategy for experiment defined and tested

MS5 M20 Strategy for experiment defined and tested

MS6 M21 reference experiment with measurements on LMI reference server

MS7 M22 continuous experiments for scenario 1 (benchmarking)

MS8 M24 continuous experiments for scenario 2 (minimal RINA network)

MS9 M26 continuous experiments for scenario 3 (medium size RINA network)

MS10 M28 continuous experiments for scenario 4 (large size RINA network)

Table 1: Milestones for experiment 1

Large Size RINA Network: This scenario uses a large-size RINA network, for instance the US network used in experiment 3 as shown in Figure 22, and an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.

2.2.3 Planning

The milestones for the design and specification of experiment 1 are shown in Table 1.

2.2.4 Experiment 1-1: Deploy Network from Zero

Objectives This experiment assesses the DMS’s capability to build a network from scratch using the specification of an operator’s network planning, e.g. topology and required network services, as input. At the start, there is no network, i.e. no network node is up and running. The first step is to start the first network node. Each following step adds a new network node according to the given topology.

The configuration of all nodes is handled by the DMS. A single strategy should be specified which takes a topology (and all required information for specific nodes, e.g. required services and DIFs) and, once triggered, builds the network step by step (i.e. node by node).

The experiment is finalised by a second strategy, which deploys a number of tests in the newly built RINA network to evaluate the correctness of the configuration. Those tests can be active (deploy a test service and evaluate correctness) or passive (analyse the configuration of each node, as represented in the RIB, against the given topology).
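Both strategies can be sketched together. Here `start_node` is a placeholder for the DMS actions that boot and configure one node (a handful of CDAP operations in reality), and the passive check compares the deployed configuration, as it would be read from the RIBs, against the planned topology.

```python
def build_network(topology: dict, start_node) -> dict:
    """Experiment 1-1 sketch: bring the network up node by node from an
    operator topology; returns the resulting network state."""
    network = {}
    for name, config in topology.items():
        network[name] = start_node(name, config)
    return network


def passive_check(network: dict, topology: dict) -> bool:
    """Passive evaluation: every planned node must exist with exactly
    its planned configuration."""
    return all(network.get(n) == c for n, c in topology.items())


# hypothetical two-node plan; start_node here just applies the config
topology = {"host1": {"difs": ["net"]},
            "border1": {"difs": ["net", "bb"]}}
network = build_network(topology, lambda name, cfg: dict(cfg))
```

Timing the `build_network` loop per node and for the whole topology yields the E1-1.1 and E1-1.2 measurements directly.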

The strategy which builds the network can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.


The initial trigger for building the RINA network comes from the operator’s OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the built network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in Section 2.1.5 in terms of speed, scale, time and cost, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 2.

KPI | Metric | Current state of the art | ARCFIRE Objective

E1-1.1 Speedy node add | ms or s | Node creation and configuration is often realised by rather complex workflows that can take several minutes up to an hour (for automated creation). | The strategy should run in sub-second speed for single node creation.

E1-1.2 Speedy network creation | ms or s or min | Creation and configuration of a whole network can, depending on the network complexity, take several hours to several weeks. | Even the large RINA network should be created in minutes, rather than hours.

E1-1.3 DMS scale | Number of strategy executors | Multiple workflows executed in parallel (or realised by teams working in parallel). | Ideally only 1 executor required (sufficient speed), more (with parallelised execution) if the speed KPI cannot be met.

E1-1.4 DMS scale-out | s and CPU usage | Scaling out a management system that is in operation is often impossible or extremely costly (takes hours, requires multiple touches, is processing intensive). | Scaling out the DMS should be done in seconds, with one or zero touches, and minimal CPU costs (for the scaling itself).

E1-1.5 Strategy complexity | Number of different operations required | Currently, CM operations create large sets of CM profiles, one per node. | From early experiments, we estimate 4-5 different operations per node, replicated for each node to create the network.

E1-1.6 Touches / degree of automation | Number of touches | Beside the (not counted) original touch (installation of a hardware node or trigger for software installation), adding a node often requires multiple additional touches for configuration. Early prototypes for VNF deployment can operate on a minimum (potentially zero) touch basis. | Ideally zero touch.

Table 2: KPIs for experiment 1-1


2.2.5 Experiment 1-2: Deploy new Service in Existing Network

Objectives This experiment assesses the DMS’s capability to add a new network service to an existing RINA network. The starting point is a RINA network, which can be built using the strategies of experiment 1-1. Once the network is built, a new network service (a new DIF) is injected into the existing network. The configuration and scope of this DIF are provided by the operator, e.g. by a planning task. This experiment has only one step: add a new DIF to an existing network.

The configuration and scope of the new DIF are handled by the DMS. A single strategy should be specified, which takes the operator’s planning specification and, once triggered, creates the DIF.
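A sketch of such a strategy, under the assumption (cf. KPI E1-2.2) that one create-IPC-process operation is replicated on each participating node; names and the network representation are illustrative.

```python
def add_dif(network: dict, dif_name: str, members: list) -> int:
    """Experiment 1-2 sketch: inject a new DIF into an existing network
    by replicating a single operation (create an IPC process joining
    the DIF) on each participating node; returns the operation count."""
    ops = 0
    for node in members:
        # one operation per node: the new IPCP enrols into dif_name
        network[node].setdefault("difs", []).append(dif_name)
        ops += 1
    return ops


# hypothetical existing network built by the experiment 1-1 strategy
network = {"host1": {"difs": ["net"]}, "host2": {"difs": ["net"]}}
ops = add_dif(network, "video.DIF", ["host1", "host2"])
```

The returned operation count maps straight onto the strategy-complexity KPI: it grows linearly with the number of participating nodes, with no per-node special cases.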

The experiment is finalised by a second strategy, which deploys a number of tests in the newly created DIF to evaluate the correctness of its configuration and scope. Those tests can be active (deploy a test service and evaluate correctness) or passive (analyse the configuration of the DIF against the given planning specifications).

The strategy which adds a new DIF can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.

The initial trigger for adding a new DIF comes from the operator’s OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the added DIF in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in Section 2.1.5 in terms of speed, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 3.

KPI | Metric | Current state of the art | ARCFIRE Objective

E1-2.1 Speedy DIF add | ms or s | Service creation and configuration is often realised by rather complex workflows that can take several minutes or hours (for automated creation). | The strategy should run in sub-second speed.

E1-2.2 Strategy complexity | Number of different operations required | Currently, CM operations create large sets of CM profiles, one per node. | 1 operation, replicated for the creation of an IPC process on each participating node.

E1-2.3 Touches / degree of automation | Number of touches | Creation of new services in the network is an extremely complex and complicated process, with multiple teams and touches. | Ideally zero touch.

Table 3: KPIs for experiment 1-2


D4.3: Design of experimental scenarios, selection of metrics and KPIs

Document: ARCFIRE D4.3

Date: May 5th, 2017

2.2.6 Experiment 1-3: Deploy Network and Service from Zero

Objectives This experiment assesses the DMS's capability to build a new RINA network and add all required network services, application services, and applications to it. The starting point for this experiment is a topology based on the operator's topology for the network, the requirements for network and application services based on the operator's specifications, and a set of applications that must be supported by the network as defined by the operator.

At the start, there is no network, i.e. no network node is up and running. The first step is to start the first network node. Each following step adds a new network node according to the given topology. Then, the required services are added (e.g. the related DIFs are created). Finally, all required infrastructure for the applications is deployed in the network. For the applications, we will use the standard IRATI applications rina-echo-time and rina-tgen.
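The node-by-node build-up described above can be viewed as an ordering problem over the operator's topology: every node except the first should be started only once at least one neighbour is already up. A minimal illustration (the bootstrap_order helper and the topology format are assumptions for this sketch, not DMS functionality):

```python
from collections import deque

def bootstrap_order(topology, first_node):
    """Order node start-ups so that every node except the first is
    brought up only after at least one neighbour is already running --
    a breadth-first traversal of the operator's topology.

    topology: {node: set of neighbour node names} (undirected graph).
    """
    order, seen = [], {first_node}
    queue = deque([first_node])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neigh in sorted(topology[node]):  # sorted only for determinism
            if neigh not in seen:
                seen.add(neigh)
                queue.append(neigh)
    return order
```

For a small topology with a core node, two metro nodes and one access node, the traversal starts at the core and reaches the access node only after its metro neighbour is up.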

All configurations are handled by the DMS. A single strategy should be specified that creates the network, adds all required services, and finally deploys the application infrastructure.

The experiment is finalised by a second strategy, which deploys a number of tests in the newly created network to evaluate the correctness of its configuration and scope. Those tests can be active (deploy a test application and evaluate correctness) or passive (analyse the network's configuration against the given operator's specifications).

The creation strategy can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.

The initial trigger comes from the operator's OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of speed, scale, time and cost, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 4.

2.2.7 Experiment 1-4: Add new Network Node - Zero touch

Objectives This experiment assesses how many touch points are required to add a new node to an existing RINA network. We consider any required manual action or necessary interaction of the DMS with an administrator as a touch point. Ideally, the DMS can add a new node with zero touches, i.e. fully automated. It is important for this experiment to establish how close the DMS can be to a zero-touch management system and what information is required to achieve that. Once described, it will also be of value to minimise the information required.

The starting point for this experiment is a network of at least 1 node, since the first node might require more configuration than any following node. The experiment can then run for every node described in a given topology. Ideally, the DMS can add all new nodes automatically.


E1-3.1 Speedy node add
  Metric: ms or s
  Current state of the art: Node creation and configuration is often realised by rather complex workflows that can take several minutes up to an hour (for automated creation)
  ARCFIRE Objective: The strategy should run at sub-second speed for single node creation

E1-3.2 Speedy network creation
  Metric: ms, s or min
  Current state of the art: Creation and configuration of a whole network can, depending on the network complexity, take several hours to several weeks
  ARCFIRE Objective: Even the large RINA network should be created in minutes, rather than hours

E1-3.3 Speedy DIF add
  Metric: ms or s
  Current state of the art: Service creation and configuration is often realised by rather complex workflows that can take several minutes or hours (for automated creation)
  ARCFIRE Objective: The strategy should run at sub-second speed

E1-3.4 DMS scale
  Metric: number of strategy executors
  Current state of the art: Multiple workflows executed in parallel (or realised by teams working in parallel)
  ARCFIRE Objective: Ideally only 1 executor required (sufficient speed), more (with parallelised execution) if the speed KPI cannot be met

E1-3.5 DMS scale out
  Metric: s and CPU usage
  Current state of the art: Scaling out a management system that is in operation is often impossible or extremely costly (takes hours, requires multiple touches, is processing intensive)
  ARCFIRE Objective: Scaling out the DMS should be done in seconds, with 1 or zero touches, and minimal CPU cost (for the scaling itself)

E1-3.6 Strategy Complexity
  Metric: number of different operations required
  Current state of the art: Currently, CM operations create large sets of CM profiles, one per node and service
  ARCFIRE Objective: From early experiments, we estimate 4-5 different operations per node, replicated for each node to create the network, plus deploying the required network services

E1-3.7 Touches / Degree of Automation
  Metric: number of touches
  Current state of the art: Adding nodes and services often requires multiple touches, realised by multiple teams. Early prototypes for VNF deployment can operate on a minimal (potentially zero) touch basis.
  ARCFIRE Objective: Ideally zero touch

Table 4: KPIs for experiment 1-3


All configurations are handled by the DMS. A single strategy should be specified that adds a new node to the network. After adding a new node, a second strategy should be triggered to evaluate the correct configuration of the newly added node against the given topology.

The initial trigger comes from the operator's OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of touch and complexity (of the strategy). They are further specialised for this experiment as shown in Table 5; this experiment should quantify the following KPIs:

E1-4.1 Touches / Degree of Automation
  Metric: number of touches
  Current state of the art: Besides the (not counted) original touch (installation of the hardware node or trigger for software installation), adding a node often requires multiple touches for configuration. Early prototypes for VNF deployment can operate on a minimal (potentially zero) touch basis.
  ARCFIRE Objective: Zero touch

E1-4.2 Strategy Complexity
  Metric: number of different operations required
  Current state of the art: Currently, CM operations create large sets of CM profiles per node
  ARCFIRE Objective: From early experiments, we estimate 4-5 different operations per node

Table 5: KPIs for experiment 1-4

2.3 Summary

In this section we have detailed ARCFIRE experiment 1, focusing on multi-layer coordination in a converged service provider network. We have defined the main objective, the current problem, and the solution that a multi-layer approach should provide in the context of RINA. We have also detailed why we are focusing on configuration management. The experiment is then based on a 4-tuple of aspects, KPIs, scenarios, and use cases. This tuple led to the definition of 4 sub-experiments covering the four described scenarios, providing quantitative results for all four use cases, specifying individual KPIs and addressing all described aspects.


3 Experiment 2: Scaling up the deployment of resilient multimedia services over heterogeneous physical media

3.1 Introduction

Deploying resilient multimedia services is one of the most challenging aspects that network operators currently face. The large number of nodes and users, together with the strict QoS requirements of today's network services, makes the network design phase a complex one. The resiliency aspect adds further complexity, as the network operator must cope with different kinds of software and hardware failures, both in the network and in the service applications.

A first observation is that the generality of the RINA architecture leads to the overall expectation that the effectiveness of RINA will increase the further we move up the network hierarchy (i.e. away from physical layer constraints) and the larger the network we consider, in terms of end user applications, deployed routers, management domains, and so on. From this general observation, we expect RINA to outperform 5G proposals built on SDN solutions due to a general reduction of complexity, reducing the number of elements necessary to operate the network at the same level of efficiency and performance.

Another area where we expect RINA to perform very well is reliability. Using a routed solution in small DIFs, we can tailor routing updates to react very fast to changes in network connectivity. This has already been investigated in the PRISTINE project at a smaller scale [6]. Whatevercast naming also has the potential to simplify multicast delivery and mitigate connection interruptions that cannot be handled within a DIF (when the graph becomes separated into different components). Combined with the RINA application naming scheme, we expect measurable decreases in connection downtime in the presence of adverse network conditions.

Following these directions, experiment 2 aims at exploring how to deploy resilient multimedia services over scaled-up RINA networks when different types of access networks are used. The experiment will show how RINA can operate services over heterogeneous physical media, such as fixed access, LTE Advanced and Wi-Fi. These access technologies are described in depth in [1].

3.2 Objectives

This experiment will focus on delivering multicast multimedia services at scale to users connecting over different access technologies. The most important aspects considered are scalability in terms of network elements (nodes such as routers and end-user devices), user connections and services. The objectives are minimising overhead between the end user and the base station for LTE users, optimising overall network resource usage (bandwidth, routing tables) and providing resilience to node and link outages due to failures and planned maintenance. Multicast and anycast are not part of the RINA prototypes yet, and have not been investigated experimentally before. With the improved software frameworks developed by ARCFIRE, we will be able to provide results in much more detailed scenarios and at larger scale.


Sticking points for a RINA deployment will be where hardware constraints are felt the most, close to the transmission equipment. LTE has a very efficient solution for reducing overhead between the eNodeB and the UE in the RObust Header Compression (ROHC) technology used by PDCP, replacing the L3 and L4 headers (IP/UDP/RTP) by a very small token, reducing 40 bytes (or even 60 bytes in the case of IPv6) to 2-3 bytes [7]. RINA's layered structure, with encapsulation at each layer to maintain transparency, may introduce additional overhead at this point. As part of this experiment, we will investigate policies for header compression to reduce the Protocol Control Information in the PDUs on the eNodeB-UE link.
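The savings ROHC achieves can be checked with back-of-envelope arithmetic using the figures cited above (a 40-byte IPv4/UDP/RTP header stack compressed to a 2-3 byte token); the 32-byte payload is an illustrative assumption for a typical VoIP frame:

```python
# Back-of-envelope check of the header overhead figures cited above:
# an uncompressed IPv4/UDP/RTP header stack is 20 + 8 + 12 = 40 bytes,
# which ROHC can replace with a 2-3 byte token. The 32-byte payload is
# an illustrative assumption (a common VoIP frame size).

def overhead_ratio(header_bytes, payload_bytes):
    """Fraction of each packet spent on headers."""
    return header_bytes / (header_bytes + payload_bytes)

IPV4_UDP_RTP = 20 + 8 + 12   # 40 bytes of uncompressed headers
ROHC_TOKEN = 3               # typical compressed header size

uncompressed = overhead_ratio(IPV4_UDP_RTP, 32)  # roughly 56% overhead
compressed = overhead_ratio(ROHC_TOKEN, 32)      # under 10% overhead
```

This is the order of improvement any RINA header compression policy on the eNodeB-UE link would have to approach to be competitive.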

3.3 Experiment scenario

Experiment 2 considers fixed and mobile end users accessing a variety of services. There will be 4 services available in the experiments (Web, Multimedia, Gaming and File Transfer). These users will reside across Europe, connected with each other over a single core network. The reference applications for the services are the nginx web server, the Midori web browser, VLC for the multimedia service, ioquake3 for the gaming service and Filezilla for the file transfer. Ports of these applications to the RINA POSIX-like API are already available for nginx (the port will be released as open source in the next month) and ioquake3 [8]. The other services will be validated by simulating traffic with RINA tools like rina-tgen and rinaperf (described in [9], Section 3.1). If ports of the other applications become available (e.g. a browser or a video streaming server/client), they will be adopted in the experiments as well. All these services can be started and stopped by means of the Rumba framework.

We will look into the steps required to provide multicast in RINA networks, since it is commonly used in provider networks. If the solution can be implemented within the project timeframe, we will add it to one of the prototypes. We can then perform measurements relevant for multicast services, such as video streaming, which will be simulated if a ported application is not available. Relevant metrics will be the overhead to set up a multicast flow, the bandwidth consumption and the reliability of the multicast flow.

The physical network graph of the experiment is shown in Figure 14. The access networks service users and are connected to metropolitan networks. The metropolitan networks are connected to each other via a core network. In the access networks, we will investigate three access technologies: fixed (Gb Ethernet), LTE and Wi-Fi (not shown in the figure). The LTE access network in the figure consists only of the User Equipment (UE) and the eNodeB, since this is the connection we are mostly interested in, to validate that RINA can offer a solution similar to RObust Header Compression (ROHC). Usage statistics will probably be monitored at the shim IPCPs on the eNodeBs. In the metro and core network, we will build on physical machines with 1G and 10G Ethernet links for bare-metal measurements. For scaled-up experiments, we will extend the access networks by using virtual machines in GENI and iLab.t (interconnected wherever possible) to perform experiments that are larger in terms of hosts, but less demanding in terms of bandwidth usage. The VMs will run applications that can work also with low bandwidth, like web browsing


and measurement tools (ping, netperf, rina-tgen, rinaperf, ...). More demanding host applications will also run on bare metal.

Figure 14: Extended experiment

The initial scenario of the experiment will consist of different users arriving at the network to use their services, and leaving when they are done. The arrival and departure rates will be Poisson distributed. Of course not all users share the same intensity of service usage, so we will also use a Poisson model for the usage of the different services. For RINA this means that the number of flows that will be set up will vary depending on the user.
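A Poisson arrival process like the one described can be generated by drawing exponentially distributed inter-arrival gaps; the rate, duration and seed below are illustrative parameters, not values fixed by the experiment:

```python
import random

def poisson_arrivals(rate_per_s, duration_s, seed=42):
    """Generate user arrival times over [0, duration_s) for a Poisson
    process of the given rate: inter-arrival gaps are exponentially
    distributed with mean 1/rate. Parameters are illustrative."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)
        if t >= duration_s:
            return arrivals
        arrivals.append(t)
```

Session durations and the per-user service mix can be drawn in the same way, yielding the varying per-user flow counts mentioned above.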

We will investigate any possible problems while the network is operating (either architectural problems or implementation problems). If everything operates as expected, we will gather statistics while the experiment is running. On the one hand, we will get output from some of the applications that were launched. On the other hand, we will gather interesting data from the IPCPs themselves. As an example, the different routing table sizes in the network may be an interesting observation. We will also look into other events, such as the resiliency of the network. We will first investigate the consequences of link failures in a RINA network. Later on we will extend this to nodes, and of course to application failures, most notably IPCP failures.

Once we have obtained results from this original setup, we will focus on scaling up the RINA scenario to more connected access networks (6 or 7), serving a combined total of 1000 users. The 3 metropolitan networks will be scaled up as well, by first adding more nodes, and later on by adding more metro networks. Obviously we will also scale up the number of core nodes to support all the extra traffic. This will also depend on the available resources, as the nodes in the core network are preferably interconnected by 10G links, which are a limited resource in most testbeds.

3.4 Metrics and KPIs

The objectives of this experiment are to quantify how the overhead due to Protocol Control Information (PCI) - the network headers - can be reduced in the network when constrained link resources are encountered. Routing scalability will be investigated by dumping routing tables to storage at certain intervals and assessing the number of routing entries. These functionalities are already supported by the available RINA implementations. Applications such as netperf or rina-tgen can measure peak, instantaneous and average bit rates and the total time to complete a file transfer. By using tcpdump and RINA tools we can measure gaps in packet transfers to assess failure recovery times and the packet loss during recovery. By logging all traffic using tcpdump/wireshark, we can analyse the total bandwidth consumed on each network link to assess network resource usage efficiency.
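The failure recovery measurement described above can be reduced to gap detection over a packet timestamp series (e.g. parsed from a tcpdump export); the interruption_time helper below is an illustrative sketch of that post-processing step, not one of the tools named above:

```python
def interruption_time(timestamps, nominal_interval):
    """Largest inter-packet gap exceeding the nominal send interval,
    interpreted as the failure interruption time (0.0 if no such gap).

    timestamps: sorted packet arrival times in seconds, e.g. parsed
    from a tcpdump/tshark text export of a constant-rate test flow.
    """
    worst = 0.0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > nominal_interval and gap > worst:
            worst = gap
    return worst
```

Counting the packets expected but missing inside the worst gap gives the companion packet-loss-during-recovery metric.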

These KPIs are summarised in Table 6.

PCI overhead
  Metric: bits
  Current state of the art: RObust Header Compression (ROHC)
  ARCFIRE Objective: comparable to ROHC

Routing scalability
  Metric: entries in a routing table
  Current state of the art: logarithmic in theory, superlinear in practice
  ARCFIRE Objective: logarithmic in practice

Failure interruption time
  Metric: ms
  Current state of the art: sub-50 ms restoration
  ARCFIRE Objective: sub-20 ms measured in the testbed

Application goodput
  Metric: b/s
  Current state of the art: application-dependent
  ARCFIRE Objective: application-dependent

Link resource utilisation
  Metric: %
  Current state of the art: scenario-dependent
  ARCFIRE Objective: scenario-dependent

Packet loss during failures
  Metric: number of SDUs
  Current state of the art: 0
  ARCFIRE Objective: 0

Table 6: KPIs for experiment 2

These KPIs will be measured under different circumstances and traffic loads. The traffic will be generated by gradually scaling up the number of end user devices, the number of network nodes (routers), the number of deployed services and the total number of user connections. These results will show how each of these KPIs scales and, taken as a whole, provide insight into overall network scalability.

3.5 Testbeds

The experiment will use the testbeds in Table 7 from the selection made in D4.2 [5]. These testbeds have been chosen taking into account the previous know-how of T4.3 partners, and to support the number of nodes required during the various phases of the experiment (i.e. ranging from ∼30 to ∼1000 nodes). Moreover, running the experiment on both a VM-based testbed (GENI) and a bare-metal testbed (Virtual Wall) has been considered an important factor for the evaluation of RINA software stability, as the significant difference in I/O timings (see footnote 1) can result in larger software testing coverage.

iLab.t Virtual Wall
  Purpose: Host the first experiments up to large-scale testing. Measurements of bandwidth, delay, jitter and burst characteristics on dedicated links and bare-metal hardware

w-iLab.t
  Purpose: Validation of the LTE setup and measurements of eNodeB-UE bandwidth usage

GENI / PlanetLab Europe
  Purpose: Emulation of a continent-wide core network, evaluation of scalability (routing table sizes) on VMs

Table 7: Testbeds for experiment 2

3.6 Planning

Table 8 details the expected milestones for experiment 2 throughout the execution of ARCFIRE. The experiment will start with small-scale setups (∼30 nodes), with the end goal of scaling up to ∼1000 nodes.

M16 - Deploy RINA software in the initial scenario and detect any problems
  Description: This will provide feedback to WP3.

M20 - Obtain measurements from RINA networks in normal operating conditions
  Description: This will conclude the first small-scale RINA experiment.

M22 - Scale up RINA deployment to mid-scale experiment
  Description: 5 connected access networks, more nodes in metro networks, and an overall increase in the number of users and services

M24 - Document measurements from the mid-scale experiments
  Description: This will conclude the mid-scale experiments.

M26 - Scale up RINA deployment to large-scale experiment
  Description: 7 connected access networks, more metro networks, more nodes in the core network, and a huge increase in the number of users and services used

M29 - Wrap up experimentation
  Description: Detailed analysis of the results and preparation for publication of the final results.

Table 8: Planning of experiment 2

The risk factors for experiment 2 are mostly related to the maturity of the experimentation software. The RINA prototypes used do not (and cannot) have the same degree of maturity as well-established IP-based software. This relative lack of maturity can reveal itself in two ways. Moving an implementation from its development environment to the testbeds may reveal corner cases (or, in general, code paths never taken before) that are related to the specific hardware it is running on. Machines with different numbers of processors, processor microarchitectures, memory speeds, cache sizes and NIC models can reveal timing issues and data races that were previously less visible. Scaling up the implementation will increase the rate at which any deficiency reveals itself. In addition, the likelihood of software failures increases (or at least does not decrease) as the duration of the experiment increases.

Footnote 1: I/O operations involving virtual machines can be an order of magnitude slower in terms of throughput and/or latency when compared to I/O operations on bare metal.

As a quantitative indication of the largest and longest RINA experiments conducted to date on the available implementations, IRATI has been scaled up on networks of virtual machines containing about 40 nodes; rlite has been deployed on networks of VMs with about 90 nodes. Moreover, these test networks have been operational only for some hours, while ARCFIRE targets at least a week of uninterrupted operation.

These risks are mitigated in two ways:

• T3.4 (which runs until the end of the project) is dedicated to fixing bugs, memory leaks and other problems found during the various planned phases of the experiment.

• Three independent RINA implementations are available to be used for experiment 2, so that fallback options are available should major issues arise with any one of these implementations.


4 Experiment 3: RINA as an alternative to MPLS

4.1 Objectives

This experiment will explore a couple of configurations in which RINA is used in a role equivalent to that of MPLS (Multi-Protocol Label Switching) and its associated control plane protocols. MPLS is widely used in the industry as a network substrate that can transport a diverse set of IP and Ethernet services for the residential, business and mobile markets over a consolidated infrastructure. RINA can play the same role as MPLS, providing a more flexible, scalable yet simpler framework that can be tailored to different deployment scenarios. Experiment 3 will explore a BGP-free service provider core network, supporting layer 3 VPNs and transport of Internet traffic. This scenario has been analysed in deliverable D2.1 [1], specifically in sections 2.3 and 2.6.1.

MPLS networks forward traffic based on fixed labels present in the header of MPLS packets. Each MPLS router looks up the incoming packet's label in an internal database and obtains an output interface and an outgoing label (routers swap the labels in the packet header). Ingress MPLS routers push an initial label according to a classification of the higher-level protocols transported by the MPLS network (IPv4 or v6 Internet, VPLS instances, Layer 3 VPNs, etc.). This classification is known as a FEC - Forwarding Equivalence Class - in MPLS terms. This initial label determines how the flow of packets belonging to the higher-level protocols will be forwarded through the MPLS network. A specific path through the MPLS network is called a Label Switched Path (LSP), and is defined by a set of labels at the MPLS routers traversed by the LSP.

Figure 15: Data plane of 2-layer hierarchical LSP scenario for a core service provider network

MPLS packets can carry stacks of labels that are used to multiplex various instances of higher-layer services over the same LSP, as is the case for IP VPNs: one label identifies the VPN instance, while the other label is used to forward the MPLS packet through a particular LSP. Label stacks are also used in the case of hierarchical LSPs, which help scale up large MPLS networks: in this case the MPLS network is divided into two or more domains, e.g. metro and core. Core routers set up a mesh of LSPs between them, while LSPs between metro routers are multiplexed over a core LSP at the metro/core edge [10]. An example of such a configuration is depicted in Figure 15.
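The label operations described above (an ingress push per FEC, a swap of the top label at each transit router, an inner service label left untouched) can be illustrated with a toy model; all tables, interface names and label values are invented for the example:

```python
# Toy model of the MPLS forwarding described above. A packet carries a
# label stack (top of stack last); transit routers swap only the top
# label, leaving an inner service/VPN label untouched. All tables and
# label values below are illustrative.

def ingress_push(stack, fec, fec_table):
    """Ingress router: classify into a FEC and push the initial LSP label."""
    return stack + [fec_table[fec]]

def transit_swap(stack, lfib):
    """Transit router: look up the top label and swap it.
    Returns (output interface, new label stack)."""
    out_if, out_label = lfib[stack[-1]]
    return out_if, stack[:-1] + [out_label]

fec_table = {"vpn-blue": 100}                    # FEC -> initial LSP label
lfib = {100: ("if2", 200), 200: ("if1", 300)}    # in-label -> (out-if, out-label)

stack = ingress_push([42], "vpn-blue", fec_table)  # 42 = inner VPN label
out_if, stack = transit_swap(stack, lfib)          # first transit hop
```

Two hops of this model show the outer LSP label changing at every router while the inner VPN label (42) survives to the egress, which is exactly the property hierarchical LSPs and VPN services rely on.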

Multiple control plane protocols are used to choose, distribute and maintain the label sets required for the MPLS forwarding plane. LDP, the Label Distribution Protocol, is typically used to set up LSPs with no hard QoS guarantees in an automated fashion. LDP requires the use of a routing protocol within the MPLS network (an IGP, or Interior Gateway Protocol), since it just negotiates labels locally between adjacent nodes. RSVP-TE is used to signal a traffic-engineered LSP across a set of MPLS routers. RSVP-TE can provide hard QoS guarantees and fast restoration times, but requires manual setup and keeps more state than LDP in the MPLS routers. iBGP (interior Border Gateway Protocol) or T-LDP (Targeted LDP) are typically used to distribute the MPLS labels that differentiate service instances (e.g. VPNs).

Figure 16: Control plane of 2-layer hierarchical LSP scenario for a core service provider network

Segment routing [11] is the latest attempt to simplify the MPLS control plane as well as to facilitate integration with centrally-managed Software Defined Networking (SDN) approaches. Segment routing provides a tunnelling mechanism that leverages source routing. Paths are encoded as sequences of topological sub-paths called segments, which are advertised by link-state routing protocols. Segments can be thought of as a set of instructions from the ingress router, such as "go to node X using the shortest path and then go to node Y using this explicit route". With segment routing, state information is pushed off the network core towards the edges (where the mapping from FEC to list of segments is kept) and into each packet (increasing the header overhead). An external controller is required to instruct ingress routers which paths to use through the network, and for which services.

RINA can be seen as a generalisation of MPLS in which both the data plane and the control plane / layer management machinery can recurse to facilitate scaling up networks. RINA networks provide a series of flows to higher-layer protocols, with certain QoS characteristics. Higher-layer flows can be multiplexed into lower-layer flows. The layer management policies at each layer (routing, resource allocation, congestion management) can be tailored to fulfil the QoS characteristics of the flows offered to higher layers. Contrary to MPLS, the layer management protocols are not flat but also recurse, improving the isolation between layers and contributing towards simplifying the configuration of the network and improving its scalability. No new protocols or concepts are required, just the usual RINA machinery with the appropriate policies.

Figure 17: 2-layer service provider core implemented with a RINA over Ethernet configuration

Figure 17 shows an equivalent RINA configuration to that of figures 15 and 16. Both thedata transfer functions and the layer management functions associated to them recurse, providingisolation between its naturally separated scopes. Each layer just has the same two protocols: EFCPfor data transfer and CDAP for layer management. The lower the layers the more stable the trafficbecomes, making connection-oriented like resource allocation policies more effective. Note thatthis configuration is also equivalent to that of service provider Ethernet technologies discussedin D2.1 (PBB, hierarchical PBB, etc.), with the advantage of minimising the number of requiredprotocols due to the generality of RINA. The policies used in this experiment shall allow the RINAnetwork to provide at least equivalent levels of service to that of MPLS networks and also providefor an efficient management of the resources in the provider network. Such requirements includethe provision of QoS guarantees, leveraging multiple paths, managing congestion, minimisingnetwork state (e.g. forwarding table sizes) and minimising protocol header overhead. Securityrequirements are not the focus of this experiment, but any RINA DIF can be configured with the


right authentication, access control and SDU protection policies to deal with different threats [12].

4.2 Metrics and KPIs

The KPIs for experiment 3 are given in Table 9.

KPI | Metric | Current state of the art | ARCFIRE Objective
Complexity of configuration | Number of protocols and parameters that need to be configured | Depends on specific scenario (will be calculated for experiment configuration) | Simpler configuration for equivalent scenarios
Protocol header overhead in data transfer PDUs | Number of protocol header bytes | 20 bits per MPLS label, at least 2 or 3 labels used in typical configurations | Less protocol header overhead for equivalent configurations
Network state: forwarding table size | Size of forwarding tables as a function of number of flows and nodes | Depends on specific scenario (will be calculated for experiment configuration) | Equal or less network state for equivalent configurations
QoS: Delays | Delays measured per QoS class | Hard to guarantee differential delay between QoS classes at high loads [13] | Statistical bounds on delay per QoS class, even when offered load is equal to 100%
QoS: Packet Loss | Losses measured per QoS class | Hard to guarantee differential packet loss between QoS classes at high loads [13] | Statistical bounds on packet loss per QoS class, even when offered load is equal to 100%
QoS: Capacity | Utilisation of N-flows (Mbps) | Hard guarantees using traffic-engineered LSPs | Capacity isolation between QoS classes, even when offered load is equal to 100%

Table 9: KPIs for guaranteed QoS levels experiment
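The protocol-header-overhead KPI can be illustrated with a back-of-the-envelope calculation. An MPLS label stack entry is 4 bytes; EFCP PCI field widths, by contrast, are policy-configurable per DIF, so the widths used below are illustrative assumptions rather than IRATI defaults:

```python
# Back-of-the-envelope comparison of per-packet header overhead.
# MPLS label stack entries are 4 bytes each; the EFCP PCI field widths
# below are ASSUMPTIONS: in RINA they are policy-configurable per DIF.

MPLS_LABEL_BYTES = 4  # 20-bit label + TC + S + TTL

def mpls_overhead(num_labels: int) -> int:
    """Bytes of MPLS shim header for a stack of num_labels labels."""
    return num_labels * MPLS_LABEL_BYTES

def efcp_pci_overhead(addr_bytes=2, cepid_bytes=2, qosid_bytes=1,
                      seq_bytes=4, length_bytes=2, type_flags_bytes=2) -> int:
    """Bytes of one EFCP PCI under the assumed field widths:
    src/dst address, src/dst CEP-id, QoS-id, sequence number,
    length, and PDU type + flags."""
    return (2 * addr_bytes + 2 * cepid_bytes + qosid_bytes
            + seq_bytes + length_bytes + type_flags_bytes)

if __name__ == "__main__":
    # Typical MPLS VPN: 2-3 labels pushed onto each packet.
    print("MPLS, 3 labels:", mpls_overhead(3), "bytes")
    # Two recursing backbone DIFs, one EFCP PCI each.
    print("2 EFCP PCIs   :", 2 * efcp_pci_overhead(), "bytes")
```

Whether the recursive configuration comes out ahead depends on the configured field widths, the number of recursing layers and what each stack replaces (e.g. the IP header in the MPLS case), which is precisely what the experiment will measure.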

4.3 Testbeds

The performance-sensitive nature of this experiment requires the use of bare metal hardware resources if possible. A moderate number of resources is required to carry out the tests, without the need to support wireless interfaces. Hence, the Virtual Wall is the best testbed option from the selection made in D4.2 [5]:

Testbed | Purpose
Virtual Wall | Provides access to enough bare metal servers to carry out the experiments described in this scenario


4.4 Experiment scenario

Experiment 3 will analyse the DIF configuration depicted by Figure 17 using a ladder network topology as described in [10]. This topology offers a realistic deployment for a service provider use case, in which the network design seeks a scalability vs. cost-efficiency trade-off. The ladder topology will be tested in two scenarios: mid-scale (52 nodes arranged as depicted in Figure 18) and large-scale (126 nodes arranged as depicted in Figure 19).

[Figure 18 depicts the mid-scale ladder topology: P1 core routers joined in a ladder, each aggregating P2 routers, which in turn aggregate PE routers.]

Figure 18: Mid-scale ladder topology

[Figure 19 depicts the large-scale ladder topology: P1 core routers arranged in a ladder, each aggregating P2 routers, which in turn aggregate PE routers.]

Figure 19: Large-scale ladder topology
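One way to reason about the scale of these scenarios is to generate the ladder programmatically. The sketch below uses a hypothetical parameterisation (P1 pairs, P2s per P1, PEs per P2), not the exact arrangement of Figures 18 and 19; with 2 P1 pairs, 3 P2s per P1 and 3 PEs per P2 it happens to yield the 52 nodes of the mid-scale scenario.

```python
# Sketch of a ladder service-provider topology generator. The
# parameterisation is an illustrative assumption, not the exact
# arrangement of the figures above.

def ladder_topology(num_p1_pairs: int, p2_per_p1: int, pe_per_p2: int):
    """Return (nodes, edges) for a ladder of P1 core pairs, each P1
    aggregating P2 routers, each P2 aggregating PE routers."""
    nodes, edges = [], []
    prev_pair = None
    for i in range(num_p1_pairs):
        pair = (f"P1-{2 * i}", f"P1-{2 * i + 1}")
        nodes += list(pair)
        edges.append(pair)                    # rung between the two P1s
        if prev_pair:                         # side rails of the ladder
            edges += list(zip(prev_pair, pair))
        for p1 in pair:
            for j in range(p2_per_p1):        # P2 aggregation layer
                p2 = f"P2-{p1}-{j}"
                nodes.append(p2)
                edges.append((p1, p2))
                for k in range(pe_per_p2):    # PE access layer
                    pe = f"PE-{p2}-{k}"
                    nodes.append(pe)
                    edges.append((p2, pe))
        prev_pair = pair
    return nodes, edges

nodes, edges = ladder_topology(num_p1_pairs=2, p2_per_p1=3, pe_per_p2=3)
print(len(nodes))  # 4 P1 + 12 P2 + 36 PE = 52 nodes
```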

The connectivity graph of the two DIFs in the mid-scale scenario is depicted by Figure 20. The


backbone L1 DIF provides direct connectivity to all P2 routers via N-1 flows; therefore all the N-flows provided by the backbone L2 DIF follow either the path PE-P2-PE or PE-P2-P2-PE, simplifying routing in the backbone L2 DIF if a proper addressing policy is utilised.
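This path property can be checked mechanically on a toy instance: with a full mesh of N-1 flows between P2 routers, a breadth-first search from any PE to any other PE should traverse one or two P2s only. The topology sizes below are illustrative assumptions.

```python
# Check the claimed path property of the backbone L2 DIF: with a full mesh
# between P2 routers, every PE-to-PE shortest path is PE-P2-PE or
# PE-P2-P2-PE. Toy instance with 4 P2s and 2 PEs each (an assumption).
from collections import deque
from itertools import combinations

p2s = [f"P2-{i}" for i in range(4)]
adj = {p: set() for p in p2s}
for a, b in combinations(p2s, 2):     # full mesh of N-1 flows between P2s
    adj[a].add(b); adj[b].add(a)
for p2 in p2s:                        # two PEs homed on each P2
    for k in range(2):
        pe = f"PE-{p2}-{k}"
        adj[pe] = {p2}
        adj[p2].add(pe)

def shortest_path(src, dst):
    """Plain BFS returning the node sequence from src to dst."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u); u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u; q.append(v)

pes = [n for n in adj if n.startswith("PE")]
for s, d in combinations(pes, 2):
    hops = shortest_path(s, d)[1:-1]          # intermediate routers only
    assert 1 <= len(hops) <= 2 and all(h.startswith("P2") for h in hops)
print("all PE-PE paths are PE-P2-PE or PE-P2-P2-PE")
```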

[Figure 20 depicts the two connectivity graphs: on the left, the backbone L1 DIF interconnecting the P1 and P2 routers; on the right, the backbone L2 DIF, in which the backbone L1 DIF provides a full mesh of N-1 flows between all P2 routers, each P2 serving its attached PE routers.]

Figure 20: Connectivity graph of the backbone L1 DIF (left) and the backbone L2 DIF (right)

The characteristics of the policies for the backbone DIFs are summarised in the following paragraphs.

• Data transfer: No retransmission control policies will be considered. Sliding window flow control with ECN-based congestion control will be used.

• Support for multiple QoS classes: In these DIFs, traffic from different applications or DIFs with different QoS requirements will make it necessary to offer several QoS classes. For the scenarios considered here, it will be necessary to offer the same classes as the layer on top: the VPN DIF, app-specific DIF or public Internet DIF.

• Multiplexing and scheduling policies: For these DIFs it will be necessary to use the QTAMux policies, given the multiple QoS classes to be supported.

Table 10 shows the different types of traffic that will be considered in this scenario. These traffic profiles will be translated into configuration parameters for the QTAMux.

Thus, three types of VPN traffic will be used to gather the results: real-time voice, video on demand and file transfer; all of them will be generated with iperf. To carry out the experiments, many flows of each type of application will be allocated between all possible node pairs (in the case of


Urgency | Type of application | Flows allocated (%) | Cherish level
Urgent traffic | Real-time voice | 20% | Medium
Medium urgency traffic | Video on demand, web browsing | 40% | High
Low urgency traffic | File transfer | 40% | Low

Table 10: Traffic profiles for the guaranteed QoS levels experiment

the small scenario), and between nodes that are not too close in the case of the larger one. The delays, goodput and packet losses will be measured by the applications.
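The flow mix of Table 10 can be sketched as a small allocation routine; node names, the number of flows per pair and the random seed are illustrative assumptions.

```python
# Sketch of the flow mix used for traffic generation: per Table 10, 20% of
# flows are real-time voice, 40% video on demand / web browsing and 40%
# file transfer, allocated between node pairs.
import itertools
import random

PROFILES = [("voice", 0.20), ("video/web", 0.40), ("file-transfer", 0.40)]

def allocate_flows(node_pairs, flows_per_pair, seed=1):
    """Assign each flow a traffic profile according to the Table 10 shares."""
    rng = random.Random(seed)
    flows = []
    for src, dst in node_pairs:
        for _ in range(flows_per_pair):
            r, acc = rng.random(), 0.0
            for profile, share in PROFILES:   # sample from the share table
                acc += share
                if r < acc:
                    break
            flows.append((src, dst, profile))
    return flows

pes = [f"PE{i}" for i in range(6)]
flows = allocate_flows(list(itertools.combinations(pes, 2)), flows_per_pair=10)
print(len(flows))  # 15 node pairs x 10 flows = 150
```

Each tuple would then be translated into one iperf session with the bandwidth and packet-size settings of the corresponding profile.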

Each experiment run will be executed with the following steps:

• Configuration of the L1 backbone DIF. All IPCPs will be instantiated and enrol in the L1 backbone DIF. After the routing policy has converged, the size of the forwarding tables will be measured.

• Verification of connectivity and performance through the L1 backbone DIF. The rina-perf tool will be used to allocate a number of flows with different QoS requirements between P2 nodes at the L1 backbone DIF. Goodput, delay and packet loss will be measured between each pair of P2 IPC Processes.

• Configuration of the L2 backbone DIF. All IPCPs will be instantiated and enrol in the L2 backbone DIF. As part of the procedure, a mesh of N-1 flows between P2 IPCPs at the L2 backbone DIF will be formed. After the routing policy has converged, the size of the forwarding tables will be measured.

• Verification of connectivity and performance through the L2 backbone DIF. The rina-perf tool will be used to allocate a number of flows with different QoS requirements between PE nodes at the L2 backbone DIF. Goodput, delay and packet loss will be measured between each pair of PE IPC Processes.

• Setup of L3 VPNs. A number of IP VPNs (depending on the experiment run) will be set up over the service provider core network. Setting up the VPNs involves the creation of N-1 flows provided by the L2 backbone DIF, as well as the proper update of the IP forwarding tables so that IP traffic is forwarded over RINA flows. IP routing tables at each PE will be verified after the VPN setup.

• Verification of L3 VPN connectivity and performance. Several scripts will start a number of iperf sessions at each PE, in order to generate the traffic that will be transported by the DIFs and collect the statistics of the observed performance.


Milestone | Month | Description
MS 1 | M21 | All software for experiments works on demonstrator, initial results using mid-scale deployments
MS 2 | M24 | Experiments carried out at VWall testbed using large-scale deployment for one network service provider
MS 3 | M28 | Experiments carried out at VWall testbed using large-scale deployment for two network service providers

Table 11: Milestones for guaranteed QoS levels experiment

4.5 Planning

We foresee the milestones reported in Table 11 in the development of the guaranteed QoS levels experiment.


5 Experiment 4: Dynamic and seamless DIF renumbering

5.1 Objectives

One of the requirements for achieving seamless mobility while keeping routing efficient and forwarding tables small in DIFs is the ability to renumber IPC Processes as they move through the network. An effective and scalable addressing strategy assigns addresses to IPCPs that are location dependent (that is, the address reflects the position of the IPCP with respect to an abstraction of the DIF connectivity graph). As the IPCP moves through the DIF, it will reach a point where its current address no longer reflects its position accurately (hence it is no longer aggregatable). The IPCP needs to get a new address assigned in a dynamic fashion, following a procedure that guarantees that the flows supported by the IPCP are not impacted.

The procedure to change the address of a network entity is usually known as renumbering, and it is not a seamless one in IP networks: IP addresses need to be assigned to interfaces on switches and routers, routing information must be propagated, ingress and egress filters must be updated, as well as firewalls and access control lists; hosts must get new addresses and DNS entries have to be updated.

An overview of the problems associated with renumbering of IP networks is provided in [14]. Since TCP and UDP connections are tightly bound to a pair of IP addresses, changing any of them will destroy the flow. Since DNS is an external directory, not part of the network layers, the renumbering process usually leads to stale DNS entries pointing to deprecated addresses. Even worse, applications may operate through the direct use of IP addresses, which will require an update of the application code, its configuration or both. Router renumbering usually requires an exhaustive, manual and error-prone procedure for updating control plane Access Control Lists or firewall rules. Moreover, static IP addresses are usually embedded in numerous configuration files and network management databases [15].

Dynamic renumbering in RINA networks is possible because none of the previous issues can happen. Flows provided by RINA DIFs are established between application process names; since IPC Processes never expose addresses outside of the DIF, renumbering events are not externally visible. All DIFs have internal directories that map the names of the applications registered to that DIF to the addresses of the IPC Processes they are currently attached to; therefore renumbering requires the update of this mapping, which is just a normal operating procedure in the DIF (no special function required). Access control rules are also set up according to application names, and therefore need not be updated when IPCPs change addresses. Last but not least, Network Manager applications interact with Management Agents in the managed systems via their application names, and hence are not exposed to IPCP address changes.
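The argument above can be condensed into a toy model: a DIF keeps a directory mapping application names to IPCP addresses, and flows are bound to names, so renumbering is just a directory update. This is a conceptual sketch, not the IRATI data model.

```python
# Minimal illustration of why renumbering is an ordinary operation in a
# DIF: flows are bound to application names, and the DIF's internal
# directory maps registered application names to the addresses of the
# IPCPs they are attached to. All names here are illustrative.

class Dif:
    def __init__(self):
        self.directory = {}  # application name -> current IPCP address

    def register(self, app_name, ipcp_address):
        self.directory[app_name] = ipcp_address

    def renumber(self, old_address, new_address):
        """Give an IPCP a new address: just rewrite the directory entries."""
        for app, addr in self.directory.items():
            if addr == old_address:
                self.directory[app] = new_address

    def resolve(self, app_name):
        return self.directory[app_name]

dif = Dif()
dif.register("video-server", ipcp_address=17)
flow = ("video-client", "video-server")       # flows reference names...
dif.renumber(old_address=17, new_address=42)  # ...so this does not break them
print(dif.resolve("video-server"))            # -> 42
```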

The main goal of this experiment is to understand the behaviour of dynamic renumbering in RINA networks, analysing the limitations and trade-offs involved in making renumbering events in a DIF invisible to the applications using it. This experiment will explore dynamic renumbering in several configurations of DIFs extracted from D2.2 [2], using this technique in two main use cases:


• Hosts moving through a provider's access network. Addresses in the IPCPs of the mobile host will need to be updated as it changes its point of attachment. In this scenario only a subset of the addresses of IPCPs in one or more DIFs need to be updated, but that may happen rather frequently depending on the speed of the mobile hosts.

• A provider updates the addressing plan of one or more DIF(s) in its network. This may happen because the current addressing plan no longer scales, or because the provider wants to change to an addressing plan that is more efficient given the network connectivity graph. In this scenario all the addresses of IPCPs in one or more DIFs need to be updated, but very infrequently.

5.2 Metrics and KPIs

The objectives of the experiment are to quantify i) the degradation in the level of service perceived by applications using flows when renumbering takes place in a DIF and ii) the overhead and performance of the renumbering procedure. As later explained in section 5.4, experiments will take into account realistic scenarios but also more extreme scenarios to understand the limitations of dynamic renumbering (e.g. in which a network is continuously being renumbered). That is why the seamless renumbering KPI targets in Table 12 refer to the realistic use cases.

KPI | Metric | Current state of the art | ARCFIRE Objective
Seamless renumbering: latency | Increased latency while the network is being renumbered | Application flows break when renumbering | Less than 5% in realistic use cases
Seamless renumbering: goodput | Decreased goodput while the network is being renumbered | Application flows break when renumbering | Less than 5% in realistic use cases
Seamless renumbering: packet loss | Packet loss due to renumbering events | Application flows break when renumbering | Zero in realistic use cases
Renumbering overhead | Average extra entries in IPCP forwarding tables due to renumbering events, as a function of renumbering period and DIF size | Renumbering cannot be fully automated and seamless | Understanding the trade-offs between renumbering period, DIF size and forwarding table size
Renumbering speed | Time to complete a renumbering event (until old address is deprecated) | Renumbering cannot be fully automated and seamless | Understanding the performance of the renumbering procedure, and how it may impact seamless renumbering metrics

Table 12: KPIs for renumbering experiments
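A possible way to evaluate the "less than 5%" targets of Table 12 is to compare averaged samples collected during renumbering against a baseline; the sample values below are made up for illustration.

```python
# Sketch of how the seamless-renumbering KPI targets could be evaluated:
# compare latency/goodput samples taken while the DIF is being renumbered
# against baseline samples, and require less than 5% degradation.
# All sample values are illustrative.

def degradation(baseline_samples, renumbering_samples, higher_is_better):
    """Relative degradation of the mean during renumbering; positive
    values mean performance got worse."""
    base = sum(baseline_samples) / len(baseline_samples)
    renum = sum(renumbering_samples) / len(renumbering_samples)
    return (base - renum) / base if higher_is_better else (renum - base) / base

goodput_ok = degradation([94.0, 95.0, 96.0], [92.0, 93.0, 94.0],
                         higher_is_better=True) < 0.05    # Mbps samples
latency_ok = degradation([10.0, 10.2, 9.8], [10.1, 10.4, 10.3],
                         higher_is_better=False) < 0.05   # ms samples
print(goodput_ok and latency_ok)  # -> True
```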

The experiment will rely on the following tools:

• IRATI RINA implementation. It will be used to implement multiple RINA-enabled systems


and run a variety of DIFs according to the experiment configuration described in section 5.4. Policies already present in the IRATI repository will be used to carry out standard DIF procedures. A few new namespace management policies will be developed to trigger the renumbering events according to different strategies (periodically, whole DIF at once, only mobile hosts).

• rina-echo-time. The rina-echo-time application will be used to measure the latency and packet loss perceived by a flow user while renumbering takes place.

• rina-tgen. The rina-tgen application will be used to measure the goodput perceived by a flow user while renumbering takes place.

• Demonstrator. The demonstrator will be used to test the correct behaviour of the software before deploying it on FED4FIRE+ testbeds. The demonstrator allows the experimenter to carry out multiple experiment runs in a controlled environment with enough scale (up to 100 RINA-enabled Virtual Machines).

• Rumba. Rumba is the experimentation and measurement framework tailored to RINA experiments. It allows the definition of complex experiments involving multiple testbeds, and is able to interact with testbed federation/automation tools such as jFed.

5.3 Testbeds

We do not foresee the use of wireless resources to carry out the renumbering experiment, therefore the Virtual Wall will be its main testbed of reference, as reported in D4.2 [5]. The Virtual Wall can provide access to bare metal hardware to run the RINA implementation and the applications required for the experiment, providing access to an adequate number of resources.

Initially the jFed experimenter Graphical User Interface (GUI) will be used to reserve and set up the resources in the FED4FIRE+ testbeds. Once it becomes available, the experimentation and measurement framework under development in WP3 will provide a more automated front-end for jFed, as well as other IRATI deployment and configuration tools such as the Demonstrator.

In summary, this experiment may use the testbeds in Table 13.

Testbed | Purpose
Virtual Wall | Provides access to enough bare metal servers to carry out renumbering experiments

Table 13: Testbeds for renumbering experiments


[Figure 21 shows the layering structure and DIF connectivity of the small single DIF scenario, with 32 IPC Processes named after European cities in the GEANT backbone (e.g. Oslo, Dublin, Lisbon, Madrid, Paris, Berlin, London, Amsterdam, Vienna, Athens, Riga, Nicosia).]

Figure 21: Renumbering experiment: small single DIF scenario, layering structure (up) and DIF connectivity (down)


5.4 Experiment scenario

Renumbering experiments will be based on two scenarios: single DIF scenarios, designed to test how renumbering behaves within a single DIF; and multi-DIF scenarios, designed to test renumbering across a whole network (set of DIFs). We foresee two versions of the single DIF scenario: a small one and a large one. The first one, depicted in Figure 21, is modelled after the GEANT network backbone and has 32 IPC Processes. The second one, depicted in Figure 22, is modelled after the AT&T network layer 3 backbone and has 80 nodes.

[Figure 22 shows the DIF connectivity of the large single DIF scenario, with 80 nodes named after US cities in the AT&T layer 3 backbone (e.g. Dallas, Miami, Atlanta, Denver, Seattle, Chicago, New York, Washington DC).]

Figure 22: Renumbering experiment: large single DIF scenario, DIF connectivity

Multi-DIF experiments will leverage the ideal RINA converged service provider designs described in D2.2 [2]. In particular, we will set up small and large multi-DIF scenarios for the two use cases considered in the experiment: mobility and change of addressing plan. Figure 23 shows the DIF structure for the change of addressing plan scenario. The experiment will focus on the residential customer DIF and below. Several tests will be carried out on two versions of the scenario. The first one is illustrated by Figure 24, featuring 41 systems. The second one is depicted in Figure 25, and uses 112 systems in a larger scale configuration. The scenario where renumbering supports mobile hosts will be tested using a similar configuration, in which mobile hosts will be used instead of CPE routers. Since we are only interested in analysing the change of address events, there is no need to have real mobility using wireless technologies in the experiment setup; mobility events will be simplified down to renumbering events occurring in a pattern equivalent to that of multiple


mobile hosts roaming through the network.

[Figure 23 shows the DIF structure for the multi-DIF scenario: point-to-point DIFs connect hosts, CPEs, access routers, metro edge and metro P routers, edge service routers, backbone routers and provider edge routers across the access, aggregation, service edge, core and Internet edge segments; metro backbone, metro service, residential customer service and backbone DIFs recurse on top, with public Internet, app-specific or VPN DIFs spanning the customer network and the networks of service providers 1 and 2, and a home DIF at the customer side.]

Figure 23: Renumbering experiment: DIF structure for the multi-DIF scenario

[Figure 24 shows the system types of the small scale multi-DIF scenario: customer premises equipment, access routers, MAN access routers, MAN core routers, edge services routers and backbone routers.]

Figure 24: Renumbering experiment: systems for small scale multi-DIF scenario

A namespace management policy will be used to trigger different network renumbering situations. With this policy, every IPCP in the DIF periodically changes its address (walking a pool of addresses reserved for each IPC Process). The period is a random variable uniformly distributed between a minimum and a maximum value (configurable when specifying the policy). By playing with the configuration of the policy it is possible to experiment with DIFs where all members continuously renumber at different times, DIFs where only some members change their addresses, or DIFs where all members change addresses at the same time just once. Figure 26 shows the different scenarios and configurations that will be taken into account for the renumbering experiment.
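The behaviour of this policy can be sketched as a discrete-event simulation: each IPCP walks its address pool with a period drawn uniformly between the configured bounds. This is an illustrative model, not IRATI policy code.

```python
# Sketch of the namespace management policy described above: every IPCP
# periodically takes the next address from a pool reserved for it, with
# the period drawn uniformly between configurable bounds.
import heapq
import random

def simulate_renumbering(ipcp_pools, t_min, t_max, horizon, seed=7):
    """Return a list of (time, ipcp, new_address) events up to `horizon`."""
    rng = random.Random(seed)
    next_idx = {ipcp: 0 for ipcp in ipcp_pools}
    # First renumbering of each IPCP happens after one random period.
    events = [(rng.uniform(t_min, t_max), ipcp) for ipcp in ipcp_pools]
    heapq.heapify(events)
    log = []
    while events and events[0][0] <= horizon:
        t, ipcp = heapq.heappop(events)
        pool = ipcp_pools[ipcp]
        addr = pool[next_idx[ipcp] % len(pool)]   # walk the pool cyclically
        next_idx[ipcp] += 1
        log.append((round(t, 2), ipcp, addr))
        # Schedule the next renumbering after another random period.
        heapq.heappush(events, (t + rng.uniform(t_min, t_max), ipcp))
    return log

pools = {"ipcp-A": [10, 11, 12], "ipcp-B": [20, 21, 22]}
for event in simulate_renumbering(pools, t_min=30.0, t_max=60.0, horizon=180.0):
    print(event)
```

Setting t_min equal to t_max with a horizon covering a single period approximates the "whole DIF renumbers once at the same time" configuration, while restricting the pools dictionary to a few IPCPs models the mobile-hosts case.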


[Figure 25 shows the same system types as Figure 24 (customer premises equipment, access routers, MAN access routers, MAN core routers, edge services routers and backbone routers) in a larger scale configuration.]

Figure 25: Renumbering experiment: systems for large scale multi-DIF scenario

[Figure 26 categorises the renumbering experiments into continuous renumbering, mobile host moving through the DIF, and change of addressing plan; each is tested in single DIF and multi-DIF configurations, at small and large scale.]

Figure 26: Categorisation of scenarios and configurations for the renumbering experiments


Milestone | Month | Description
MS 1 | M16 | All software for experiments with continuous renumbering scenario works on demonstrator, initial results using small-scale and large-scale deployments
MS 2 | M18 | Continuous renumbering experiments carried out at VWall testbed using small-scale deployment
MS 3 | M22 | Change of addressing plan and mobile hosts scenario experiments carried out at VWall testbed using small-scale deployment (maps to ARCFIRE project milestone MS8)
MS 4 | M26 | Continuous renumbering experiments carried out at VWall testbed using large-scale deployment
MS 5 | M28 | Change of addressing plan and mobile hosts scenario experiments carried out at VWall testbed using large-scale deployment (maps to ARCFIRE project milestone MS9)

Table 14: Milestones for renumbering experiments

5.5 Planning

We foresee the milestones reported in Table 14 in the development of the renumbering experiments.


6 Experiment 5: Application discovery, mobility and layer security in support of OMEC

6.1 Objectives

OMEC can be defined as “An open cloud platform that uses some end-user clients and located at the ‘mobile edge’ to carry out a substantial amount of storage (rather than stored primarily in cloud data centres) and computation (including edge analytics, rather than relying on cloud data centres) in real time, communication (rather than routed over backbone networks), and control, policy and management (rather than controlled primarily by network gateways such as those in the LTE core).” [16]

D2.2 [2] introduced the Open Mobile Edge Computing scenario as one of the 5G scenarios where RINA could deliver significant value. This experiment analyses the built-in capabilities in RINA that facilitate the realisation of OMEC use cases. In particular:

• Automatic location of applications regardless of the DIF(s) they are available through. In mobile edge computing scenarios, storage, compute and networking for edge services are provided at the edge of the mobile network. In a RINA network, compute and storage resources can be placed wherever needed. RINA can then discover distributed applications (edge services), locate processes and allocate flows to (between) them independent of their network location.

• Slicing: security and performance isolation. Network slices are isolated with guaranteed security and performance, while sharing the same underlying infrastructure. All of that is optimised for the delivery of a set of applications (or 5G verticals). RINA already has native support for scope, slicing and virtualisation, as discussed in sub-section 3.8.1 of D2.2 [2]. In essence, the concept of a DIF provides for securable layers (any number of them) whose policies can be tailored to the needs of each tenant and/or application (e.g. virtualised network function).

• Distributed mobility management. Distributed mobility management avoids centralised mobility anchors, providing efficient routing and traffic management (including handovers). Furthermore, it can be used to remove tunnels where possible. RINA already supports (natively) multi-homing and mobility, including multiple possible handover scenarios. Other RINA core features (scoped routing, topological addressing and routing, and seamless renumbering) allow a RINA network to efficiently support any type of mobility and distributed mobility management. Experiment 2 will analyse this topic in depth; this experiment only investigates how distributed mobility management is used as an enabler of OMEC scenarios.


6.2 Metrics and KPIs

The objectives of this experiment are to evaluate the behaviour of several built-in RINA features facilitating OMEC use cases, by verifying their correct operation and measuring the performance of their current implementation. In particular we are interested in the work done by the DIF Allocator to locate a destination application and configure DIFs to access it if needed, as well as in the impact of handovers in terms of performance (packet loss, delay, goodput) in a RINA over WiFi scenario (Table 15).

KPI | Metric | Current state of the art | ARCFIRE Objective
DIF Allocator performance | Extra delay incurred in flow allocation due to application discovery and joining the relevant DIF | This capability is not supported by the current Internet protocol suite | Understanding the performance of the DIF Allocator under load, identifying trade-offs in its design
Impact of handover on packet loss | Increased packet loss due to handover (WiFi AP to WiFi AP) | To be measured on testbed | Equivalent or less than in the IP case (understand the trade-offs of RINA over WiFi)
Impact of handover on delay | Increased delay due to handover (WiFi AP to WiFi AP) | To be measured on testbed | Equivalent or less than in the IP case (understand the trade-offs of RINA over WiFi)
Impact of handover on goodput | Loss of application goodput (Mbps) due to handover effects | To be measured on testbed for TCP and UDP | Equivalent or less than in the IP case (understand the trade-offs of RINA over WiFi)

Table 15: KPIs for OMEC experiments
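The handover KPIs above could be derived from receiver-side traces in which each packet carries a sequence number and a send timestamp; the sketch and its trace values are illustrative, not the output of any ARCFIRE tool.

```python
# Sketch of deriving handover impact from a receiver-side trace: sequence
# gaps give packet loss, and the one-way delay of packets sent around the
# handover instant gives the delay impact. Trace values are made up.

def handover_impact(trace, handover_start, handover_end):
    """trace: list of (seq, send_time, recv_time) for received packets.
    Returns (packets lost over the whole trace, mean delay of packets
    sent inside the handover window)."""
    window = [p for p in trace if handover_start <= p[1] <= handover_end]
    seqs = [s for s, _, _ in trace]
    lost = max(seqs) - min(seqs) + 1 - len(set(seqs))   # sequence gaps
    delay = (sum(r - s for _, s, r in window) / len(window)) if window else 0.0
    return lost, delay

trace = [(1, 0.00, 0.01), (2, 0.10, 0.11),   # before handover
         (4, 0.30, 0.38), (5, 0.40, 0.47),   # seq 3 lost, delay spikes
         (6, 0.50, 0.51)]                    # after handover
lost, delay = handover_impact(trace, handover_start=0.25, handover_end=0.45)
print(lost, round(delay, 3))  # -> 1 0.075
```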

6.3 Testbeds

The w-iLab.t wireless testbed provides an ideal environment for the wireless segment of the experimental scenario depicted in Figure 27. Such a facility provides access to a large number of WiFi access points, which will be used to model the access routers in the experiment, as well as around 20 hosts mounted on robots that can move through the facility, used to model the mobile hosts. The Virtual Wall will be used to provide resources for all the fixed routers (core routers, DC gateway, Top of Rack, ISP routers, provider border router) as well as the servers in the experiment. There is connectivity between the w-iLab.t and the Virtual Wall, therefore it is possible to carry out experiments that use resources from both testbeds.

Table 16 summarises the testbeds that will be used by the OMEC experiment.


Testbed | Purpose
w-iLab.t | Hardware resources for the wireless access routers and the mobile hosts, connectivity to Virtual Wall
Virtual Wall | Hardware resources for the fixed routers (core, border, ISP, DC) and servers

Table 16: Testbeds for OMEC experiments

6.4 Experiment scenario

The experiment scenario features a provider hosting applications belonging to two different enterprises in its own datacentre. Several mobile hosts (UEs) belonging to employees of each of the two companies are serviced by the provider. UEs connect to applications that are either available through the public Internet or through the provider's edge datacentre. UEs are completely unaware of the location of such services, which are dynamically discovered by the RINA network. Employees can move and attach to different provider access routers without service disruption. Figure 27 shows the configuration of the physical systems in the experiment scenario. The provider network consists of 6 wireless access routers, interconnected via 2 core routers, which in turn interconnect to the provider network border router. This border router is connected to two ISPs (each modelled with a single router, since the details of the ISP networks are not important for this experiment), each attached to a server. A datacentre owned by the network provider is attached to one of the core routers.

Figure 27: OMEC experiment: physical systems involved in the scenario (UEs 1-2, provider access routers AR 1-6, core routers CR 1-2, provider border router GW 1, ISP routers 1-2, servers SRV 1-6, top-of-rack routers ToR 1-2, and the data center gateway of the small DC inside the service provider network)


The DIF structure for communication between applications in UEs and applications on public Internet-facing servers is shown in Figure 28. The first hop between mobile hosts and the access network is serviced by shim DIFs over WiFi, which allow higher-layer DIFs and local RINA management to interact with the WiFi protocols (triggering access point scanning, association and dissociation). On top of that, the mobile network DIF provides distributed mobility management across the service provider network. Finally, a public Internet DIF allows applications in UEs to reach applications hosted at servers outside of the service provider network.
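The recursive layering just described can be made concrete with a small sketch. The DIF names below follow the text; the dictionary representation and helper are our own illustration, not part of the IRATI stack.

```python
# Illustrative sketch of the DIF layering of Figure 28, seen from the UE.
# Each layer maps to the lower-level DIF(s) it uses; names mirror the
# text, the structure itself is our own.
LAYERING = {
    "rina-echo-time DAF": ["Internet DIF"],       # application layer
    "Internet DIF":       ["Mobile Network DIF"],
    "Mobile Network DIF": ["Shim DIF WiFi"],      # first hop from the UE
    "Shim DIF WiFi":      [],                     # bottom: the WiFi media
}

def stack_below(layer):
    """Return the layers supporting `layer`, from top to bottom."""
    below = []
    for lower in LAYERING.get(layer, []):
        below.append(lower)
        below.extend(stack_below(lower))
    return below

# From the application's point of view, three layers sit underneath:
print(stack_below("rina-echo-time DAF"))
```

The same recursion applies at every hop, only with different shim DIFs (Ethernet instead of WiFi) between the fixed routers.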

Figure 28: OMEC experiment: DIF configurations: UE to server on the public Internet (a DAF running rina-tgen or rina-echo-time over the Internet DIF, which floats on the Mobile Network DIF, supported by a shim DIF over WiFi at the UE and shim DIFs over Ethernet across the access, core, gateway, ISP 1 and Server 6 systems)

Figure 29 shows the DIF structure for communication between applications in UEs and applications hosted at the service provider's private cloud. A DC Fabric DIF connects all the servers in the service provider datacentre to the datacentre border router (labelled gateway in the figure). Two VPN DIFs float on top of the DC Fabric DIF, each one providing private access to the applications of a different company. These VPN DIFs span all the way to the UEs when those need to connect to applications hosted at the provider's datacentre.
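The tenant separation provided by the two VPN DIFs can be expressed as a membership check: two systems can exchange application traffic only if some VPN DIF contains both. The sketch below is our own illustration; which UEs and servers belong to which enterprise is an assumption chosen to mirror the scenario.

```python
# Illustrative sketch of VPN-DIF tenant isolation. Membership sets are
# hypothetical; in the real experiment they are established by the DIF
# Allocator and the NMS (joining a VPN DIF requires authentication).
VPN_MEMBERS = {
    "Enterprise 1 VPN DIF": {"UE1", "SRV1", "SRV3"},   # assumed split
    "Enterprise 2 VPN DIF": {"UE2", "SRV2", "SRV4"},
}

def can_communicate(a, b):
    """True only if some VPN DIF contains both systems."""
    return any(a in members and b in members
               for members in VPN_MEMBERS.values())

print(can_communicate("UE1", "SRV1"))   # same tenant
print(can_communicate("UE1", "SRV2"))   # different tenants: isolated
```

Because both VPN DIFs float on the same DC Fabric DIF, isolation is a property of the layer above the fabric, not of the physical wiring.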

The experiment will analyse the following scenario:

1. Once a given mobile host has joined the Mobile Network DIF, the rina-echo-time application in the UE will request a flow to a rina-echo-time server hosted on one of the servers outside of the provider's network.


Figure 29: OMEC experiment: DIF configurations: UE to server on the provider's cloud (a DAF running any demo application over the Enterprise 1 and Enterprise 2 VPN DIFs, which float on the Mobile Network DIF and the DC Fabric DIF, supported by a shim DIF over WiFi at the UE and shim DIFs over Ethernet across the access, core, DC gateway, ToR 1 and Server 1 systems)


2. Then the DIF Allocator will start the search for the DIF that leads to the rina-echo-time server, locating it and determining that the destination application is available through the public Internet DIF.

3. After that, the DIF Allocator instance at the UE will create an IPC Process and instruct it to join the public Internet DIF.

4. Once it has joined, the flow allocation request will be passed to the public Internet DIF, which will allocate the flow.

5. A second flow to another application hosted on a public Internet-facing server will be requested (this time by the rina-tgen application). Flow setup time should be lower in this case, since the mobile host is already a member of the public Internet DIF.

6. While the two applications are executing, the UE moves and changes its attachment to the service provider's access routers.

7. Now the rina-echo-time client requests a flow to a rina-echo-time server hosted at the provider's private cloud.

8. The DIF Allocator will look for the destination application and will realise it is available through the Enterprise 1 VPN DIF.

9. Then the DIF Allocator instance at the UE creates an instance of an IPC Process and instructs it to join the Enterprise 1 VPN DIF (which requires authentication); the DIF then processes the flow request and allocates the flow.
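The allocation logic exercised in steps 1-4 and 7-9 follows a common pattern: look up the serving DIF, join it if necessary, then allocate the flow. The sketch below summarises this pattern; all names (the directory contents, the event strings) are hypothetical simplifications, not the actual IRATI DIF Allocator API.

```python
# Simplified sketch of the flow allocation sequence in steps 1-4 and
# 7-9. The directory, application names and event strings are all
# hypothetical; the real IRATI DIF Allocator implementation differs.
DIRECTORY = {  # destination application -> DIF through which it is reachable
    "rina-echo-time.server.public": "Internet DIF",
    "rina-echo-time.server.dc":     "Enterprise 1 VPN DIF",
}

def allocate_flow(dest_app, joined_difs, events):
    dif = DIRECTORY[dest_app]                # steps 2/8: locate serving DIF
    if dif not in joined_difs:
        # Steps 3/9: create an IPC Process and join the DIF
        # (joining a VPN DIF additionally requires authentication).
        events.append(f"create IPC process, join {dif}")
        joined_difs.add(dif)
    events.append(f"flow to {dest_app} over {dif}")   # steps 4/9
    return dif

events, joined = [], {"Mobile Network DIF"}
allocate_flow("rina-echo-time.server.public", joined, events)
# Step 5: a second flow over the same DIF skips the join, hence faster setup.
allocate_flow("rina-echo-time.server.public", joined, events)
print(events)
```

The sketch makes the expected performance difference in step 5 explicit: the expensive join path runs once per DIF, not once per flow.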

The experiment will be carried out with a varying number of concurrent requests, starting from a single one (to verify the correct operation of all the procedures involved in the scenario). We will measure how the increased load on the Flow Allocator increases its response time, and how handover performance degrades. This data will measure the maturity and performance of the IRATI implementation; not of RINA per se, but this is also an important goal of IRATI.
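A measurement harness for this ramp-up could look like the following sketch. It is hypothetical: `request_flow` stands in for whatever client issues the allocation (e.g. rina-echo-time) and here just sleeps, and the function names are our own.

```python
# Hypothetical harness: median flow-setup time under increasing numbers
# of concurrent requests. `request_flow` is a stand-in for the real
# client; replace its body with an actual flow allocation call.
import time
from concurrent.futures import ThreadPoolExecutor

def request_flow(_):
    start = time.perf_counter()
    time.sleep(0.001)            # placeholder for the real allocation
    return time.perf_counter() - start

def setup_times(concurrency):
    """Median flow-setup time with `concurrency` simultaneous requests."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = sorted(pool.map(request_flow, range(concurrency)))
    return times[len(times) // 2]

for n in (1, 5, 10):             # ramp up as in the experiment plan
    print(f"{n:2d} concurrent requests: median setup {setup_times(n):.4f}s")
```

Recording the full distribution (not only the median) would also expose tail latencies caused by Flow Allocator queueing under load.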

6.5 Planning

We foresee the milestones reported in Table 17 in the development of the OMEC experiments.


Milestone  Month  Description
MS 1       M18    Initial version of the experiment works with pre-allocated DIFs (no DIF Allocator) and 2 concurrent mobile hosts in a local testbed
MS 2       M22    Experiment works with pre-allocated DIFs and 10 concurrent mobile hosts in the w-iLab.t and Virtual Wall testbeds (maps to ARCFIRE project milestone MS8)
MS 3       M26    Experiment works with the DIF Allocator (dynamic DIF creation) and 2 concurrent mobile hosts in a local testbed
MS 5       M28    Experiment works with the DIF Allocator (dynamic DIF creation) and 10 concurrent mobile hosts in the w-iLab.t and Virtual Wall testbeds (maps to ARCFIRE project milestone MS9)

Table 17: Milestones for the OMEC experiments


7 Software

The ARCFIRE experiments will require the software items summarised in Table 18 for their execution.

Software package     Description                                                        License           Experiments
IRATI [17]           RINA implementation for the Linux OS                               GPL and LGPL      1, 2, 3, 4, 5
rlite [18]           A lightweight free and open-source RINA implementation for         GPLv2, LGPLv2.1   2
                     GNU/Linux operating systems, focusing on stability, ease of
                     use and performance
Ouroboros            POSIX-compliant RINA implementation                                GPLv2, LGPLv2.1   2
rina-echo-time [17]  Measures latency and packet loss                                   GPL               4
rina-tgen [19]       Traffic generator; measures goodput for different traffic          GEANT OS License   4
                     distributions
DMS                  Management system with strategy executor, OSS/NMS trigger,         TBD               1
                     management shell and event visualiser; the core system under
                     evaluation
DIF Allocator [17]   Locates applications over a variety of DIFs and collaborates       TBD               5
                     with NMS(s) to create new DIFs
jFed [20]            Java-based framework for testbed federation                        MIT               1, 2, 3, 4, 5
Rumba [21]           jFed-compatible experimentation framework; tool to set up          LGPLv2.1          1, 2, 3, 4, 5
                     RINA implementation experiments on FED4FIRE+ testbeds
Demonstrator [22]    Tool to set up self-contained VM-based IRATI testbeds              GPL               1, 2, 4
iperf [23]           Traffic generator; measures goodput                                iperf license     3, 4, 5
nginx [24]           Free, open-source, high-performance HTTP server                    BSD 2-clause      2
tcpdump [25]         Captures network traffic on a given interface                      BSD 3-clause      2
Wireshark [26]       GUI network protocol analyser                                      GPLv2             2

Table 18: Software to be used in ARCFIRE experiments


8 Conclusion

This deliverable details the experimentation plans for ARCFIRE to be conducted during the second part of the project. Five experiments were drafted, spanning different aspects such as manageability, scalability, robustness and efficiency, with relevant KPIs identified for each of these aspects per use case. On the basis of each use case, testbeds were selected for each experiment and the software necessary to execute the experiments was identified. A plan for each of the experiments has been documented to assess progress. As such, this document will guide the experimenters to achieve meaningful results and aid them towards a successful and timely conclusion.


References

[1] ARCFIRE consortium. (2016, September) H2020 ARCFIRE deliverable D2.1: Converged network operational environment analysis report. [Online]. Available: http://ict-arcfire.eu

[2] ——. (2016, December) H2020 ARCFIRE deliverable D2.2: Converged service provider network design report. [Online]. Available: http://ict-arcfire.eu

[3] International Telecommunication Union, "Management framework for Open Systems Interconnection (OSI) for CCITT applications," ITU-T Recommendation X.700 (09/92), March 1992. [Online]. Available: www.itu.int/rec/T-REC-X.700

[4] L. Andrey, O. Festor, A. Lahmadi, A. Pras, and J. Schönwälder, "Survey of SNMP performance analysis studies," International Journal of Network Management, vol. 19, pp. 527–548, 2009.

[5] ARCFIRE consortium. (2016, December) H2020 ARCFIRE deliverable D4.2: Experimental infrastructure available for experimentation report. [Online]. Available: http://ict-arcfire.eu

[6] PRISTINE consortium. (2016, June) FP7 PRISTINE deliverable D4.3: Final specification and consolidated implementation of security and reliability enablers. [Online]. Available: http://ict-pristine.eu

[7] C. Bormann et al., "Robust header compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed," IETF Standards Track, RFC 3095, 2001.

[8] (2017, Feb.) The IRATI ioq3 port. [Online]. Available: https://github.com/irati/ioq3

[9] ARCFIRE consortium. (2016, December) H2020 ARCFIRE deliverable D3.1: Integrated software ready for experiments: RINA stack, Management System and measurement framework. [Online]. Available: http://ict-arcfire.eu

[10] S. Yasukawa, A. Farrel, and O. Komolafe, "An analysis of scaling issues in MPLS-TE core networks," IETF Network Working Group draft-ietf-mpls-te-scaling-analysis-05, December 2008.

[11] C. Filsfils, S. Previdi, B. Decraene, S. Litkowski, and R. Shakir, "Segment routing architecture," IETF Network Working Group Internet Draft draft-ietf-spring-segment-routing-11, February 2017.

[12] E. Grasa, O. Rysavy, O. Lichtner, H. Asgari, J. Day, and L. Chitkushev, "From protecting protocols to protecting layers: designing, implementing and experimenting with security policies in RINA," IEEE ICC 2016, Communications and Information Systems Security Symposium, 2016.

[13] S. Leon, J. Perello, D. Careglio, E. Grasa, M. Tarzan, N. Davies, and P. Thompson, "Assuring QoS guarantees for heterogeneous services in RINA networks with ΔQ," Proceedings of the 6th Workshop on Network Infrastructure Services as part of Cloud Computing (NetCloud), 2016.

[14] B. Carpenter, R. Atkinson, and H. Flinck, "Renumbering still needs work," IETF Informational RFC 5887, May 2010.

[15] D. Leroy and O. Bonaventure, "Preparing network configurations for renumbering," International Journal of Network Management, vol. 19, no. 5, pp. 415–426, September/October 2009.

[16] C. Buyukkoc. (2016, Mar.) Edge definition and how it fits with 5G era networks. [Online]. Available: http://sdn.ieee.org/newsletter/march-2016/edge-definition-and-how-it-fits-with-5g-era-networks

[17] IRATI GitHub site. [Online]. Available: https://github.com/irati/stack

[18] rlite GitHub site. [Online]. Available: https://github.com/vmaffione/rlite

[19] RINA traffic generator. [Online]. Available: https://github.com/IRATI/traffic-generator

[20] jFed website. [Online]. Available: http://jfed.iminds.be

[21] Rumba measurement framework. [Online]. Available: https://gitlab.com/arcfire/rumba

[22] RINA demonstrator GitHub site. [Online]. Available: https://github.com/IRATI/demonstrator

[23] (2014, Aug.) iperf. [Online]. Available: http://code.google.com/p/iperf/

[24] nginx website. [Online]. Available: http://hg.nginx.org/nginx.org

[25] tcpdump network packet analyzer. [Online]. Available: http://www.tcpdump.org/

[26] Wireshark network protocol analyzer. [Online]. Available: https://www.wireshark.org
